Supervised Dictionary Learning
Julien Mairal
INRIA-Willow project
[email protected]
Jean Ponce
École Normale Supérieure
[email protected]
Francis Bach
INRIA-Willow project
[email protected]
Guillermo Sapiro
University of Minnesota
[email protected]
Andrew Zisserman
University of Oxford
[email protected]
Abstract
It is now well established that sparse signal models are well suited for restoration tasks and can be effectively learned from audio, image, and video data. Recent research has been aimed at learning discriminative sparse models instead of
purely reconstructive ones. This paper proposes a new step in that direction, with
a novel sparse representation for signals belonging to different classes in terms of
a shared dictionary and discriminative class models. The linear version of the proposed model admits a simple probabilistic interpretation, while its most general
variant admits an interpretation in terms of kernels. An optimization framework
for learning all the components of the proposed model is presented, along with
experimental results on standard handwritten digit and texture classification tasks.
1 Introduction
Sparse and overcomplete image models were first introduced in [1] for modeling the spatial receptive fields of simple cells in the human visual system. The linear decomposition of a signal using a
few atoms of a learned dictionary, instead of predefined ones such as wavelets, has recently led to
state-of-the-art results for numerous low-level image processing tasks such as denoising [2], showing that sparse models are well adapted to natural images. Unlike principal component analysis
decompositions, these models are in general overcomplete, with a number of basis elements greater
than the dimension of the data. Recent research has shown that sparsity helps to capture higher-order
correlation in data. In [3, 4], sparse decompositions are used with predefined dictionaries for face
and signal recognition. In [5], dictionaries are learned for a reconstruction task, and the corresponding sparse models are used as features in an SVM. In [6], a discriminative method is introduced
for various classification tasks, learning one dictionary per class; the classification process itself is
based on the corresponding reconstruction error, and does not exploit the actual decomposition coefficients. In [7], a generative model for documents is learned at the same time as the parameters of
a deep network structure. In [8], multi-task learning is performed by learning features and tasks are
selected using a sparsity criterion. The framework we present in this paper extends these approaches
by learning simultaneously a single shared dictionary as well as models for different signal classes
in a mixed generative and discriminative formulation (see also [9], where a different discriminative
term is added to the classical reconstructive one). Similar joint generative/discriminative frameworks have started to appear in probabilistic approaches to learning, e.g., [10, 11, 12, 13, 14], and
in neural networks [15], but not, to the best of our knowledge, in the sparse dictionary learning
framework. Section 2 presents a formulation for learning a dictionary tuned for a classification task,
which we call supervised dictionary learning, and Section 3 its interpretation in terms of probability and kernel frameworks. The optimization procedure is detailed in Section 4, and experimental
results are presented in Section 5.
2 Supervised dictionary learning
We present in this section the core of the proposed model. In classical sparse coding tasks, one considers a signal $x$ in $\mathbb{R}^n$ and a fixed dictionary $D = [d_1, \ldots, d_k]$ in $\mathbb{R}^{n \times k}$ (allowing $k > n$, making the dictionary overcomplete). In this setting, sparse coding with an $\ell_1$ regularization$^1$ amounts to computing
$$R^\star(x, D) = \min_{\alpha \in \mathbb{R}^k} \|x - D\alpha\|_2^2 + \lambda_1 \|\alpha\|_1. \qquad (1)$$
It is well known in the statistics, optimization, and compressed sensing communities that the $\ell_1$ penalty yields a sparse solution, with very few non-zero coefficients in $\alpha$, although there is no explicit analytic link between the value of $\lambda_1$ and the effective sparsity that this model yields. Other sparsity penalties using the $\ell_0$ regularization$^2$ can be used as well. Since it uses a proper norm, the $\ell_1$ formulation of sparse coding is a convex problem, which makes the optimization tractable with algorithms such as those introduced in [16, 17], and has proven in practice to be more stable than its $\ell_0$ counterpart, in the sense that the resulting decompositions are less sensitive to small perturbations of the input signal $x$. Note that sparse coding with an $\ell_0$ penalty is an NP-hard problem and is often approximated using greedy algorithms.
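To make Eq. (1) concrete, here is a minimal sketch of $\ell_1$ sparse coding solved with ISTA, a simple proximal-gradient method. The paper itself relies on LARS [16] and fixed-point continuation [17]; ISTA is our substitution for illustration only, and all variable names are ours.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t*||.||_1: shrinks each coefficient towards zero.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def sparse_code_ista(x, D, lam1, n_iters=200):
    """Minimize ||x - D a||_2^2 + lam1 * ||a||_1 over a (Eq. 1)."""
    k = D.shape[1]
    a = np.zeros(k)
    L = 2.0 * np.linalg.norm(D, 2) ** 2  # Lipschitz constant of the smooth part
    for _ in range(n_iters):
        grad = -2.0 * D.T @ (x - D @ a)  # gradient of the quadratic term
        a = soft_threshold(a - grad / L, lam1 / L)
    return a

rng = np.random.default_rng(0)
D = rng.standard_normal((20, 50))
D /= np.linalg.norm(D, axis=0)  # unit-norm columns, as required in Section 2
x = rng.standard_normal(20)
alpha = sparse_code_ista(x, D, lam1=0.15)
print("non-zeros:", np.count_nonzero(np.abs(alpha) > 1e-8))
```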
In this paper, we consider a setting where the signal may belong to any of $p$ different classes. We first consider the case of $p = 2$ classes and later discuss the multiclass extension. We consider a training set of $m$ labeled signals $(x_i)_{i=1}^m$ in $\mathbb{R}^n$, associated with binary labels $(y_i \in \{-1, +1\})_{i=1}^m$.
Our goal is to learn jointly a single dictionary $D$ adapted to the classification task and a function $f$ which should be positive for any signal in class $+1$ and negative otherwise. We consider in this paper two different models to use the sparse code $\alpha$ for the classification task:
(i) linear in $\alpha$: $f(x, \alpha, \theta) = w^T\alpha + b$, where $\theta = \{w \in \mathbb{R}^k, b \in \mathbb{R}\}$ parametrizes the model.
(ii) bilinear in $x$ and $\alpha$: $f(x, \alpha, \theta) = x^T W\alpha + b$, where $\theta = \{W \in \mathbb{R}^{n \times k}, b \in \mathbb{R}\}$. In this case, the model is bilinear and $f$ acts on both $x$ and its sparse code $\alpha$.
The number of parameters in (ii) is greater than in (i), which allows for richer models. Note that one can interpret $W$ as a linear filter encoding the input signal $x$ into a model for the coefficients $\alpha$, which has a role similar to the encoder in [18] but for a discriminative task.
A classical approach to obtain $\alpha$ for (i) or (ii) is to first adapt $D$ to the data, solving
$$\min_{D, \alpha} \sum_{i=1}^m \|x_i - D\alpha_i\|_2^2 + \lambda_1 \|\alpha_i\|_1. \qquad (2)$$
Note also that since the reconstruction errors $\|x_i - D\alpha_i\|_2^2$ are invariant to scaling simultaneously $D$ by a scalar and $\alpha_i$ by its inverse, we need to constrain the $\ell_2$ norm of the columns of $D$. Such a constraint is classical in sparse coding [2]. This reconstructive approach (dubbed REC in this paper) provides sparse codes $\alpha_i$ for each signal $x_i$, which can be used a posteriori in a regular classifier such as logistic regression, which would require solving
$$\min_{\theta} \sum_{i=1}^m C\big(y_i f(x_i, \alpha_i, \theta)\big) + \lambda_2 \|\theta\|_2^2, \qquad (3)$$
where $C$ is the logistic loss function ($C(x) = \log(1 + e^{-x})$), which enjoys properties similar to those of the hinge loss from the SVM literature, while being differentiable, and $\lambda_2$ is a regularization parameter, which prevents overfitting. This is the approach chosen in [5] (with SVMs).
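A sketch of the REC pipeline just described: codes are computed with the dictionary held fixed, then Eq. (3) is solved for $\theta = (w, b)$. It reuses `sparse_code_ista` from the previous sketch; the L-BFGS solver and the choice to regularize only $w$ are our assumptions (the original work uses SVMs [5] or logistic regression).

```python
import numpy as np
from scipy.optimize import minimize

def logistic_loss(x):
    # C(x) = log(1 + exp(-x)), computed stably.
    return np.logaddexp(0.0, -x)

def fit_rec_classifier(X, y, D, lam1, lam2):
    """REC: fix D, compute codes, then solve Eq. (3) for theta = (w, b)."""
    A = np.stack([sparse_code_ista(x, D, lam1) for x in X])  # m x k codes
    k = A.shape[1]

    def objective(theta):
        w, b = theta[:k], theta[k]
        # Regularizing w only is our simplification of lam2 * ||theta||^2.
        return logistic_loss(y * (A @ w + b)).sum() + lam2 * (w @ w)

    res = minimize(objective, np.zeros(k + 1), method="L-BFGS-B")
    return res.x[:k], res.x[k]
```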
However, our goal is to learn jointly $D$ and the model parameters $\theta$. To that effect, we propose the formulation
$$\min_{D, \theta, \alpha} \sum_{i=1}^m \Big( C\big(y_i f(x_i, \alpha_i, \theta)\big) + \lambda_0 \|x_i - D\alpha_i\|_2^2 + \lambda_1 \|\alpha_i\|_1 \Big) + \lambda_2 \|\theta\|_2^2, \qquad (4)$$
where $\lambda_0$ controls the importance of the reconstruction term, and the loss for a pair $(x_i, y_i)$ is
$$S^\star(x_i, D, \theta, y_i) = \min_{\alpha} S(\alpha, x_i, D, \theta, y_i), \quad \text{where} \quad S(\alpha, x_i, D, \theta, y_i) = C\big(y_i f(x_i, \alpha, \theta)\big) + \lambda_0 \|x_i - D\alpha\|_2^2 + \lambda_1 \|\alpha\|_1. \qquad (5)$$
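For reference, the supervised cost $S$ of Eq. (5) for the linear model is straightforward to write down; this is a direct transcription (variable names are ours), with `np.logaddexp` used for a numerically stable logistic loss.

```python
import numpy as np

def S(alpha, x, D, w, b, y, lam0, lam1):
    """Supervised sparse coding cost of Eq. (5), linear model f = w^T alpha + b."""
    f = w @ alpha + b
    return (np.logaddexp(0.0, -y * f)               # logistic loss C(y f)
            + lam0 * np.sum((x - D @ alpha) ** 2)   # reconstruction term
            + lam1 * np.sum(np.abs(alpha)))         # sparsity penalty
```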
In this setting, the classification procedure for a new signal $x$ with an unknown label $y$, given a learned dictionary $D$ and parameters $\theta$, involves supervised sparse coding:
$$\min_{y \in \{-1, +1\}} S^\star(x, D, \theta, y). \qquad (6)$$
The learning procedure of Eq. (4) minimizes the sum of the costs for the pairs $(x_i, y_i)_{i=1}^m$ and corresponds to a generative model. We will refer later to this model as SDL-G (supervised dictionary

$^1$ The $\ell_1$ norm of a vector $x$ of size $n$ is defined as $\|x\|_1 = \sum_{i=1}^n |x[i]|$.
$^2$ The $\ell_0$ pseudo-norm of a vector $x$ is the number of nonzero coefficients of $x$. Note that it is not a norm.
Figure 1: Graphical model for the proposed generative/discriminative learning framework.
learning, generative). Note the explicit incorporation of the reconstructive and discriminative components into sparse coding, in addition to the classical reconstructive term (see [9] for a different classification component).
However, since the classification procedure from Eq. (6) compares the different costs $S^\star(x, D, \theta, y)$ of a given signal for each class $y = -1, +1$, a more discriminative approach is to not only make the costs $S^\star(x_i, D, \theta, y_i)$ small, as in (4), but also make the value of $S^\star(x_i, D, \theta, -y_i)$ greater than $S^\star(x_i, D, \theta, y_i)$, which is the purpose of the logistic loss function $C$. This leads to:
$$\min_{D, \theta} \sum_{i=1}^m C\big(S^\star(x_i, D, \theta, -y_i) - S^\star(x_i, D, \theta, y_i)\big) + \lambda_2 \|\theta\|_2^2. \qquad (7)$$
As detailed below, this problem is more difficult to solve than (4), and therefore we adopt instead a mixed formulation between the minimization of the generative Eq. (4) and its discriminative version (7) (see also [13]), that is,
$$\sum_{i=1}^m \mu\, C\big(S^\star(x_i, D, \theta, -y_i) - S^\star(x_i, D, \theta, y_i)\big) + (1 - \mu)\, S^\star(x_i, D, \theta, y_i) + \lambda_2 \|\theta\|_2^2, \qquad (8)$$
where $\mu$ controls the trade-off between the reconstruction from Eq. (4) and the discrimination from Eq. (7). This is the proposed generative/discriminative model for sparse signal representation and classification from learned dictionary $D$ and model $\theta$. We will refer to this mixed model as SDL-D (supervised dictionary learning, discriminative). Note also that, again, we constrain the norm of the columns of $D$ to be less than or equal to one.
All of these formulations admit a straightforward multiclass extension, using softmax discriminative cost functions $C_i(x_1, \ldots, x_p) = \log\big(\sum_{j=1}^p e^{x_j - x_i}\big)$, which are multiclass versions of the logistic function, and learning one model $\theta_i$ per class. Other possible approaches such as one-vs-all or one-vs-one are of course possible, and the question of choosing the best approach among these possibilities is still open. Compared with earlier work using one dictionary per class [6], our model has the advantage of letting multiple classes share some features, and uses the coefficients $\alpha$ of the sparse representations as part of the classification procedure, thereby following the works from [3, 4, 5], but with learned representations optimized for the classification task similar to [9, 10].
Before presenting the optimization procedure, we provide below two interpretations of the linear
and bilinear versions of our formulation in terms of a probabilistic graphical model and a kernel.
3 Interpreting the model
3.1 A probabilistic interpretation of the linear model
Let us first construct a graphical model which gives a probabilistic interpretation to the training and classification criteria given above when using a linear model with zero bias (no constant term) on the coefficients, that is, $f(x, \alpha, \theta) = w^T\alpha$. It consists of the following components (Figure 1):
- The matrix $D$ and the vector $w$ are parameters of the problem, with a Gaussian prior on $w$, $p(w) \propto e^{-\lambda_2 \|w\|_2^2}$, and a constraint on the columns of $D$, that is, $\|d_j\|_2^2 = 1$ for all $j$. All the $d_j$'s are considered independent of each other.
- The coefficients $\alpha_i$ are latent variables with a Laplace prior, $p(\alpha_i) \propto e^{-\lambda_1 \|\alpha_i\|_1}$.
- The signals $x_i$ are generated according to a Gaussian probability distribution conditioned on $D$ and $\alpha_i$, $p(x_i | \alpha_i, D) \propto e^{-\lambda_0 \|x_i - D\alpha_i\|_2^2}$. All the $x_i$'s are considered independent from each other.
- The labels $y_i$ are generated according to a probability distribution conditioned on $w$ and $\alpha_i$, given by $p(y_i = \epsilon\, |\, \alpha_i, w) = e^{-\epsilon w^T\alpha_i} / \big(e^{-w^T\alpha_i} + e^{w^T\alpha_i}\big)$. Given $D$ and $w$, all the triplets $(\alpha_i, x_i, y_i)$ are independent.
What is commonly called "generative training" in the literature (e.g., [12, 13]) amounts to finding the maximum likelihood estimates for $D$ and $w$ according to the joint distribution $p(\{x_i, y_i\}_{i=1}^m, D, w)$, where the $x_i$'s and the $y_i$'s are the training signals and their labels respectively. It can easily be shown (details omitted due to space limitations) that there is an equivalence between this generative training and our formulation in Eq. (4) under MAP approximations.$^3$ Although joint generative modeling of $x$ and $y$ through a shared representation has shown great promise [10], we show in this paper that a more discriminative approach is desirable. "Discriminative training" is slightly different and amounts to maximizing $p(\{y_i\}_{i=1}^m, D, w \,|\, \{x_i\}_{i=1}^m)$ with respect to $D$ and $w$: Given some input data, one finds the best parameters that will predict the labels of the data. The same kind of MAP approximation relates this discriminative training formulation to the discriminative model of Eq. (7) (again, details omitted due to space limitations). The mixed approach from Eq. (8) is a classical trade-off between generative and discriminative (e.g., [12, 13]), where generative components are often added to discriminative frameworks to add robustness, e.g., to noise and occlusions (see examples of this for the model in [9]).
3.2 A kernel interpretation of the bilinear model
Our bilinear model with $f(x, \alpha, \theta) = x^T W\alpha + b$ does not admit a straightforward probabilistic interpretation. On the other hand, it can easily be interpreted in terms of kernels: Given two signals $x_1$ and $x_2$, with coefficients $\alpha_1$ and $\alpha_2$, using the kernel $K(x_1, x_2) = \alpha_1^T\alpha_2\, x_1^T x_2$ in a logistic regression classifier amounts to finding a decision function of the same form as $f$. It is a product of two linear kernels, one on the $\alpha$'s and one on the input signals $x$. Interestingly, Raina et al. [5] learn a dictionary adapted to reconstruction on a training set, then train an SVM a posteriori on the decomposition coefficients $\alpha$. They derive and use a Fisher kernel, which can be written as $K'(x_1, x_2) = \alpha_1^T\alpha_2\, r_1^T r_2$ in this setting, where the $r$'s are the residuals of the decompositions. In simple experiments, which are not reported in this paper, we have observed that the kernel $K$, where the signals $x$ replace the residuals $r$, generally yields a level of performance similar to $K'$ and often actually does better when the number of training samples is small or the data are noisy.
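The kernel $K$ is simply the elementwise product of the two linear Gram matrices, so it can be sketched in a few lines (variable names are ours; rows hold individual samples):

```python
import numpy as np

def bilinear_kernel(X1, A1, X2, A2):
    """K(x1, x2) = (alpha1^T alpha2)(x1^T x2): a product of two linear kernels.
    X1, X2 hold signals as rows; A1, A2 hold their sparse codes as rows."""
    return (A1 @ A2.T) * (X1 @ X2.T)  # elementwise product of Gram matrices
```

A precomputed-kernel classifier (e.g. an SVM with `kernel="precomputed"`) can then be trained directly on the resulting Gram matrix.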
4 Optimization procedure
Classical dictionary learning techniques (e.g., [1, 5, 19]) address the problem of learning a reconstructive dictionary $D$ in $\mathbb{R}^{n \times k}$ well adapted to a training set, which is presented in Eq. (2). It can be seen as an optimization problem with respect to the dictionary $D$ and the coefficients $\alpha$. Although not jointly convex in $(D, \alpha)$, it is convex with respect to each unknown when the other one is fixed. This is why block coordinate descent on $D$ and $\alpha$ performs reasonably well [1, 5, 19], although not necessarily providing the global optimum. Training when $\mu = 0$ (generative case), i.e., from Eq. (4), enjoys similar properties and can be addressed with the same optimization procedure. Equation (4) can be rewritten as:
$$\min_{D, \theta, \alpha} \sum_{i=1}^m S(\alpha_i, x_i, D, \theta, y_i) + \lambda_2 \|\theta\|_2^2, \quad \text{s.t.} \quad \forall\, j = 1, \ldots, k, \ \|d_j\|_2 \le 1. \qquad (9)$$
Block coordinate descent consists therefore of iterating between supervised sparse coding, where $D$ and $\theta$ are fixed and one optimizes with respect to the $\alpha$'s, and supervised dictionary update, where the coefficients $\alpha_i$'s are fixed, but $D$ and $\theta$ are updated. Details on how to solve these two problems are given in Sections 4.1 and 4.2. The discriminative version SDL-D from Eq. (7) is more problematic. To reach a local minimum for this difficult non-convex optimization problem, we have chosen a continuation method, starting from the generative case and ending with the discriminative one as in [6]. The algorithm is presented in Figure 2, and details on the hyperparameters' settings are given in Section 5.
4.1 Supervised sparse coding
The supervised sparse coding problem from Eq. (6) ($D$ and $\theta$ are fixed in this step) amounts to minimizing a convex function under an $\ell_1$ penalty. The fixed-point continuation method (FPC) from

$^3$ We are also investigating how to properly estimate $D$ by marginalizing over $\alpha$ instead of maximizing with respect to $\alpha$.
Input: $n$ (signal dimension); $(x_i, y_i)_{i=1}^m$ (training signals); $k$ (size of the dictionary); $\lambda_0, \lambda_1, \lambda_2$ (parameters); $0 \le \mu_1 \le \mu_2 \le \ldots \le \mu_m \le 1$ (increasing sequence).
Output: $D \in \mathbb{R}^{n \times k}$ (dictionary); $\theta$ (parameters).
Initialization: Set $D$ to a random Gaussian matrix with normalized columns. Set $\theta$ to zero.
Loop: For $\mu = \mu_1, \ldots, \mu_m$,
  Loop: Repeat until convergence (or a fixed number of iterations),
    - Supervised sparse coding: Solve, for all $i = 1, \ldots, m$,
      $$\alpha^\star_{i,-} = \arg\min_\alpha S(\alpha, x_i, D, \theta, -1), \qquad \alpha^\star_{i,+} = \arg\min_\alpha S(\alpha, x_i, D, \theta, +1). \qquad (10)$$
    - Dictionary and parameters update: Solve
      $$\min_{D, \theta} \sum_{i=1}^m \mu\, C\big(S(\alpha^\star_{i,-y_i}, x_i, D, \theta, -y_i) - S(\alpha^\star_{i,y_i}, x_i, D, \theta, y_i)\big) + (1 - \mu)\, S(\alpha^\star_{i,y_i}, x_i, D, \theta, y_i) + \lambda_2 \|\theta\|_2^2 \quad \text{s.t.} \quad \forall j, \ \|d_j\|_2 \le 1. \qquad (11)$$
Figure 2: SDL: Supervised dictionary learning algorithm.
[17] achieves good results in terms of convergence speed for this class of problems. For our specific problem, denoting by $g$ the convex function to minimize, this method only requires $\nabla g$ and a bound on the spectral norm of its Hessian $H_g$. Since the models $g$ we have chosen are both linear in $\alpha$, there exists, for each supervised sparse coding problem, a vector $a$ in $\mathbb{R}^k$ and a scalar $c$ in $\mathbb{R}$ such that
$$g(\alpha) = C(a^T\alpha + c) + \lambda_0 \|x - D\alpha\|_2^2, \qquad \nabla g(\alpha) = \nabla C(a^T\alpha + c)\, a - 2\lambda_0 D^T(x - D\alpha),$$
and it can be shown that, if $\|U\|_2$ denotes the spectral norm of a matrix $U$ (which is the magnitude of its largest eigenvalue), then we can obtain the following bound: $\|H_g(\alpha)\|_2 \le |H_C(a^T\alpha + c)|\, \|a\|_2^2 + 2\lambda_0 \|D^T D\|_2$.
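A sketch of the two quantities FPC needs, following the expressions above. The constant $1/4$ bounding the second derivative of the logistic loss is a standard fact that we add here, since the text only writes $|H_C|$; variable names are ours.

```python
import numpy as np

def g_value_and_grad(alpha, x, D, a, c, lam0):
    """g(alpha) = C(a^T alpha + c) + lam0 ||x - D alpha||^2 and its gradient."""
    u = a @ alpha + c
    g = np.logaddexp(0.0, -u) + lam0 * np.sum((x - D @ alpha) ** 2)
    # C'(u) = -1 / (1 + exp(u)) for C(u) = log(1 + exp(-u)).
    grad = (-1.0 / (1.0 + np.exp(u))) * a - 2.0 * lam0 * (D.T @ (x - D @ alpha))
    return g, grad

def hessian_bound(a, D, lam0):
    """Upper bound on ||H_g||_2: |C''| <= 1/4 for the logistic loss."""
    return 0.25 * (a @ a) + 2.0 * lam0 * np.linalg.norm(D.T @ D, 2)
```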
4.2 Dictionary update
The problem of updating $D$ and $\theta$ in Eq. (11) is not convex in general (except when $\mu$ is close to 0), but a local minimum can be obtained using projected gradient descent (as in the general literature on dictionary learning, this local minimum has experimentally been found to be good enough in terms of classification performance). Denoting by $E(D, \theta)$ the function we want to minimize in Eq. (11), we just need the partial derivatives of $E$ with respect to $D$ and the parameters $\theta$. When considering the linear model for the $\alpha$'s, $f(x, \alpha, \theta) = w^T\alpha + b$, with $\theta = \{w \in \mathbb{R}^k, b \in \mathbb{R}\}$, we obtain
$$\frac{\partial E}{\partial D} = \sum_{i=1}^m \sum_{z \in \{-1,+1\}} -2\lambda_0\, \omega_{i,z}\, (x_i - D\alpha^\star_{i,z})\, {\alpha^\star_{i,z}}^T,$$
$$\frac{\partial E}{\partial w} = \sum_{i=1}^m \sum_{z \in \{-1,+1\}} \omega_{i,z}\, z\, \nabla C(w^T\alpha^\star_{i,z} + b)\, \alpha^\star_{i,z}, \qquad (12)$$
$$\frac{\partial E}{\partial b} = \sum_{i=1}^m \sum_{z \in \{-1,+1\}} \omega_{i,z}\, z\, \nabla C(w^T\alpha^\star_{i,z} + b),$$
where $\omega_{i,z} = -\mu z \nabla C\big(S(\alpha^\star_{i,-}, x_i, D, \theta, -y_i) - S(\alpha^\star_{i,+}, x_i, D, \theta, y_i)\big) + (1 - \mu)\mathbf{1}_{z = y_i}$.
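The following sketch mirrors our reconstruction of Eq. (12); since the original equations were garbled in extraction, treat the exact form of $\omega_{i,z}$ here as our reading of them rather than as definitive. `A[z]` holds the supervised codes $\alpha^\star_{i,z}$ as rows.

```python
import numpy as np

def dC(u):
    # Derivative of the logistic loss C(u) = log(1 + exp(-u)).
    return -1.0 / (1.0 + np.exp(u))

def sdl_gradients(X, y, A, D, w, b, mu, lam0, lam1):
    """Gradients of Eq. (11) w.r.t. D, w and b via Eq. (12)."""
    gD = np.zeros_like(D)
    gw = np.zeros_like(w)
    gb = 0.0
    for i, (x, yi) in enumerate(zip(X, y)):
        # Supervised costs S entering the discriminative logistic term.
        S = {z: (np.logaddexp(0.0, -z * (w @ A[z][i] + b))
                 + lam0 * np.sum((x - D @ A[z][i]) ** 2)
                 + lam1 * np.abs(A[z][i]).sum()) for z in (-1, 1)}
        for z in (-1, 1):
            omega = -mu * z * dC(S[-yi] - S[yi]) + (1 - mu) * (z == yi)
            a = A[z][i]
            gD += -2.0 * lam0 * omega * np.outer(x - D @ a, a)
            gw += omega * z * dC(w @ a + b) * a
            gb += omega * z * dC(w @ a + b)
    return gD, gw, gb
```

After each gradient step, the columns of $D$ are re-projected onto the unit $\ell_2$ ball to satisfy the constraint of Eq. (11).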
Partial derivatives when using our model with multiple classes or with the bilinear models $f(x, \alpha, \theta) = x^T W\alpha + b$ are not presented in this paper due to space limitations.
5 Experimental validation
We compare in this section the reconstructive approach, dubbed REC, which consists of learning a reconstructive dictionary $D$ as in [5] and then learning the parameters $\theta$ a posteriori; SDL with generative training (dubbed SDL-G); and SDL with discriminative learning (dubbed SDL-D). We also compare the performance of the linear (L) and bilinear (BL) models.
       | REC L | SDL-G L | SDL-D L | REC BL | k-NN, l2 | SVM-Gauss
MNIST  | 4.33  | 3.56    | 1.05    | 3.41   | 5.0      | 1.4
USPS   | 6.83  | 6.67    | 3.54    | 4.38   | 5.2      | 4.2

Table 1: Error rates on the MNIST and USPS datasets in percent for the REC, SDL-G L and SDL-D L approaches, compared with k-nearest neighbor and SVM with a Gaussian kernel [20].
Before presenting experimental results, let us briefly discuss the choice of the five model parameters $\lambda_0, \lambda_1, \lambda_2, \mu$ and $k$ (size of the dictionary). Tuning all of them using cross-validation is cumbersome and unnecessary since some simple choices can be made, some of which can be made sequentially. We define first the sparsity parameter $\lambda = \lambda_1 / \lambda_0$, which dictates how sparse the decompositions are. When the input data points have unit $\ell_2$ norm, choosing $\lambda = 0.15$ was empirically found to be a good choice. For reconstructive tasks, a typical value often used in the literature (e.g., [19]) is $k = 256$ for $m = 100\,000$ signals. Nevertheless, for discriminative tasks, increasing the number of parameters is likely to lead to overfitting, and smaller values like $k = 64$ or $k = 32$ are preferred. The scalar $\lambda_2$ is a regularization parameter that prevents the model from overfitting the input data. As in logistic regression or support vector machines, this parameter is crucial when the number of training samples is small. Performing cross-validation with the fast method REC quickly provides a reasonable value for this parameter, which can be used afterward for SDL-G or SDL-D.
Once $\lambda$, $k$ and $\lambda_2$ are chosen, let us see how to find $\lambda_0$, which plays the important role of controlling the trade-off between reconstruction and discrimination. First, we perform cross-validation for a few iterations with $\mu = 0$ to find a good value for SDL-G. Then, a scale factor making the costs $S^\star$ discriminative for $\mu > 0$ can be chosen during the optimization process: Given a set of computed costs $S^\star$, one can compute a scale factor $\lambda^\star$ such that
$$\lambda^\star = \arg\min_\lambda \sum_{i=1}^m C\big(\lambda (S^\star(x_i, D, \theta, -y_i) - S^\star(x_i, D, \theta, y_i))\big).$$
We therefore propose the following strategy, which has proven to be effective in our experiments: Starting from small values for $\lambda_0$ and a fixed $\lambda$, we apply the algorithm in Figure 2, and after a supervised sparse coding step, we compute the best scale factor $\lambda^\star$, and replace $\lambda_0$ and $\lambda_1$ by $\lambda^\star\lambda_0$ and $\lambda^\star\lambda_1$. Typically, applying this procedure during the first 10 iterations has proven to lead to reasonable values for these parameters. Since we are following a continuation path from $\mu = 0$ to $\mu = 1$, the optimal value of $\mu$ is found along the path by measuring the classification performance of the model on a validation set during the optimization.
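The scale factor $\lambda^\star$ is a one-dimensional convex problem, so any scalar minimizer will do; a sketch (the search bounds are an arbitrary assumption of ours):

```python
import numpy as np
from scipy.optimize import minimize_scalar

def best_scale(S_neg, S_pos):
    """lambda* = argmin_l sum_i C(l * (S*(x_i, -y_i) - S*(x_i, y_i))).
    S_neg / S_pos: arrays of supervised costs for the wrong / true labels."""
    diff = S_neg - S_pos
    obj = lambda l: np.logaddexp(0.0, -l * diff).sum()  # sum of C(l * diff)
    res = minimize_scalar(obj, bounds=(1e-6, 1e6), method="bounded")
    return res.x
```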
5.1 Digits recognition
In this section, we present experiments on the popular MNIST [20] and USPS handwritten digit datasets. MNIST is composed of 70,000 images of size 28 x 28, 60,000 for training, 10,000 for testing, each of them containing one handwritten digit. USPS is composed of 7,291 training images and 2,007 test images of size 16 x 16. As is often done in classification, we have chosen to learn pairwise binary classifiers, one for each pair of digits. Although our framework extends to a multiclass formulation, pairwise binary classifiers have resulted in slightly better performance in practice. Five-fold cross-validation is performed to find the best pair $(k, \lambda)$. The tested values for $k$ are $\{24, 32, 48, 64, 96\}$, and for $\lambda$, $\{0.13, 0.14, 0.15, 0.16, 0.17\}$. We keep the three best pairs of parameters and use them to train three sets of pairwise classifiers. For a given image $x$, the test procedure consists of selecting the class which receives the most votes from the pairwise classifiers. All the other parameters are obtained using the procedure explained above. Classification results are presented in Table 1 using the linear model. We see that for the linear model L, SDL-D L performs the best. REC BL offers a larger feature space and performs better than REC L, but we have observed no gain by using SDL-G BL or SDL-D BL instead of REC BL (these results are not reported in this table). Since the linear model is already performing very well, one side effect of using BL instead of L is to increase the number of free parameters and thus to cause overfitting. Note that our method is competitive since the best error rates published on these datasets (without any modification of the training set) are 0.60% [18] for MNIST and 2.4% [21] for USPS, using methods tailored to these tasks, whereas ours is generic and has not been tuned for the handwritten digit classification domain.
The purpose of our second experiment is not to measure the raw performance of our algorithm, but to answer the question "are the obtained dictionaries D discriminative per se?". To do so, we have trained on the USPS dataset 10 binary classifiers, one per digit, in a one-vs-all fashion on the training set. For a given value of $\mu$, we obtain 10 dictionaries $D$ and 10 sets of parameters $\theta$, learned by the SDL-D L model.
To evaluate the discriminative power of the dictionaries $D$, we discard the learned parameters $\theta$ and use the dictionaries as if they had been learned in a reconstructive REC model: For each dictionary,
Figure 3: On the left, a reconstructive and a discriminative dictionary ((a) REC, MNIST; (b) SDL-D, MNIST). On the right, average error rate in percent obtained by our dictionaries learned in a discriminative framework (SDL-D L) for various values of $\mu$, when used at test time in a reconstructive framework (REC-L).
m      | REC L | SDL-G L | SDL-D L | REC BL | SDL-G BL | SDL-D BL | Gain
300    | 48.84 | 47.34   | 44.84   | 26.34  | 26.34    | 26.34    | 0%
1,500  | 46.8  | 46.3    | 42      | 22.7   | 22.3     | 22.3     | 2%
3,000  | 45.17 | 45.1    | 40.6    | 21.99  | 21.22    | 21.22    | 4%
6,000  | 45.71 | 43.68   | 39.77   | 19.77  | 18.75    | 18.61    | 6%
15,000 | 47.54 | 46.15   | 38.99   | 18.2   | 17.26    | 15.48    | 15%
30,000 | 47.28 | 45.1    | 38.3    | 18.99  | 16.84    | 14.26    | 25%

Table 2: Error rates for the texture classification task using various methods and sizes $m$ of the training set. The last column indicates the gain between the error rate of REC BL and SDL-D BL.
we decompose each image from the training set by solving the simple sparse reconstruction problem from Eq. (1) instead of using supervised sparse coding. This provides us with some coefficients $\alpha$, which we use as features in a linear SVM. Repeating the sparse decomposition procedure on the test set permits us to evaluate the performance of these learned linear SVMs. We plot the average error rate of these classifiers in Figure 3 for each value of $\mu$. We see that using the dictionaries obtained with discriminative learning ($\mu > 0$, SDL-D L) dramatically improves the performance of the basic linear classifier learned a posteriori on the $\alpha$'s, showing that our learned dictionaries are discriminative per se. Figure 3 also shows a dictionary adapted to the reconstruction of the MNIST dataset and a discriminative one, adapted to "9 vs all".
5.2 Texture classification
In the digit recognition task, our bilinear framework did not perform better than the linear one L. We believe that one of the main reasons is the simplicity of the task, where a linear model is rich enough. The purpose of our next experiment is to answer the question "When is BL worth using?". We have chosen to consider two texture images from the Brodatz dataset, presented in Figure 4, and to build two classes, composed of 12 x 12 patches taken from these two textures. We have compared the classification performance of all our methods, including BL, for a dictionary of size $k = 64$ and $\lambda = 0.15$. The training set was composed of patches from the left half of each texture and the test set of patches from the right half, so that there is no overlap between training and test data. Error rates are reported in Table 2 for varying sizes of the training set. This experiment shows that in some cases, the linear model performs very poorly where BL does better. Discrimination helps especially when the size of the training set is large. Note that we did not perform any cross-validation to optimize the parameters $k$ and $\lambda$ for this experiment. Dictionaries obtained with REC and SDL-D BL are presented in Figure 4. Note that though they are visually quite similar, they lead to very different performances.
6 Conclusion
We have introduced in this paper a discriminative approach to supervised dictionary learning that
effectively exploits the corresponding sparse signal decompositions in image classification tasks, and
have proposed an effective method for learning a shared dictionary and multiple (linear or bilinear)
models. Future work will be devoted to adapting the proposed framework to shift-invariant models
that are standard in image processing tasks, but not readily generalized to the sparse dictionary
learning setting. We are also investigating extensions to unsupervised and semi-supervised learning
and applications to natural image classification.
Figure 4: Left: test textures ((a) Texture 1, (b) Texture 2). Right: reconstructive ((c) REC) and discriminative ((d) SDL-D BL) dictionaries.
Acknowledgments
This paper was supported in part by ANR under grant MGA. Guillermo Sapiro would like to thank
Fernando Rodriguez for insights into the learning of discriminatory sparsity patterns. His work is
partially supported by NSF, NGA, ONR, ARO, and DARPA.
References
[1] B. A. Olshausen and D. J. Field. Sparse coding with an overcomplete basis set: A strategy employed by V1? Vision Research, 37, 1997.
[2] M. Elad and M. Aharon. Image denoising via sparse and redundant representations over learned dictionaries. IEEE Trans. IP, 54(12), 2006.
[3] K. Huang and S. Aviyente. Sparse representation for signal classification. In NIPS, 2006.
[4] J. Wright, A. Y. Yang, A. Ganesh, S. Sastry, and Y. Ma. Robust face recognition via sparse representation. PAMI, 2008. To appear.
[5] R. Raina, A. Battle, H. Lee, B. Packer, and A. Y. Ng. Self-taught learning: transfer learning from unlabeled data. In ICML, 2007.
[6] J. Mairal, F. Bach, J. Ponce, G. Sapiro, and A. Zisserman. Learning discriminative dictionaries for local
image analysis. In CVPR, 2008.
[7] M. Ranzato and M. Szummer. Semi-supervised learning of compact document representations with deep
networks. In ICML, 2008.
[8] A. Argyriou, T. Evgeniou, and M. Pontil. Multi-task feature learning. In NIPS, 2006.
[9] F. Rodriguez and G. Sapiro. Sparse representations for image classification: Learning discriminative and
reconstructive non-parametric dictionaries. IMA Preprint 2213, 2007.
[10] D. Blei and J. McAuliffe. Supervised topic models. In NIPS, 2007.
[11] A. Holub and P. Perona. A discriminative framework for modeling object classes. In CVPR, 2005.
[12] J.A. Lasserre, C.M. Bishop, and T.P. Minka. Principled hybrids of generative and discriminative models.
In CVPR, 2006.
[13] R. Raina, Y. Shen, A. Y. Ng, and A. McCallum. Classification with hybrid generative/discriminative
models. In NIPS, 2004.
[14] R. R. Salakhutdinov and G. E. Hinton. Learning a non-linear embedding by preserving class neighbourhood structure. In AI and Statistics, 2007.
[15] H. Larochelle and Y. Bengio. Classification using discriminative restricted Boltzmann machines. In ICML, 2008.
[16] B. Efron, T. Hastie, I. Johnstone, and R. Tibshirani. Least angle regression. Ann. Stat., 32(2), 2004.
[17] E. T. Hale, W. Yin, and Y. Zhang. A fixed-point continuation method for l1-regularized minimization with
applications to compressed sensing. CAAM Tech Report TR07-07, 2007.
[18] M. Ranzato, C. Poultney, S. Chopra, and Y. LeCun. Efficient learning of sparse representations with an
energy-based model. In NIPS, 2006.
[19] M. Aharon, M. Elad, and A. M. Bruckstein. The K-SVD: An algorithm for designing overcomplete dictionaries for sparse representations. IEEE Trans. SP, 54(11), 2006.
[20] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition.
Proc. of the IEEE, 86(11), 1998.
[21] B. Haasdonk and D. Keysers. Tangent distance kernels for support vector machines. In ICPR, 2002.
Offline Handwriting Recognition with Multidimensional Recurrent Neural Networks
Alex Graves
TU Munich, Germany
[email protected]
Jürgen Schmidhuber
IDSIA, Switzerland and TU Munich, Germany
[email protected]
Abstract
Offline handwriting recognition, the automatic transcription of images of handwritten text, is a challenging task that combines computer vision with sequence
learning. In most systems the two elements are handled separately, with sophisticated preprocessing techniques used to extract the image features and sequential
models such as HMMs used to provide the transcriptions. By combining two recent innovations in neural networks?multidimensional recurrent neural networks
and connectionist temporal classification?this paper introduces a globally trained
offline handwriting recogniser that takes raw pixel data as input. Unlike competing
systems, it does not require any alphabet specific preprocessing, and can therefore
be used unchanged for any language. Evidence of its generality and power is provided by data from a recent international Arabic recognition competition, where it
outperformed all entries (91.4% accuracy compared to 87.2% for the competition
winner) despite the fact that neither author understands a word of Arabic.
1 Introduction
Offline handwriting recognition is generally observed to be harder than online handwriting recognition [14]. In the online case, features can be extracted from both the pen trajectory and the resulting
image, whereas in the offline case only the image is available. Nonetheless, the standard recognition
process is essentially the same: a sequence of features are extracted from the data, then matched to a
sequence of labels (usually characters or sub-character strokes) using either a hidden Markov model
(HMM) [9] or an HMM-neural network hybrid [10].
The main drawback of this approach is that the input features must meet the stringent independence
assumptions imposed by HMMs (these assumptions are somewhat relaxed in the case of hybrid
systems, but long-range input dependencies are still problematic). In practice this means the features
must be redesigned for every alphabet, and, to a lesser extent, for every language. For example it
would be impossible to use the same system to recognise both English and Arabic.
Following our recent success in transcribing raw online handwriting data with recurrent networks [6], we wanted to build an offline recognition system that would work on raw pixels. As well
as being alphabet-independent, such a system would have the advantage of being globally trainable,
with the image features optimised along with the classifier.
The online case was relatively straightforward, since the input data formed a 1D sequence that could
be fed directly to a recurrent network. The long short-term memory (LSTM) network architecture [8, 3] was chosen for its ability to access long-range context, and the connectionist temporal
classification [5] output layer allowed the network to transcribe the data with no prior segmentation.
The offline case, however, is more challenging, since the input is no longer one-dimensional. A
naive approach would be to present the images to the network one vertical line at a time, thereby
transforming them into 1D sequences. However such a system would be unable to handle distor1
Figure 1: Two dimensional MDRNN. The thick lines show connections to the current point (i, j).
The connections within the hidden layer plane are recurrent. The dashed lines show the scanning
strips along which previous points were visited, starting at the top left corner.
distortions along the vertical axis; for example the same image shifted up by one pixel would appear
completely different. A more flexible solution is offered by multidimensional recurrent neural networks (MDRNNs) [7]. MDRNNs, which are a special case of directed acyclic graph networks [1],
generalise standard RNNs by providing recurrent connections along all spatio-temporal dimensions
present in the data. These connections make MDRNNs robust to local distortions along any combination of input dimensions (e.g. image rotations and shears, which mix vertical and horizontal
displacements) and allow them to model multidimensional context in a flexible way. We use multidimensional LSTM because it is able to access long-range context.
The problem remains, though, of how to transform two-dimensional images into one-dimensional
label sequences. Our solution is to pass the data through a hierarchy of MDRNN layers, with
blocks of activations gathered together after each level. The heights of the blocks are chosen to
incrementally collapse the 2D images onto 1D sequences, which can then be labelled by the output
layer. Such hierarchical structures are common in computer vision [15], because they allow complex
features to be built up in stages. In particular our multilayered structure is similar to that used by
convolution networks [11], although it should be noted that because convolution networks are not
recurrent, they cannot be used for cursive handwriting recognition without presegmented inputs.
The method is described in detail in Section 2, experimental results are given in Section 3, and
conclusions and directions for future work are given in Section 4.
2 Method
The three components of our recognition system are: (1) multidimensional recurrent neural networks, and multidimensional LSTM in particular; (2) the connectionist temporal classification output layer; and (3) the hierarchical structure. In what follows we describe each component in turn,
then show how they fit together to form a complete system. For a more detailed description of (1)
and (2) we refer the reader to [4].
2.1 Multidimensional Recurrent Neural Networks
The basic idea of multidimensional recurrent neural networks (MDRNNs) [7] is to replace the single
recurrent connection found in standard recurrent networks with as many connections as there are
spatio-temporal dimensions in the data. These connections allow the network to create a flexible
internal representation of surrounding context, which is robust to localised distortions.
An MDRNN hidden layer scans through the input in 1D strips, storing its activations in a buffer. The
strips are ordered in such a way that at every point the layer has already visited the points one step
back along every dimension. The hidden activations at these previous points are fed to the current
point through recurrent connections, along with the input. The 2D case is illustrated in Fig. 1.
One such layer is sufficient to give the network access to all context against the direction of scanning from the current point (e.g. to the top and left of (i, j) in Fig. 1). However we usually want
surrounding context in all directions. The same problem exists in 1D networks, where it is often
useful to have information about the future as well as the past. The canonical 1D solution is bidirectional recurrent networks [16], where two separate hidden layers scan through the input forwards
and backwards. The generalisation of bidirectional networks to n dimensions requires 2n hidden
layers, starting in every corner of the n dimensional hypercube and scanning in opposite directions.
For example, a 2D network has four layers, one starting in the top left and scanning down and right,
one starting in the bottom left and scanning up and right, etc. All the hidden layers are connected to
a single output layer, which therefore receives information about all surrounding context.
The error gradient of an MDRNN can be calculated with an n-dimensional extension of backpropagation through time. As in the 1D case, the data is processed in the reverse order of the forward
pass, with each hidden layer receiving both the output derivatives and its own n "future" derivatives
at every timestep.
Let $a^p_j$ and $b^p_j$ be respectively the input and activation of unit $j$ at point $p = (p_1, \ldots, p_n)$ in an $n$-dimensional input sequence $x$ with dimensions $(D_1, \ldots, D_n)$. Let $p^-_d = (p_1, \ldots, p_d - 1, \ldots, p_n)$ and $p^+_d = (p_1, \ldots, p_d + 1, \ldots, p_n)$. Let $w_{ij}$ and $w^d_{ij}$ be respectively the weight of the feedforward connection from unit $i$ to unit $j$ and the recurrent connection from $i$ to $j$ along dimension $d$. Let $\theta_h$ be the activation function of hidden unit $h$, and for some unit $j$ and some differentiable objective function $O$ let $\delta^p_j = \frac{\partial O}{\partial a^p_j}$. Then the forward and backward equations for an $n$-dimensional MDRNN with $I$ input units, $K$ output units, and $H$ hidden summation units are as follows:

Forward Pass:
$$a^p_h = \sum_{i=1}^I x^p_i w_{ih} + \sum_{\substack{d=1 \\ p_d > 0}}^n \sum_{h'=1}^H b^{p^-_d}_{h'} w^d_{h'h}, \qquad b^p_h = \theta_h(a^p_h)$$

Backward Pass:
$$\delta^p_h = \theta'_h(a^p_h) \left( \sum_{k=1}^K \delta^p_k w_{hk} + \sum_{\substack{d=1 \\ p_d < D_d - 1}}^n \sum_{h'=1}^H \delta^{p^+_d}_{h'} w^d_{hh'} \right)$$
2.1.1 Multidimensional LSTM
Long Short-Term Memory (LSTM) [8, 3] is an RNN architecture designed for data with long-range interdependencies. An LSTM layer consists of recurrently connected "memory cells", whose activations are controlled by three multiplicative gate units: the input gate, forget gate and output gate. The gates allow the cells to store and retrieve information over time, giving them access to long-range context.
The standard formulation of LSTM is explicitly one-dimensional, since each cell contains a single recurrent connection, whose activation is controlled by a single forget gate. However we can extend this to $n$ dimensions by using instead $n$ recurrent connections (one for each of the cell's previous states along every dimension) with $n$ forget gates.
Consider an MDLSTM memory cell in a hidden layer of $H$ cells, connected to $I$ input units and $K$ output units. The subscripts $c$, $\iota$, $\phi$ and $\omega$ refer to the cell, input gate, forget gate and output gate respectively. $b^p_h$ is the output of cell $h$ in the hidden layer at point $p$ in the input sequence, and $s^p_c$ is the state of cell $c$ at $p$. $f_1$ is the activation function of the gates, and $f_2$ and $f_3$ are respectively the cell input and output activation functions. The suffix $\phi, d$ denotes the forget gate corresponding to recurrent connection $d$. The input gate $\iota$ is connected to previous cell $c$ along all dimensions with the same weight ($w_{c\iota}$) whereas the forget gates are connected to cell $c$ with a separate weight $w_{c(\phi,d)}$ for each dimension $d$. Then the forward and backward equations are as follows:
Forward Pass:

Input Gate:
$$b^p_\iota = f_1\left( \sum_{i=1}^I x^p_i w_{i\iota} + \sum_{\substack{d=1 \\ p_d > 0}}^n \left( w_{c\iota}\, s^{p^-_d}_c + \sum_{h=1}^H b^{p^-_d}_h w^d_{h\iota} \right) \right)$$

Forget Gate:
$$b^p_{\phi,d} = f_1\left( \sum_{i=1}^I x^p_i w_{i(\phi,d)} + \sum_{\substack{d'=1 \\ p_{d'} > 0}}^n \sum_{h=1}^H b^{p^-_{d'}}_h w^{d'}_{h(\phi,d)} + \begin{cases} w_{c(\phi,d)}\, s^{p^-_d}_c & \text{if } p_d > 0 \\ 0 & \text{otherwise} \end{cases} \right)$$

Cell:
$$a^p_c = \sum_{i=1}^I x^p_i w_{ic} + \sum_{\substack{d=1 \\ p_d > 0}}^n \sum_{h=1}^H b^{p^-_d}_h w^d_{hc}$$

State:
$$s^p_c = b^p_\iota f_2(a^p_c) + \sum_{\substack{d=1 \\ p_d > 0}}^n s^{p^-_d}_c\, b^p_{\phi,d}$$

Output Gate:
$$b^p_\omega = f_1\left( \sum_{i=1}^I x^p_i w_{i\omega} + \sum_{\substack{d=1 \\ p_d > 0}}^n \sum_{h=1}^H b^{p^-_d}_h w^d_{h\omega} + w_{c\omega}\, s^p_c \right)$$

Cell Output:
$$b^p_c = b^p_\omega f_3(s^p_c)$$

Backward Pass:

Cell Output:
$$\epsilon^p_c \stackrel{\text{def}}{=} \frac{\partial O}{\partial b^p_c} = \sum_{k=1}^K \delta^p_k w_{ck} + \sum_{\substack{d=1 \\ p_d < D_d - 1}}^n \sum_{h=1}^H \delta^{p^+_d}_h w^d_{ch}$$

Output Gate:
$$\delta^p_\omega = f_1'(a^p_\omega)\, \epsilon^p_c\, f_3(s^p_c)$$

State:
$$\epsilon^p_s \stackrel{\text{def}}{=} \frac{\partial O}{\partial s^p_c} = b^p_\omega f_3'(s^p_c)\, \epsilon^p_c + \delta^p_\omega w_{c\omega} + \sum_{\substack{d=1 \\ p_d < D_d - 1}}^n \left( \epsilon^{p^+_d}_s b^{p^+_d}_{\phi,d} + \delta^{p^+_d}_\iota w_{c\iota} + \delta^{p^+_d}_{\phi,d} w_{c(\phi,d)} \right)$$

Cell:
$$\delta^p_c = b^p_\iota f_2'(a^p_c)\, \epsilon^p_s$$

Forget Gate:
$$\delta^p_{\phi,d} = \begin{cases} f_1'(a^p_{\phi,d})\, s^{p^-_d}_c \epsilon^p_s & \text{if } p_d > 0 \\ 0 & \text{otherwise} \end{cases}$$

Input Gate:
$$\delta^p_\iota = f_1'(a^p_\iota)\, f_2(a^p_c)\, \epsilon^p_s$$
2.2 Connectionist Temporal Classification
Connectionist temporal classification (CTC) [5] is an output layer designed for sequence labelling
with RNNs. Unlike other neural network output layers it does not require pre-segmented training
data, or postprocessing to transform its outputs into transcriptions. Instead, it trains the network to
directly estimate the conditional probabilities of the possible labellings given the input sequences.
A CTC output layer contains one more unit than there are elements in the alphabet L of labels for the
task. The output activations are normalised at each timestep with the softmax activation function [2].
The first |L| outputs estimate the probabilities of observing the corresponding labels at that time, and
the extra output estimates the probability of observing a ?blank?, or no label. The combined output
sequence estimates the joint probability of all possible alignments of the input sequence with all
sequences of labels and blanks. The probability of a particular labelling can then be estimated by
summing over the probabilities of all the alignments that correspond to it.
More precisely, for a length $T$ input sequence $x$, the CTC outputs define a probability distribution over the set $L'^T$ of length $T$ sequences over the alphabet $L' = L \cup \{\text{blank}\}$. To distinguish them from labellings, we refer to the elements of $L'^T$ as paths. Since the probabilities of the labels at each timestep are conditionally independent given $x$, the conditional probability of a path $\pi \in L'^T$ is given by $p(\pi|x) = \prod_{t=1}^T y^t_{\pi_t}$, where $y^t_k$ is the activation of output unit $k$ at time $t$.
Paths are mapped onto labellings $l \in L^{\le T}$ by an operator $\mathcal{B}$ that removes first the repeated labels, then the blanks. So for example, both $\mathcal{B}(a, -, a, b, -)$ and $\mathcal{B}(-, a, a, -, -, a, b, b)$ yield the labelling $(a, a, b)$. Since the paths are mutually exclusive, the conditional probability of some labelling $l \in L^{\le T}$ is the sum of the probabilities of all paths corresponding to it: $p(l|x) = \sum_{\pi \in \mathcal{B}^{-1}(l)} p(\pi|x)$. Although a naive calculation of this sum is unfeasible, it can be efficiently evaluated with a dynamic programming algorithm, similar to the forward-backward algorithm for HMMs.
To allow for blanks in the output paths, for each labelling $l \in L^{\le T}$ consider a modified labelling $l' \in L'^{\le T}$, with blanks added to the beginning and the end and inserted between every pair of labels. The length $|l'|$ of $l'$ is therefore $2|l| + 1$.
For a labelling $l$, define the forward variable $\alpha_t(s)$ as the summed probability of all path beginnings reaching index $s$ of $l'$ at time $t$, and the backward variables $\beta_t(s)$ as the summed probability of all path endings that would complete the labelling $l$ if the path beginning had reached $s$ at time $t$. Both the forward and backward variables are calculated recursively [5]. The label sequence probability is given by the sum of the products of the forward and backward variables at any timestep, i.e.
$$p(l|x) = \sum_{s=1}^{|l'|} \alpha_t(s)\beta_t(s).$$
Let $S$ be a training set, consisting of pairs of input and target sequences $(x, z)$, where $|z| \le |x|$. Then the objective function $O$ for CTC is the negative log probability of the network correctly labelling all of $S$: $O = -\sum_{(x,z) \in S} \ln p(z|x)$. The network can be trained with gradient descent by first differentiating $O$ with respect to the outputs, then using backpropagation through time to find the derivatives with respect to the weights.
Note that the same label (or blank) may be repeated several times for a single labelling $l$. We define the set of positions where label $k$ occurs as $lab(l, k) = \{s : l'_s = k\}$, which may be empty.
Setting $l = z$ and differentiating $O$ with respect to the network outputs, we obtain:
$$\frac{\partial O}{\partial a^t_k} = -\frac{\partial \ln p(z|x)}{\partial a^t_k} = y^t_k - \frac{1}{p(z|x)} \sum_{s \in lab(z,k)} \alpha_t(s)\beta_t(s),$$
where $a^t_k$ and $y^t_k$ are respectively the input and output of CTC unit $k$ at time $t$ for some $(x, z) \in S$.
Once the network is trained, we can label some unknown input sequence $x$ by choosing the labelling $l^\star$ with the highest conditional probability, i.e. $l^\star = \arg\max_l p(l|x)$. In cases where a dictionary is used, the labelling can be constrained to yield only sequences of complete words by using the CTC token passing algorithm [6]. For the experiments in this paper, the labellings were further constrained to give single word sequences only, and the ten most probable words were recorded.
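The forward recursion for $p(z|x)$ can be sketched as follows, working in log space for numerical stability; the blank index and the interface are our choices.

```python
import numpy as np

def ctc_log_likelihood(log_probs, labels, blank=0):
    """log p(z|x) via the CTC forward recursion.
    log_probs: (T, |L'|) log-softmax network outputs; labels: target z."""
    T = log_probs.shape[0]
    # l': blanks at both ends and between every pair of labels, |l'| = 2|z|+1.
    lp = [blank]
    for k in labels:
        lp += [k, blank]
    S = len(lp)
    alpha = np.full(S, -np.inf)
    alpha[0] = log_probs[0, blank]
    if S > 1:
        alpha[1] = log_probs[0, lp[1]]
    for t in range(1, T):
        new = np.full(S, -np.inf)
        for s in range(S):
            terms = [alpha[s]]
            if s > 0:
                terms.append(alpha[s - 1])
            # A blank may be skipped only between two *different* labels.
            if s > 1 and lp[s] != blank and lp[s] != lp[s - 2]:
                terms.append(alpha[s - 2])
            new[s] = np.logaddexp.reduce(terms) + log_probs[t, lp[s]]
        alpha = new
    # Paths may end on the last label or the final blank.
    return np.logaddexp(alpha[-1], alpha[-2]) if S > 1 else alpha[-1]

T, V = 6, 4
uniform = np.log(np.full((T, V), 1.0 / V))
print(ctc_log_likelihood(uniform, [1, 2, 1]))
```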
2.3 Network Hierarchy
Many computer vision systems use a hierarchical approach to feature extraction, with the features
at each level used as input to the next level [15]. This allows complex visual properties to be built
up in stages. Typically, such systems use subsampling, with the feature resolution decreased at each
stage. They also generally have more features at the higher levels. The basic idea is to progress from
a small number of simple local features to a large number of complex global features.
We created a hierarchical structure by repeatedly composing MDLSTM layers with feedforward
layers. The basic procedure is as follows: (1) the image is divided into small pixel blocks, each of
which is presented as a single input to the first set of MDLSTM layers (e.g. a 4x3 block is reduced
to a length 12 vector). If the image does not divide exactly into blocks, it is padded with zeros.
(2) the four MDLSTM layers scan through the pixel blocks in all directions. (3) the activations of
the MDLSTM layers are collected into blocks. (4) these blocks are given as input to a feedforward
layer. Note that all the layers have a 2D array of activations: e.g. a 10 unit feedforward layer with
input from a 5x5 array of MDLSTM blocks has a total of 250 activations.
The above process is repeated as many times as required, with the activations of the feedforward
layer taking the place of the original image. The purpose of the blocks is twofold: to collect local
contextual information, and to reduce the area of the activation arrays. In particular, we want to
reduce the vertical dimension, since the CTC output layer requires a 1D sequence as input. Note
that the blocks themselves do not reduce the overall amount of data; that is done by the layers that
process them, which are therefore analogous to the subsampling steps in other approaches (although
with trainable weights rather than a fixed subsampling function).
For most tasks we find that a hierarchy of three MDLSTM/feedforward stages gives the best results.
We use the standard ?inverted pyramid? structure, with small layers at the bottom and large layers at
the top. As well as allowing for more features at higher levels, this leads to efficient networks, since
most of the weights are concentrated in the upper layers, which have a smaller input area.
In general we cannot assume that the input images are of fixed size. Therefore it is difficult to choose
block heights that ensure that the final activation array will always be one-dimensional, as required
by CTC. A simple solution is to collapse the final array by summing over all the inputs in each vertical line, i.e. the input at time $t$ to CTC unit $k$ is given by $a^t_k = \sum_x a^{(x,t)}_k$, where $a^{(x,y)}_k$ is the uncollapsed input to unit $k$ at point $(x, y)$ in the final array.
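A sketch of the two array manipulations this section describes: gathering activations into blocks for the next layer, and the final vertical sum that produces CTC's 1D input. The (height, width, channels) layout is our assumption.

```python
import numpy as np

def gather_blocks(acts, bh, bw):
    """Collect an (H, W, C) activation array into bh x bw blocks, giving an
    (H/bh, W/bw, bh*bw*C) array; zero-padded if the image does not divide
    exactly into blocks, as in step (1) of the hierarchy."""
    H, W, C = acts.shape
    ph, pw = (-H) % bh, (-W) % bw
    acts = np.pad(acts, ((0, ph), (0, pw), (0, 0)))
    H2, W2 = acts.shape[0] // bh, acts.shape[1] // bw
    blocks = acts.reshape(H2, bh, W2, bw, C).transpose(0, 2, 1, 3, 4)
    return blocks.reshape(H2, W2, bh * bw * C)

def collapse_vertical(acts):
    """Sum over the vertical axis: a_k^t = sum_x a_k^(x,t)."""
    return acts.sum(axis=0)
```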
Figure 2: The complete recognition system. First the input image is collected into boxes 3 pixels
wide and 4 pixels high which are then scanned by four MDLSTM layers. The activations of the cells
in each layer are displayed separately, and the arrows in the corners indicates the scanning direction.
Next the MDLSTM activations are gathered into 4 x 3 boxes and fed to a feedforward layer of tanh
summation units. This process is repeated two more times, until the final MDLSTM activations are
collapsed to a 1D sequence and transcribed by the CTC layer. In this case all characters are correctly
labelled except the second last one, and the correct town name is chosen from the dictionary.
3 Experiments
To see how our method compared to the state of the art, we applied it to data from the ICDAR
2007 Arabic handwriting recognition competition [12]. Although we were too late to enter the
competition itself, the organisers kindly agreed to evaluate our system according to the competition
criteria. We did not receive the test data at any point, and all evaluations were carried out by them.
The goal of the competition was to identify the postcodes of Tunisian town and village names. The
names are presented individually, so it is an isolated word recognition task. However we would
like to point out that our system is equally applicable to unconstrained handwriting, and has been
successfully applied to complete lines of English text.
3.1 Data
The competition was based on the IFN/ENIT database of handwritten Arabic words [13]. The
publicly available data consists of 32,492 images of handwritten Tunisian town names, of which
we used 30,000 for training and 2,492 for validation. The images were extracted from artificial
Table 1: Results on the ICDAR 2007 Arabic handwriting recognition contest. All scores are
percentages of correctly identified postcodes. The systems are ordered by the "top 1" results on test
set "f". The best score in each column belongs to our system.

                     SET f                      SET s
SYSTEM        top 1   top 5   top 10     top 1   top 5   top 10
CACI-3        14.28   29.88   37.91      10.68   21.74   30.20
CACI-2        15.79   21.34   22.33      14.24   19.39   20.53
CEDAR         59.01   78.76   83.70      41.32   61.98   69.87
MITRE         61.70   81.61   85.69      49.91   70.50   76.48
UOB-ENST-1    79.10   87.69   90.21      64.97   78.39   82.20
PARIS V       80.18   91.09   92.98      64.38   78.12   82.13
ICRA          81.47   90.07   92.15      72.22   82.84   86.27
UOB-ENST-2    81.65   90.81   92.35      69.61   83.79   85.89
UOB-ENST-4    81.81   88.71   90.40      70.57   79.85   83.34
UOB-ENST-3    81.93   91.20   92.76      69.93   84.11   87.03
SIEMENS-1     82.77   92.37   93.92      68.09   81.70   85.19
MIE           83.34   91.67   93.48      68.40   80.93   83.73
SIEMENS-2     87.22   94.05   95.42      73.94   85.44   88.18
Ours          91.43   96.12   96.75      78.83   88.00   91.05
forms filled in by over 400 Tunisian people. The forms were designed to simulate writing on a
letter, and contained no lines or boxes to constrain the writing style.
Each image was supplied with a ground truth transcription for the individual characters¹. There were
120 distinct characters in total. A list of 937 town names and postcodes was provided. Many of the
town names had transcription variants, giving a total of 1,518 entries in the complete dictionary.
The test data (which is not published) was divided into sets "f" and "s". The main competition results
were based on set "f". Set "s" contains data collected in the United Arab Emirates using the same
forms; its purpose was to test the robustness of the recognisers to regional writing variations. The
systems were allowed to choose up to 10 postcodes for each image, in order of preference. The test
set performance using the top 1, top 5, and top 10 answers was recorded by the organisers.
3.2 Network Parameters
The structure shown in Figure 2 was used, with each layer fully connected to the next layer in the
hierarchy, all MDLSTM layers connected to themselves, and all units connected to a bias weight.
There were 159,369 weights in total. This may sound like a lot, but as mentioned in Section 2.3, the
"inverted pyramid" structure greatly reduces the actual number of weight operations. In effect the
higher up networks (where the vast majority of the weights are concentrated) are processing much
smaller images than those lower down. The squashing function for the gates was the logistic sigmoid
f1(x) = 1/(1 + e^{-x}), while tanh was used for f2 and f3. Each pass through the training set took
about an hour on a desktop computer, and the network converged after 85 passes.
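For reference, the squashing functions mentioned above can be written as follows (our own definitions, matching the text rather than extracted code):

import numpy as np

def f1(x):
    # Logistic sigmoid, used for the input, forget and output gates.
    return 1.0 / (1.0 + np.exp(-x))

f2 = np.tanh   # cell input squashing
f3 = np.tanh   # cell output squashing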
The complete system was trained with online gradient descent, using a learning rate of 10^{-4} and
a momentum of 0.9. The character error rate was evaluated on the validation set after every pass
through the training set, and training was stopped after 50 evaluations with no improvement. The
weights giving the lowest error rate on the validation set were passed to the competition organisers
for assessment on the test sets.
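Schematically, this training procedure is ordinary online gradient descent with momentum plus validation-based early stopping. A sketch of the loop (ours; `grad` and `char_error_rate` are assumed helper functions, not names from the paper):

import numpy as np

def train(weights, grad, char_error_rate, training_set, validation_set,
          lr=1e-4, mu=0.9, patience=50):
    # Online gradient descent with momentum; stop after `patience` validation
    # evaluations without improvement and return the best weights seen.
    velocity = np.zeros_like(weights)
    best_err, best_weights, stale = np.inf, weights.copy(), 0
    while stale < patience:
        for x, y in training_set:                        # one pass = one epoch
            velocity = mu * velocity - lr * grad(weights, x, y)
            weights = weights + velocity
        err = char_error_rate(weights, validation_set)   # evaluated every pass
        if err < best_err:
            best_err, best_weights, stale = err, weights.copy(), 0
        else:
            stale += 1
    return best_weights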
3.3 Results
Table 1 clearly shows that our system outperformed all entries in the 2007 ICDAR Arabic recognition contest. The other systems, most of which are based on hidden Markov models, are identified
by the names of the groups that submitted them (see [12] for more information).
¹ At first we forgot that Arabic reads right to left and presented the transcriptions backwards. The system
performed surprisingly well, with a character error rate of 17.8%, compared to 10.7% for the correct targets.
4 Conclusions and Future Work
We have combined multidimensional LSTM with connectionist temporal classification and a hierarchical layer structure to create a powerful offline handwriting recogniser. The system is very general,
and has been successfully applied to English as well as Arabic. Indeed, since the dimensionality of
the networks can be changed to match that of the data, it could in principle be used for almost any
supervised sequence labelling task.
Acknowledgements
We would like to thank Haikal El Abed for giving us access to the ICDAR competition data, and
for persisting in the face of technical despair to install and evaluate our software. This work was
supported by the excellence cluster "Cognition for Technical Systems" (CoTeSys) from the German
Research Foundation (DFG).
References
[1] P. Baldi and G. Pollastri. The principled design of large-scale recursive neural network architectures: DAG-RNNs and the protein structure prediction problem. J. Mach. Learn. Res., 4:575–602, 2003.
[2] J. S. Bridle. Probabilistic interpretation of feedforward classification network outputs, with relationships to statistical pattern recognition. In F. Fogelman-Soulie and J. Herault, editors, Neurocomputing: Algorithms, Architectures and Applications, pages 227–236. Springer-Verlag, 1990.
[3] F. Gers, N. Schraudolph, and J. Schmidhuber. Learning precise timing with LSTM recurrent networks. Journal of Machine Learning Research, 3:115–143, 2002.
[4] A. Graves. Supervised Sequence Labelling with Recurrent Neural Networks. PhD thesis.
[5] A. Graves, S. Fernández, F. Gomez, and J. Schmidhuber. Connectionist temporal classification: Labelling unsegmented sequence data with recurrent neural networks. In Proceedings of the International Conference on Machine Learning, ICML 2006, Pittsburgh, USA, 2006.
[6] A. Graves, S. Fernández, M. Liwicki, H. Bunke, and J. Schmidhuber. Unconstrained online handwriting recognition with recurrent neural networks. In J. Platt, D. Koller, Y. Singer, and S. Roweis, editors, Advances in Neural Information Processing Systems 20. MIT Press, Cambridge, MA, 2008.
[7] A. Graves, S. Fernández, and J. Schmidhuber. Multidimensional recurrent neural networks. In Proceedings of the 2007 International Conference on Artificial Neural Networks, Porto, Portugal, September 2007.
[8] S. Hochreiter and J. Schmidhuber. Long Short-Term Memory. Neural Computation, 9(8):1735–1780, 1997.
[9] J. Hu, S. G. Lim, and M. K. Brown. Writer independent on-line handwriting recognition using an HMM approach. Pattern Recognition, 33:133–147, 2000.
[10] S. Jaeger, S. Manke, J. Reichert, and A. Waibel. On-line handwriting recognition: the NPen++ recognizer. International Journal on Document Analysis and Recognition, 3:169–180, 2001.
[11] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, November 1998.
[12] V. Märgner and H. E. Abed. Arabic handwriting recognition competition. In ICDAR '07: Proceedings of the Ninth International Conference on Document Analysis and Recognition (ICDAR 2007), Vol 2, pages 1274–1278, Washington, DC, USA, 2007. IEEE Computer Society.
[13] M. Pechwitz, S. S. Maddouri, V. Märgner, N. Ellouze, and H. Amiri. IFN/ENIT - database of handwritten Arabic words. In 7th Colloque International Francophone sur l'Ecrit et le Document (CIFED 2002), Hammamet, Tunis, 2002.
[14] R. Plamondon and S. N. Srihari. On-line and off-line handwriting recognition: a comprehensive survey. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2000.
[15] M. Riesenhuber and T. Poggio. Hierarchical models of object recognition in cortex. Nature Neuroscience, 2(11):1019–1025, 1999.
[16] M. Schuster and K. K. Paliwal. Bidirectional recurrent neural networks. IEEE Transactions on Signal Processing, 45:2673–2681, November 1997.
2,702 | 3,450 | Recursive Segmentation and Recognition Templates
for 2D Parsing
Long (Leo) Zhu
CSAIL MIT
[email protected]
Yuanhao Chen
USTC
[email protected]
Chenxi Lin
Microsoft Research Asia
[email protected]
Yuan Lin
Shanghai Jiaotong University
[email protected]
Alan Yuille
UCLA
[email protected]
Abstract
Language and image understanding are two major goals of artificial intelligence
which can both be conceptually formulated in terms of parsing the input signal
into a hierarchical representation. Natural language researchers have made great
progress by exploiting the 1D structure of language to design efficient polynomial-time parsing algorithms. By contrast, the two-dimensional nature of images makes
it much harder to design efficient image parsers and the form of the hierarchical
representations is also unclear. Attempts to adapt representations and algorithms
from natural language have only been partially successful.
In this paper, we propose a Hierarchical Image Model (HIM) for 2D image parsing which outputs image segmentation and object recognition. This HIM is represented by recursive segmentation and recognition templates in multiple layers
and has advantages for representation, inference, and learning. Firstly, the HIM
has a coarse-to-fine representation which is capable of capturing long-range dependency and exploiting different levels of contextual information. Secondly, the
structure of the HIM allows us to design a rapid inference algorithm, based on dynamic programming, which enables us to parse the image rapidly in polynomial
time. Thirdly, we can learn the HIM efficiently in a discriminative manner from
a labeled dataset. We demonstrate that HIM outperforms other state-of-the-art
methods by evaluation on the challenging public MSRC image dataset. Finally,
we sketch how the HIM architecture can be extended to model more complex
image phenomena.
1 Introduction
Language and image understanding are two major tasks in artificial intelligence. Natural language
researchers have formalized this task in terms of parsing an input signal into a hierarchical representation. They have made great progress in both representation and inference (i.e. parsing). Firstly,
they have developed probabilistic grammars (e.g. stochastic context free grammar (SCFG) [1] and
beyond [2]) which are capable of representing complex syntactic and semantic language phenomena. For example, speech contains elementary constituents, such as nouns and verbs, that can be
recursively composed into a hierarchy of constituents (e.g. noun phrases or verb phrases) of increasing complexity. Secondly, they have exploited the one-dimensional structure of language to obtain efficient
polynomial-time parsing algorithms (e.g. the inside-outside algorithm [3]).
By contrast, the nature of images makes it much harder to design efficient image parsers which are
capable of simultaneously performing segmentation (parsing an image into regions) and recognition (labeling the regions). Firstly, it is unclear what hierarchical representations should be used to
model images and there are no direct analogies to the syntactic categories and phrase structures that
occur in speech. Secondly, the inference problem is formidable due to the well-known complexity
and ambiguity of segmentation and recognition. Unlike most languages (Chinese is an exception),
whose constituents are well-separated words, the boundaries between different image regions are
usually highly unclear. Exploring all the different image partitions results in combinatorial explosions because of the two-dimensional nature of images (which makes it impossible to order these
partitions to enable dynamic programming). Overall it has been hard to adapt methods from natural
language parsing and apply them to vision despite the high-level conceptual similarities (except for
restricted problems such as text [4]).
Attempts at image parsing must make trade-offs between the complexity of the models and the
complexity of the computation (for inference and learning). Broadly speaking, recent attempts can
be divided into two different styles. The first style emphasizes the modeling problem and develops
stochastic grammars [5, 6] capable of representing a rich class of visual relationships and conceptual
knowledge about objects, scenes, and images. This style of research pays less attention to the complexity of computation. Learning is usually performed, if at all, only for individual components of
the models. Parsing is performed by MCMC sampling and is only efficient provided effective proposal probabilities can be designed [6]. The second style builds on the success of conditional random
fields (CRFs) [7] and emphasizes efficient computation. This yields simpler (discriminative) models which are less capable of representing complex image structures and long-range interactions.
Efficient inference (e.g. belief propagation and graph-cuts) and learning (e.g. AdaBoost, MLE)
are available for basic CRFs and make these methods attractive. But these inference algorithms
become less effective, and can fail, if we attempt to make the CRF models more powerful. For example, TextonBoost [8] requires the parameters of the CRF to be tuned manually. Overall, it seems
hard to extend the CRF style methods to include long-range relationships and contextual knowledge
without significantly altering the models and the algorithms.
In this paper, we introduce Hierarchical Image Models (HIMs) for image parsing. HIMs balance
the trade-off between model and inference complexity by introducing a hierarchy of hidden states.
In particular, we introduce recursive segmentation and recognition templates which represent complex image knowledge and serve as elementary constituents analogous to those used in speech. As
in speech, we can recursively compose these constituents at lower levels to form more complex
constituents at higher level. Each node of the hierarchy corresponds to an image region (whose size
depends on the level in the hierarchy). The state of each node represents both the partitioning of
the corresponding region into segments and the labeling of these segments (i.e. in terms of objects).
Segmentations at the top levels of the hierarchy give coarse descriptions of the image which are
refined by the segmentations at the lower levels. Learning and inference (parsing) are made efficient
by exploiting the hierarchical structure (and the absence of loops). In short, this novel architecture
offers two advantages: (I) Representation: the hierarchical model using segmentation templates is
able to capture long-range dependencies and exploit different levels of contextual information; (II)
Computation: the hierarchical tree structure enables rapid inference (polynomial time) and learning
by variants of dynamic programming (with pruning) and the use of machine learning (e.g. structured
perceptrons [9]).
To illustrate the HIM we implement it for parsing images and we evaluate it on the public MSRC
image dataset [8]. Our results show that the HIM outperforms the other state-of-the-art approaches.
We discuss ways that HIMs can be extended naturally to model more complex image phenomena.
2 Hierarchical Image Model
2.1 The Model
We represent an image by a hierarchical graph defined by parent-child relationships. See figure 1.
The hierarchy corresponds to the image pyramid (with 5 layers in this paper). The top node of the
hierarchy represents the whole image. The intermediate nodes represent different sub-regions of the
image. The leaf nodes represent local image patches (27 × 27 in this paper). We use a to index
nodes of the hierarchy. A node a has only one parent node denoted by Pa(a) and four child nodes
denoted by Ch(a). Thus, the hierarchy is a quad tree and Ch(a) encodes all its vertical edges. The
image region represented by node a is denoted by R(a). A pixel in R(a), indexed by r, corresponds
to an image pixel. The set of pairs of neighbor pixels in R(a) is denoted by E(a).
A configuration of the hierarchy is an assignment of state variables y = {ya } with ya = (sa , ca )
at each node a, where s and c denote region partition and object labeling, respectively and (s, c) is
called the "Segmentation and Recognition" pair, which we call an S-R pair. All state variables are
Figure 1:
The left panel shows the structure of the Hierarchical Image Model. The grey circles are the nodes of the hierarchy. All nodes,
except the top node, have only one parent nodes. All nodes except the leafs are connected to four child nodes. The middle panel shows a
dictionary of 30 segmentation templates. The color of the sub-parts of each template indicates the object class. Different sub-parts may share
the same label. For example, three sub-parts may have only two distinct labels. The last panel shows that the ground truth pixel labels (upper
right panel) can be well approximated by composing a set of labeled segmentation templates (bottom right panel).
Figure 2:
This figure illustrates how the segmentation templates and object labels (S-R pair) represent image regions in a coarse-to-fine
way. The left figure is the input image which is followed by global, mid-level and local S-R pairs. The global S-R pair gives a coarse description
of the object identity (horse), its background (grass), and its position in the image (central). The mid-level S-R pair corresponds to the region
bounded by the black box in the input image. It represents (roughly) the shape of the horse's leg. The four S-R pairs at the lower level combine
to represent the same leg more accurately.
unobservable. More precisely, each region R(a) is described by a segmentation template which is
selected from a dictionary D_S. Each segmentation template consists of a partition of the region into
K non-overlapping sub-parts, see figure 1. In this paper K ≤ 3, |D_S| = 30, and the segmentation
templates are designed by hand to cover the taxonomy of shape segmentations that happen in images,
such as T-junctions, Y-junctions, and so on. The variable s refers to the index of the segmentation
template in the dictionary, i.e., s_a ∈ {1, ..., |D_S|}. c gives the object labels of the K sub-parts (i.e. it labels
one sub-part as "horse", another as "dog", and another as "grass"). Hence c_a is a K-dimensional vector
whose components take values 1, ..., M, where M is the number of object classes. The labeling of
a pixel r in region R(a) is denoted by o_a^r ∈ {1, ..., M} and is directly obtained from s_a, c_a. Any two
pixels belonging to the same sub-part share the same label. The labeling o_a^r is defined at the level of
node a. In other words, each level of the hierarchy has a separate labeling field. We will show how
our model encourages the labelings o_a^r at different levels to be consistent.
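To make the S-R pair concrete: if each segmentation template is stored as an integer mask assigning every pixel of the region to one of its K sub-parts, the pixel labeling o is a simple lookup. A minimal sketch under this assumed data layout (ours, not the authors' code), with a toy two-template dictionary:

import numpy as np

# Toy dictionary of segmentation templates for an 8x8 region; each template is
# an integer mask with values in {0, ..., K-1} indexing its sub-parts.
left_right = np.zeros((8, 8), dtype=int); left_right[:, 4:] = 1
top_bottom = np.zeros((8, 8), dtype=int); top_bottom[4:, :] = 1
D_S = [left_right, top_bottom]

def pixel_labels(s, c):
    # Pixel labeling o induced by an S-R pair: template index s, sub-part labels c.
    return np.asarray(c)[D_S[s]]          # lookup: sub-part index -> object class

o = pixel_labels(s=0, c=[7, 3])           # e.g. class 7 on the left, class 3 on the right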
A novel feature of this hierarchical representation is the multi-level S-R pairs which explicitly model
both the segmentation and labeling of its corresponding region, while traditional vision approaches
[8, 10, 11] use labeling only. The S-R pairs defined in a hierarchical form provide a coarse-to-fine
representation which captures the "gist" (semantical meaning) of image regions. As one can see
in figure 2, the global S-R pair gives a coarse description (the identities of objects and their spatial
layout) of the whole image which is accurate enough to encode high level image properties in a
compact form. The mid-level one represents the leg of a horse roughly. The four templates at the
lower level further refine the interpretations. We will show this approximation quality empirically
in section 3.
The conditional distribution over all the states is given by:
p(y|x; θ) = (1/Z(x; θ)) exp{−E_1(x, s, c; θ_1) − E_2(x, s, c; θ_2) − E_3(s, c; θ_3) − E_4(c; θ_4) − E_5(s; θ_5) − E_6(s, c; θ_6)}    (1)
where x refers to the input image, y is the parse tree, θ are the parameters to be estimated, Z(x; θ)
is the partition function and the E_i(x, y) are energy terms. Equivalently, the conditional distribution can
be reformulated in a log-linear form:
log p(y|x; θ) = φ(x, y) · θ − log Z(x; θ)    (2)
Each energy term is of linear form, E_i(x, y) = −φ_i(x, y) · θ_i, where the inner product is calculated
on potential functions defined over the hierarchical structure. There are six types of energy terms
defined as follows.
The first term E_1(x, s, c) is an object-specific data term which represents image features of regions.
We set E_1(x, s, c) = −Σ_a θ_1 φ_1(x, s_a, c_a), where Σ_a is the summation over all nodes at different
levels of the hierarchy, and φ_1(x, s_a, c_a) is of the form:
φ_1(x, s_a, c_a) = (1/|R(a)|) Σ_{r∈R(a)} log p(o_a^r | x)    (3)
where p(o_a^r | x) = exp{F(x_r, o_a^r)} / Σ_{o'} exp{F(x_r, o')}, x_r is a local image region centered at the location of r, and
F(·, ·) is a strong classifier output by multi-class boosting [12]. The image features used by the
classifier (47 in total) are the greyscale intensity, the color (R,G, B channels), the intensity gradient,
the Canny edge, the response of DOG (difference of Gaussians) and DOOG (Difference of Offset
Gaussian) filters at different scales (13 × 13 and 22 × 22) and orientations (0, 30, 60, ...), and so on. We
use 55 types of shape (spatial) filters (similar to [8]) to calculate the responses of the 47 image features.
There are 2585 = 47 × 55 features in total.
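In code, the data term combines a per-pixel softmax over the classifier scores with an average of log-probabilities over the region. A sketch assuming the scores F(x_r, o) are precomputed as an array (ours, not the released implementation):

import numpy as np

def phi_1(scores, o):
    # scores: (num_pixels, M) classifier outputs F(x_r, o) for the pixels of R(a);
    # o: (num_pixels,) pixel labels induced by the S-R pair (s_a, c_a).
    s = scores - scores.max(axis=1, keepdims=True)            # numerical stability
    log_p = s - np.log(np.exp(s).sum(axis=1, keepdims=True))  # per-pixel log-softmax
    return log_p[np.arange(len(o)), o].mean()                 # mean log p(o_a^r | x)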
The second term (segmentation specific) E_2(x, s, c) = −Σ_a θ_2 φ_2(x, s_a, c_a) is designed to favor
the segmentation templates in which the pixels belonging to the same partitions (i.e., having the
same labels) have similar appearance. We define:
φ_2(x, s_a, c_a) = (1/|E(a)|) Σ_{(q,r)∈E(a)} ψ(x_r, x_q | o_a^r, o_a^q)    (4)
where E(a) is the set of edges connecting pixels q, r in a neighborhood, and ψ(x_r, x_q | o_a^r, o_a^q) has the
form ψ(x_r, x_q | o_a^r, o_a^q) = γ(r, q) if o_a^r = o_a^q, and 0 if o_a^r ≠ o_a^q, where γ(r, q) = λ exp{−g²(r, q)/(2σ²)} · 1/dist(r, q),
g(·, ·) is a distance measure on the colors x_r, x_q and dist(r, q) measures the spatial distance between
r and q. ψ(x_r, x_q | o_a^r, o_a^q) is the so-called contrast-sensitive Potts model, which is widely used in
graph-cut algorithms [13] as an edge potential (at one level only) to favor pixels with similar colour
having the same labels.
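A sketch of this edge potential for one pair of neighbouring pixels (our own reading of the formula above; g is taken to be Euclidean distance in color space, and λ, σ are free parameters):

import numpy as np

def psi(color_r, color_q, o_r, o_q, pos_r, pos_q, lam=1.0, sigma=10.0):
    # Contrast-sensitive Potts potential: nonzero only for equal labels,
    # larger when the two pixel colors are similar and the pixels are close.
    if o_r != o_q:
        return 0.0
    g2 = float(np.sum((np.asarray(color_r, float) - np.asarray(color_q, float)) ** 2))
    dist = float(np.linalg.norm(np.asarray(pos_r, float) - np.asarray(pos_q, float)))
    return lam * np.exp(-g2 / (2.0 * sigma ** 2)) / dist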
The third term, defined as E_3(s, c) = −Σ_{a, b=Pa(a)} θ_3 φ_3(s_a, c_a, s_b, c_b) (i.e. the nodes a at all
levels are considered and b is the parent of a), is proposed to encourage the consistency between
the configurations of every pair of parent-child nodes in two consecutive layers. φ_3(s_a, c_a, s_b, c_b) is
defined by the Hamming distance:
φ_3(s_a, c_a, s_b, c_b) = (1/|R(a)|) Σ_{r∈R(a)} δ(o_a^r, o_b^r)    (5)
where δ(o_a^r, o_b^r) is the Kronecker delta, which equals one whenever o_a^r = o_b^r and zero otherwise. The
Hamming function glues the segmentation templates (and their labels) at different levels
together in a consistent hierarchical form. This energy term is a generalization of the interaction
energy in the Potts model. However, E3 (s, c) has a hierarchical form which allows multi-level
interactions.
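In code, the consistency potential is just the fraction of pixels on which a node agrees with its parent's labeling restricted to R(a); e.g.:

import numpy as np

def phi_3(o_child, o_parent):
    # o_child, o_parent: integer label arrays over the same region R(a)
    # (the parent's labeling restricted to the child's region).
    return float(np.mean(o_child == o_parent))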
The fourth term E4 (c) is designed to model the co-occurrence of two object classes (e.g., a cow is
unlikely to appear next to an aeroplane):
E_4(c) = −Σ_a Σ_{i,j=1..M} θ_4(i, j) φ_4(i, j, c_a, c_a) − Σ_{a, b=Pa(a)} Σ_{i,j=1..M} θ_4(i, j) φ_4(i, j, c_a, c_b)    (6)
where φ_4(i, j, c_a, c_b) is an indicator function which equals one when i ∈ c_a and j ∈ c_b (i ∈ c_a
means i is a component of c_a) hold true, and zero otherwise. θ_4 is a matrix where each entry θ_4(i, j)
encodes the compatibility between two classes i and j. The first term on the r.h.s encodes the classes
in a single template while the second term encodes the classes in two templates of the parent-child
nodes. It is worth noting that class dependency is encoded at all levels to capture both short-range
and long-range interactions.
The fifth term E_5(s) = −Σ_a θ_5 φ_5(s_a), where φ_5(s_a) = log p(s_a), encodes the generic prior of
the segmentation template. Similarly, the sixth term E_6(s, c) = −Σ_a Σ_{j∈c_a} θ_6 φ_6(s_a, j), where
φ_6(s_a, j) = log p(s_a, j), models the co-occurrence of the segmentation templates and the object
classes. φ_5(s_a) and φ_6(s_a, j) are directly obtained from training data by label counting. The parameters θ_5 and θ_6 are both scalars.
Justifications. The HIM has several partial similarities with other work. HIM is a coarse-to-fine
representation which captures the "gist" of image regions by using the S-R pairs at multiple levels.
But the traditional concept of "gist" [14] relies only on image features and does not include segmentation templates. Levin and Weiss [15] use a segmentation mask which is more object-specific than
our segmentation templates (and they do not have a hierarchy). It is worth noting that, in contrast
to TextonBoost [8], we do not use "location features", in order to avoid the dangers of overfitting to
a restricted set of scene layouts. Our approach has some similarities to some hierarchical models
(which have only two layers) [10, 11], but these models also lack segmentation templates. The
hierarchial model proposed by [16] is an interesting alternative but which does not perform explicit
segmentation.
2.2 Parsing by Dynamic Programming
Parsing an image is performed as inference of the HIM. More precisely, the task of parsing is to
obtain the maximum a posteriori (MAP) parse:
y* = arg max_y p(y|x; θ) = arg max_y φ(x, y) · θ    (7)
The size of the state space of each node is O(M^K |D_S|), where K = 3, M = 21, |D_S| = 30 in our case.
Since the form of y is a tree, Dynamic Programming (DP) can be applied to calculate the best parse
tree y* according to equation (7). Note that the pixel label o_a is determined by (s, c), so we only
need to consider a subset of pixel labelings. This is unlike a flat MRF representation, where we need to do
exhaustive search over all pixel labels o (which would be impractical for DP). The final output of
the model for segmentation is the pixel labeling determined by the (s, c) of the lowest level.
It is straightforward to see that the computational complexity of DP is O(M^{2K} |D_S|^2 H), where H is
the number of edges of the hierarchy. Although DP can be performed in polynomial time, the huge
number of states make exact DP still impractical. Therefore, we resort to a pruned version of DP
similar to the method described in [17]. For brevity we omit the details.
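Because the hierarchy is a tree, the exact computation behind equation (7) is a single bottom-up max-sum sweep followed by backtracking. A generic sketch of tree-structured DP (ours; `unary[a]` lists the scores of node a's combined (s, c) states and `pairwise(b, sp, sc)` scores a parent-child state pair, both assumed precomputed):

def dp_parse(children, unary, pairwise, root):
    # children: dict node -> list of child nodes; unary: dict node -> list of scores.
    table, best_child_state = {}, {}
    def sweep(a):                                  # bottom-up pass
        table[a] = list(unary[a])
        for b in children.get(a, []):
            sweep(b)
            best_child_state[b] = []
            for sp in range(len(table[a])):
                sc = max(range(len(table[b])),
                         key=lambda s: pairwise(b, sp, s) + table[b][s])
                table[a][sp] += pairwise(b, sp, sc) + table[b][sc]
                best_child_state[b].append(sc)     # argmax child state given sp
    sweep(root)
    states = {root: max(range(len(table[root])), key=lambda s: table[root][s])}
    stack = [root]                                 # top-down backtracking
    while stack:
        a = stack.pop()
        for b in children.get(a, []):
            states[b] = best_child_state[b][states[a]]
            stack.append(b)
    return table[root][states[root]], states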
2.3 Learning the Model
Since HIM is a conditional model, in principle, estimation of its parameters can be achieved by
any discriminative learning approach, such as maximum likelihood learning as used in Conditional
Random Field (CRF) [7], max-margin learning [18], and structure-perceptron learning [9]. In this
paper, we adopt the structure-perceptron learning which has been applied for learning the recursive
deformable template (see paper [19]). Note that structure-perceptron learning is simple to implement and only needs to calculate the most probable configurations (parses) of the model. By contrast, maximum likelihood learning requires calculating the expectation of features which is difficult
due to the large states of HIM. Therefore, structure-perceptron learning is more flexible and computationally simpler. Moreover, Collins [9] proved theoretical results for convergence properties, for
both separable and non-separable cases, and for generalization.
The structure-perceptron learning will not compute the partition function Z(x; ?). Therefore we do
not have a formal probabilistic interpretation. The goal of structure-perceptron learning is to learn
a mapping from inputs x ? X to output structure y ? Y . In our case, X is a set of images, with
Y being a set of possible parse trees which specify the labels of image regions in a hierarchical
form. It seems that the ground truth of parsing trees needs all labels of both segmentation template
and pixel labelings. In our experiment, we will show that how to obtain the ground truth directly
from the segmentation labels without extra human labeling. We use a set of training examples
{(xi , yi ) : i = 1...n} and a set of functions ? which map each (x, y) ? X ? Y to a feature vector
?(x, y) ? Rd . The task is to estimate a parameter vector ? ? Rd for the weights of the features.
The feature vectors ?(x, y) can include arbitrary features of parse trees, as we discussed in section
2.1. The loss function used in structure-perceptron learning is usually of form:
Loss(θ) = −φ(x, y) · θ + max_ȳ φ(x, ȳ) · θ,    (8)
Input: A set of training images with ground truth (x_i, y_i) for i = 1..N. Initialize the parameter vector θ = 0.
For t = 1..T, i = 1..N:
  - Find the best state of the model on the i-th training image with the current parameter setting, i.e., y* = arg max_y φ(x_i, y) · θ
  - Update the parameters: θ = θ + φ(x_i, y_i) − φ(x_i, y*)
  - Store: θ^{t,i} = θ
Output: Parameters θ̄ = Σ_{t,i} θ^{t,i} / (NT)
Figure 3:
Structure-perceptron learning
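A direct transcription of Figure 3 into Python (a sketch assuming `features(x, y)` returns φ(x, y) as a vector and `parse(x, theta)` solves arg max_y φ(x, y) · θ, e.g. by the dynamic program above):

import numpy as np

def structure_perceptron(data, features, parse, dim, T=20):
    # Averaged structure-perceptron of Figure 3; data is a list of (x, y_truth).
    theta = np.zeros(dim)
    theta_sum = np.zeros(dim)
    for _ in range(T):
        for x, y_truth in data:
            y_star = parse(x, theta)                          # current best parse
            theta += features(x, y_truth) - features(x, y_star)
            theta_sum += theta                                # running sum for averaging
    return theta_sum / (T * len(data))                        # averaged parameters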
where y is the correct structure for input x, and ȳ is a dummy variable.
The basic structure-perceptron algorithm is designed to minimize the loss function. We adapt the
"averaged parameters" version, whose pseudo-code is given in figure 3. The algorithm proceeds in
a simple way (similar to the perceptron algorithm for classification). The parameters are initialized
to zero and the algorithm loops over the training examples. If the highest scoring parse tree for
input x is not correct, then the parameters ? are updated by an additive term. The most difficult
step of the method is finding y ? = arg maxy ?(xi , y) ? ?. This is precisely the parsing (inference)
problem. Hence the practicality of structure-perceptron learning, and its computational efficiency,
depends on the inference algorithm. As discussed earlier, see section 2.2, the inference algorithm
has polynomial computational complexity for an HIM which makes structure-perceptron learning
practical for HIM. The averaged parameters are defined to be θ̄ = Σ_{t=1}^{T} Σ_{i=1}^{N} θ^{t,i} / (NT), where T
is the number of epochs, N T is the total number of iterations. It is straightforward to store these
averaged parameters and output them as the final estimates.
3 Experimental Results
Dataset. We use a standard public dataset, the MSRC 21-class Image Dataset [8], to perform experimental evaluations for the HIM. This dataset is designed to evaluate scene labeling including both
image segmentation and multi-class object recognition. The ground truth only gives the labeling of
the image pixels. To supplement this ground truth (to enable learning), we estimate the true labels
(states of the S-R pair ) of the nodes in the five-layer hierarchy of HIM by selecting the S-R pairs
which have maximum overlap with the labels of the image pixels. This approximation only results
in 2% error in labeling image pixels. There are a total of 591 images. We use the identical splitting
as [8], i.e., 45% for training, 10% for validation, and 45% for testing. The parameters learnt from
the training set, with the best performance on validation set, are selected.
Implementation Details. For a given image x, the parsing result is obtained by estimating the best
configuration y ? of the HIM. To evaluate the performance of parsing we use the global accuracy
measured in terms of all pixels and the average accuracy over the 21 object classes (global accuracy
pays most attention to frequently occurring objects and penalizes infrequent objects). A computer
with 8 GB memory and 2.4 GHz CPU was used for training and testing. For each class, there are
around 4,500 weak classifiers selected by multi-class boosting. The boosting learning takes about
35 hours, of which 27 hours are spent on I/O processing and 8 hours on computing. The structure-perceptron learning takes about 20 hours to converge in 5520 (T = 20, N = 276) iterations. In the
testing stage, it takes 30 seconds to parse an image of size 320 × 200 (6s for extracting image
features, 9s for computing the strong classifier of boosting and 15s for parsing the HIM).
Results. Figure 4 (best viewed in color) shows several parsing results obtained by the HIM and by
the classifier by itself (i.e. p(ora |x) learnt by boosting). One can see that the HIM is able to roughly
capture different shaped segmentation boundaries (see the legs of the cow and sheep in rows 1 and
3, and the boundary curve between sky and building in row 4). Table 1 shows that HIM improves
the results obtained by the classifier by 6.9% for average accuracy and 5.3% for global accuracy. In
particular, in rows 6 and 7 in figure 4, one can observe that boosting gives many incorrect labels.
It is impossible to correct such large mislabeled regions without the long-range interactions in the
HIM, which improves the results by 20% and 32%.
Comparisons. In table 1, we compare the performance of our approach with other successful methods [8, 20, 21]. Our approach outperforms those alternatives by 6% in average accuracy and 4%
in global accuracy. Our boosting results are better than Textonboost [8] because of image features.
Would we get better results if we use a flat CRF with our boosting instead of a hierarchy? We argue
that we would not because the CRF only improves TextonBoost?s performance by 3 percent [8],
while we gain 5 percent by using the hierarchy (and we start with a higher baseline). Some other
6
Figure 4:
This figure is best viewed in color. The colors indicate the labels of 21 object classes as in [8]. The columns (except the fourth
"accuracy" column) show the input images, ground truth, the labels obtained by HIM and the boosting classifier respectively. The "accuracy"
column shows the global accuracy obtained by HIM (left) and the boosting classifier (right). In these 7 examples, HIM improves boosting by
1%, -1% (an outlier!), 1%, 10%, 18%, 20% and 32% in terms of global accuracy.
           Textonboost [8]   PLSA-MRF [20]   Auto-context [21]   Classifier only   HIM
Average    57.7              64.0            68                  67.2              74.1
Global     72.2              73.5            77.7                75.9              81.2
Table 1:
Performance comparisons for average accuracy and global accuracy. "Classifier only" shows the results where the pixel labels are
predicted by the classifier obtained by boosting only.
methods [22, 11, 10], which are worse than [20, 21] and evaluated on simpler datasets [10, 11] (less
than 10 classes), are not listed here due to lack of space. In summary, our results are significantly
better than the state-of-the-art methods.
Diagnosis on the function of S-R Pair. Figure 5 shows how the S-R pairs (which include the
segmentation templates) can be used to (partially) parse an object into its constituent parts, by the
correspondence between S-R pairs and specific parts of objects. We plot the states of a subset of S-R
pairs for some images. For example, the S-R pair consisting of two horizontal bars labeled "cow"
and "grass" respectively indicates the cow's stomach consistently across different images. Similarly,
the cow's tail can be located according to the configuration of another S-R pair with vertical bars.
In principle, the whole object can be parsed into its constituent parts which are aligned consistently.
Developing this idea further is an exciting aspect of our current research.
4 Conclusion
This paper describes a novel hierarchical image model (HIM) for 2D image parsing. The hierarchical
nature of the model, and the use of recursive segmentation and recognition templates, enables the
HIM to represent complex image structures in a coarse-to-fine manner. We can perform inference
(parsing) rapidly in polynomial time by exploiting the hierarchical structure. Moreover, we can learn
the HIM probability distribution from labeled training data by adapting the structure-perceptron
algorithm. We demonstrated the effectiveness of HIM?s by applying them to the challenging task of
segmentation and labeling of the public MSRC image database. Our results show that we outperform
other state-of-the-art approaches.
Figure 5:
The S-R pairs can be used to parse the object into parts. The colors indicate the identities of objects. The shapes (spacial layout)
of the segmentation templates distinguish the constituent parts of the object. Observe that the same S-R pairs (e.g. stomach above grass, and
tail to the left of grass) correspond to the same object part in different images.
The design of the HIM was motivated by drawing parallels between language and vision processing.
We have attempted to capture the underlying spirit of the successful language processing approaches:
the hierarchical representations based on the recursive composition of constituents and efficient
inference and learning algorithms. Our current work attempts to extend the HIMs to improve their
representational power while maintaining computational efficiency.
5 Acknowledgments
This research was supported by NSF grant 0413214 and the W.M. Keck foundation.
References
[1] F. Jelinek and J. D. Lafferty, "Computation of the probability of initial substring generation by stochastic context-free grammars," Computational Linguistics, vol. 17, no. 3, pp. 315–323, 1991.
[2] M. Collins, "Head-driven statistical models for natural language parsing," Ph.D. Thesis, University of Pennsylvania, 1999.
[3] K. Lari and S. J. Young, "The estimation of stochastic context-free grammars using the inside-outside algorithm," in Computer Speech and Language, 1990.
[4] M. Shilman, P. Liang, and P. A. Viola, "Learning non-generative grammatical models for document analysis," in Proceedings of IEEE International Conference on Computer Vision, 2005, pp. 962–969.
[5] Z. Tu and S. C. Zhu, "Image segmentation by data-driven markov chain monte carlo," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 5, pp. 657–673, 2002.
[6] Z. Tu, X. Chen, A. L. Yuille, and S. C. Zhu, "Image parsing: Unifying segmentation, detection, and recognition," in Proceedings of IEEE International Conference on Computer Vision, 2003, pp. 18–25.
[7] J. D. Lafferty, A. McCallum, and F. C. N. Pereira, "Conditional random fields: Probabilistic models for segmenting and labeling sequence data," in Proceedings of International Conference on Machine Learning, 2001, pp. 282–289.
[8] J. Shotton, J. M. Winn, C. Rother, and A. Criminisi, "TextonBoost: Joint appearance, shape and context modeling for multi-class object recognition and segmentation," in Proceedings of European Conference on Computer Vision, 2006, pp. 1–15.
[9] M. Collins, "Discriminative training methods for hidden markov models: theory and experiments with perceptron algorithms," in Proceedings of the Annual Meeting of the Association for Computational Linguistics Conference on Empirical Methods in Natural Language Processing, 2002, pp. 1–8.
[10] X. He, R. S. Zemel, and M. Á. Carreira-Perpiñán, "Multiscale conditional random fields for image labeling," in Proceedings of IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2004, pp. 695–702.
[11] S. Kumar and M. Hebert, "A hierarchical field framework for unified context-based classification," in Proceedings of IEEE International Conference on Computer Vision, 2005, pp. 1284–1291.
[12] E. L. Allwein, R. E. Schapire, and Y. Singer, "Reducing multiclass to binary: A unifying approach for margin classifiers," Journal of Machine Learning Research, vol. 1, pp. 113–141, 2000.
[13] Y. Boykov and M.-P. Jolly, "Interactive graph cuts for optimal boundary and region segmentation of objects in n-d images," in Proceedings of IEEE International Conference on Computer Vision, 2001, pp. 105–112.
[14] A. Oliva and A. Torralba, "Building the gist of a scene: the role of global image features in recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 155, pp. 23–36, 2006.
[15] A. Levin and Y. Weiss, "Learning to combine bottom-up and top-down segmentation," in Proceedings of European Conference on Computer Vision, 2006, pp. 581–594.
[16] E. B. Sudderth, A. B. Torralba, W. T. Freeman, and A. S. Willsky, "Learning hierarchical models of scenes, objects, and parts," in Proceedings of IEEE International Conference on Computer Vision, 2005, pp. 1331–1338.
[17] Y. Chen, L. Zhu, C. Lin, A. L. Yuille, and H. Zhang, "Rapid inference on a novel and/or graph for object detection, segmentation and parsing," in Advances in Neural Information Processing Systems, 2007.
[18] B. Taskar, D. Klein, M. Collins, D. Koller, and C. Manning, "Max-margin parsing," in Proceedings of the Annual Meeting of the Association for Computational Linguistics Conference on Empirical Methods in Natural Language Processing, 2004.
[19] L. Zhu, Y. Chen, X. Ye, and A. L. Yuille, "Structure-perceptron learning of a hierarchical log-linear model," in Proceedings of IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2008.
[20] J. Verbeek and B. Triggs, "Region classification with markov field aspect models," in Proceedings of IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2007.
[21] Z. Tu, "Auto-context and its application to high-level vision tasks," in Proceedings of IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2008.
[22] J. Verbeek and B. Triggs, "Scene segmentation with CRFs learned from partially labeled images," in Advances in Neural Information Processing Systems, vol. 20, 2008.
2,703 | 3,451 | Tighter Bounds for Structured Estimation
Chuong B. Do, Quoc Le
Stanford University
{chuongdo,quocle}@cs.stanford.edu
Choon Hui Teo
Australian National University and NICTA
[email protected]
Olivier Chapelle, Alex Smola
Yahoo! Research
[email protected],[email protected]
Abstract
Large-margin structured estimation methods minimize a convex upper bound of
loss functions. While they allow for efficient optimization algorithms, these convex formulations are not tight and sacrifice the ability to accurately model the true
loss. We present tighter non-convex bounds based on generalizing the notion of
a ramp loss from binary classification to structured estimation. We show that a
small modification of existing optimization algorithms suffices to solve this modified problem. On structured prediction tasks such as protein sequence alignment
and web page ranking, our algorithm leads to improved accuracy.
1 Introduction
Structured estimation [18, 20] and related techniques has proven very successful in many areas
ranging from collaborative filtering to optimal path planning, sequence alignment, graph matching
and named entity tagging.
At the heart of those methods is an inverse optimization problem, namely that of finding a function f(x, y) such that the prediction y* which maximizes f(x, y*) for a given x minimizes some
loss Δ(y, y*) on a training set. Typically x ∈ X is referred to as a pattern, whereas y ∈ Y is a
corresponding label. Y can represent a rich class of possible data structures, ranging from binary
sequences (tagging), to permutations (matching and ranking), to alignments (sequence matching),
to path plans [15]. To make such inherently discontinuous and nonconvex optimization problems
tractable, one applies a convex upper bound on the incurred loss. This has two benefits: firstly, the
problem has no local minima, and secondly, the optimization problem is continuous and piecewise
differentiable, which allows for effective optimization [17, 19, 20]. This setting, however, exhibits a
significant problem: the looseness of the convex upper bounds can sometimes lead to poor accuracy.
For binary classification, [2] proposed to switch from the hinge loss, a convex upper bound, to
a tighter nonconvex upper bound, namely the ramp loss. Their motivation was not the accuracy
though, but the faster optimization due to the decreased number of support vectors. The resulting
optimization uses the convex-concave procedure of [22], which is well known in optimization as the
DC-programming method [9].
We extend the notion of ramp loss to structured estimation. We show that with some minor modifications, the DC algorithms used in the binary case carry over to the structured setting. Unlike
the binary case, however, we observe that for structured prediction problems with noisy data, DC
programming can lead to improved accuracy in practice. This is due to increased robustness. Effectively, the algorithm discards observations which it labels incorrectly if the error is too large. This
ensures that one ends up with a lower-complexity solution while ensuring that the "correctable"
errors are taken care of.
2 Structured Estimation
Denote by X the set of patterns and let Y be the set of labels. We will denote by X := {x1 , . . . , xm }
the observations and by Y := {y1 , . . . , ym } the corresponding set of labels. Here the pairs (xi , yi )
are assumed to be drawn from some distribution Pr on X × Y.
Let f : X × Y → R be a function defined on the product space. Finally, denote by Δ : Y × Y → R₀⁺ a loss function which maps pairs of labels to nonnegative numbers. This could be, for instance, the number of bits in which y and y′ differ, i.e. Δ(y, y′) = ‖y − y′‖₁, or considerably more complicated
loss functions, e.g., for ranking and retrieval [21]. We want to find f such that for
y*(x, f) := argmax_{y′} f(x, y′)    (1)
the loss Δ(y, y*(x, f)) is minimized: given X and Y we want to minimize the regularized risk,
R_reg[f, X, Y] := (1/m) Σ_{i=1}^m Δ(y_i, y*(x_i, f)) + λΩ[f].    (2)
Here Ω[f] is a regularizer, such as an RKHS norm Ω[f] = ‖f‖²_H, and λ > 0 is the associated regularization constant, which safeguards us against overfitting. Since (2) is notoriously hard to minimize, several convex upper bounds have been proposed to make Δ(y_i, y*(x_i, f)) tractable in f. The following lemma, which is a generalization of a result of [20], provides a strategy for convexification:
Lemma 1 Denote by Γ : R₀⁺ → R₀⁺ a monotonically increasing nonnegative function. Then
l(x, y, y″, f) := sup_{y′} Γ(Δ(y, y′)) [f(x, y′) − f(x, y″)] + Δ(y, y′) ≥ Δ(y, y*(x, f))
for all y, y″ ∈ Y. Moreover, l(x, y, y″, f) is convex in f.
Proof Convexity follows immediately from the fact that l is the supremum over linear functions in f. To see the inequality, plug y′ = y*(x, f) into the LHS of the inequality: by construction f(x, y*(x, f)) ≥ f(x, y″) for all y″ ∈ Y. ∎
In regular convex structured estimation, l(x, y, y, f) is used. Methods in [18] choose the constant function Γ(ξ) = 1, whereas methods in [20] choose margin rescaling by means of Γ(ξ) = ξ. This also shows why both formulations lead to convex upper bounds of the loss. It depends very much on the form of f and Δ which choice of Γ is easier to handle. Note that the inequality holds for all y″ rather than only for the "correct" label y″ = y. We will exploit this later.
3 A Tighter Bound
For convenience denote by δ(x, y, y′, f) the relative margin between y and y′ induced by f via
δ(x, y, y′, f) := Γ(Δ(y, y′))[f(x, y′) − f(x, y)].    (3)
The loss bound of Lemma 1 suffers from a significant problem: for large values of f the loss may
grow without bound, provided that the estimate is incorrect. This is not desirable since in this
setting even a single observation may completely ruin the quality of the convex upper bound on the
misclassification error.
Another case where the convex upper bound is not desirable is the following: imagine that there are
a lot of y which are as good as the label in the training set; this happens frequently in ranking where
there are ties between the optimal permutations. Let us denote by Y_opt := {y″ such that Δ(y, y′) = Δ(y″, y′), ∀y′} this set of equally good labels. Then one can replace y by any element of Y_opt in the bound of Lemma 1. Minimization over y″ ∈ Y_opt leads to a tighter non-convex upper bound:
l(x, y, y, f) ≥ inf_{y″ ∈ Y_opt} sup_{y′} δ(x, y″, y′, f) + Δ(y″, y′) ≥ Δ(y, y*(x, f)).
In the case of binary classification, [2] proposed the following non-convex loss that can be minimized using DC programming:
l(x, y, f) := min(1, max(0, 1 − yf(x))) = max(0, 1 − yf(x)) − max(0, −yf(x)).    (4)
We see that (4) is the difference between a soft-margin loss and a hinge loss. That is, the difference
between a loss using a large margin related quantity and one using simply the violation of the margin.
This difference ensures that l cannot increase without bound, since in the limit the derivative of l
with respect to f vanishes. The intuition for extending this to structured losses is that the generalized
hinge loss underestimates the actual loss whereas the soft margin loss overestimates the actual loss.
Taking the difference removes linear scaling behavior while retaining the continuous properties.
Lemma 2 Denote as follows the rescaled estimate and the margin violator
ỹ(x, y, f) := argmax_{y′} δ(x, y, y′, f)   and   ŷ(x, y, f) := argmax_{y′} δ(x, y, y′, f) + Δ(y, y′)    (5)
Moreover, denote by l(x, y, f) the following loss function
l(x, y, f) := sup_{y′} [δ(x, y, y′, f) + Δ(y, y′)] − sup_{y′} δ(x, y, y′, f).    (6)
Then under the assumptions of Lemma 1 the following bound holds
Δ(y, ŷ(x, y, f)) ≥ l(x, y, f) ≥ Δ(y, y*(x, f))    (7)
This loss is a difference between two convex functions, hence it may be (approximately) minimized by a DC programming procedure. Moreover, it is easy to see that for Γ(ξ) = 1 and f(x, y) = ½ y f(x) and y ∈ {±1} we recover the ramp loss of (4).
Proof Since ŷ(x, y, f) maximizes the first term in (6), replacing y′ by ŷ(x, y, f) in both terms yields
l(x, y, f) ≤ δ(x, y, ŷ, f) + Δ(y, ŷ) − δ(x, y, ŷ, f) = Δ(y, ŷ).
To show the lower bound, we distinguish the following two cases:
Case 1: y* is a maximizer of sup_{y′} δ(x, y, y′, f).
Replacing y′ by y* in both terms of (6) leads to l(x, y, f) ≥ Δ(y, y*).
Case 2: y* is not a maximizer of sup_{y′} δ(x, y, y′, f).
Let ỹ be any maximizer. Because f(x, y*) ≥ f(x, ỹ), we have Γ(Δ(y, ỹ))[f(x, y*) − f(x, y)] ≥ Γ(Δ(y, ỹ))[f(x, ỹ) − f(x, y)] > Γ(Δ(y, y*))[f(x, y*) − f(x, y)] and thus Γ(Δ(y, ỹ)) > Γ(Δ(y, y*)). Since Γ is non-decreasing this implies Δ(y, ỹ) > Δ(y, y*). On the other hand, plugging ỹ into (6) gives l(x, y, f) ≥ Δ(y, ỹ). Combining both inequalities proves the claim. ∎
Note that the main difference between the cases of constant Γ and monotonic Γ is that in the latter case the bounds are not quite as tight as they could potentially be, since we still have some slack with respect to Δ(y, ŷ). Monotonic Γ tend to overscale the margin such that more emphasis is placed on avoiding large deviations from the correct estimate rather than restricting small deviations.
Note that this nonconvex upper bound is not likely to be Bayes consistent. However, it will generate
solutions which have a smaller model complexity since it is never larger than the convex upper bound
on the loss, hence the regularizer on f plays a more important role in regularized risk minimization.
As a consequence one can expect better statistical concentration properties.
4 DC Programming
We briefly review the basic template of DC programming, as described in [22]. For a function
f(x) = f_cave(x) + f_vex(x)
which can be expressed as the sum of a convex f_vex and a concave f_cave function, we can find a convex upper bound by f_cave(x₀) + ⟨x − x₀, f′_cave(x₀)⟩ + f_vex(x). This follows from the first-order Taylor expansion of the concave part f_cave at the current value of x. Subsequently, this upper bound is minimized, a new Taylor approximation is computed, and the procedure is repeated. This will lead to a local minimum, as shown in [22].
We now proceed to deriving an explicit instantiation for structured estimation. To keep things simple,
in particular the representation of the functional subgradients of l(x, y, f ) with respect to f , we
assume that f is drawn from a Reproducing Kernel Hilbert Space H.
Algorithm 1 Structured Estimation with Tighter Bounds
Using the loss of Lemma 1 initialize f = argmin_{f′} Σ_{i=1}^m l(x_i, y_i, y_i, f′) + λΩ[f′]
repeat
  Compute ỹ_i := ỹ(x_i, y_i, f) for all i.
  Using the tightened loss bound recompute f = argmin_{f′} Σ_{i=1}^m l̄(x_i, y_i, ỹ_i, f′) + λΩ[f′]
until converged
Denote by k the kernel associated with H, defined on (X × Y) × (X × Y). In this case for f ∈ H we have by the reproducing property that f(x, y) = ⟨f, k((x, y), ·)⟩ and the functional derivative is given by ∂_f f(x, y) = k((x, y), ·). Likewise we may perform the linearization in (6) as follows:
−sup_{y′} δ(x, y, y′, f) ≤ −δ(x, y, ỹ, f)
In other words, we use the rescaled estimate ỹ to provide an upper bound on the concave part of the loss function. This leads to the following instantiation of the standard convex-concave procedure: instead of the structured estimation loss it uses the loss bound l̄(x, y, ỹ, f)
l̄(x, y, ỹ, f) := sup_{y′ ∈ Y} [δ(x, y, y′, f) + Δ(y, y′)] − δ(x, y, ỹ, f)
In the case of Γ(ξ) = 1 this can be simplified significantly: the terms in f(x, y) cancel and l̄ becomes
l̄(x, y, ỹ, f) = sup_{y′ ∈ Y} [f(x, y′) − f(x, ỹ)] + Δ(y, y′).
In other words, we replace the correct label y by the rescaled estimate ỹ. Such modifications can be easily implemented in bundle method solvers and related algorithms which only require access to the gradient information (and the function value). In fact, the above strategy follows directly from Lemma 1 when replacing y″ by the rescaled estimate ỹ.
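As an illustration, a schematic Python rendering of Algorithm 1 could look as follows; the two oracles argmax_f and solve_convex are hypothetical interfaces that a concrete structured problem would have to supply, and the fixed-point test is deliberately crude:

    def dc_structured_train(X, Y, argmax_f, solve_convex, max_outer=10):
        # Schematic DC loop for the nonconvex structured loss.
        # argmax_f(x, f)         -> label maximizing f(x, y'), i.e. the rescaled
        #                           estimate ytilde when Gamma = 1
        # solve_convex(X, Y, Yt) -> f minimizing sum_i lbar(x_i, y_i, yt_i, f)
        #                           + lambda * Omega[f] (a convex structured solver)
        f = solve_convex(X, Y, list(Y))       # initialize with yt_i = y_i (Lemma 1 loss)
        Yt_prev = None
        for _ in range(max_outer):
            Yt = [argmax_f(x, f) for x in X]  # linearize the concave part at current f
            if Yt == Yt_prev:                 # linearization unchanged: local optimum
                break
            f = solve_convex(X, Y, Yt)        # minimize the tightened bound lbar
            Yt_prev = Yt
        return f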
5 Experiments
5.1 Multiclass Classification
In this experiment, we investigate the performance of convex and ramp loss versions of the Winner-Takes-All multiclass classification [1] when the training data is noisy. We performed the experiments
on some UCI/Statlog datasets: DNA, LETTER, SATIMAGE, SEGMENT, SHUTTLE, and USPS,
with some fixed percentages of the labels shuffled, respectively. Note that we reshuffled the labels
in a stratified fashion. That is, we chose a fixed fraction from each class and we permuted the label
assignment subsequently.
Table 1 shows the results (average accuracy ± standard deviation) on several datasets with different
percentages of labels shuffled. We used nested 10-fold crossvalidation to adjust the regularization
constant and to compute the accuracy. A linear kernel was used. It can be seen that ramp loss
outperforms the convex upper bound when the datasets are noisy. For clean data the convex upper
bound is slightly superior, albeit not in a statistically significant fashion. This supports our conjecture
that, compared to the convex upper bound, the ramp loss is more robust on noisy datasets.
5.2 Ranking with Normalized Discounted Cumulative Gains
Recently, [12] proposed a method for learning to rank for web search. They compared several methods showing that optimizing the Normalized Discounted Cumulative Gains (NDCG) score using
a form of structured estimation yields best performance. The algorithm used a linear assignment
problem to deal with ranking.
In this experiment, we perform ranking experiments with the OHSUMED dataset which is publicly
available [13]. The dataset is already preprocessed and split into 5 folds. We first carried out the
structured output training algorithm which optimizes the convex upper bound of NDCG as described
in [21]. Unfortunately, the returned solution was f = 0. The convex upper bounds led to the
Table 1: Average accuracy for multiclass classification using the convex upper bound and the ramp loss. The third through fifth columns represent results for datasets with none, 10%, and 20% of the labels randomly shuffled, respectively.

Dataset    Methods     0%           10%          20%
DNA        convex      95.2 ± 1.1   88.9 ± 1.5   83.1 ± 2.4
           ramp loss   95.1 ± 0.8   89.1 ± 1.3   83.5 ± 2.2
LETTER     convex      76.8 ± 0.9   64.6 ± 0.7   50.1 ± 1.4
           ramp loss   78.6 ± 0.8   70.8 ± 0.8   63.0 ± 1.5
SATIMAGE   convex      85.1 ± 0.9   77.0 ± 1.6   66.4 ± 1.3
           ramp loss   85.4 ± 1.2   78.1 ± 1.6   70.7 ± 1.0
SEGMENT    convex      95.4 ± 0.9   84.8 ± 2.3   73.8 ± 2.1
           ramp loss   95.2 ± 1.0   85.9 ± 2.1   77.5 ± 2.0
SHUTTLE    convex      97.4 ± 0.2   89.5 ± 0.2   83.8 ± 0.2
           ramp loss   97.1 ± 0.2   90.6 ± 0.8   88.1 ± 0.3
USPS       convex      95.1 ± 0.7   85.3 ± 1.3   76.5 ± 1.4
           ramp loss   95.1 ± 0.9   86.1 ± 1.6   77.6 ± 1.1
[Figure 1: NDCG@k (y-axis, range roughly 0.43-0.53) plotted against the truncation level k = 1, . . . , 10 (x-axis) for svmrank, rankboost, and our ndcg optimization.]
Figure 1: NDCG comparison against ranking SVM and RankBoost. We report the NDCG computed at various truncation levels. Our non-convex upper bound consistently outperforms other rankers. In the context of web page ranking an improvement of 0.01 - 0.02 in the NDCG score is considered substantial.
undesirable situation where no nonzero solution would yield any improvement, since the linear
function class was too simple.
This problem is related to the fact that there are a lot of rankings which are equally good because of
the ties in the editorial judgments (see beginning of section 3). As a result, there is no w that learns
the data well, and for each w the associated max_{y′} f(x, y′) − f(x, y) + Δ(y, y′) causes either the first part or the second part of the loss to be big, such that the total value of the loss function always exceeds max_{y′} Δ(y, y′).
When using the non-convex formulation the problem can be resolved because we do not entirely rely
on the y given in the training set, but instead find the y that minimizes the loss. We compared the
results of our method and two standard methods for ranking: ranking SVM [10, 8] and RankBoost
[6] (the baselines for OHSUMED are shown in [13]) and used NDCG as the performance criterion.
We report the aggregate performance in Figure 1.
As can be seen from the figure, the results from the new formulation are better than standard methods
for ranking. It is worth emphasizing that the most important contribution is not only that the new
formulation can give comparable results to the state-of-the-art algorithms for ranking but also that it
provides useful solutions when the convex structured estimation setting provides only useless results
(obviously f = 0 is highly undesirable).
5.3 Structured classification
We also assessed the performance of the algorithm on two different structured classification tasks for
computational biology, namely protein sequence alignment and RNA secondary structure prediction.
Protein sequence alignment is the problem of comparing the amino acid sequences corresponding to two different proteins in order to identify regions of the sequences which have common ancestry or biological function. In the pairwise sequence alignment task, the elements of the input space
X consist of pairs of amino acid sequences, represented as strings of approximately 100-1000 characters in length.
Method      0-10%    11-20%   21-30%   31-40%   Overall
            (324)    (793)    (429)    (239)    (1785)
CRF         0.111    0.316    0.634    0.877    0.430
convex      0.116    0.369    0.699    0.891    0.472
ramp loss   0.138    0.387    0.708    0.905    0.488

Table 2: Protein pairwise sequence alignment results, stratified by reference alignment percentage identity. The second through fifth columns refer to the four non-overlapping reference alignment percentage identity ranges described in the text, and the sixth column corresponds to overall results, pooled across all four subsets. Each non-bolded value represents the average test set recall for a particular algorithm on alignments from the corresponding subset. The numbers in parentheses indicate the total number of sequences in each subset.
Method      1-50           51-100         101-200        201+           Overall
            (118)          (489)          (478)          (274)          (1359)
CRF         0.546 / 0.862  0.586 / 0.727  0.467 / 0.523  0.414 / 0.472  0.505 / 0.614
convex      0.690 / 0.755  0.664 / 0.629  0.571 / 0.501  0.542 / 0.484  0.608 / 0.565
ramp loss   0.725 / 0.708  0.705 / 0.602  0.612 / 0.489  0.569 / 0.461  0.646 / 0.542

Table 3: RNA secondary structure prediction results. The second through fifth columns represent subsets of the data stratified by sequence length. The last column presents overall results, pooled across all four subsets. Each pair of non-bolded numbers indicates the sensitivity / selectivity for structures in the two-fold cross-validation. The numbers in parentheses indicate the total number of sequences in each subset.
The output space Y contains candidate alignments which identify the corresponding
positions in the two sequences which are hypothesized to be evolutionarily related.
We developed a structured prediction model for pairwise protein sequence alignment, using the types of features described in [3, 11]. For the loss function, we used Δ(y, y′) = 1 − recall (where recall is the proportion of aligned amino acid matches in the true alignment y that appear in the predicted alignment y′). For each inner optimization step, we used a fast-converging subgradient-based optimization algorithm with an adaptive Polyak-like step size [23].
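With alignments represented as sets of matched index pairs, this loss reduces to a one-liner (our own illustration):

    def recall_loss(y_true, y_pred):
        # Delta(y, y') = 1 - recall, alignments given as sets of aligned pairs (i, j).
        if not y_true:
            return 0.0
        return 1.0 - len(y_true & y_pred) / len(y_true)

    y_true = {(1, 1), (2, 2), (3, 4)}
    y_pred = {(1, 1), (3, 4), (5, 6)}
    print(recall_loss(y_true, y_pred))   # 1 - 2/3 = 0.333...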
We performed two-fold cross-validation over a collection of 1785 pairs of structurally aligned protein domains [14]. All hyperparameters were selected via holdout cross validation on the training
set, and we pooled the results from the two folds. For evaluation, we used recall, as described previously, and compared the performance of our algorithm to a standard conditional random field (CRF)
model and max-margin model using the same features. The percentage identity of a reference alignment is defined as the proportion of aligned residue pairs corresponding to identical amino acids.
We partitioned the alignments in the testing collection into four subsets based on percent identity
(0-10%, 11-20%, 21-30%, and 31+%), and report the recall of the algorithm for each subset in addition
to overall recall (see Table 2).
Here, it is clear that our method obtains better accuracy than both the CRF and max-margin models.1
We note that the accuracy differences are most pronounced at the low percentage identity ranges,
the "twilight zone" regime where better alignment accuracy has far reaching consequences in many
other computational biology applications [16].
RNA secondary structure prediction Ribonucleic acid (RNA) refers to a class of long linear
polymers composed of four different types of nucleotides (A, C, G, U). Nucleotides within a single
RNA molecule base-pair with each other, giving rise to a pattern of base-pairing known as the
RNA?s secondary structure. In the RNA secondary structure prediction problem, we are given an
RNA sequence (a string of approximately 20-500 characters) and are asked to predict the secondary
structure that the RNA molecule will form in vivo. Conceptually, an RNA secondary structure can
be thought of as a set of unordered pairs of nucleotide indices, where each pair designates two
¹ We note that the results here are based on using the Viterbi algorithm for parsing, which differs from the inference method used in [3]. In practice this is preferable to posterior decoding as it is significantly faster, which is crucial in applications to large amounts of data.
[Figure 2: three panels (a), (b), (c).]
Figure 2: Tightness of the nonconvex bound. Figures (a) and (b) show the value of the nonconvex loss, the convex loss and the actual loss as a function of the number of iterations when minimizing the nonconvex upper bound. At each relinearization, which occurs every 1000 iterations, the nonconvex upper bound decreases. Note that the convex upper bound increases in the process as convex and nonconvex bound diverge further from each other. We chose λ = 2⁻⁶ in Figure (a) and λ = 2⁷ for Figure (b). Figure (c) shows the tightness of the final nonconvex bound at the end of optimization for different values of the regularization parameter λ.
nucleotides in the RNA molecule which base-pair with each other. Following convention, we take
the structured output space Y to be the set of all possible pseudoknot-free structures.
We used a max-margin model for secondary structure prediction. The features of the model were
chosen to match the energetic terms in standard thermodynamic models for RNA folding [4]. As
our loss function, we used Δ(y, y′) = 1 − recall (where recall is the proportion of base-pairs in the reference structure y that are recovered in the predicted structure y′). We again used the subgradient
algorithm for optimization.
To test the algorithm, we performed two-fold cross-validation over a large collection of 1359 RNA
sequences with known secondary structures from the RFAM database (release 8.1) [7]. We evaluated
the methods using two standard metrics for RNA secondary structure prediction accuracy known as
sensitivity and selectivity (which are the equivalent of recall and precision, respectively, for this
domain). For reporting, we binned the sequences in the test collection by length into four ranges (1-50, 51-100, 101-200, 201+ nucleotides), and evaluated the sensitivity and selectivity of the algorithm
for each subset in addition to overall accuracy (see Table 3).
Again, our algorithm consistently outperforms an equivalently parameterized CRF and max-margin
model in terms of sensitivity.2 The selectivity of the predictions from our algorithm is often worse
than that of the other two models. This is likely because we opted for a loss function that penalizes
for ?false negative? base-pairings but not ?false-positives? since our main interest is in identifying
correct base-pairings (a harder task than predicting only a small number of high-confidence basepairings). An alternative loss function that chooses a different balance between penalizing false
positives and false negatives would achieve a different trade-off of sensitivity and selectivity.
Tightness of the bound: We generated plots of the convex, nonconvex, and actual losses (which
correspond to l(x, y, y, f), l(x, y, f), and Δ(y, y*(x, f)), respectively, from Lemma 2) over the course of optimization for our RNA folding task (see Figure 2). From Figures 2a and 2b, we see that the nonconvex loss provides a much tighter upper bound on the actual loss function. Figure 2c shows that the tightness of the bound decreases for increasing regularization parameters λ.
In summary, our bound leads to improvements whenever there is a large number of instances (x, y)
which cannot be classified perfectly. This is not surprising as for "clean" datasets even the convex
upper bound vanishes when no margin errors are encountered. Hence noticeable improvements can
be gained mainly in the structured output setting rather than in binary classification.
² Note that the results here are based on using the CYK algorithm for parsing, which differs from the inference method used in [4].
6 Summary and Discussion
We proposed a simple modification of the convex upper bound of the loss in structured estimation
which can be used to obtain tighter bounds on sophisticated loss functions. The advantage of our
approach is that it requires next to no modification of existing optimization algorithms but rather
repeated invocation of a structured estimation solver such as SVMStruct, BMRM, or Pegasos.
In several applications our approach outperforms the convex upper bounds. This can be seen both
for multiclass classification, for ranking where we encountered underfitting and undesirable trivial
solutions for the convex upper bound, and in the context of sequence alignment where in particular
for the hard-to-align observations significant gains can be found.
From this experimental study, it seems that the tighter non-convex upper bound is useful in two
scenarios: when the labels are noisy and when for each example there is a large set of labels which
are (almost) as good as the label in the training set. Future work includes studying other types
of structured estimation problems such as the ones encountered in NLP to check if our new upper
bound can also be useful for these problems.
References
[1] K. Crammer and Y. Singer. On the learnability and design of output codes for multiclass problems. In COLT 2000, pages 35–46. Morgan Kaufmann, 2000.
[2] R. Collobert, F. H. Sinz, J. Weston, and L. Bottou. Trading convexity for scalability. In ICML 2006, pages 201–208. ACM, 2006.
[3] C. B. Do, S. S. Gross, and S. Batzoglou. CONTRAlign: discriminative training for protein sequence alignment. In RECOMB, pages 160–174, 2006.
[4] C. B. Do, D. A. Woods, and S. Batzoglou. CONTRAfold: RNA secondary structure prediction without physics-based models. Bioinformatics, 22(14):e90–e98, 2006.
[5] S. R. Eddy. Non-coding RNA genes and the modern RNA world. Nature Reviews Genetics, 2(12):919–929, 2001.
[6] Y. Freund, R. Iyer, R. E. Schapire, and Y. Singer. An efficient boosting algorithm for combining preferences. In ICML 1998, pages 170–178, 1998.
[7] S. Griffiths-Jones, S. Moxon, M. Marshall, A. Khanna, S. R. Eddy, and A. Bateman. Rfam: annotating non-coding RNAs in complete genomes. Nucl. Acids Res., 33:D121–D124, 2005.
[8] R. Herbrich, T. Graepel, and K. Obermayer. Large margin rank boundaries for ordinal regression. In Advances in Large Margin Classifiers, pages 115–132. MIT Press, 2000.
[9] T. Hoang. DC optimization: theory, methods, and applications. In R. Horst and P. Pardalos, editors, Handbook of Global Optimization. Kluwer.
[10] T. Joachims. Optimizing search engines using clickthrough data. In KDD. ACM, 2002.
[11] T. Joachims, T. Galor, and R. Elber. Learning to align sequences: a maximum-margin approach. In New Algorithms for Macromolecular Simulation, LNCS 49, pages 57–68. Springer, 2005.
[12] Q. Le and A. J. Smola. Direct optimization of ranking measures. NICTA-TR, 2007.
[13] T.-Y. Liu, J. Xu, T. Qin, W. Xiong, and H. Li. LETOR: benchmark dataset for research on learning to rank for information retrieval. In LR4IR, 2007.
[14] J. Pei and N. V. Grishin. MUMMALS: multiple sequence alignment improved by using hidden Markov models with local structural information. Nucl. Acids Res., 34(16):4364–4374, 2006.
[15] N. Ratliff, J. Bagnell, and M. Zinkevich. (Online) subgradient methods for structured prediction. In AISTATS, 2007.
[16] B. Rost. Twilight zone of protein sequence alignments. Protein Eng., 12(2):85–94, 1999.
[17] S. Shalev-Shwartz, Y. Singer, and N. Srebro. Pegasos: primal estimated sub-gradient solver for SVM. In Proc. Intl. Conf. Machine Learning, 2007.
[18] B. Taskar, C. Guestrin, and D. Koller. Max-margin Markov networks. In NIPS 16, pages 25–32. MIT Press, 2004.
[19] C. H. Teo, Q. Le, A. J. Smola, and S. V. N. Vishwanathan. A scalable modular convex solver for regularized risk minimization. In KDD. ACM, 2007.
[20] I. Tsochantaridis, T. Joachims, T. Hofmann, and Y. Altun. Large margin methods for structured and interdependent output variables. J. Mach. Learn. Res., 6:1453–1484, 2005.
[21] M. Weimer, A. Karatzoglou, Q. Le, and A. Smola. CoFi rank - maximum margin matrix factorization for collaborative ranking. In NIPS 20. MIT Press, 2008.
[22] A. L. Yuille and A. Rangarajan. The concave-convex procedure. Neural Computation, 15:915–936, 2003.
[23] A. Nedic and D. P. Bertsekas. Incremental subgradient methods for nondifferentiable optimization. SIAM J. on Optimization, 12:109–138, 2001.
2,704 | 3,452 | Algorithms for Infinitely Many-Armed Bandits
Yizao Wang*
Department of Statistics - University of Michigan
437 West Hall, 1085 South University, Ann Arbor, MI, 48109-1107, USA
[email protected]
Jean-Yves Audibert
Université Paris Est, Ecole des Ponts, ParisTech, Certis
& Willow - ENS / INRIA, Paris, France
[email protected]
Rémi Munos
INRIA Lille - Nord Europe, SequeL project,
40 avenue Halley, 59650 Villeneuve d'Ascq, France
[email protected]
Abstract
We consider multi-armed bandit problems where the number of arms is larger
than the possible number of experiments. We make a stochastic assumption on
the mean-reward of a new selected arm which characterizes its probability of being a near-optimal arm. Our assumption is weaker than in previous works. We
describe algorithms based on upper-confidence-bounds applied to a restricted set
of randomly selected arms and provide upper-bounds on the resulting expected
regret. We also derive a lower-bound which matches (up to a logarithmic factor)
the upper-bound in some cases.
1 Introduction
Multi-armed bandit problems describe typical situations where learning and optimization should be
balanced in order to achieve good cumulative performances. Usual multi-armed bandit problems
(see e.g. [8]) consider a finite number of possible actions (or arms) from which the learner may
choose at each iteration. The number of arms is typically much smaller than the number of experiments allowed, so exploration of all possible options is usually performed and combined with
exploitation of the apparently best ones.
In this paper, we investigate the case when the number of arms is infinite (or larger than the available
number of experiments), which makes the exploration of all the arms an impossible task to achieve:
if no additional assumption is made, it may be arbitrarily hard to find a near-optimal arm. Here we
consider a stochastic assumption on the mean-reward of any new selected arm. When a new arm
k is pulled, its mean-reward µ_k is assumed to be an independent sample from a fixed distribution. Moreover, given the mean-reward µ_k for any arm k, the distribution of the reward is only required to be uniformly bounded and non-negative without any further assumption. Our assumptions essentially characterize the probability of pulling near-optimal arms. That is, given µ* ∈ [0, 1] as the best possible mean-reward and β ≥ 0 a parameter of the mean-reward distribution, the probability that a new arm is ε-optimal is of order ε^β for small ε, i.e. P(µ_k ≥ µ* − ε) = Θ(ε^β) for ε → 0. Note that we write f(ε) = Θ(g(ε)) for ε → 0 when ∃c₁, c₂, ε₀ > 0 such that ∀ε ≤ ε₀, c₁g(ε) ≤ f(ε) ≤ c₂g(ε).
* The major part of this work was completed during the research internship at Certis and INRIA SequeL.
Like in multi-armed bandits, this setting exhibits a trade off between exploitation (selection of the
arms that are believed to perform well) and exploration. The exploration takes two forms here:
discovery (pulling a new arm that has never been tried before) and sampling (pulling an arm already
discovered in order to gain information about its actual mean-reward).
Numerous applications can be found e.g. in [5]. It includes labor markets (a worker has many
opportunities for jobs), mining for valuable resources (such as gold or oil) when there are many
areas available for exploration (the miner can move to another location or continue in the same
location, depending on results), and path planning under uncertainty in which the path planner has
to decide among a route that has proved to be efficient in the past (exploitation), or a known route
that has not been explored many times (sampling), or a brand new route that has never been tried
before (discovery).
Let us write k_t for the arm selected by our algorithm at time t. We define the regret up to time n as R_n = nµ* − Σ_{t=1}^n µ_{k_t}. From the tower rule, ER_n is the expectation of the difference between the rewards we would have obtained by drawing an optimal arm (an arm having a mean-reward equal to µ*) and the rewards we did obtain during the time steps 1, . . . , n. Our goal is to design an arm-pulling strategy such as to minimize this regret.
Overview of our results: We write v_n = Õ(u_n) when for some n₀, C > 0, v_n ≤ C u_n (log(u_n))² for all n ≥ n₀. We assume that the rewards of the arms lie in [0, 1]. Our regret bounds depend on whether µ* = 1 or µ* < 1. For µ* = 1, our algorithms are such that ER_n = Õ(n^{β/(1+β)}). For µ* < 1, we have ER_n = Õ(n^{β/(1+β)}) if β > 1, and (only) ER_n = Õ(n^{1/2}) if β ≤ 1. Moreover we derive the lower-bound: for any β > 0, µ* ≤ 1, any algorithm satisfies ER_n ≥ C n^{β/(1+β)} for some C > 0. Finally we propose an algorithm having the anytime property, which is based on an arm-increasing rule.
Our algorithms essentially consist in pulling K different arms randomly chosen, where K is of order n^{β/2} if µ* < 1 and β < 1, and n^{β/(1+β)} otherwise, and using a variant of the UCB (Upper Confidence Bound) algorithm ([3],[2]) on this set of K arms, which takes into account the empirical variance of the rewards. This last point is crucial to get the proposed rate for µ* = 1 and β < 1, i.e.
in cases where there are many arms with small variance.
Previous works on many-armed bandits: In [5], a specific setting of an infinitely many-armed
bandit is considered, namely that the rewards are Bernoulli random variables with parameter p,
where p follows a uniform law over a given interval [0, µ*]. All mean-rewards are therefore in [0, µ*]. They proposed three algorithms. (1) The 1-failure strategy where an arm is played as long
as 1s are received. When a 0 is received, a new arm is played and this strategy is repeated forever.
(2) The m-run strategy uses the 1-failure strategy until either m continuous 1s are received (from the
same arm) or m different arms have been played. In the first case, we continue to play forever the
current arm. In the second case, the arm that gave the most wins is chosen to play for the remaining
rounds. Finally, (3) the m-learning strategy uses the 1-failure strategy during the first m rounds, and
for the remaining rounds it chooses the arm that gave the most 1s during the first m rounds.
For µ* = 1, the authors of [5] have shown that the 1-failure strategy, the √n-run strategy, and the log(n)√n-learning strategy have a regret ER_n ≤ 2√n. They also provided a lower bound on the regret of any strategy: ER_n ≥ √(2n). For µ* < 1, the corresponding optimal strategies are the √(nµ*)-run strategy and the √(nµ*) log(nµ*)-learning strategy. All these algorithms require the knowledge of the horizon n of the game. In many applications, it is important to design algorithms having the anytime property, that is, the upper bounds on the expected regret ER_n have a similar order for all n. Under the same Bernoulli assumption on the reward distributions, such algorithms have been obtained in [9].
In comparison to their setting (uniform distribution corresponds to β = 1), our upper- and lower-bounds are also of order √n up to a logarithmic factor, and we do not assume that we know exactly
the distribution of the mean-reward. However it is worth noting that the proposed algorithms in
[5, 9] heavily depend on the Bernoulli assumption of the rewards and are not easily transposable to
general distributions. Note also that the Bernoulli assumption does not work for the real problems
mentioned above, where the outcomes may take several possible values.
Thus an important aspect of our work, compared to previous many-armed bandits, is that our setting
allows general reward distributions for the arms, under a simple assumption on the mean-reward.
2
2 Main results
In our framework, each arm of a bandit is characterized by the distribution of the rewards (obtained
by drawing that arm) and the essential parameter of the distribution of rewards is its expectation.
Another parameter of interest is the standard deviation. With low variance, poor arms will be easier
to spot while good arms will have higher probability of not being disregarded at the beginning due to unlucky trials. To draw an arm is equivalent to draw a distribution ν of mean-rewards. Let µ = ∫ w ν(dw) and σ² = ∫ (w − µ)² ν(dw) denote the expectation and variance of ν. The quantities µ and σ are random variables. Our assumptions are the following:
(A) Rewards are uniformly bounded: without loss of generality, we assume all rewards are in [0, 1].
(B) The expected reward of a randomly drawn arm satisfies: there exist µ* ∈ (0, 1] and β > 0 s.t.
P{µ > µ* − ε} = Θ(ε^β), for ε → 0    (1)
(C) There is a function V : [0, 1] → R such that P{σ² ≤ V(µ* − µ)} = 1.
The key assumption here is (B). It gives us (the order of) the number of arms that needs to be drawn before finding an arm that is ε-close to the optimum¹ (i.e., an arm for which µ ≥ µ* − ε). Assumption (B) implies that there exist positive constants c₁ and c₂ such that for any ε ∈ [0, µ*], we have²
c₁ε^β ≤ P{µ > µ* − ε} ≤ P{µ ≥ µ* − ε} ≤ c₂ε^β.    (2)
For example, the uniform distribution on (0, µ*) satisfies Condition (1) with β = 1.
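A distribution with exactly this behaviour can be simulated by the inverse-CDF trick below (our own illustration; β = 1 recovers the uniform case):

    import random

    def sample_mean_reward(mu_star, beta):
        # One draw of an arm's mean-reward mu with
        # P(mu > mu_star - eps) = (eps / mu_star)**beta for eps <= mu_star,
        # i.e. Theta(eps**beta) as required by Assumption (B).
        u = random.random()                  # uniform on (0, 1)
        return mu_star * (1.0 - u ** (1.0 / beta))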
Assumption (C) always holds for V(u) = µ*(1 − µ* + u) (since Var W ≤ EW(1 − EW) when W ∈ [0, 1]). However it is convenient when the near-optimal arms have low variance (for instance, this happens when µ* = 1).
Let X_{k,1}, X_{k,2}, . . . denote the rewards obtained when pulling arm k. These are i.i.d. random variables with common expected value denoted µ_k. Let X̄_{k,s} := (1/s) Σ_{j=1}^s X_{k,j} and V_{k,s} := (1/s) Σ_{j=1}^s (X_{k,j} − X̄_{k,s})² be the empirical mean and variance associated with the first s draws of arm k. Let T_k(t) denote the number of times arm k is chosen by the policy during the first t plays.
We will use as a subroutine of our algorithms the following version of UCB (Upper Confidence
Bound) algorithm as introduced in [2]. Let (E_t)_{t≥0} be a nondecreasing sequence of nonnegative real numbers. It will be referred to as the exploration sequence since the larger it is, the more UCB explores. For any arm k and nonnegative integers s, t, introduce
B_{k,s,t} := X̄_{k,s} + √(2V_{k,s}E_t / s) + 3E_t / s    (3)
with the convention 1/0 = +∞. Define the UCB-V (for Variance estimate) policy:
UCB-V policy for a set K of arms:
At time t, play an arm in K maximizing B_{k,T_k(t−1),t}.
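In code, the index (3) and the policy read as follows (a minimal Python sketch; the names are ours):

    import math

    def ucb_v_index(rewards, E_t):
        # B_{k,s,t} of Eq. (3) for one arm, given its observed rewards X_{k,1..s}.
        s = len(rewards)
        if s == 0:
            return float("inf")              # convention 1/0 = +infinity
        mean = sum(rewards) / s              # empirical mean Xbar_{k,s}
        var = sum((x - mean) ** 2 for x in rewards) / s   # empirical variance V_{k,s}
        return mean + math.sqrt(2.0 * var * E_t / s) + 3.0 * E_t / s

    def ucb_v_choose(history, E_t):
        # UCB-V policy: play an arm in K maximizing B_{k, T_k(t-1), t};
        # history[k] is the list of rewards observed so far from arm k.
        scores = [ucb_v_index(r, E_t) for r in history]
        return max(range(len(history)), key=scores.__getitem__)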
From [2, Theorem 1], the main property of B_{k,s,t} is that with probability at least 1 − 5(log t)e^{−E_t/2}, for any s ∈ [0, t] we have µ_k ≤ B_{k,s,t}. So provided that E_t is large, B_{k,T_k(t−1),t} is an observable quantity at time t which upper bounds µ_k with high probability. We consider a nondecreasing sequence (E_t) in order that these bounds hold with probability increasing with time. This ensures that the low-probability event, that the algorithm might concentrate the draws on suboptimal arms, has a decreasing probability with time.
2.1 UCB revisited for the infinitely many-armed bandit
When the number of arms of the bandit is greater than the total number of plays, it makes no sense
to apply the UCB-V algorithm (or other variants of UCB [3]) since its first step is to draw each arm once (to have B_{k,T_k(t−1),t} finite). A more meaningful and natural approach is to decide at the beginning
¹ Precise computations lead to a number which is of order ε^{−β}, up to possibly a logarithmic factor.
² Indeed, (1) implies that for some 0 < c′₁ < c′₂, there exists 0 < ε₀ < µ* such that for any ε ≤ ε₀, c′₁ε^β ≤ P{µ > µ* − ε} ≤ P{µ ≥ µ* − ε} ≤ c′₂ε^β. Then one may take c₁ = c′₁ε₀^β and c₂ = max(ε₀^{−β}, c′₂).
that only K arms will be investigated in the entire experiment. The K should be sufficiently small
with respect to n (the total number of plays), as in this way we have fewer plays on bad arms and
most of the plays will be on the best of K arms. The number K should not be too small either, since
we want that the best of the K arms has an expected reward close to the best possible arm.
It is shown in [2, Theorem 4] that in the multi-armed bandit, taking a too small exploration sequence (e.g. such as E_t ≤ (1/2) log t) might lead to polynomial regret (instead of logarithmic for e.g. E_t = 2 log t) in a simple 2-armed bandit problem. However, we will show that this is not the case
in the infinitely many-armed bandit, where one may (and should) take much smaller exploration
sequences (typically of order log log t). The reason for this phenomenon is that in this setting, there
are typically many near-optimal arms so that the subroutine UCB-V may miss some good arms (by
unlucky trials) without being hurt: there are many other near-optimal arms to discover! This illustrates a trade off between the two aspects of exploration: sample the current, not well-known, arms
or discover new arms.
We will start our analysis by considering the following UCB-V(∞) algorithm:
UCB-V(∞) algorithm: Given parameters K and the exploration sequence (E_t)
• Randomly choose K arms,
• Run the UCB-V policy on the set of the K selected arms.
Theorem 1 If the exploration sequence satisfies 2 log(10 log t) ≤ E_t ≤ log t, then for n ≥ 2 and K ≥ 2 the expected regret of the UCB-V(∞) algorithm satisfies:
ER_n ≤ C { (log K) n K^{−1/β} + K (log n) E[ (V(Δ)/Δ + 1) ∧ (nΔ) ] },    (4)
where Δ = µ* − µ, with µ the random variable corresponding to the expected reward of a sampled arm from the pool, and where C is a positive constant depending only on c₁ and β (see (2)).
Proof: The UCB-V(∞) algorithm has two steps: randomly choose K arms and run a UCB subroutine on the selected arms. The first part of the proof studies what happens during the UCB subroutine, that is, conditionally on the arms that have been randomly chosen during the first step of the algorithm. In particular we consider in the following that µ₁, . . . , µ_K are fixed. From the equality (obtained using Wald's theorem):
ER_n = Σ_{k=1}^K E{T_k(n)} Δ_k    (5)
with Δ_k = µ* − µ_k, it suffices to bound E T_k(n). The proof is inspired from the ones of Theorems 2 and 3 in [2]. The novelty of the following lemma is to include the product of probabilities in the last term of the right-hand side. This enables us to incorporate the idea that if there are a lot of near-optimal arms, it is very unlikely that suboptimal arms are often drawn.
Lemma 1 For any real number τ and any positive integer u, we have
E T_k(n) ≤ u + Σ_{t=u+1}^n Σ_{s=u}^t P(B_{k,s,t} > τ) + Σ_{t=u+1}^n Π_{k′≠k} P(∃s′ ∈ [0, t], B_{k′,s′,t} ≤ τ)    (6)
where the expectations and probabilities are conditioned on the set of selected arms.
Proof: We have T_k(n) ≤ u + Σ_{t=u+1}^n Z_k(u, t) where Z_k(u, t) = 1_{I_t=k; T_k(t)>u}. We have
Z_k(u, t) ≤ 1_{∀k′≠k: B_{k,T_k(t−1),t} ≥ B_{k′,T_{k′}(t−1),t}; T_k(t−1) ≥ u}
≤ 1_{∃s∈[u,t]: B_{k,s,t} > τ} + 1_{∀k′≠k ∃s′∈[0,t]: B_{k′,s′,t} ≤ τ}
where the last inequality holds since if the two terms in the last sum are equal to zero, then it implies that there exists k′ ≠ k such that for any s′ ∈ [0, t] and any s ∈ [u, t], B_{k′,s′,t} > τ ≥ B_{k,s,t}. Taking the expectation of both sides, using a union bound and the independence between rewards obtained from different arms, we obtain Lemma 1. ∎
Now we use Inequality (6) with τ = µ_k + Δ_k/2 = µ* − Δ_k/2, and u the smallest integer larger than 32(σ_k²/Δ_k² + 1/Δ_k) log n. These choices are made to ensure that the probabilities in the r.h.s. of (6) are small. Precisely, for any s ≥ u and t ≤ n, we have
√(2[σ_k² + Δ_k/4]E_t/s) + 3E_t/s ≤ √([2σ_k² + Δ_k/2](log n)/u) + 3(log n)/u
≤ √((2σ_k² + Δ_k/2)Δ_k²/(32[σ_k² + Δ_k])) + 3Δ_k²/(32[σ_k² + Δ_k])
= (Δ_k/4)√((σ_k² + Δ_k/4)/(σ_k² + Δ_k)) + (3/8)Δ_k²/(σ_k² + Δ_k) ≤ Δ_k/4,
where the last inequality holds since it is equivalent to (x − 1)² ≥ 0 for x = √((σ_k² + Δ_k/4)/(σ_k² + Δ_k)). Thus:
P(B_{k,s,t} > τ) ≤ P(X̄_{k,s} + √(2V_{k,s}E_t/s) + 3E_t/s > µ_k + Δ_k/2)
  ≤ P(X̄_{k,s} + √(2[σ_k² + Δ_k/4]E_t/s) + 3E_t/s > µ_k + Δ_k/2) + P(V_{k,s} ≥ σ_k² + Δ_k/4)    (7)
  ≤ P(X̄_{k,s} − µ_k > Δ_k/4) + P((1/s)Σ_{j=1}^s (X_{k,j} − µ_k)² − σ_k² ≥ Δ_k/4)
  ≤ 2e^{−sΔ_k²/(32σ_k² + 8Δ_k/3)},
where in the last step we used Bernstein's inequality twice. Summing up we obtain
Σ_{s=u}^t P(B_{k,s,t} > τ) ≤ 2Σ_{s=u}^∞ e^{−sΔ_k²/(32σ_k² + 8Δ_k/3)} = 2e^{−uΔ_k²/(32σ_k² + 8Δ_k/3)}/(1 − e^{−Δ_k²/(32σ_k² + 8Δ_k/3)})
  ≤ (80σ_k²/Δ_k² + 7/Δ_k) e^{−uΔ_k²/(32σ_k² + 8Δ_k/3)} ≤ (80σ_k²/Δ_k² + 7/Δ_k) n^{−1},    (8)
where we have used that 1 − e^{−x} ≥ 4x/5 for 0 ≤ x ≤ 3/8. Now let us bound the product of probabilities in (6). Since τ = µ* − Δ_k/2, we have
Π_{k′≠k} P(∃s ∈ [0, t], B_{k′,s,t} ≤ τ) ≤ Π_{k′: µ_{k′} > µ* − Δ_k/2} P(∃s ∈ [0, t], B_{k′,s,t} < µ_{k′}).
Now from [2, Theorem 1], with probability at least 1 − 5(log t)e^{−E_t/2}, for any s ∈ [0, t] we have µ_{k′} ≤ B_{k′,s,t}. For E_t ≥ 2 log(10 log t), this gives P(∃s ∈ [0, t], B_{k′,s,t} < µ_{k′}) ≤ 1/2. Putting all the bounds of the different terms of (6) together leads to
E T_k(n) ≤ 1 + 32(σ_k²/Δ_k² + 1/Δ_k) log n + 80σ_k²/Δ_k² + 7/Δ_k + n 2^{−N_{Δ_k}},
with N_{Δ_k} the cardinality of {k′ ∈ {1, . . . , K} : µ_{k′} > µ* − Δ_k/2}. Since Δ_k ≤ µ* ≤ 1 and T_k(n) ≤ n, the previous inequality can be simplified into
E T_k(n) ≤ 50 { [(σ_k²/Δ_k² + 1/Δ_k) log n] ∧ n } + n 2^{−N_{Δ_k}},    (9)
Here, for the sake of simplicity, we are not interested in having tight constants. From here on, we will take the expectations with respect to all sources of randomness, that is including the one coming from the first step of UCB-V(∞). The quantities µ₁, . . . , µ_K are i.i.d. random variables satisfying 0 ≤ µ_k ≤ µ* and P(µ_k ≥ µ* − ε) = Θ(ε^β). The quantities σ₁, . . . , σ_K are i.i.d. random variables satisfying almost surely σ_k² ≤ V(Δ_k). From (5) and (9), we have
ER_n = K E[T₁(n) Δ₁] ≤ K E[ 50 (V(Δ₁)/Δ₁ + 1)(log n) ∧ (nΔ₁) + nΔ₁ 2^{−N_{Δ₁}} ]    (10)
Let p denote the probability that the expected reward µ of a randomly drawn arm satisfies µ > µ* − Δ/2 for a given Δ. Conditioning on Δ₁ = Δ, the quantity N_{Δ₁} follows a binomial distribution with parameters K − 1 and p, hence E(2^{−N_{Δ₁}} | Δ₁ = Δ) = (1 − p + p/2)^{K−1}. By using (2), we get:
E[Δ₁ 2^{−N_{Δ₁}}] = E[Δ₁ (1 − P(µ > µ* − Δ₁/2)/2)^{K−1}] ≤ E φ(Δ₁),
with φ(u) = u(1 − c₃u^β)^{K−1} and c₃ = c₁/2^{β+1}. We have φ′(u) = (1 − c₃u^β)^{K−2} (1 − c₃(1 + (K−1)β)u^β), so that φ(u) ≤ φ(u₀) with u₀ = [c₃(1 + (K−1)β)]^{−1/β} and φ(u₀) = (1 − 1/(1 + (K−1)β))^{K−1} / [c₃(1 + (K−1)β)]^{1/β} ≤ C′ K^{−1/β} for C′ a positive constant depending only on c₁ and β. For any u₁ ∈ [u₀, µ*], we have
E φ(Δ₁) ≤ φ(u₀) P(Δ₁ ≤ u₁) + φ(u₁) P(Δ₁ > u₁) ≤ φ(u₀) P(Δ₁ ≤ u₁) + φ(u₁).
Let us take u₁ = C″ ((log K)/K)^{1/β} for C″ a positive constant depending on c₁ and β, sufficiently large to ensure u₁ ≥ u₀ and φ(u₁) ≤ K^{−1−1/β}. We obtain E φ(Δ₁) ≤ C K^{−1/β} (log K)/K for an appropriate constant C depending on c₁ and β. Putting this into (10), we obtain the result of Theorem 1. ∎
The r.h.s. of Inequality (4) contains two terms. The first term is the bias: when we randomly draw K arms, the expected reward of the best drawn arm is Õ(K^{−1/β})-optimal. So the best algorithm, once the K arms are fixed, will yield a regret Õ(nK^{−1/β}). The second term is the estimation. It indicates the difference between the UCB subroutine's performance and the best drawn arm.
2.2 Strategy for fixed play number
Consider that we know in advance the total number of plays n and the value of β. In this case, one can use the UCB-V(∞) algorithm with parameter K of order of the minimizer of the r.h.s. of Inequality (4). This leads to the following UCB-F (for Fixed horizon) algorithm.
UCB-F (fixed horizon): given the total number of plays n, and parameters µ* and β of (1)
• Choose K arms with K of order n^{β/2} if µ* < 1 and β < 1, and of order n^{β/(β+1)} otherwise, i.e. if µ* = 1 or β ≥ 1
• Run the UCB-V algorithm with the K chosen arms and an exploration sequence satisfying
2 log(10 log t) ≤ E_t ≤ log t    (11)
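In code, the choice of K and an exploration sequence satisfying (11) are straightforward (our sketch; constants and rounding are up to the implementer):

    import math

    def ucb_f_num_arms(n, beta, mu_star):
        # Order of the number K of arms chosen by UCB-F (up to constant factors).
        if mu_star < 1.0 and beta < 1.0:
            return max(2, math.ceil(n ** (beta / 2.0)))
        return max(2, math.ceil(n ** (beta / (beta + 1.0))))

    def exploration(t):
        # E_t = 2 log(10 log t); satisfies (11) for t large enough.
        return 2.0 * math.log(10.0 * math.log(max(t, 2)))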
Theorem 2 For any n ≥ 2, the expected regret of the UCB-F algorithm satisfies
ER_n ≤ C(log n)√n if β < 1 and µ* < 1,
ER_n ≤ C(log n)²√n if β = 1 and µ* < 1,    (12)
ER_n ≤ C(log n) n^{β/(1+β)} otherwise, i.e. if µ* = 1 or β > 1,
with C a constant depending only on c₁, c₂ and β (see (2)).
Proof: The result comes from Theorem 1 by bounding the expectation E = E[(V(Δ)/Δ + 1) ∧ (nΔ)]. First, as mentioned before, Assumption (C) is satisfied for V(Δ) = µ*(1 − µ* + Δ). So for µ* = 1 and this choice of function V, we have E ≤ 2. For µ* < 1, since Δ ≤ µ*, we have E ≤ E ψ(Δ) with ψ(t) = (2/t) ∧ (nt). The function ψ is continuous and differentiable by parts. Using Fubini's theorem and Inequality (2), we have
E ψ(Δ) = ψ(µ*) − E ∫_Δ^{µ*} ψ′(t)dt = ψ(µ*) − ∫_0^{µ*} ψ′(t) P(Δ ≤ t)dt
≤ 2 + 2^{(1+β)/2} (2/(1−β)) c₂ n^{(1−β)/2}  if β < 1,
≤ 2 + 2c₂ ∫_{2/n}^{µ*} t^{−1} dt ≤ 2 + 2c₂ log(n/2)  if β = 1,
≤ 2 + 2c₂/(β − 1)  if β > 1.
Putting these bounds in Theorem 1, we get
ER_n ≤ C{(log K) n K^{−1/β} + (log n) K n^{(1−β)/2}}  if β < 1 and µ* < 1,
ER_n ≤ C{(log K) n K^{−1/β} + (log n)² K}  if β = 1 and µ* < 1,
ER_n ≤ C{(log K) n K^{−1/β} + (log n) K}  otherwise: µ* = 1 or β > 1,
with C a constant only depending on c₁, c₂ and β. The number K of selected arms in UCB-F is taken of the order of the minimizer of these bounds up to a logarithmic factor. ∎
Theorem 2 makes no difference between a logarithmic exploration sequence and an iterated-logarithmic exploration sequence. In practice, however, it is clearly better to take an iterated-logarithmic exploration sequence, for which the algorithm spends much less time exploring all suboptimal arms. For the sake of simplicity, we have fixed the constants in (11). It is easy to check that for E_t = ζ log t with ζ ≥ 1, Inequality (12) still holds, but with a constant C depending linearly on ζ. Theorem 2 shows that when μ* = 1 or β ≥ 1, the bandit subroutine takes no time in spotting near-optimal arms (the use of the UCB-V algorithm with its variance estimate is crucial for this), whereas for β < 1 and μ* < 1, which means a lot of near-optimal arms with possibly high variances, the bandit subroutine has difficulty achieving low regret.
The next theorem shows that our regret upper bounds are optimal up to logarithmic terms, except for the case β < 1 and μ* < 1. We do not know whether the rate O(√n log n) for β < 1 and μ* < 1 is improvable. This remains an open problem.
Theorem 3 For any β > 0 and μ* ≤ 1, any algorithm suffers a regret larger than c n^{β/(1+β)} for some small enough constant c depending on c_2 and β.
Sketch of proof. If we want to have a regret smaller than c n^{β/(1+β)}, most draws must be done on an arm having an individual regret smaller than ε = c n^{−1/(1+β)}. To find such an arm, we need to try a number of arms larger than C′ ε^{−β} = C′ c^{−β} n^{β/(1+β)} for some C′ > 0 depending on c_2 and β. Since these arms are drawn at least once, and since most of them give a constant regret, this leads to a regret larger than C″ c^{−β} n^{β/(1+β)} with C″ depending on c_2 and β. For c small enough, this contradicts the assumption that the regret is smaller than c n^{β/(1+β)}. So it is not possible to improve on the n^{β/(1+β)} rate.
2.3 Strategy for unknown play number
To apply the UCB-F algorithm we need to know the total number of plays n, and we choose the corresponding K arms before starting. When n is unknown ahead of time, we propose here an anytime algorithm with a simple and reasonable way of choosing K, by adding a new arm from time to time to the set of sampled arms. Let K_n denote the number of arms played up to time n. We set K_0 = 0. We define the UCB-AIR (for Arm-Increasing Rule):

UCB-AIR (Arm-Increasing Rule): given the parameters μ* and β of (1):
• At time n, try a new arm if
      K_{n−1} <  n^{β/2}         if β < 1 and μ* < 1,
                 n^{β/(β+1)}     otherwise: μ* = 1 or β ≥ 1.
• Otherwise, apply UCB-V to the K_{n−1} drawn arms, with an exploration sequence satisfying
      2 log(10 log t) ≤ E_t ≤ log t.
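The arm-increasing rule translates into a few lines on top of the same machinery. The sketch below reuses the ucbv_index helper from the UCB-F sketch above, and implements "try a new arm" by appending an unplayed arm, whose infinite index guarantees that it is pulled on that round; the names are again ours.

```python
def ucb_air(sample_arm, draw_new_arm, n, beta, mu_star_is_one):
    """UCB-AIR: grow the pool of sampled arms with the arm-increasing rule."""
    arms, counts, totals, sq_totals = [], [], [], []
    reward = 0.0
    for t in range(1, n + 1):
        cap = t ** (beta / 2) if (beta < 1 and not mu_star_is_one) \
              else t ** (beta / (beta + 1))
        if len(arms) < cap:                    # K_{t-1} below the cap: new arm
            arms.append(draw_new_arm())
            counts.append(0)
            totals.append(0.0)
            sq_totals.append(0.0)
        a = max(range(len(arms)),
                key=lambda i: ucbv_index(counts[i], totals[i], sq_totals[i], t))
        r = sample_arm(arms[a])
        counts[a] += 1
        totals[a] += r
        sq_totals[a] += r * r
        reward += r
    return reward
```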
This arm-increasing rule makes our algorithm applicable to the anytime problem. This is a more reasonable approach in practice than restarting-based algorithms such as those using the doubling trick (see e.g. [4, Section 5.3]). Our second main result shows that the UCB-AIR algorithm has the same properties as the UCB-F algorithm (proof omitted from this extended abstract).
Theorem 4 For any horizon time n ≥ 2, the expected regret of the UCB-AIR algorithm satisfies

    ERn ≤  C (log n)² √n               if β < 1 and μ* < 1,        (13)
           C (log n)² n^{β/(1+β)}      otherwise, i.e. if μ* = 1 or β ≥ 1,

with C a constant depending only on c_1, c_2 and β (see (2)).
3 Comparison with continuum-armed bandits and conclusion
In continuum-armed bandits (see e.g. [1, 6, 4]), an infinity of arms is also considered. The arms lie in some Euclidean (or metric) space and their mean-reward is a deterministic and smooth (e.g. Lipschitz) function of the arms. This setting is different from ours, since our assumption is stochastic and does not consider regularities of the mean-reward w.r.t. the arms. However, if we choose an arm-pulling strategy which consists in selecting the arms randomly, then our setting encompasses continuum-armed bandits. For example, consider the domain [0, 1]^d and a mean-reward function μ assumed to be locally equivalent to a Hölder function (of order α ∈ [0, +∞)) around any maximum x* (the number of maxima is assumed to be finite), i.e.

    μ(x*) − μ(x) = Θ(‖x* − x‖^α)  when x → x*.    (14)

Pulling an arm X randomly according to the Lebesgue measure on [0, 1]^d, we have: P(μ(X) > μ* − ε) = Θ(P(‖X − x*‖^α < ε)) = Θ(ε^{d/α}), for ε → 0. Thus our assumption (1) holds with β = d/α, and our results say that if μ* = 1, we have ERn = Õ(n^{β/(1+β)}) = Õ(n^{d/(α+d)}).
For d = 1, under the assumption that μ is α-Hölder (i.e. |μ(x) − μ(y)| ≤ c ‖x − y‖^α for 0 < α ≤ 1), [6] provides upper and lower bounds on the regret: Rn = Θ(n^{(α+1)/(2α+1)}). Our result gives ERn = Õ(n^{1/(α+1)}), which is better for all values of α. The reason for this apparent contradiction is that the lower bound in [6] is obtained by the construction of a very irregular function, which actually does not satisfy our local assumption (14).
Now, under assumption (14) for any α > 0 (around a finite set of maxima), [4] provides the rate ERn = Õ(√n). Our result gives the same rate when μ* < 1, but in the case μ* = 1 we obtain the improved rate ERn = Õ(n^{1/(α+1)}), which is better whenever α > 1 (because we are able to exploit the low variance of the good arms). Note that, like our algorithm, the algorithms in [4] as well as in [6] make no explicit use (in the procedure) of the smoothness of the function. They just use a "uniform" discretization of the domain.
On the other hand, the zooming algorithm of [7] adapts to the smoothness of μ (more arms are sampled in areas where μ is high). For any dimension d, they obtain ERn = Õ(n^{(d′+1)/(d′+2)}), where d′ ≤ d is their "zooming dimension". Under assumption (14) we deduce d′ = ((α−1)/α) d using the Euclidean distance as metric, thus their regret is ERn = Õ(n^{(d(α−1)+α)/(d(α−1)+2α)}). For locally quadratic functions (i.e. α = 2), their rate is Õ(n^{(d+2)/(d+4)}), whereas ours is Õ(n^{d/(2+d)}). Again, we have a smaller regret, although we do not use the smoothness of μ in our algorithm. Here the reason is that the zooming algorithm does not make full use of the fact that the function is locally quadratic (it considers a Lipschitz property only). However, in the case α < 1, our rates are worse than algorithms specifically designed for continuum-armed bandits.
Hence, the comparison between the many-armed and continuum-armed bandit settings is not easy because of the difference in nature of the basic assumptions. Our setting is an alternative to the continuum-armed bandit setting which does not require the existence of an underlying metric space in which the mean-reward function would be smooth. Our assumption (1) naturally deals with possibly very complicated functions whose maxima may be located in any part of the space. For continuum-armed bandit problems in which there are relatively many near-optimal arms, our algorithm will also be competitive with the specifically designed continuum-armed bandit algorithms. This result matches the intuition that in such cases, a random selection strategy will perform well.
To conclude, our contributions are: (i) Compared to previous results on many-armed bandits, our setting allows general mean-reward distributions for the arms, under a simple assumption on the probability of pulling near-optimal arms. (ii) We show that, for infinitely many-armed bandits, we need much less exploration of each arm than for finite-armed bandits (the log term may be replaced by log log). (iii) Our variant of the UCB algorithm, making use of the variance estimate, enables higher rates in cases when the variance of the near-optimal arms is small. (iv) We propose the UCB-AIR algorithm, which is anytime, taking advantage of an arm-increasing rule. (v) We provide a lower bound matching the upper bound (up to a logarithmic factor) in the case β ≥ 1 or μ* = 1.
References
[1] R. Agrawal. The continuum-armed bandit problem. SIAM J. Control and Optimization, 33:1926–1951, 1995.
[2] J.-Y. Audibert, R. Munos, and C. Szepesvári. Tuning bandit algorithms in stochastic environments. In M. Hutter, R. A. Servedio, and E. Takimoto, editors, ALT, volume 4754 of Lecture Notes in Computer Science, pages 150–165. Springer, 2007.
[3] P. Auer, N. Cesa-Bianchi, and P. Fischer. Finite-time analysis of the multiarmed bandit problem. Machine Learning, 47(2/3):235–256, 2002.
[4] P. Auer, R. Ortner, and C. Szepesvári. Improved rates for the stochastic continuum-armed bandit problem. 20th COLT, San Diego, CA, USA, 2007.
[5] D. A. Berry, R. W. Chen, A. Zame, D. C. Heath, and L. A. Shepp. Bandit problems with infinitely many arms. The Annals of Statistics, 25(5):2103–2116, 1997.
[6] R. Kleinberg. Nearly tight bounds for the continuum-armed bandit problem. In NIPS-2004, 2004.
[7] R. Kleinberg, A. Slivkins, and E. Upfal. Multi-armed bandit problems in metric spaces. In Proceedings of the 40th ACM Symposium on Theory of Computing, 2008.
[8] T. L. Lai and H. Robbins. Asymptotically efficient adaptive allocation rules. Advances in Applied Mathematics, 6:4–22, 1985.
[9] O. Teytaud, S. Gelly, and M. Sebag. Anytime many-armed bandit. Conférence francophone sur l'Apprentissage automatique (CAp), Grenoble, France, 2007.
|
2,705 | 3,453 | Bayesian Synchronous Grammar Induction
Phil Blunsom, Trevor Cohn, Miles Osborne
School of Informatics, University of Edinburgh
10 Crichton Street, Edinburgh, EH8 9AB, UK
{pblunsom,tcohn,miles}@inf.ed.ac.uk
Abstract
We present a novel method for inducing synchronous context free grammars
(SCFGs) from a corpus of parallel string pairs. SCFGs can model equivalence
between strings in terms of substitutions, insertions and deletions, and the reordering of sub-strings. We develop a non-parametric Bayesian model and apply it to a
machine translation task, using priors to replace the various heuristics commonly
used in this field. Using a variational Bayes training procedure, we learn the
latent structure of translation equivalence through the induction of synchronous
grammar categories for phrasal translations, showing improvements in translation
performance over maximum likelihood models.
1 Introduction
A recent trend in statistical machine translation (SMT) has been the use of synchronous grammar
based formalisms, permitting polynomial algorithms for exploring exponential forests of translation
options. Current state-of-the-art synchronous grammar translation systems rely upon heuristic relative frequency parameter estimates borrowed from phrase-based machine translation [1, 2]. In this
work we draw upon recent Bayesian models of monolingual parsing [3, 4] to develop a generative
synchronous grammar model of translation using a hierarchical Dirichlet process (HDP) [5].
There are two main contributions of this work. The first is that we include sparse priors over the
model parameters, encoding the intuition that source phrases will have few translations, and also addressing the problem of overfitting when using long multi-word translation pairs. Previous models
have relied upon heuristics to implicitly bias models towards such distributions [6]. In addition, we
investigate different priors based on standard machine translation models. This allows the performance benefits of these models to be combined with a principled estimation procedure.
Our second contribution is the induction of categories for the synchronous grammar using a HDP
prior. Such categories allow the model to learn the latent structure of translational equivalence between strings, such as a preference to reorder adjectives and nouns when translating from French to English, or to encode that a phrase pair should be used at the beginning or end of a sentence. Automatically induced non-terminal symbols give synchronous grammar models increased power over single non-terminal systems such as [2], while avoiding the problems of relying on noisy domain-specific parsers, as in [7]. As the model is non-parametric, the HDP prior will provide a bias towards
parameter distributions using as many, or as few, non-terminals as necessary to model the training
data. Following [3] we optimise a truncated variational bound on the true posterior distribution.
We evaluate the model on both synthetic data, and the real task of translating from Chinese to
English, showing improvements over a maximum likelihood estimate (MLE) model. We focus
on modelling the generation of a translation for a source sentence, putting aside for further work
integration with common components of a state-of-the-art translation system, such as a language
model and minimum error rate training [6].
While we are not aware of any previous attempts to directly induce synchronous grammars with
more than a single category, a number of generatively trained machine translation models have been
Figure 1: An example SCFG derivation from a Chinese source sentence which yields the English sentence: "Standing tall on Taihang Mountain is the Monument to the Hundred Regiments Offensive." (Cross-bars indicate that the child nodes have been reordered in the English target.)
proposed. [8] described the ITG subclass of SCFGs and performed many experiments using MLE
training to induce translation models on small corpora. Most subsequent work with ITG grammars
has focused on the sub-task of word alignment [9], rather than actual translation, and has continued
to use MLE trained models. A notable recent exception is [10] who used Dirichlet priors to smooth
an ITG alignment model. Our results clearly indicate that MLE models considerably overfit when
used to estimate synchronous grammars, while the judicious use of priors can alleviate this problem.
This result raises the prospect that many MLE trained models of translation (e.g. [7, 11, 12]),
previously dismissed for under-performing heuristic approaches, should be revisited.
2 Synchronous context free grammar
A synchronous context free grammar (SCFG, [13]) describes the generation of pairs of strings.
A string pair is generated by applying a series of paired context-free rewrite rules of the form X → ⟨γ, α, ∼⟩, where X is a non-terminal, γ and α are strings of terminals and non-terminals, and ∼ specifies a one-to-one alignment between the non-terminals in γ and α. In the context of SMT, by
specifies a one-to-one alignment between non-terminals in and ?. In the context of SMT, by
assigning the source and target languages to the respective sides of a SCFG it is possible to describe
translation as the process of parsing the source sentence, while generating the target translation [2].
In this paper we only consider binary normal-form SCFGs, which allow productions to rewrite as either a pair of non-terminals or a pair of non-empty terminal strings (these may span
multiple words). Such grammars are equivalent to the inversion transduction grammars presented in
[8]. Note however that our approach is general and could be used with other synchronous grammar
transducers (e.g., [7]). The binary non-terminal productions can specify that the order of the child
non-terminals is the same in both languages (a monotone production), or is reversed (a reordering
production). Monotone and reordering rules are written:
    Z → ⟨X_1 Y_2 , X_1 Y_2⟩    and    Z → ⟨X_1 Y_2 , Y_2 X_1⟩,
respectively, where X, Y and Z are non-terminals and the indices denote the alignment.
Without loss of generality, here we add the restriction that non-terminals on the source and target
sides of the grammar must have the same category. Although conceptually simple, binary normal-form SCFGs can still represent a wide range of linguistic phenomena required for translation [8].
Figure 1 shows an example derivation for Chinese to English. The grammar in this example has
non-terminals A and B which distinguish between translation phrases which permit re-orderings.
3 Generative Model
A sequence of SCFG rule applications which produces both a source and a target sentence is referred
to as a derivation, denoted z. The generative process of a derivation in our model is described in
Table 1. First a start symbol, z1 , is drawn, followed by its rule type. This rule type determines
if the symbol will rewrite as a source-target translation pair, or a pair of non-terminals with either
monotone or reversed order. The process then recurses to rewrite each pair of child non-terminals.
    β | γ ~ GEM(γ)                          (Draw top-level constituent prior distribution)
    θ^S | α^S, β ~ DP(α^S, β)               (Draw start-symbol distribution)
    θ^T_z | α^Y ~ Dirichlet(α^Y)            (Draw rule-type parameters)
    θ^M_z | α^M, β ~ DP(α^M, ββ^T)          (Draw monotone binary production parameters)
    θ^R_z | α^R, β ~ DP(α^R, ββ^T)          (Draw reordering binary production parameters)
    θ^E_z | α^E, P_0 ~ DP(α^E, P_0)         (Draw emission production parameters)

    z_1 | θ^S ~ Multinomial(θ^S)            (First draw the start symbol)
    For each node i in the synchronous derivation z with category z_i:
        t_i | θ^T_{z_i} ~ Multinomial(θ^T_{z_i})                      (Draw a rule type)
        if t_i = Emission:
            ⟨e, f⟩ | θ^E_{z_i} ~ Multinomial(θ^E_{z_i})               (Draw source and target phrases)
        if t_i = Monotone Production:
            ⟨z_l z_r , z_l z_r⟩ | θ^M_{z_i} ~ Multinomial(θ^M_{z_i})  (Draw left and right (source) child constituents)
        if t_i = Reordering Production:
            ⟨z_l z_r , z_r z_l⟩ | θ^R_{z_i} ~ Multinomial(θ^R_{z_i})  (Draw left and right (source) child constituents)

Table 1: Hierarchical Dirichlet process model of the production of a synchronous tree from a SCFG.
This continues until no non-terminals are remaining, at which point the derivation is complete and the source and target sentences can be read off. When expanding a production, each decision is drawn from a multinomial distribution specific to the non-terminal, z_i. This allows different non-terminals to rewrite in different ways: as an emission, reordering or monotone production. The prior distribution for each binary production is parametrised by β, the top-level stick-breaking weights, thereby ensuring that each production draws its children from a shared inventory of category labels. The parameters for each multinomial distribution are themselves drawn from their corresponding prior. The hyperparameters, γ, α^S, α^Y, α^M, α^R, and α^E, encode prior knowledge about the sparsity of each distribution. For instance, we can encode a preference towards longer or shorter derivations using α^Y, and a preference for sparse or dense translation lexicons with α^E. To simplify matters we assume a single hyperparameter for productions, i.e. α^P = α^S = α^M = α^R. In addition to allowing for the incorporation of prior knowledge about sparsity, the priors have been chosen to be conjugate to the multinomial distribution. In the following sections we describe and motivate our choices for each one of these distributions.
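The derivation-sampling half of Table 1 can be written as a short recursion. The sketch below assumes the parameter vectors θ have already been drawn from their priors (the GEM, DP and Dirichlet draws are elided), and the dictionary representation, helper names, and the max_depth guard are our own devices, not part of the model.

```python
import random

def sample_derivation(theta_S, theta_T, theta_M, theta_R, theta_E, max_depth=40):
    """Sample one (source, target) sentence pair from fixed HDP-SCFG parameters.

    theta_S: {category: prob}; theta_T[z] = (p_emit, p_mono, p_reorder);
    theta_M[z], theta_R[z]: {(z_left, z_right): prob};
    theta_E[z]: {(source_phrase, target_phrase): prob}.
    """
    def draw(dist):
        r, acc = random.random(), 0.0
        for outcome, p in dist.items():
            acc += p
            if r <= acc:
                return outcome
        return outcome  # guard against floating-point rounding

    def expand(z, depth):
        p_emit, p_mono, _ = theta_T[z]
        u = random.random()
        if u < p_emit or depth >= max_depth:        # Emission
            e, f = draw(theta_E[z])
            return [e], [f]
        monotone = u < p_emit + p_mono
        zl, zr = draw(theta_M[z] if monotone else theta_R[z])
        el, fl = expand(zl, depth + 1)
        er, fr = expand(zr, depth + 1)
        # Source order is always left-then-right; only the target side reorders.
        return el + er, (fl + fr) if monotone else (fr + fl)

    return expand(draw(theta_S), 0)
```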
3.1 Rule type distribution

The rule type distribution determines the relative likelihood of generating a terminal string pair, a monotone production, or a reordering. Synchronous grammars that allow multiple words to be emitted at the leaves of a derivation are prone to focusing probability mass on only the longest translation pairs, i.e. if a training set sentence pair can be explained by many short translation pairs or a few long ones, the maximum likelihood solution will be to use the longest pairs. This issue is manifested by the rule type distribution assigning a high probability to emissions versus either of the binary productions, resulting in short flat derivations with few productions. We can counter this tendency by assuming a prior distribution that allows us to temper the model's preference for short derivations with large translation pairs. We do so by setting the concentration parameter, α^Y, to a number greater than one, which smooths the rule type distribution.
3.2 Emission distribution
The Dirichlet process prior on the terminal emission distribution serves two purposes. Firstly the
prior allows us to encode the intuition that our model should have few translation pairs. The translation pairs in our system are induced from noisy data and thus many of them will be of little use.
Therefore a sparse prior should lead to these noisy translation pairs being assigned probabilities
close to zero. Secondly, the base distribution P0 of the Dirichlet process can be used to include
sophisticated prior distributions over translation pairs from other popular models of translation. The
two structured priors we investigate in this work are IBM model 1, and the relative frequency count
estimators from phrase based translation:
IBM Model 1 (P_0^{M1}) IBM Model 1 [14] is a word-based generative translation model that assigns a joint probability to a source and target translation pair. The model is based on a noisy channel in which we decompose the probability of f given e from the language model probability of e. The conditional model assumes a latent alignment from words in e to those in f, and that the word-to-word translation probabilities are independent:

    P_0^{M1}(f, e) = P^{M1}(f | e) × P(e) = P(e) × (1/(|e| + 1)^{|f|}) ∏_{j=1}^{|f|} ∑_{i=0}^{|e|} p(f_j | e_i),

where e_0 represents word insertions. We use a unigram language model for the probability P(e), and train the parameters p(f_j | e_i) using a variational approximation, similar to that described in Section 3.4.
Model 1 allows us to assign a prior probability to each translation pair in our model. This prior
suggests that lexically similar translation pairs should have similar probabilities. For example, if
the French-English pairs (chapeau, cap) and (rouge, red) both have high probability, then the pair
(chapeau rouge, red cap) should also.
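As an illustration, the joint Model 1 prior above can be scored directly from the formula; p_trans, unigram_lm and the zero-probability guard below are hypothetical stand-ins for quantities the paper trains variationally.

```python
import math

def model1_log_prior(f_words, e_words, p_trans, unigram_lm):
    """log P0^{M1}(f, e) for one phrase pair, per the equation above.

    p_trans[(f_j, e_i)]: word translation probability p(f_j | e_i), where
    e_i = None stands for the empty word e_0 (insertion);
    unigram_lm(e_words): log P(e) under a unigram language model.
    """
    log_p = unigram_lm(e_words) - len(f_words) * math.log(len(e_words) + 1)
    for fj in f_words:
        s = sum(p_trans.get((fj, ei), 0.0) for ei in [None] + list(e_words))
        log_p += math.log(max(s, 1e-12))   # guard against unseen word pairs
    return log_p
```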
Relative frequency (P_0^{RF}) Most statistical machine translation models currently in use estimate the probabilities for translation pairs using a simple relative frequency estimator. Under this model, the joint probability of a translation pair is the number of times the source was observed aligned to the target in the word-aligned corpus, normalised by the total number of observed pairs:

    P_0^{RF}(f, e) = C(f, e) / C(·, ·),

where C(·, ·) is the total number of translation pair alignments observed. Although this estimator doesn't take into account any generative process for how the translation pairs were observed, and by extension of the arguments for tree substitution grammars is biased and inconsistent [15], it has proved effective in many state-of-the-art translation systems.¹
3.3 Non-terminal distributions

We employ a structured prior for binary production rules, inspired by similar approaches in monolingual grammar induction [3, 4]. The marginal distribution over non-terminals, β, is drawn from a stick-breaking prior [5]. This generates an infinite vector of scalars which sum to one and whose expected values decrease geometrically, with the rate of decay controlled by γ. The parameters of the start-symbol distribution are drawn from a Dirichlet process parametrised by the stick-breaking weights, β. In addition, both the monotone and reordering production parameters are drawn from a Dirichlet process parametrised by the matrix of expectations for each pair of non-terminals, ββ^T, assuming independence in the prior. This allows the model to prefer grammars with few non-terminal labels and where each non-terminal has a sparse distribution over productions.
3.4 Inference

Previous work with monolingual HDP-CFG grammars has employed either Gibbs sampling [4] or
variational Bayes [3] approaches to inference. In this work we follow the mean-field approximation
presented in [16, 3], truncating the top-level stick-breaking prior on the non-terminals and optimising
a variational bound on the probability of the training sample. The mean-field approach offers better
scaling and convergence properties than a Gibbs sampler, at the expense of increased approximation.
First we start with our objective, the likelihood of the observed string pairs, x = {(e, f)}:

    log p(x) = log ∫ dθ ∑_z p(θ) p(x, z|θ) ≥ ∫ dθ ∑_z q(θ, z) log [ p(θ) p(x, z|θ) / q(θ, z) ],

¹Current translation systems more commonly use the conditional, rather than joint, estimator.
where θ = (β, θ^S, θ^M, θ^R, θ^E, θ^T) are our model parameters and z are the hidden derivations. We bound the above using Jensen's inequality to move the logarithm (a convex function) inside the integral and sum, and introduce the mean-field distribution q(θ, z). Assuming this distribution factorises over the model parameters and latent variables, q(θ, z) = q(θ)q(z),

    log p(x) ≥ ∫ dθ q(θ) ( log [ p(θ) / q(θ) ] + ∑_z q(z) log [ p(x, z|θ) / q(z) ] ) = F(q(θ), q(z)).
Upon taking the functional partial derivatives of F(q(θ), q(z)) and equating them to zero, we obtain sub-normalised summary weights for each of the factorised variational distributions: W_i = exp{E_{q(θ)}[log θ_i]}. For the monotone and reordering distributions these become:

    W^M_z(z_l, z_r) = exp{Ψ( C(z → ⟨z_l z_r , z_l z_r⟩) + α^P β_{z_l} β_{z_r} )} / exp{Ψ( C(z → ⟨∗ ∗ , ∗ ∗⟩) + α^P )}

    W^R_z(z_l, z_r) = exp{Ψ( C(z → ⟨z_l z_r , z_r z_l⟩) + α^P β_{z_l} β_{z_r} )} / exp{Ψ( C(z → ⟨∗ ∗ , ∗ ∗⟩) + α^P )},

where Ψ is the digamma function (arising from E_{q}[log θ] under a Dirichlet) and C(z → ·) is the expected count of rewriting symbol z using the given production. The starred rewrites in the denominators indicate a sum over any monotone or reordering production, respectively. The weights for the rule-type and emission distributions are defined similarly. The
variational training cycles between optimising the q(θ) distribution by re-estimating the weights W and the stick-breaking prior β, and then using these estimates, with the inside-outside dynamic programming algorithm, to calculate the q(z) distribution. Optimising the top-level stick-breaking weights has no closed-form solution, as a dependency is induced between the GEM prior and the production distributions. [3] advocate using a gradient projection method to locally optimise this function. As our truncation levels are small, we instead use Monte-Carlo sampling to estimate a global optimum.
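For concreteness, the monotone weight update above amounts to a digamma transform of smoothed expected counts. The sketch below uses SciPy's digamma for Ψ and assumes the expected counts have already been accumulated by inside-outside; the datastructures and names are ours, and productions with zero expected count are simply omitted.

```python
import math
from scipy.special import digamma   # Psi

def monotone_weights(counts, beta, alpha_p):
    """Sub-normalised weights W^M_z per the update above.

    counts[z][(zl, zr)]: expected monotone rule counts from inside-outside;
    beta: truncated top-level stick-breaking weights, indexed by category.
    """
    weights = {}
    for z, c in counts.items():
        # Denominator: digamma of the total expected count plus the total
        # prior mass alpha_p (the base measure beta * beta^T sums to one).
        denom = math.exp(digamma(sum(c.values()) + alpha_p))
        weights[z] = {
            (zl, zr): math.exp(digamma(n + alpha_p * beta[zl] * beta[zr])) / denom
            for (zl, zr), n in c.items()
        }
    return weights
```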
3.5 Prediction
The predictive distribution under our Bayesian model is given by:

    p(z|x, f) = ∫ dθ p(θ|x) p(z|f, θ) ≈ ∫ dθ q(θ) p(z|f, θ) ≥ exp{ ∫ dθ q(θ) log p(z|f, θ) },
where x is the training set of parallel sentence pairs, f is a testing source sentence and z its derivation.2 Calculating the predictive probability even under the variational approximation is intractable,
therefore we bound the approximation following [16]. The bound can then be maximised to find the
best derivation, z, with the Viterbi algorithm, using the sub-normalised W parameters from the last
E step of variational Bayes training as the model parameters.
4 Evaluation
We evaluate our HDP-SCFG model on both synthetic and real-world translation tasks.
Recovering a synthetic grammar This experiment investigates the ability of our model to recover
a simple synthetic grammar, using the minimum number of constituent categories. Ten thousand
training pairs were generated from the following synthetic grammar, with uniform weights, which
includes both reordering and ambiguous terminal distributions:
    S → ⟨A_1 A_2 , A_1 A_2⟩        A → ⟨a, a⟩ | ⟨b, b⟩ | ⟨c, c⟩
    S → ⟨B_1 B_2 , B_2 B_1⟩        B → ⟨d, d⟩ | ⟨e, e⟩ | ⟨f, f⟩
    S → ⟨C_1 C_2 , C_1 C_2⟩        C → ⟨g, g⟩ | ⟨h, h⟩ | ⟨i, i⟩
²The derivation specifies the translation. Alternatively we could bound the likelihood of a translation, marginalising out the derivation. However, this bound cannot be maximised tractably when e is unobserved.
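Such training pairs are straightforward to generate; a sketch (with names of our own choosing):

```python
import random

# Each S rule rewrites as two children of one category; the B rule reorders
# the target side. Each child emits the same terminal on both sides.
S_RULES = [("A", False), ("B", True), ("C", False)]   # (child category, reorder?)
EMISSIONS = {"A": ["a", "b", "c"], "B": ["d", "e", "f"], "C": ["g", "h", "i"]}

def sample_pair():
    """Draw one (source, target) string pair from the synthetic grammar above."""
    category, reorder = random.choice(S_RULES)
    left = random.choice(EMISSIONS[category])
    right = random.choice(EMISSIONS[category])
    source = [left, right]
    target = [right, left] if reorder else [left, right]
    return source, target

training = [sample_pair() for _ in range(10000)]
```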
Figure 2: Synthetic grammar experiments; the two panels plot the posterior mass assigned by the HDP and MLE models to each of the five categories, for binary productions (left) and emissions (right). The HDP model correctly allocates a single binary production non-terminal and three equally weighted emission non-terminals.
                        Training             Development          Test
                        Chinese   English    Chinese   English    Chinese   English
    Sentences                33164                 500                  506
    Segments/Words      253724    279104     3464      3752       3784      3823
    Av. Sentence Length      7         8        6         7          7         7
    Longest Sentence        41        45       58        62         61        56

Table 2: Chinese to English translation corpus statistics.
Figure 2 shows the emission and production distributions produced by the HDP-SCFG model,³ as well as by an EM-trained maximum likelihood (MLE) model. The variational inference for the HDP model was truncated at five categories; likewise, the MLE model was trained with five categories. The hierarchical model finds the correct grammar. It allocates category 2 to the S category, giving it a 2/3 probability of generating a monotone production (A, C), versus 1/3 for a reordering (B). For the emission distribution the HDP model assigns category 1 to A, 3 to B and 5 to C, each of which has a posterior probability of 1/3. The stick-breaking prior biases the model towards using a small set of categories, and therefore the model correctly uses only four categories, assigning zero posterior probability mass to category 4.

The MLE model has no bias for small grammars and therefore uses all available categories to model the data. For the production distribution it creates two categories with equal posteriors to model the S category, while for emissions the model collapses categories A and C into category 1, and splits category B over 3 and 5. This grammar is more expressive than the target grammar, over-generating but including the target grammar as a subset. The particular grammar found by the MLE model is dependent on the (random) initialisation and the fact that the EM algorithm can only find a local maximum; however, it will always use all available categories to model the data.
Chinese-English machine translation The real-world translation experiment aims to determine
whether the model can learn and generalise from a noisy large-scale parallel machine translation
corpus, and provide performance benefits on the standard evaluation metrics. We evaluate our model
on the IWSLT 2005 Chinese to English translation task [17], using the 2004 test set as development
data for tuning the hyperparameters. The statistics for this data are presented in Table 2. The training
data made available for this task consisted of 40k pairs of transcribed utterances, drawn from the
travel domain. The translation phrase pairs that form the base of our grammar are induced using the
standard alignment and translation phrase pair extraction heuristics used in phrase-based translation
models [6]. As these heuristics aren't based on a generative model, and don't guarantee that the
target translation will be reachable from the source, we discard those sentence pairs for which we
cannot produce a derivation, leaving 33,164 sentences for training. Model performance is evaluated
using the standard Bleu-4 metric [18], which measures average n-gram precision, n ≤ 4.
³No structured P_0 was used in this model; rather, a simple Dirichlet prior with uniform α^E was employed for the emission distribution.
Figure 3: Tuning the Dirichlet α parameters for the emission and rule type distributions (development set); both panels plot development-set BLEU (%), against α^E (left, 0.1 to 1.0, linear scale) and α^Y (right, 10^0 to 10^6, log scale).
    Single Category:   MLE    Uniform P_0    P_0 = M1    P_0 = RF
                       32.9   35.5           37.1        38.7

Table 3: Test results for the model with a single non-terminal category and various emission priors (BLEU).
    5 Categories:   MLE    P_0 = RF
                    29.9   38.8

Table 4: Test set results for the hierarchical model with the variational distribution truncated at five non-terminal categories (BLEU).
We first evaluate our model using a grammar with a single non-terminal category (rendering the
hierarchical prior redundant) and vary the prior P0 used for the emission parameters. For this model
we investigate the effect that the emission and rule-type priors have on translation performance.
Figure 3 graphs the variation in Bleu score versus the two free hyperparameters for the model with a
simple uniform P0 , evaluated on the development corpus. Both graphs show a convex relationship,
with α^Y being considerably more peaked. For the α^E hyperparameter the optimal value is 0.75, indicating that the emission distribution benefits from a slightly sparse distribution, but not far from the uniform value of 1.0. The sharp curve for the α^Y rule-type distribution hyperparameter confirms
our earlier hypothesis that the model requires considerable smoothing in order to force it to place
probability mass on long derivations rather than simply placing it all on the largest translation pairs.
The optimal hyperparameter values on the development data for the two structured emission distribution priors, Model 1 (M 1 ) and relative frequency (RF ), also provide insight into the underlying
models. The M1 prior has a heavy bias towards smaller translation pairs, countering the model's inherent bias. Thus the optimal value for the α^Y parameter is 1.0, suggesting that the two biases balance. Conversely, the RF prior is biased towards larger translation pairs, reinforcing the model's bias; thus a very large value (10^6) for the α^Y parameter gives optimal development set performance.
Table 3 shows the performance of the single category models with each of the priors on the test set.4
The results show that all the Bayesian models outperform the MLE, and that non-uniform priors
help considerably, with the RF prior obtaining the highest score.
In Table 4 we show the results of taking the best performing RF model from the previous experiment and increasing the variational approximation's truncation limit to five non-terminals. The α^P was set to 1.0, corresponding to a sparse distribution over binary productions.⁵ Here we see that the
HDP model improves slightly over the single category approximation. However the baseline MLE
model uses the extra categories to overfit the training data significantly, resulting in much poorer
generalisation performance.
⁴For comparison, a state-of-the-art SCFG decoder based on the heuristic estimator, incorporating a trigram language model and using minimum error rate training, achieves a BLEU score of approximately 46.
⁵As there are five non-terminal categories, an α^P = 5² would correspond to a uniform distribution.
5 Conclusion
We have proposed a Bayesian model for inducing synchronous grammars and demonstrated its efficacy on both synthetic and real machine translation tasks. The sophisticated priors over the model's
parameters address limitations of MLE models, most notably overfitting, and effectively model the
nature of the translation task. In addition, the incorporation of a hierarchical prior opens the door to
the unsupervised induction of grammars capable of representing the latent structure of translation.
Our Bayesian model of translation using synchronous grammars provides a basis upon which more
sophisticated models can be built, enabling a move away from the current heuristically engineered
translation systems.
References
[1] Andreas Zollmann and Ashish Venugopal. Syntax augmented machine translation via chart parsing. In Proc. of the HLT-NAACL 2006 Workshop on Statistical Machine Translation, New York City, June 2006.
[2] David Chiang. Hierarchical phrase-based translation. Computational Linguistics, 33(2):201–228, 2007.
[3] Percy Liang, Slav Petrov, Michael Jordan, and Dan Klein. The infinite PCFG using hierarchical Dirichlet processes. In Proc. of the 2007 Conference on Empirical Methods in Natural Language Processing (EMNLP-2007), pages 688–697, Prague, Czech Republic, 2007.
[4] Jenny Rose Finkel, Trond Grenager, and Christopher D. Manning. The infinite tree. In Proc. of the 45th Annual Meeting of the ACL (ACL-2007), Prague, Czech Republic, 2007.
[5] Y. W. Teh, M. I. Jordan, M. J. Beal, and D. M. Blei. Hierarchical Dirichlet processes. Journal of the American Statistical Association, 101(476):1566–1581, 2006.
[6] Philipp Koehn, Franz Josef Och, and Daniel Marcu. Statistical phrase-based translation. In Proc. of the 3rd International Conference on Human Language Technology Research and 4th Annual Meeting of the NAACL (HLT-NAACL 2003), pages 81–88, Edmonton, Canada, May 2003.
[7] Michel Galley, Jonathan Graehl, Kevin Knight, Daniel Marcu, Steve DeNeefe, Wei Wang, and Ignacio Thayer. Scalable inference and training of context-rich syntactic translation models. In Proc. of the 44th Annual Meeting of the ACL and 21st International Conference on Computational Linguistics (COLING/ACL-2006), pages 961–968, Sydney, Australia, July 2006.
[8] Dekai Wu. Stochastic inversion transduction grammars and bilingual parsing of parallel corpora. Computational Linguistics, 23(3):377–403, 1997.
[9] Colin Cherry and Dekang Lin. Inversion transduction grammar for joint phrasal translation modeling. In Proc. of the HLT-NAACL Workshop on Syntax and Structure in Statistical Translation (SSST 2007), Rochester, USA, 2007.
[10] Hao Zhang, Chris Quirk, Robert C. Moore, and Daniel Gildea. Bayesian learning of non-compositional phrases with synchronous parsing. In Proc. of the 46th Annual Conference of the Association for Computational Linguistics: Human Language Technologies (ACL-08:HLT), pages 97–105, Columbus, Ohio, June 2008.
[11] Daniel Marcu and William Wong. A phrase-based, joint probability model for statistical machine translation. In Proc. of the 2002 Conference on Empirical Methods in Natural Language Processing (EMNLP-2002), pages 133–139, Philadelphia, July 2002. Association for Computational Linguistics.
[12] John DeNero, Dan Gillick, James Zhang, and Dan Klein. Why generative phrase models underperform surface heuristics. In Proc. of the HLT-NAACL 2006 Workshop on Statistical Machine Translation, pages 31–38, New York City, June 2006.
[13] Philip M. Lewis II and Richard E. Stearns. Syntax-directed transduction. J. ACM, 15(3):465–488, 1968.
[14] P. F. Brown, S. A. Della Pietra, V. J. Della Pietra, and R. L. Mercer. The mathematics of statistical machine translation: Parameter estimation. Computational Linguistics, 19(2):263–311, 1993.
[15] Mark Johnson. The DOP estimation method is biased and inconsistent. Computational Linguistics, 28(1):71–76, 2002.
[16] Matthew Beal. Variational Algorithms for Approximate Bayesian Inference. PhD thesis, The Gatsby Computational Neuroscience Unit, University College London, 2003.
[17] Matthias Eck and Chiori Hori. Overview of the IWSLT 2005 evaluation campaign. In Proc. of the International Workshop on Spoken Language Translation, Pittsburgh, October 2005.
[18] Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. Bleu: a method for automatic evaluation of machine translation. In Proc. of the 40th Annual Meeting of the ACL and 3rd Annual Meeting of the NAACL (ACL-2002), pages 311–318, Philadelphia, Pennsylvania, 2002.
|
2,706 | 3,454 | Predictive Indexing for Fast Search
Sharad Goel
Yahoo! Research
New York, NY 10018
[email protected]
John Langford
Yahoo! Research
New York, NY 10018
[email protected]
Alex Strehl
Yahoo! Research
New York, NY 10018
[email protected]
Abstract
We tackle the computational problem of query-conditioned search. Given a
machine-learned scoring rule and a query distribution, we build a predictive index by precomputing lists of potential results sorted based on an expected score
of the result over future queries. The predictive index datastructure supports an
anytime algorithm for approximate retrieval of the top elements. The general approach is applicable to webpage ranking, internet advertisement, and approximate
nearest neighbor search. It is particularly effective in settings where standard techniques (e.g., inverted indices) are intractable. We experimentally find substantial
improvement over existing methods for internet advertisement and approximate
nearest neighbors.
1 Introduction
The Problem. The objective of web search is to quickly return the set of most relevant web pages
given a particular query string. Accomplishing this task for a fixed query involves both determining
the relevance of potential pages and then searching over the myriad set of all pages for the most
relevant ones. Here we consider only the second problem. More formally, let Q ⊆ R^n be an input space, W ⊆ R^m a finite output space of size N, and f : Q × W → R a known scoring function. Given an input (search query) q ∈ Q, the goal is to find, or closely approximate, the top-k output objects (web pages) p_1, . . . , p_k in W (i.e., the top k objects as ranked by f(q, ·)).
The extreme speed constraint, often 100 ms or less, and the large number of web pages (N ≈ 10^10) make web search a computationally challenging problem. Even with perfect 1000-way parallelization on modern machines, there is far too little time to directly evaluate against every page when a particular query is submitted. This observation limits the applicability of machine-learning methods for building ranking functions. The question addressed here is: "Can we quickly return the highest scoring pages as ranked by complex scoring rules typical of learning algorithms?"
Predictive Indexing. We describe a method for rapidly retrieving the top elements over a large set
as determined by general scoring functions. The standard method for mitigating the computational
difficulties of search is to pre-process the data so that far less computation is necessary at runtime.
Taking the empirical probability distribution of queries into account, we pre-compute collections of
web pages that have a large expected score conditioned on the query falling into particular sets of
related queries {Q_i}. For example, we may pre-compute and store the list of web pages that have the highest average score when the query contains the phrase "machine learning". To yield a practical algorithm, these sets should form meaningful groups of pages with respect to the scoring function and query distribution. At runtime, we then optimize only over those collections of top-scoring web pages for sets Q_i containing the submitted query.
Our main contribution is optimizing the search index with respect to the query distribution. The empirical evidence presented shows that predictive indexing is an effective technique, making general
machine learning style prediction methods viable for quickly ranking over large numbers of objects.
The general methodology applies to other optimization problems as well, including approximate
nearest neighbor search.
In the remainder of Section 1 we describe existing solutions to large-scale search, and their applicability to general scoring functions. Section 2 describes the predictive indexing algorithm and covers
an example and lemma suggesting that predictive indexing has significant advantages over existing
techniques. We present empirical evaluation of the method in Section 3, using both proprietary web
advertising data and public data for nearest neighbor search.
1.1 Feature Representation
One concrete way to map web search into the general predictive index framework is to represent
both queries and pages as sparse binary feature vectors in a high-dimensional Euclidean space.
Specifically, we associate each word with a coordinate: A query (page) has a value of 1 for that
coordinate if it contains the word, and a value of 0 otherwise. We call this the word-based feature
representation, because each query and page can be summarized by a list of its features (i.e., words)
that it contains. The general predictive framework supports many other possible representations,
including those that incorporate the difference between words in the title and words in the body of
the web page, the number of times a word occurs, or the IP address of the user entering the query.
1.2 Related Work
Given the substantial importance of large-scale search, a variety of techniques have been developed
to address the rapid ranking problem. Past work that has referenced the query distribution includes
(Cheng et al., 2006; Chierichetti et al., 2008). Here we describe two commonly applied methods
related to the predictive index approach.
Fagin's Threshold Algorithm. Fagin's threshold algorithm (Fagin et al., 2003) supports the top-k
problem for linear scoring functions of the form f(q, p) = Σ_{i=1}^{n} q_i g_i(p), where q_i ∈ {0, 1} is the
i-th coordinate of the query q, and g_i : W → R are partial scores for pages as determined by the i-th
feature¹. For each query feature i, construct an ordered list L_i containing every web page, sorted
in descending order by their partial scores g_i(p). We refer to this as the projective order, since it
is attained by projecting the scoring rule onto individual coordinates. Given a query q, we evaluate
web pages in the lists L_i that correspond to features of q. The algorithm maintains two statistics,
upper and lower bounds on the score of the top-kth page, halting when these bounds cross. The
lower bound is the score of the kth best page seen so far; the upper bound is the sum of the partial
scores (i.e., g_i(p)) for the next-to-be-scored page in each list. Since the lists are ordered by the
partial scores, the upper threshold does in fact bound the score of any page yet to be seen.
The threshold algorithm is particularly effective when a query contains a small number of features,
facilitating fast convergence of the upper bound. In our experiments, we find that the halting condition is rarely satisfied within the imposed computational restrictions. One can, of course, simply
halt the algorithm when it has expended the computational budget (Fagin, 2002), which we refer to
as the Halted Threshold Algorithm.
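To make the procedure concrete, here is a minimal Python sketch of the halted variant; function and variable names are our own, and pages are assumed to be hashable, comparable ids:

import heapq

def halted_threshold_topk(query_feats, lists, g, f, q, k, budget):
    # lists[i]: pages sorted by descending partial score g(i, p)
    # g(i, p): partial score for feature i; f(q, p): full score of page p
    top, seen, used, depth = [], set(), 0, 0
    while used < budget:
        upper, advanced = 0.0, False
        for i in query_feats:
            if depth >= len(lists[i]):
                continue
            advanced = True
            p = lists[i][depth]
            upper += g(i, p)          # bounds any page not yet scored
            if p not in seen:
                seen.add(p)
                used += 1
                s = f(q, p)
                if len(top) < k:
                    heapq.heappush(top, (s, p))
                elif s > top[0][0]:
                    heapq.heapreplace(top, (s, p))
        lower = top[0][0] if len(top) == k else float('-inf')
        if not advanced or lower >= upper:   # TA halting condition
            break
        depth += 1
    return sorted(top, reverse=True)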
Inverted Indices. An inverted index is a datastructure that maps every page feature x to a list of
pages p that contain x. When a new query arrives, a subset of page features relevant to the query is
first determined. For instance, when the query contains "dog", the page feature set might be {"dog",
"canine", "collar", ...}. Note that a distinction is made between query features and page features, and
in particular, the relevant page features may include many more words than the query itself. Once a
set of page features is determined, their respective lists (i.e., inverted indices) are searched, and from
them the final list of output pages is chosen. One method for searching over these lists is to execute
Fagin?s threshold algorithm. Other methods, such as the ?Weighted-And? algorithm (Broder et al.,
2003), use one global order for pages in the lists and walk down the lists synchronously to compute
page scores. See (Zobel & Moffat, 2006) for an overview of inverted indices applied to web search.
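For comparison, a minimal inverted-index sketch; the mapping from a query to its relevant page features is application-specific and left to the caller:

from collections import defaultdict

def build_inverted_index(pages):
    # pages: dict page_id -> iterable of page features
    index = defaultdict(list)
    for pid, feats in pages.items():
        for x in feats:
            index[x].append(pid)
    return index

def candidate_pages(index, relevant_page_features):
    # union of the postings lists for the relevant page features
    out = set()
    for x in relevant_page_features:
        out.update(index.get(x, ()))
    return out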
Standard approaches based on inverted indices suffer from a shortcoming. The resulting algorithms
are efficient only when it is sufficient to search over a relatively small set of inverted indices for each
query. They require, for each query q, that there exists a small set² X_q of page features such that the
score of any page against q depends only on its intersection with X_q. In other words, the scoring
rule must be extremely sparse, with most words or features in the page having zero contribution to
the score for q. In Section 3.1, we consider a machine-learned scoring rule, derived from internet
advertising data, with the property that almost all page features have substantial influence on the
score for every query, making any straightforward approach based on inverted indices intractable.
Furthermore, algorithms that use inverted indices do not typically optimize the datastructure against
the query distribution, and our experiments suggest that doing so may be beneficial.

¹ More general monotone scoring functions (e.g., coordinate-wise product and max) are in fact supported; for clarity, however, we restrict to the linear case.
2 An Algorithm for Rapid Approximate Ranking
Suppose we are provided with a categorization of possible queries into related, potentially overlapping, sets. For example, these sets might be defined as "queries containing the word 'France',"
or "queries with the phrase 'car rental'." For each query set, the associated predictive index is an
ordered list of web pages sorted by their expected score for random queries drawn from that set. In
particular, we expect web pages at the top of the "France" list to be good, on average, for queries
containing the word "France." In contrast to an inverted index, the pages in the "France" list need not
themselves contain the word "France". To retrieve results for a particular query (e.g., "France car
rental"), we optimize only over web pages in the relevant, pre-computed lists. Note that the predictive index is built on top of an already existing categorization of queries, a critical, and potentially
difficult initial step. In the applications we consider, however, we find that predictive indexing works
well even when applied to naively defined query sets. Furthermore, in our application to approximate nearest neighbor search, we found predictive indexing to be robust to cover sets generated via
random projections whose size and shape were varied across experiments.
We represent queries and web pages as points in, respectively, Q ⊆ R^n and W ⊆ R^m. This setting
is general, but for the experimental application we consider n, m ≈ 10^6, with any given page or
query having about 10^2 non-zero entries (see Section 3.1 for details). Thus, pages and points are
typically sparse vectors in very high dimensional spaces. A coordinate may indicate, for example,
whether a particular word is present in the page/query, or more generally, the number of times that
word appears. Given a scoring function f : Q × W → R, and a query q, we attempt to rapidly find
the top-k pages p_1, ..., p_k. Typically, we find an approximate solution, a set of pages p̂_1, ..., p̂_k
that are among the top l for l ≥ k. We assume queries are generated from a probability distribution
D that may be sampled.
2.1 Predictive Indexing for General Scoring Functions
Consider a finite collection Q of sets Q_i ⊆ Q that cover the query space (i.e., Q ⊆ ∪_i Q_i). For each
Q_i, define the conditional probability distribution D_i over queries in Q_i by D_i(·) = D(· | Q_i), and
define f_i : W → R as f_i(p) = E_{q∼D_i}[f(q, p)]. The function f_i(p) is the expected score of the
web page p for the (related) queries in Q_i. The hope is that any page p has approximately the same
score for any query q ∈ Q_i. If, for example, Q_i is the set of queries that contain the word "dog", we
may expect every query in Q_i to score high against pages about dogs and to score low against those
pages not about dogs.

For each set of queries Q_i we pre-compute a sorted list L_i of pages p_{i1}, p_{i2}, ..., p_{iN} ordered in
descending order of f_i(p). At runtime, given a query q, we identify the query sets Q_i containing
q, and compute the scoring function f only on the restricted set of pages at the beginning of their
associated lists L_i. We search down these lists for as long as the computational budget allows.
In general, it is difficult to compute exactly the conditional expected scores of pages f_i(p). One can,
however, approximate these scores by sampling from the query distribution D. Algorithm 1 outlines
the construction of the sampling-based predictive indexing datastructure. Algorithm 2 shows how
the method operates at run time.
Note that in the special case where we cover Q with a single set, we end up with a global ordering
of web pages, independent of the query, which is optimized for the underlying query distribution.
² The size of these sets is typically on the order of 100 or smaller.
Algorithm 1 Construct-Predictive-Index(Cover Q, Dataset S)
  L_j[s] = 0 for all objects s and query sets Q_j
  for t random queries q ∼ D do
    for all objects s in the data set do
      for all query sets Q_j containing q do
        L_j[s] ← L_j[s] + f(q, s)
      end for
    end for
  end for
  for all lists L_j do
    sort L_j
  end for
  return {L}
Algorithm 2 Find-Top(query q, count k)
  i = 0
  top-k list V = ∅
  while time remains do
    for each query set Q_j containing q do
      s ← L_j[i]
      if f(q, s) > k-th best seen so far then
        insert s into ordered top-k list V
      end if
    end for
    i ← i + 1
  end while
  return V
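For concreteness, a direct Python transcription of the two algorithms above; the cover, the query sampler, and the scoring function are assumed to be supplied by the caller, and all names are our own:

from collections import defaultdict

def construct_predictive_index(cover_of, sample_query, objects, f, t):
    # cover_of(q): ids of the query sets Q_j containing q
    # sample_query(): draws a random query q ~ D; f(q, s): scoring function
    L = defaultdict(lambda: defaultdict(float))
    for _ in range(t):
        q = sample_query()
        for s in objects:
            score = f(q, s)
            for j in cover_of(q):
                L[j][s] += score
    # sort each list by accumulated (estimated expected) score, descending
    return {j: sorted(scores, key=scores.get, reverse=True)
            for j, scores in L.items()}

def find_top(q, cover_of, index, f, k, budget):
    best, used, i = [], 0, 0
    lists = [index[j] for j in cover_of(q) if j in index]
    while used < budget and any(i < len(L) for L in lists):
        for L in lists:
            if i < len(L):
                used += 1
                sc = f(q, L[i])
                if len(best) < k or sc > best[0][0]:
                    best.append((sc, L[i]))
                    best.sort()
                    best = best[-k:]
        i += 1
    return [s for _, s in sorted(best, reverse=True)]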
While this global ordering may not be effective in isolation, it could perhaps be used to order pages
in traditional inverted indices.
2.2 Discussion
We present an elementary example to help develop intuition for why we can sometimes expect
predictive indexing to improve upon projective datastructures such as those used in Fagin's threshold
algorithm. Suppose we have: two query features t1 and t2; three possible queries q1 = {t1},
q2 = {t2} and q3 = {t1, t2}; and three web pages p1, p2 and p3. Further suppose we have a simple
linear scoring function defined by

  f(q, p1) = I_{t1∈q} − I_{t2∈q}
  f(q, p2) = I_{t2∈q} − I_{t1∈q}
  f(q, p3) = 0.5 · I_{t2∈q} + 0.5 · I_{t1∈q}
where I is the indicator function. That is, pi is the best match for query qi , but p3 does not score
highly for either query feature alone. Thus, an ordered, projective datastructure would have
t1 ? {p1 , p3 , p2 }
t2 ? {p2 , p3 , p1 }.
Suppose, however, that we typically only see query q3 . In this case, if we know t1 is in the query,
we infer that t2 is likely to be in the query (and vice versa), and construct the predictive index
t1 ? {p3 , p1 , p2 }
t2 ? {p3 , p2 , p1 }.
On the high-probability event, namely query q3, we see that the predictive index outperforms the projective, query-independent index.
We expect predictive indices to generally improve on datastructures that are agnostic to the query
distribution. In the simple case of a single cover set (i.e., a global web page ordering) and when
we wish to optimize the probability of returning the highest-scoring object, Lemma 2.1 shows that
a predictive ordering is the best ordering relative to any particular query distribution.
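The effect in this toy example is easy to verify numerically; a small self-contained check, with all query mass on q3:

def f(q, p):
    i1, i2 = ('t1' in q), ('t2' in q)
    return {'p1': i1 - i2, 'p2': i2 - i1, 'p3': 0.5 * i1 + 0.5 * i2}[p]

q3 = {'t1', 't2'}
projective = ['p1', 'p3', 'p2']   # t1's list under the projective order
predictive = ['p3', 'p1', 'p2']   # t1's list under the predictive order
print([f(q3, p) for p in projective])   # [0, 1.0, 0]: best page comes second
print([f(q3, p) for p in predictive])   # [1.0, 0, 0]: best page comes first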
Lemma 2.1. Suppose we have a set of points S, a query distribution D, and a function f that scores
queries against points in S. Further assume that for each query q, there is a unique highest scoring
point H_q. For s ∈ S, let h(s) = Pr_{q∼D}(s = H_q), and let s_1, s_2, ..., s_N be ordered according to
h(s). For any fixed k,

  Pr_{q∼D}(H_q ∈ {s_1, ..., s_k}) = max_{permutations σ} Pr_{q∼D}(H_q ∈ {s_{σ(1)}, ..., s_{σ(k)}}).

Proof. For any ordering of points s_{σ(1)}, ..., s_{σ(k)}, the probability of the highest scoring point appearing in the top k entries equals Σ_{j=1}^{k} h(s_{σ(j)}). This sum is clearly maximized by ordering the
list according to h(·).
3 Empirical Evaluation
We evaluate predictive indexing for two applications: Internet advertising and approximate nearest
neighbor.
3.1 Internet Advertising
We present results on Internet advertising, a problem closely related to web search. We have obtained proprietary data, both testing and training, from an online advertising company. The data are
comprised of logs of events, where each event represents a visit by a user to a particular web page
p, from a set of web pages Q ⊆ R^n. From a large set of advertisements W ⊆ R^m, the commercial
system chooses a smaller, ordered set of ads to display on the page (generally around 4). The set of
ads seen and clicked by users is logged. Note that the role played by web pages has switched, from
result to query. The total number of ads in the data set is |W| ≈ 6.5 × 10^5. Each ad contains, on
average, 30 ad features, and a total of m ≈ 10^6 ad features are observed. The training data consist
of 5 million events (web page × ad displays). The total number of distinct web pages is 5 × 10^5.
Each page consists of approximately 50 page features, and a total of n ≈ 9 × 10^5 total page features
are observed.
We used a sparse feature representation (see Section 1.1) and trained a linear scoring rule f of the
form f(p, a) = Σ_{i,j} w_{i,j} p_i a_j, to approximately rank the ads by their probability of click. Here,
w_{i,j} are the learned weights (parameters) of the linear model. The search algorithms we compare
were given the scoring rule f , the training pages, and the ads W for the necessary pre-computations.
They were then evaluated by their serving of k = 10 ads, under a time constraint, for each page
in the test set. There was a clear separation of test and training. We measured computation time
in terms of the number of full evaluations by the algorithm (i.e., the number of ads scored against
a given page). Thus, the true test of an algorithm was to quickly select the most promising T ads
to fully score against the page, where T ∈ {100, 200, 300, 400, 500} was externally imposed and
varied over the experiments. These numbers were chosen to be in line with real-world computational
constraints.
We tested four methods: halted threshold algorithm (TA), as described in Section 1.2, two variants
of predictive indexing (PI-AVG and PI-DCG), and a fourth method, called best global ordering
(BO), which is a degenerate form of PI discussed in Section 2.1. An inverted index approach is
prohibitively expensive since almost all ad features have substantial influence on the score for every
web page (see Section 1.2).
PI-AVG and PI-DCG require a covering of the web page space. We used the natural covering suggested by the binary features: each page feature i corresponds to a cover set consisting of precisely
those pages p that contain i. The resulting datastructure is therefore similar to that maintained by
the TA algorithm: lists, for each page feature, containing all the ads. However, while TA orders ads
by partial score Σ_j w_{i,j} p_i a_j for each fixed page feature i, the predictive methods order by expected
score. PI-AVG sorts ad lists by expected score of f, E_{p∼D_i}[f(p, a)] = E_{p∼D}[f(p, a) | i ∈ p], conditioned on the page containing feature i. PI-DCG and BO optimize the expected value of a modified
scoring rule, DCG_f(p, a) = I_{r(p,a) ≤ 16} / log2(r(p, a) + 1), where r is the rank function and I is the
indicator function. Here, r(p, a) = j indicates that ad a has rank j according to f(p, a) over all ads
in W. BO stores a single list of all ads, sorted by expected DCG_f(p, a), while PI-DCG stores a list
for each page feature i sorted by E_{p∼D_i}[DCG_f(p, a)]. We chose this measure because:
1. Compared with using the average score of f, we empirically observe that expected DCG_f
greatly improves the performance of BO on these data.
2. It is related to "discounted cumulative gain", a common measure for evaluating search
results in the information retrieval literature (Järvelin & Kekäläinen, 2002).
3. Expected DCG_f is zero for many ads, enabling significant compression of the predictive
index.
4. Lemma 2.1 suggests ordering by the probability an ad is in the top 10. The DCG_f score is
a softer version of an indicator of the top 10.
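The modified rule defined above is cheap to evaluate once ranks are known; a minimal sketch (helper names are ours):

import math

def dcg_f(rank):
    # DCG_f(p, a) = I[r(p, a) <= 16] / log2(r(p, a) + 1)
    return 1.0 / math.log2(rank + 1) if rank <= 16 else 0.0

# r(p, a) is ad a's rank when all ads are sorted by f(p, .), descending;
# expected DCG_f per ad is estimated by averaging over sampled pages.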
All three predictive methods were trained by sampling from the training set, as described in Section 2.1.
Figure 1 plots the results of testing the four algorithms on the web advertising data. Each point in
the figure corresponds to one experiment, which consisted of executing each algorithm on 1000 test
pages. Along the x-axis we vary the time constraint imposed on the algorithm. The y-axis plots the
frequency, over the test pages, that the algorithm succeeded in serving the top scoring ad for position
1 (Figure 1(a)) and for position 10 (Figure 1(b)). Thus, vertical slices through each plot show the
difference in performance between the algorithms when they are given the same amount of serving
time per page. The probabilities were computed by off-line scoring of all 6.5 × 10^5 ads for each test
page and computing the true top-10 ads. Serving correctly for position 10 is more difficult than for
position 1, because it also requires correctly serving ads for positions 1 through 9. We see that all
three methods of predictive indexing are superior to Fagin?s halted threshold algorithm. In addition,
the use of a richer covering, for PI-DCG and PI-AVG, provides a large boost in performance. These
latter two predictive indexing methods attain relatively high accuracy even when fully evaluating
only 0.05% of the potential results.
[Figure 1 panels: "Comparison of Serving Algorithms". (a) Probability of exact retrieval of the 1st result vs. number of full evaluations (100-500). (b) Probability of exact retrieval of the 10th result vs. number of full evaluations. Curves: PI-AVG, PI-DCG, Fixed Ordering, Halted TA.]
Figure 1: Comparison of the first and tenth results returned from the four serving algorithms on the
web advertisement dataset.
Our implementation of the predictive index, and also the halted threshold algorithm, required about
50ms per display event when 500 ad evaluations are allowed. The RAM use for the predictive index
is also reasonable, requiring about a factor of 2 more RAM than the ads themselves.
3.2 Approximate Nearest Neighbor Search
A special case application of predictive indexing is approximate nearest neighbor search. Given a set
of points W in n-dimensional Euclidean space, and a query point x in that same space, the nearest
neighbor problem is to quickly return the top-k neighbors of x. This problem is of considerable
interest for a variety of applications, including data compression, information retrieval, and pattern
recognition. In the predictive indexing framework, the nearest neighbor problem corresponds to
minimizing a scoring function, f(x, y) = ||x − y||_2, defined by Euclidean distance. We assume
query points are generated from a distribution D that can be sampled.
To start, we define a covering Q of the input space R^n, which we borrow from locality-sensitive
hashing (LSH) (Gionis et al., 1999; Datar et al., 2004), a commonly suggested scheme for the
approximate nearest neighbor problem. Fix positive integer parameters ℓ and κ. First, we form ℓ
random partitions of the input space. Geometrically, each partition splits the n-dimensional space
on κ random hyperplanes. Formally, for all 1 ≤ i ≤ ℓ and 1 ≤ j ≤ κ, generate a random unit-norm
n-vector Y^{ij} = (Y^{ij}_1, ..., Y^{ij}_n) ∈ R^n from the Gaussian (normal) distribution. For fixed
i ∈ {1, ..., ℓ} and subset J ⊆ {1, ..., κ}, define the cover set Q_{i,J} = {x ∈ R^n : x · Y^{ij} ≥
0 if and only if j ∈ J}. Note that for fixed i, the set {Q_{i,J} | J ⊆ {1, ..., κ}} partitions the space by
random planes.
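A minimal NumPy sketch of this cover construction; the bit-pattern encoding of the subsets J is our own choice:

import numpy as np

def make_cover(n_dims, n_partitions, n_planes, seed=0):
    # one set of n_planes random unit-norm hyperplane normals per partition
    rng = np.random.default_rng(seed)
    Y = rng.standard_normal((n_partitions, n_planes, n_dims))
    return Y / np.linalg.norm(Y, axis=2, keepdims=True)

def cover_ids(x, Y):
    # x lies in the cover set (i, J) with J = {j : x . Y[i, j] >= 0};
    # each J is encoded as a bit pattern, giving one set id per partition
    signs = (Y @ x >= 0)
    return [(i, int(''.join(str(int(b)) for b in row), 2))
            for i, row in enumerate(signs)]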
Given a query point x, consider the union U_x = ∪_{Q_{i,J} ∈ Q : x ∈ Q_{i,J}} Q_{i,J} of all cover sets containing x. Standard LSH approaches to the nearest neighbor problem work by scoring points in the set
Q_x = W ∩ U_x. That is, LSH considers only those points in W that are covered by at least one of
the same ℓ sets as x. Predictive indexing, in contrast, maps each cover set Q_{i,J} to an ordered list
of points sorted by their probability of being a top-10 nearest point to points in Q_{i,J}. That is, the
lists are sorted by h_{Q_{i,J}}(p) = Pr_{q∼D|Q_{i,J}}(p is one of the nearest 10 points to q). For the query x,
we then consider those points in W with large probability h_{Q_{i,J}} for at least one of the sets Q_{i,J} that
cover x.
We compare LSH and predictive indexing over four data sets: (1) MNIST: 60,000 training and
10,000 test points in 784 dimensions; (2) Corel: 37,749 points in 32 dimensions, split randomly
into 95% training and 5% test subsets; (3) Pendigits: 7,494 training and 3,498 test points in 17
dimensions; and (4) Optdigits: 3,823 training and 1,797 test points in 65 dimensions. The MNIST
data is available at http://yann.lecun.com/exdb/mnist/ and the remaining three data
sets are available at the UCI Machine Learning Repository (http://archive.ics.uci.edu/ml/).
Random projections were generated for each experiment, inducing a covering of the space that
was provided to both LSH and predictive indexing. The predictive index was generated by sampling
over the training set as discussed in Section 2.1. The number of projections κ per partition was set to
24 for the larger sets (Corel and MNIST) and 63 for the smaller sets (Pendigits and Optdigits), while
the number of partitions ℓ was varied as an experimental parameter. Larger ℓ corresponds to more
full evaluations per query, resulting in improved accuracy at the expense of increased computation
time. Both algorithms were restricted to the same average number of full evaluations per query.
Predictive indexing offers substantial improvements over LSH for all four data sets. Figure 2(a)
displays the true rank of the first point returned by LSH and predictive indexing on the MNIST data
set as a function of ℓ, averaged over all points in the test set and over multiple trials. Predictive
indexing outperforms LSH at each parameter setting, with the difference particularly noticeable
when fewer full evaluations are permitted (i.e., small ℓ). Figure 2(b) displays the performance of
LSH and predictive indexing for the tenth point returned, over all four data sets, with values of ℓ
varying from 5 to 70, averaged over the test sets, and replicated by multiple runs. In over 300 trials,
we did not observe a single instance of LSH outperforming predictive indexing.
Recent work has proposed more sophisticated partitionings for LSH (Andoni & Indyk, 2006). Approaches based on metric trees (Liu et al., 2004), which take advantage of the distance metric structure, have also been shown to perform well for approximate nearest neighbor. Presumably, taking
advantage of the query distribution could further improve these algorithms as well, although that is
not studied here.
4 Conclusion
Predictive indexing is the first datastructure capable of supporting scalable, rapid ranking based on
general purpose machine-learned scoring rules. In contrast, existing alternatives such as the Threshold Algorithm (Fagin, 2002) and Inverted Index approaches (Broder et al., 2003) are either substantially slower, inadequately expressive, or both, for common machine-learned scoring rules. In the
special case of approximate nearest neighbors, predictive indexing offers substantial and consistent
improvements over the Locality Sensitive Hashing algorithm.
[Figure 2 panels. Left, "LSH vs. Predictive Indexing on MNIST Data": rank of 1st result vs. number of partitions ℓ (20-70), with curves for LSH and Predictive Indexing. Right, "LSH vs. Predictive Indexing - All Data Sets": predictive indexing rank of 10th result vs. LSH rank of 10th result, one point per experiment.]
(a) The y-axis, "Rank of 1st Result", measures the true rank of the first result returned by each method. As the number of partitions ℓ is increased, improved accuracy is achieved at the expense of longer computation time. (b) Each point represents the outcome of a single experiment for one of the four data sets at various parameter settings.
Figure 2: Comparison of the first and tenth results returned from LSH and predictive indexing.
References
Andoni, A., & Indyk, P. (2006). Near-optimal hashing algorithms for near neighbor problem in high dimensions. Proceedings of the Symposium on Foundations of Computer Science (FOCS'06).
Broder, A. Z., Carmel, D., Herscovici, M., Soffer, A., & Zien, J. (2003). Efficient query evaluation using a two-level retrieval process. CIKM '03: Proceedings of the twelfth international conference on Information and knowledge management (pp. 426-434).
Cheng, C.-S., Chung, C.-P., & Shann, J. J.-J. (2006). Fast query evaluation through document identifier assignment for inverted file-based information retrieval systems. Inf. Process. Manage., 42, 729-750.
Chierichetti, F., Lattanzi, S., Mari, F., & Panconesi, A. (2008). On placing skips optimally in expectation. WSDM '08: Proceedings of the international conference on Web search and web data mining (pp. 15-24). New York, NY, USA: ACM.
Datar, M., Immorlica, N., Indyk, P., & Mirrokni, V. S. (2004). Locality-sensitive hashing scheme based on p-stable distributions. SCG '04: Proceedings of the twentieth annual symposium on Computational geometry (pp. 253-262). New York, NY, USA: ACM.
Fagin, R. (2002). Combining fuzzy information: an overview. SIGMOD Rec., 31, 109-118.
Fagin, R., Lotem, A., & Naor, M. (2003). Optimal aggregation algorithms for middleware. J. Comput. Syst. Sci., 66, 614-656.
Gionis, A., Indyk, P., & Motwani, R. (1999). Similarity search in high dimensions via hashing. The VLDB Journal (pp. 518-529).
Järvelin, K., & Kekäläinen, J. (2002). Cumulated gain-based evaluation of IR techniques. ACM Transactions on Information Systems, 20, 422-446.
Liu, T., Moore, A., Gray, A., & Yang, K. (2004). An investigation of practical approximate nearest neighbor algorithms. Neural Information Processing Systems.
Zobel, J., & Moffat, A. (2006). Inverted files for text search engines. ACM Comput. Surv., 38, 6.
Relative Performance Guarantees for
Approximate Inference in Latent Dirichlet Allocation
Indraneel Mukherjee
David M. Blei
Department of Computer Science
Princeton University
35 Olden Street
Princeton, NJ 08540
{imukherj,blei}@cs.princeton.edu
Abstract
Hierarchical probabilistic modeling of discrete data has emerged as a powerful
tool for text analysis. Posterior inference in such models is intractable, and practitioners rely on approximate posterior inference methods such as variational inference or Gibbs sampling. There has been much research in designing better
approximations, but there is yet little theoretical understanding of which of the
available techniques are appropriate, and in which data analysis settings. In this
paper we provide the beginnings of such understanding. We analyze the improvement that the recently proposed collapsed variational inference (CVB) provides
over mean field variational inference (VB) in latent Dirichlet allocation. We prove
that the difference in theptightness of the bound on the likelihood of a document
decreases as O(k ? 1) + log m/m, where k is the number of topics in the model
and m is the number of words in a document. As a consequence, the advantage of
CVB over VB is lost for long documents but increases with the number of topics.
We demonstrate empirically that the theory holds, using simulated text data and
two text corpora. We provide practical guidelines for choosing an approximation.
1
Introduction
Hierarchical probabilistic models of discrete data have emerged as powerful tool for large-scale text
analysis. Based on latent semantic indexing (LSI) [1] and probabilistic latent semantic indexing
(pLSI) [2], hierarchical topic models [3, 4] have been extended and applied to sequential settings [5,
6], authorship [7], email [8], social networks [9, 10], computer vision [11, 12], bioinformatics [5,
13], information retrieval [14], and other application areas [15, 16, 17, 18]. See [19] for a good
review.
A topic model posits a generative probabilistic process of a document collection using a small number of distributions over words, which are called topics. Conditioned on the observed documents,
the distribution of the underlying latent variables is inferred to probabilistically partition the data according to their hidden themes. Research in topic models has involved tailoring the latent structure
to new kinds of data and designing new posterior inference algorithms to infer that latent structure.
In generative models, such as latent Dirichlet allocation (LDA) and its extensions, inferring the
posterior of the latent variables is intractable [3, 4]. (Some topic models, such as LSI and pLSI,
are not fully generative.) Several algorithms have emerged in recent years to approximate such
posteriors, including mean-field variational inference [3], expectation propagation [20], collapsed
Gibbs sampling [19] and, most recently, collapsed variational inference [21]. Choosing from among
the several available algorithms is difficult. There has been some empirical comparison in the topic
modeling literature [4, 19], but little theoretical guidance.
We provide some of the first theoretical understanding of which of the available techniques is appropriate, and in which data analysis settings. We analyze two variational inference algorithms for topic
models, mean field variational inference (VB) [3] and collapsed variational inference (CVB) [21].
"Collapsing," or marginalizing out, a latent variable is a known technique for speeding up the convergence of Gibbs samplers, and CVB brought this idea to the world of variational algorithms.
Empirically, CVB was more accurate than VB for LDA [21]. The advantage of CVB applied to
Dirichlet process mixtures was less conclusive [22].
Variational algorithms minimize the distance between a simple distribution of the latent variables
and the true posterior. This is equivalent to maximizing a lower bound on the log probability of a
document. We prove that the uncollapsed variational bound on the log probability of a document
approaches the collapsed variational bound as the number of words in the document increases. This
supports the empirical improvement observed for LDA, where documents are relatively short, and
the smaller improvement observed in the DP mixture, which is akin to inference in a single long
document. We also show how the number of topics and the sparsity of those topics affects the
performance of the two algorithms.
We prove that the difference between the two bounds decreases as O(k − 1) + √(log m / m), where
k is the number of topics in the model, and m is the number of words in the document. Thus,
the advantage of CVB over VB is lost for longer documents. We examine the consequences of the
theory on both simulated and real text data, exploring the relative advantage of CVB under different
document lengths, topic sparsities, and numbers of topics. The consequences of our theory lead to
practical guidelines for choosing an appropriate variational algorithm.
2 Posterior inference for latent Dirichlet allocation
Latent Dirichlet allocation (LDA) is a model of an observed corpus of documents. Each document is
a collection of m words x_{1:m}, where each word is from a fixed vocabulary of size N. The model
parameters are k topics, β_1, ..., β_k, each of which is a distribution on the vocabulary, and a k-vector α, which is
the parameter to a Dirichlet over the (k − 1)-simplex. The topic matrix β denotes the N × k matrix
whose columns are the topic distributions.

Given the topic matrix and Dirichlet parameters, LDA assumes that each document arises from the
following process. First, choose topic proportions θ ∼ Dirichlet(α). Then, for each word choose a topic
assignment z_i ∼ θ. Finally, choose the word x_i ∼ β_{z_i}. This describes a joint probability distribution
of the observed and latent variables p(x, z, θ | α, β).
Analyzing data with LDA involves two tasks. In parameter estimation, we find the topics and Dirichlet parameters that maximize the likelihood of an observed corpus. In posterior inference, we fix the
model and compute the posterior distribution of the latent structure that underlies a particular document. Here, we focus on posterior inference. (Parameter estimation crucially depends on posterior
inference via the expectation-maximization algorithm.)
Given a document x, the posterior distribution of the latent variables is p(θ, z | x) = p(θ, z, x) / p(x). This
distribution is infeasible to compute exactly because of the difficulty in computing the normalizing
constant, i.e., the marginal probability of the document,

  p(x) = [Γ(Σ_z α_z) / Π_z Γ(α_z)] ∫ (Π_z θ_z^{α_z − 1}) (Π_i Σ_{z_i} β_{z_i, x_i} θ_{z_i}) dθ.
Approximating the posterior is equivalent to approximating the normalizing constant.
Variational methods approximate an intractable posterior by finding the member of a simpler family
of distributions that is closest to it, where closeness is measured by relative entropy. This is equivalent to minimizing Jensen's bound on the negative log probability of the data [23]. We will
analyze two alternative variational methods.
Variational inference for LDA. In the variational inference algorithm for LDA introduced in [3]
(VB), the posterior p(θ, z | x) is approximated by a fully-factorized variational distribution

  q(θ, z | γ, φ_{1:m}) = q(θ | γ) Π_i q(z_i | φ_i).
Here q(θ | γ) is a Dirichlet distribution with parameters γ, and each q(z_i | φ_i) is a multinomial distribution on the set of k topic indices. This family does not contain the true posterior. In the true
posterior, the latent variables are dependent; in this family of distributions, they are independent [3].
The algorithm seeks to find the variational parameters that minimize the relative entropy between the
true posterior and the approximation, RE(q(θ, z | γ, φ_{1:m}) ‖ p(θ, z | x)). This is equivalent to finding
the optimal parameters γ*, φ*_{1:m} as follows:

  (γ*, φ*_{1:m}) = argmin_{γ, φ_{1:m}} E_{q(θ, z | γ, φ_{1:m})} [ log ( q(θ, z | γ, φ_{1:m}) / p(θ, z, x) ) ].

The expression minimized by γ*, φ*_{1:m} is also known as the variational free energy of (γ, φ_{1:m}) and
will be denoted by F(x, γ, φ_{1:m}). Note that F(x, γ*, φ*_{1:m}) is Jensen's bound on the negative
log probability of x. The value of the objective function is a measure of the quality of the VB
approximation. We denote this with

  VB(x) = min_{γ, φ_{1:m}} F(x, γ, φ_{1:m}).    (1)
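For concreteness, the coordinate-ascent updates that locally minimize F, derived in [3], take a simple closed form; a minimal NumPy sketch, assuming β is given as a k × N matrix of topic distributions:

import numpy as np
from scipy.special import digamma

def vb_infer(doc, alpha, beta, iters=100):
    # doc: word indices x_1..x_m; alpha: shape (k,); beta: shape (k, N)
    m, k = len(doc), len(alpha)
    gamma = alpha + float(m) / k            # q(theta) Dirichlet parameters
    for _ in range(iters):
        # phi_{ij} is proportional to beta[j, x_i] * exp(digamma(gamma_j))
        log_phi = np.log(beta[:, doc]).T + digamma(gamma)
        phi = np.exp(log_phi - log_phi.max(axis=1, keepdims=True))
        phi /= phi.sum(axis=1, keepdims=True)
        gamma = alpha + phi.sum(axis=0)     # gamma_j = alpha_j + sum_i phi_{ij}
    return gamma, phi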
Collapsed variational inference for LDA. The collapsed variational inference algorithm (CVB)
reformulates the LDA model by marginalizing out the topic proportions θ. This yields a formulation
where the topic assignments z are fully dependent, but where the dimensionality of the latent space
has been reduced.

The variational family in CVB is a fully-factorized product of multinomial distributions,

  q(z) = Π_i q(z_i | φ_i).

CVB finds the optimal variational parameters φ*_{1:m} as follows:

  φ*_{1:m} = argmin_{φ_{1:m}} E_{q(z | φ_{1:m})} [ log ( q(z | φ_{1:m}) / p(z, x) ) ].

It approximates the negative log probability of x with the collapsed variational free energy F(x, φ_{1:m}),
which is the expression that φ*_{1:m} minimizes. Analogous to VB, CVB's performance is measured by

  CVB(x) = min_{φ_{1:m}} F(x, φ_{1:m}).    (2)
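The exact CVB update at a position i requires expectations with respect to the other m − 1 multinomials, which is the source of the O(m²k) cost. The following is a minimal sketch that keeps only the expected topic counts and drops the second-order correction used in [21]; it approximates the collapsed update rather than reproducing the algorithm of [21]:

import numpy as np

def cvb_infer(doc, alpha, beta, iters=100):
    # doc: word indices; alpha: (k,); beta: (k, N); phi[i] approximates q(z_i)
    m, k = len(doc), len(alpha)
    phi = np.full((m, k), 1.0 / k)
    for _ in range(iters):
        total = phi.sum(axis=0)            # expected topic counts in the doc
        for i, x in enumerate(doc):
            n_minus_i = total - phi[i]     # expected counts excluding i
            p = (alpha + n_minus_i) * beta[:, x]
            p /= p.sum()
            total = n_minus_i + p
            phi[i] = p
    return phi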
Both CVB(x) and VB(x) approximate the negative log probability of x by Jensen's inequality. It has
been shown that CVB(x) will always be a better bound than VB(x) [21].
Efficiency of the algorithms. Both VB and CVB proceed by coordinate ascent to reach a local
minimum of their respective free energies. CVB achieves higher accuracy at the price of increased
computation. Each coordinate update for VB requires in O(mk) time, where m is the length of a
document and k is the number of topics. Each coordinate update for CVB requires O(m2 k) time.
The CVB updates are prohibitive for large documents and, moreover, are numerically unstable. Both
shortcomings are overcome in [21] by substituting exact computations with an efficient second-order
Taylor approximation. This approximation, however, does not yield a proper bound.1 It is thus
inappropriate for computing held out probability, a typical measure of quality of a topic model. For
such a quantity, exact CVB implementation takes quadratic time.
3 Relative performance of VB and CVB
We try to obtain a theoretical handle on the size of the advantage of CVB over VB, and how it
is affected by the length of the document, the number of topics, and the structure of those topics.
Our main result states that for sufficiently large documents, the difference in approximation quality
decreases with document length and converges to a constant that depends on the number of topics.
¹ The first-order Taylor approximation yields an upper bound, but these turn out to be too inaccurate. Such an estimate can yield bounds worse than those achieved by VB.
Theorem 1. Consider any LDA model with k topics, and a document consisting of m words
x_1, ..., x_m, where m is sufficiently large. Recall that VB(x) and CVB(x), defined in (1) and (2), are
the free energies measured by VB and CVB respectively. Then,

  0 ≤ [VB(x) − CVB(x)] ≤ O(k − 1) + o(1)    (3)

for this model. Here o(1) goes to 0 at least as fast as √(log m / m).
A strength of Theorem 1 is that it holds for any document, and not necessarily one generated by
an LDA model. In previous work on analyzing mean-field variational inference, [24] analyze the
performance of VB for posterior inference in a Gaussian mixture model. Unlike the assumptions in
Theorem 1, their analysis requires that the data be generated by a specific model.
Topic models are often evaluated and compared by approximation of the per-word log probability.
Concerning this quantity, the following corollary is immediate because the total free energy scales
with the length of the document.
Corollary 1. The per-word free energy change, as well as the percentage free energy change, between VB and CVB goes to zero with the length of the document.
Our results are stated in log-space. The bound on the difference in free energy is equivalent to a
bound on the ratio of the probabilities obtained by VB and CVB. Since the probability of a document falls
exponentially fast with the number of words, the additive difference in the probability estimates of
VB and CVB is again negligible for large documents.
Corollary 2. For sufficiently long documents, the difference in probability estimates of CVB and VB
decreases as c^{m−k} for some constant c < 1 whose value depends on the model parameters β.
The upper bound in (3) is nearly tight. When all topics are uniform distributions, the difference in
the free energy estimates is Θ(k) for long documents.
3.1 Proof Sketch
We sketch the proof of Theorem 1. The full proof is in the supporting material. We first introduce
some notation. All vectors have k real coordinates; for a vector such as ν, we write ν_j for its j-th
coordinate, with j ∈ [k] = {1, ..., k}. When iterating over indices in [k], we will
use the variable j. To iterate from 1 to m we will use i.
We state three lemmas which are needed to prove (3). The left inequality in (3) follows from the fact
that CVB optimizes over a larger family of distributions [21]. We concentrate on the right inequality.
The first step is to carry out calculations similar to [24] to arrive at the following.
Lemma 1. Suppose q(z) = Π_i q_i(z_i) is the optimal approximation to the posterior p(z | x). Then,

  VB(x) − CVB(x) ≤ Σ_j ( E_{q(z)}[log Γ(m_j + α_j)] − log Γ(ν_j + α_j) )    (4)

where ν_j = Σ_i q_i(Z_i = j) for all j ∈ [k], and m_j is the number of occurrences of the topic j in z.
Note that to analyze the term E_{q(z)}[log Γ(m_j + α_j)] corresponding to a particular topic j, we need
consider only those positions i where q_i(Z_i = j) ≠ 0; we denote the number of such positions by
N_j. The difficulty in analyzing arbitrary documents lay in working with the right hand side of (4)
without any prior knowledge about the q_i's. This was overcome by the following lemma.
Lemma 2. Suppose X_i is a Bernoulli random variable with probability q_i, for i = 1 to m. Let f : R → R be
convex, and ν ∈ [0, m]. Then the following optimization problem is solved when each q_i = ν/m:

  max_{q_1,...,q_m}  E[f(X_1 + ... + X_m)]
  s.t.  q_i ∈ [0, 1],
        q_1 + ... + q_m = ν.
As an immediate corollary of the previous two lemmas and the fact that log Γ is convex, we get

  VB(x) − CVB(x) ≤ Σ_j ( E[log Γ(m_j + α_j)] − log Γ(ν_j + α_j) ),
[Figure 1 panels: free energy change vs. number of words (0-5000) for η_param = 1e-04, 0.01, and 0.1, with one curve per k ∈ {5, 10, 25, 50}. (a) Difference in total free energy estimates. (b) Percentage difference in free energy estimates.]
Figure 1: Results on synthetic text data. We sample k topics from a symmetric Dirichlet distribution
with parameter η_param. We sample 10 documents from LDA models with these topics. We consider
prefixes of varying lengths for each document. For each prefix length, the VB and CVB free energies
are averaged over the 10 documents.The curves obtained show how the advantage of CVB over VB
changes with the length of a document, number of topics and sparsity of topics.
where m_j is now a Binomial random variable with probability ν_j/m and number of trials m. The last
piece of the proof is the following concentration lemma.
Lemma 3. Let X be the number of heads in m coin tosses each with probability q. We require
m > q^{−(2+o(1))}. Let a > 0 be a constant. Then

  0 ≤ E[log Γ(X + a)] − log Γ(E[X + a]) ≤ O(1 − q) + o(1)    (5)

Here o(1) = O(√(log m / m)).
The requirement of m > 1/q^{2+o(1)} is necessary, and translates to the condition that document
lengths be greater than (N_j/ν_j)^{2+o(1)} for Theorem 1 to hold. This gives an implicit lower bound on
the required length of a document which depends on the sparsity of the topics. (Sparse topics place
their mass on few words, i.e., low entropy, and dense topics spread their mass on more words, i.e.,
high entropy.) When the vocabulary is large, dense topics require long documents for the theory to
take effect. This is supported by our simulations.
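The gap in (5) is easy to probe by simulation; a small script in which the values of q, a, and m are arbitrary illustrative choices:

import numpy as np
from scipy.special import gammaln

q, a = 0.3, 0.5
for m in (100, 1000, 10000):
    X = np.random.binomial(m, q, size=200000)
    gap = gammaln(X + a).mean() - gammaln(m * q + a)
    print(m, gap)   # nonnegative by convexity of log-Gamma; stays bounded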
4 Empirical results
We studied the results of this theory on synthetic and real text data. We implemented the algorithms
described in [3] and [21]. While these algorithms are only guaranteed to find a local optimum of the
objective, we aim to study whether our theorem about the global optimum is borne out in practice.
Synthetic data. The synthetic data was generated as follows. We first sampled k topics β_1, ..., β_k
independently from a symmetric Dirichlet distribution with parameter η_param. We then sampled a
corpus of 10 documents, each of length 5000, from an LDA model with these topics and Dirichlet
hyper-parameter 1/k. The vocabulary size was 10000.
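A minimal sketch of this generation process (parameter names are ours; eta_param stands for the symmetric Dirichlet parameter η_param above):

import numpy as np

def sample_corpus(k, eta_param, n_docs=10, doc_len=5000, vocab=10000, seed=0):
    rng = np.random.default_rng(seed)
    beta = rng.dirichlet(np.full(vocab, eta_param), size=k)   # k topics
    docs = []
    for _ in range(n_docs):
        theta = rng.dirichlet(np.full(k, 1.0 / k))  # Dirichlet(1/k) proportions
        z = rng.choice(k, size=doc_len, p=theta)    # per-word topic assignments
        docs.append([int(rng.choice(vocab, p=beta[j])) for j in z])
    return beta, docs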
For each document, we considered sub-documents of the first m words with lengths as small as 100.
On each sub-document, we ran both VB and CVB initialized from a common point. For every subdocument length, the average converged values of the free energy was recorded for both algorithms.
Thus, we obtain a trajectory representing how the advantage of CVB over VB changes with the
number of words m.
We repeated this simulation with different values of k to reveal the dependence of this advantage on
the number of topics. Moreover, we investigated the dependence of the advantage on topic sparsity.
We repeat the above experiment with three different values of the Dirichlet parameter η_param for the
topic matrix. The topics become sparse rapidly as η_param decreases.
The results of this study are in Figure 1. We see similar trends across all data. The advantage
decreases with document length m and increases with the number of topics k. The theory predicts
that the difference in free energy converges to a constant, implying that the percentage advantage
decays as O(1)/m. Figure 1 reveals this phenomenon. Moreover, the constant is estimated to be
on the order of k, implying that the advantage is higher for more topics. Comparing the curves for
different values of k reveals this fact. Finally, for denser topic models the performances of CVB
and VB converge only for very long documents, as was discussed at the end of Section 3.1. When
η_param = 0.1, CVB retains its advantage even for documents 5000 words long.
Real-world corpora. We studied the relative performance of the algorithms on two text data sets.
First, we examined 3800 abstracts from the ArXiv, an on-line repository of scientific pre-prints.
We restricted attention to 5000 vocabulary terms, removing very frequent and very infrequent terms.
Second, we examined 1000 full documents from the Yale Law Journal. Again, we used a vocabulary
of 5000 terms. Each data set was split into a training and test corpus. The ArXiv test corpus
contained 2000 short documents. The Yale Law test corpus contained 200 documents of lengths
between a thousand and 10, 000 words.
For each data set, we fit LDA models of different numbers of topics to the training corpus
(k = 5, 10, 25, 50), and then evaluated the model on the held-out test set. In Figure 2, we plot
the percentage difference of the per-word variational free energies achieved by CVB and VB as a
function of document length and number of topics. We also plot the difference in the total free
energy. As for the simulated data, the graphs match our theory; the percent decrease in per word
free energy goes to zero with increasing document length, and the absolute difference approaches a
constant. The difference is more pronounced as the number of topics increases.
The predicted trends occur even for short documents containing around a hundred words. Topics
estimated from real-world data tend to be sparse. The issues seen with dense topics on simulated
data are not relevant for real-world applications.
5 Conclusion
We have provided a theoretical analysis of the relative performance of the two variational inference
algorithms for LDA. We showed that the advantage of CVB decreases as document length increases,
and increases with the number of topics and density of the topic distributions. Our simulations on
synthetic and real-world data empirically confirm our theoretical bounds and their consequences.
Unlike previous analyses of variational methods, our theorem does not require that the observed
data arise from the assumed model.
Since the approximation to the likelihood based on CVB is more expensive to compute than for
VB, this theory can inform our choice of a good variational approximation. Shorter documents and
models with more topics lend themselves to analysis with CVB. Longer documents and models with
fewer topics lend themselves to VB. One might use both, within the same data set, depending on the
length of the document.
Figure 2: Experiments with the two text data sets described in Section 4. We fit LDA models
with numbers of topics equal to 5, 10, 25, 50, and evaluated the models on a held-out corpus. We
plot the percentage difference of the per-word variational free energies achieved by CVB and VB
as a function of document length. We also plot the difference in the total free energy. The percentage
decrease in per-word free energy goes to zero with increasing document length, and the absolute
difference approaches a constant. The difference is higher for larger k.
[Figure 2 panels. (a) ArXiv data set: "VB vs CVB: per word free energy (10 mov. avgd.)" and "VB − CVB: total free energies (10 mov. avgd.)", both vs. number of words, with curves for k = 5, 10, 25, 50. (b) Yale Law data set: the same two panels with 1000-point moving averages, for documents up to 10000 words.]
In one strain of future work, we will analyze the consequences of the approximate posterior inference
algorithm on parameter estimation. Our results regarding the sparsity of topics indicate that CVB
is a better algorithm early in the EM algorithm, when topics are dense, and that VB will be more
efficient as the fitted topics become more sparse.
References
[1] S. Deerwester, S. Dumais, T. Landauer, G. Furnas, and R. Harshman. Indexing by latent semantic analysis. Journal of the American Society of Information Science, 41(6):391-407, 1990.
[2] T. Hofmann. Probabilistic latent semantic analysis. In UAI, 1999.
[3] D. Blei, A. Ng, and M. Jordan. Latent Dirichlet allocation. Journal of Machine Learning Research, 3:993-1022, 2003.
[4] W. Buntine and A. Jakulin. Discrete component analysis. In Subspace, Latent Structure and Feature Selection. Springer, 2006.
[5] M. Girolami and A. Kaban. Simplicial mixtures of Markov chains: Distributed modelling of dynamic user profiles. In NIPS 16, pages 9-16. MIT Press, 2004.
[6] H. Wallach. Topic modeling: Beyond bag of words. In Proceedings of the 23rd International Conference on Machine Learning, 2006.
[7] M. Rosen-Zvi, T. Griffiths, M. Steyvers, and P. Smith. The author-topic model for authors and documents. In Proceedings of the 20th Conference on Uncertainty in Artificial Intelligence, pages 487-494. AUAI Press, 2004.
[8] A. McCallum, A. Corrada-Emmanuel, and X. Wang. The author-recipient-topic model for topic and role discovery in social networks: Experiments with Enron and academic email. Technical report, University of Massachusetts, Amherst, 2004.
[9] E. Airoldi, D. Blei, S. Fienberg, and E. Xing. Mixed membership stochastic blockmodels. arXiv, May 2007.
[10] D. Zhou, E. Manavoglu, J. Li, C. Giles, and H. Zha. Probabilistic models for discovering e-communities. In WWW Conference, pages 173-182, 2006.
[11] L. Fei-Fei and P. Perona. A Bayesian hierarchical model for learning natural scene categories. IEEE Computer Vision and Pattern Recognition, pages 524-531, 2005.
[12] B. Russell, A. Efros, J. Sivic, W. Freeman, and A. Zisserman. Using multiple segmentations to discover objects and their extent in image collections. In IEEE Conference on Computer Vision and Pattern Recognition, pages 1605-1614, 2006.
[13] S. Rogers, M. Girolami, C. Campbell, and R. Breitling. The latent process decomposition of cDNA microarray data sets. IEEE/ACM Transactions on Computational Biology and Bioinformatics, 2(2):143-156, 2005.
[14] X. Wei and B. Croft. LDA-based document models for ad-hoc retrieval. In SIGIR, 2006.
[15] D. Mimno and A. McCallum. Organizing the OCA: Learning faceted subjects from a library of digital books. In Joint Conference on Digital Libraries, 2007.
[16] B. Marlin. Collaborative filtering: A machine learning perspective. Master's thesis, University of Toronto, 2004.
[17] C. Chemudugunta, P. Smyth, and M. Steyvers. Modeling general and specific aspects of documents with a probabilistic topic model. In NIPS 19, 2006.
[18] D. Andrzejewski, A. Mulhern, B. Liblit, and X. Zhu. Statistical debugging using latent topic models. In European Conference on Machine Learning, 2007.
[19] T. Griffiths and M. Steyvers. Probabilistic topic models. In T. Landauer, D. McNamara, S. Dennis, and W. Kintsch, editors, Latent Semantic Analysis: A Road to Meaning. Laurence Erlbaum, 2006.
[20] T. Minka and J. Lafferty. Expectation-propagation for the generative aspect model. In Uncertainty in Artificial Intelligence (UAI), 2002.
[21] Y. Teh, D. Newman, and M. Welling. A collapsed variational Bayesian inference algorithm for latent Dirichlet allocation. In NIPS, pages 1353-1360, 2006.
[22] K. Kurihara, M. Welling, and Y. Teh. Collapsed variational Dirichlet process mixture models. 2007.
[23] M. Jordan, Z. Ghahramani, T. Jaakkola, and L. Saul. Introduction to variational methods for graphical models. Machine Learning, 37:183-233, 1999.
[24] K. Watanabe and S. Watanabe. Stochastic complexities of Gaussian mixtures in variational Bayesian approximation. Journal of Machine Learning Research, 7:625-644, 2006.
8
| 3455 |@word trial:1 repository:1 kintsch:1 proportion:2 laurence:1 logmm:2 seek:1 crucially:1 simulation:3 decomposition:1 q1:1 carry:1 document:62 prefix:2 comparing:1 yet:1 additive:1 partition:1 tailoring:1 hofmann:1 plot:4 update:3 v:2 implying:2 generative:4 prohibitive:1 fewer:1 intelligence:2 discovering:1 mccallum:2 beginning:1 smith:1 short:3 blei:4 provides:1 toronto:1 simpler:1 become:2 prove:4 introduce:1 liblit:1 faceted:1 themselves:2 examine:1 freeman:1 little:2 inappropriate:1 param:11 increasing:2 provided:1 corrada:1 underlying:1 moreover:3 notation:1 factorized:2 mass:2 discover:1 kind:1 cm:1 minimizes:1 finding:2 marlin:1 nj:2 guarantee:1 every:1 auai:1 exactly:1 qm:2 harshman:1 algortihms:1 negligible:1 local:2 reformulates:1 consequence:5 jakulin:1 analyzing:3 might:1 nz:1 studied:2 examined:2 wallach:1 averaged:1 practical:2 lost:2 practice:1 area:1 empirical:3 word:36 pre:1 griffith:2 road:1 get:1 selection:1 collapsed:10 www:1 equivalent:5 maximizing:1 go:4 attention:1 independently:1 convex:2 sigir:1 m2:1 steyvers:3 handle:1 coordinate:5 analogous:1 suppose:2 infrequent:1 user:1 exact:2 smyth:1 designing:2 trend:2 approximated:1 expensive:1 recognition:2 lay:1 mukherjee:1 predicts:1 observed:7 role:1 solved:1 wang:1 thousand:1 decrease:9 russell:1 ran:1 complexity:1 dynamic:1 tight:1 efficiency:1 joint:2 fast:2 shortcoming:1 artificial:2 newman:1 hyper:1 choosing:3 whose:2 emerged:3 larger:2 denser:1 hoc:1 advantage:14 product:1 frequent:1 relevant:1 rapidly:1 organizing:1 pronounced:1 convergence:1 requirement:1 optimum:2 uncollapsed:1 converges:2 object:1 depending:1 measured:3 eq:4 implemented:1 c:1 involves:1 predicted:1 indicate:1 girolami:2 concentrate:1 posit:1 stochastic:2 material:1 rogers:1 require:3 fix:1 indraneel:1 extension:1 exploring:1 hold:3 sufficiently:3 considered:1 around:1 substituting:1 efros:1 achieves:1 early:1 estimation:3 bag:1 tool:2 brought:1 mit:1 always:1 gaussian:2 aim:1 zhou:1 varying:1 jaakkola:1 probabilistically:1 corollary:4 focus:1 improvement:3 bernoulli:1 likelihood:3 modelling:1 inference:26 dependent:2 membership:1 inaccurate:1 hidden:1 perona:1 arg:2 among:1 issue:1 denoted:1 oca:1 marginal:1 field:4 equal:1 ng:1 sampling:2 biology:1 nearly:1 future:1 simplex:1 minimized:1 rosen:1 report:1 few:1 consisting:1 cvb:48 mixture:6 held:3 chain:1 accurate:1 necessary:1 respective:1 shorter:1 taylor:2 initialized:1 re:1 guidance:1 theoretical:6 mk:1 fitted:1 increased:1 column:1 modeling:4 giles:1 retains:1 assignment:2 maximization:1 mcnamara:1 uniform:1 hundred:1 erlbaum:1 too:1 buntine:1 avgd:4 zvi:1 synthetic:5 dumais:1 density:1 international:1 amherst:1 probabilistic:8 again:2 thesis:1 recorded:1 containing:1 choose:3 andrzejewski:1 collapsing:1 worse:1 borne:1 american:1 book:1 li:1 depends:4 ad:1 piece:1 try:1 analyze:6 xing:1 zha:1 collaborative:1 minimize:2 accuracy:1 yield:4 simplicial:1 bayesian:3 trajectory:1 converged:1 reach:1 inform:1 email:2 energy:34 involved:1 minka:1 proof:4 sampled:2 massachusetts:1 recall:1 knowledge:1 dimensionality:1 segmentation:1 campbell:1 higher:3 zisserman:1 wei:1 formulation:1 evaluated:3 implicit:1 sketch:2 working:1 hand:1 dennis:1 propagation:2 lda:18 reveal:1 quality:3 scientific:1 effect:1 contain:1 true:4 symmetric:2 semantic:5 authorship:1 demonstrate:1 percent:1 image:1 variational:35 meaning:1 recently:2 common:1 multinomial:2 empirically:3 exponentially:1 discussed:1 approximates:1 numerically:1 cdna:1 gibbs:3 rd:1 longer:2 posterior:21 closest:1 plsi:2 recent:1 showed:1 
perspective:1 optimizes:1 inequality:3 seen:1 minimum:1 greater:1 converge:1 maximize:1 full:2 multiple:1 infer:1 technical:1 match:1 academic:1 calculation:1 long:7 retrieval:2 concerning:1 qi:7 underlies:1 vision:3 expectation:3 arxiv:4 achieved:3 microarray:1 unlike:2 enron:1 ascent:1 subject:1 tend:1 member:1 lafferty:1 jordan:2 practitioner:1 split:1 iterate:1 affect:1 fit:2 zi:10 idea:1 regarding:1 translates:1 whether:1 expression:2 akin:1 proceed:1 iterating:1 category:1 reduced:1 percentage:5 lsi:2 estimated:2 per:8 chemudugunta:1 discrete:3 affected:1 graph:1 year:1 deerwester:1 powerful:2 uncertainty:2 master:1 arrive:1 family:5 place:1 vb:40 bound:16 guaranteed:1 yale:3 quadratic:1 strength:1 occur:1 fei:2 scene:1 aspect:2 min:4 relatively:1 department:1 according:1 debugging:1 smaller:1 describes:1 across:1 em:1 restricted:1 indexing:3 fienberg:1 turn:1 needed:1 end:1 available:3 hierarchical:4 appropriate:3 occurrence:1 alternative:1 coin:1 recipient:1 assumes:1 dirichlet:17 denotes:1 binomial:1 graphical:1 emmanuel:1 ghahramani:1 approximating:2 society:1 objective:2 print:1 quantity:2 concentration:1 dependence:2 dp:1 subspace:1 distance:1 simulated:4 street:1 olden:1 topic:71 extent:1 unstable:1 length:22 index:2 ratio:1 minimizing:1 difficult:1 negative:4 stated:1 implementation:1 guideline:2 proper:1 teh:2 upper:2 markov:1 supporting:1 immediate:2 extended:1 head:1 strain:1 arbitrary:1 community:1 inferred:1 david:1 introduced:1 required:1 conclusive:1 sivic:1 nip:3 beyond:1 pattern:2 xm:2 sparsity:6 including:1 lend:2 difficulty:2 rely:1 natural:1 zhu:1 representing:1 library:2 speeding:1 text:9 review:1 understanding:3 literature:1 prior:1 discovery:1 marginalizing:2 relative:7 law:3 fully:4 mixed:1 allocation:7 filtering:1 age:3 digital:2 editor:1 supported:1 last:1 free:34 repeat:1 infeasible:1 side:1 fall:1 saul:1 absolute:2 sparse:4 distributed:1 mimno:1 overcome:2 curve:2 vocabulary:5 world:5 author:3 collection:3 social:2 transaction:1 welling:2 approximate:6 kaban:1 confirm:1 global:1 reveals:2 uai:2 corpus:10 assumed:1 xi:3 landauer:2 latent:26 mj:5 investigated:1 necessarily:1 european:1 blockmodels:1 main:1 dense:4 spread:1 arrow:1 arise:1 profile:1 repeated:1 x1:3 furnas:1 sub:2 theme:1 inferring:1 position:2 watanabe:2 mov:4 croft:1 theorem:7 removing:1 specific:2 jensen:3 decay:1 normalizing:2 closeness:1 intractable:3 sequential:1 airoldi:1 conditioned:1 entropy:4 contained:2 springer:1 acm:1 toss:1 price:1 change:12 typical:1 diff:2 sampler:1 kurihara:1 lemma:7 called:1 total:8 support:1 arises:1 bioinformatics:2 princeton:3 phenomenon:1 |
2,708 | 3,456 | Human Active Learning
Rui Castro1, Charles Kalish2, Robert Nowak3, Ruichen Qian4, Timothy Rogers2, Xiaojin Zhu4*
1 Department of Electrical Engineering, Columbia University, New York, NY 10027
Department of {2 Psychology, 3 Electrical and Computer Engineering, 4 Computer Sciences},
University of Wisconsin-Madison, Madison, WI 53706
Abstract
We investigate a topic at the interface of machine learning and cognitive science.
Human active learning, where learners can actively query the world for information, is contrasted with passive learning from random examples. Furthermore,
we compare human active learning performance with predictions from statistical
learning theory. We conduct a series of human category learning experiments
inspired by a machine learning task for which active and passive learning error
bounds are well understood, and dramatically distinct. Our results indicate that
humans are capable of actively selecting informative queries, and in doing so
learn better and faster than if they are given random training data, as predicted
by learning theory. However, the improvement over passive learning is not as dramatic as that achieved by machine active learning algorithms. To the best of our
knowledge, this is the first quantitative study comparing human category learning
in active versus passive settings.
1 Introduction
Active learning is a paradigm in which the learner has the ability to sequentially select examples
for labeling. The selection process can take advantage of information gained from previously observed labeled examples in order to accelerate the learning process. In contrast, passive learning is
a paradigm in which the learner has no control over the labeled examples it is given. In machine
learning, active learning has been a topic of intense interest. In certain machine learning problems
it has been shown that active learning algorithms perform much better than passive learning, with
superior convergence bounds (see [1, 4] and references therein) and/or superior empirical performance [5, 19]. In this paper we focus on the application of active learning to classification, in both
machines and humans.
To our knowledge, no previous work has attempted to quantify human active learning performance
in probabilistic category learning (i.e., classification), contrast human active and passive learning,
and compare against theoretically optimal theory bounds. Theories of human category learning
often cast the learner as a passive learner, who observes some object (typically represented as a
feature vector), is presented with the object's category label, and does some statistical processing to
determine how the label should generalize. Anyone who has ever interacted with a three-year-old
will recognize that this scenario is exceedingly unrealistic in at least one respect. Certainly toddlers
observe their environment, and certainly they pay attention when adults label objects for them, but
they also ask a lot of questions. Active querying provides children with information that they would
otherwise be less likely to encounter through passive observation; and so, presumably, such active
querying has important implications for category learning.
Early research in human concept attainment suggested that learners do benefit from the opportunity
to actively select examples during learning [11]. However, it proved very difficult to establish cri-
*Correspondence concerning this article should be sent to jerryzhu@cs.wisc.edu.
Figure 1: The two-category learning task with boundary θ and noise level ε.
Figure 2: Probabilistic bisection strategy. Shaded areas have 1/2 probability mass.
teria for assessing the magnitude of the active learning benefit (e.g., compared to theoretical ideals,
or to passive learning). Partly as a result, nearly all contemporary research in classification and
categorization has ignored active learning. Furthermore, a rich literature on decision-making and
scientific inference has produced conflicting claims regarding people's capacities to select optimal
learning examples [7, 10, 12, 13, 14, 15, 16, 17, 20]. Most famously, people make inappropriate
queries to assess simple logical hypotheses such as "if p then q" (frequently examining q instances
to see if they are p, and failing to explore not-q instances [20]). Several authors have argued that
pessimistic views of the human ability to choose relevant queries are based on faulty task analyses;
and that, when the learning task is properly construed, humans do an excellent, even optimal job of
selection [7, 14]. As much of the debate in the psychological literature turns on task analysis and the
proper metric for assessing performance, there is significant opportunity to benefit from the formal
descriptions characteristic of machine learning research. The current study exploits one such analysis of a relatively simple binary classification task with fixed error rate in feedback. Specification of
the theoretical benefits of active learning in this context allows us to address the following questions
regarding human performance:
[Q1] Do humans perform better when they can select their own examples for labeling, compared to
passive observation of labeled examples?
[Q2] If so, do they achieve the full benefit of active learning suggested by statistical learning theory?
[Q3] If they do not, can machine learning be used to enhance human performance?
[Q4] Do the answers to these questions vary depending upon the difficulty of the learning problem?
The goal of this paper is to answer these questions in a quantitative way by studying human and
machine performance in one well-understood classification task. Answers to these questions have
important theoretical and practical implications for our understanding of human learning and cognition. As previously noted, most theories of human category learning assume passive sampling
of the environment. Some researchers have argued that the environment provides little information
regarding the category structure of the world, and so conclude that human category learning must
be subject to strong initial constraints [6, 3, 9]. If, however, human learning benefits from active
querying of the environment, it is not clear that such conclusions are justified. From an applied
perspective, if machines can be shown to aid human learning in certain predictable circumstances,
this has clear implications for the design of intelligent tutoring systems and other machine-human
hybrid applications.
2 A Two-Category Learning Task
For the study in this paper we consider learning in a relatively simple setting, where there is a good
theoretical understanding of both active and passive machine learning, offering an ideal test-bed for
assessing active learning in humans. The task is essentially a two-category learning problem (binary
classification) in the interval [0, 1]. Let θ ∈ [0, 1] be the unknown but fixed decision boundary. To
the left of θ the category is "zero" and to the right of θ the category is "one." The goal of the learning
task is to infer θ as accurately as possible from a set of examples. The training data (set of examples)
consists of n sample and label pairs, {(Xi, Yi)}_{i=1}^n, where Xi ∈ [0, 1] and Yi ∈ {0, 1}. The label
Yi is related to the sample Xi in the following noisy way: Yi is equal to the category of Xi with
probability 1 − ε and equal to the other category with probability ε, where 0 ≤ ε < 1/2. In other
words, each label more probably is correct than incorrect, and ε is the probability of an incorrect
label¹. Note that the label Yi is simply a noisy answer to the question "is Xi larger than θ?" Figure 1
illustrates this model. Furthermore assume that, given Xi, Yi is statistically independent of {Yj}_{j≠i}.
At this point we have not specified how the sample locations Xi are generated, and in this lies the
major difference between passive and active learning. In the passive learning setting the sample
locations are randomly distributed, independent of the labels. On the other hand, in the active
learning setting the learner can choose the sample locations in a sequential way depending on the
past, that is Xi = h(X1, . . . , Xi−1, Y1, . . . , Yi−1), where h is a (possibly random) function that
takes into account past experiences and proposes a new query Xi .
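For concreteness, a minimal sketch of this label model (the parameter names theta and eps are ours):

```python
import random

def noisy_label(x, theta, eps):
    """Noisy answer to "is x larger than theta?": the true category of x
    (0 if x < theta, 1 otherwise), flipped with probability eps."""
    true_label = 0 if x < theta else 1
    return true_label if random.random() >= eps else 1 - true_label
```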
If ε = 0, that is when there is no label noise, the optimal methodologies for passive and active
learning are quite obvious. In passive learning, the optimal inference is that θ lies somewhere
between the rightmost location where a label of zero was observed and the leftmost location where a
label of one was observed. If the n sample locations are (approximately) evenly distributed between
0 and 1, then the error of the inference is on the order of 1/n. On the other hand, in active learning
the optimal strategy is a deterministic binary bisection: begin by taking X1 = 1/2. If Y1 = 0, then
θ > 1/2, otherwise θ ≤ 1/2. Suppose Y1 = 1, then the next sample point is X2 = 1/4 and if
Y2 = 1, then θ < 1/4 otherwise θ ≥ 1/4. Proceeding in this fashion we see that the length of the
interval of possible values of θ is halved at every observation. Therefore after n samples the error
of the active learning inference is at most 2^−(n+1). Clearly active learning, where the error decays
exponentially with the number of samples, is much better than passive learning, where the error can
decay only polynomially.
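A sketch of this noiseless bisection strategy (assuming a label oracle such as the one above):

```python
def bisection_estimate(label_oracle, n):
    """Noiseless (eps = 0) active learning: each query halves the interval
    of possible boundary values, so the error after n queries is <= 2^-(n+1)."""
    lo, hi = 0.0, 1.0
    for _ in range(n):
        x = (lo + hi) / 2.0
        if label_oracle(x) == 1:   # label 1 means x is right of theta
            hi = x
        else:                      # label 0 means x is left of theta
            lo = x
    return (lo + hi) / 2.0         # midpoint of the surviving interval

# e.g. bisection_estimate(lambda x: noisy_label(x, 0.3, 0.0), 10)
```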
If ε > 0 there is uncertainty in our label observation process and estimating θ becomes more delicate. Under passive learning, the maximum likelihood estimator yields the optimal rate of error
convergence. Furthermore it is possible to show a performance lower bound that clarifies what is
the best possible performance of any passive learning algorithm. In particular we have the following
result.
  inf_{θ̂_n} sup_{θ∈[0,1]} E[|θ̂_n − θ|] ≥ (1/4) · ((1 + 2ε)/(1 − 2ε))² · 1/(n + 1) ,   (1)

where θ̂_n is the estimate of θ obtained after n observations, and the infimum is taken over all possible
passive learning procedures. This is a so-called minimax lower bound, and gives an indication of the
best achievable performance of any passive learning algorithm. That is, no passive algorithm can
learn more rapidly. This bound can be easily shown using Theorem 2.2 of [18], and the performance
of the maximum likelihood estimator is within a constant factor of (1).
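For comparison, a sketch of the passive maximum likelihood estimator over a grid of candidate boundaries (for any fixed ε < 1/2, maximizing the likelihood amounts to minimizing label disagreements; the grid size is our choice):

```python
import numpy as np

def passive_ml_estimate(X, Y, grid=1000):
    """Maximum likelihood estimate of theta from randomly located labeled
    samples: the candidate boundary with the fewest label disagreements."""
    candidates = (np.arange(grid) + 0.5) / grid
    disagreements = [np.sum((X >= c).astype(int) != Y) for c in candidates]
    return candidates[int(np.argmin(disagreements))]
```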
For active learning, deterministic bisection cannot be used due to the label noise. Nevertheless
active learning is still extremely beneficial in this setting. Horstein [8] proposed a method that is
suitable for our purposes. The key idea stems from Bayesian estimation. Suppose that we have a
prior probability density function p0(θ) on the unknown parameter θ, namely that θ is uniformly
distributed over the interval [0, 1]. To make the exposition clear let us assume θ = 1/4. Like
before, we start by making a query at X1 = 1/2. With probability 1 − ε we observe the correct
label Y1 = 1, and with probability ε we observe the incorrect label Y1 = 0. Suppose Y1 = 1 was
observed. Given these facts we can update the posterior density by applying Bayes rule. In this case
we obtain p1(t|X1, Y1) = 2(1 − ε) if t ≤ 1/2, or 2ε if t > 1/2. The next step is to choose the
sample location X2. We choose X2 so that it bisects the posterior probability mass, that is, we take
X2 such that Pr_{t∼p1(·)}(t > X2 | X1, Y1) = Pr_{t∼p1(·)}(t < X2 | X1, Y1). In other words X2 is just the
median of the posterior distribution. We continue iterating this procedure until we have collected n
samples. The estimate θ̂_n is then defined as the median of the final posterior distribution. Figure 2
illustrates the procedure. Note that if ε = 0 then this probabilistic bisection is simply the binary
bisection described above.
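A sketch of this probabilistic bisection on a discretized grid (the discretization anticipates the modified method of [2] mentioned next; the grid size and names are ours):

```python
import numpy as np

def probabilistic_bisection(label_oracle, eps, n, grid=1000):
    """Maintain a posterior over theta on a uniform grid; query the
    posterior median, then reweight by the noisy-label likelihood."""
    t = (np.arange(grid) + 0.5) / grid        # candidate boundary values
    post = np.full(grid, 1.0 / grid)          # uniform prior p0
    for _ in range(n):
        cdf = np.cumsum(post)
        x = t[np.searchsorted(cdf, 0.5)]      # posterior median = next query
        y = label_oracle(x)
        # a candidate t predicts label 1 at x iff t <= x; the observed label
        # agrees with that prediction with probability 1 - eps
        post *= np.where((t <= x) == (y == 1), 1.0 - eps, eps)
        post /= post.sum()
    cdf = np.cumsum(post)
    return t[np.searchsorted(cdf, 0.5)]       # estimate: final posterior median
```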
The above algorithm works extremely well in practice, but it is hard to analyze. In [2] a slightly
modified method was introduced, which is more amenable to analysis; the major difference involves
¹We use a constant noise level ε because the theoretical distinction between active and passive learning is
dramatic in this case. Other (perhaps more natural) noise models are possible; for example, ε can decrease away
from the true class boundary. Noise models like this are well understood theoretically [4]; we will investigate
them in future work.
Figure 3: A few 3D visual stimuli and their X values (0, 0.125, 0.25, 0.375, 0.5, 0.625, 0.75, 0.875, 1) used in our experiment.
a discretization of the possible query locations. For this method it can be shown [2] that

  sup_{θ∈[0,1]} E[|θ̂_n − θ|] ≤ 2 ( √( 1/2 + √(ε(1 − ε)) ) )^n .   (2)
Note that the expected estimation error decays exponentially with the number of observations, as
opposed to the polynomial decay achievable using passive learning (1). This shows that the accuracy
of active learning is significantly better than passive learning, even under the presence of uncertainty.
Furthermore no active (or passive) learning algorithm can have its expected error decaying faster
than exponentially with the number of samples, as in (2).
3 Human Passive and Active Learning Experiments
Equipped with the theoretical performance of passive learning (1) and active learning (2), we now
describe a behavioral study designed to answer Q1-Q4 posed earlier. The experiment is essentially
a human analog of the abstract learning problem described in the previous section in which the
learner tries to find the boundary between two classes defined along a single dimension, a setting
used to demonstrate semi-supervised learning behavior in humans in our previous work [21]. We
are particularly interested in comparing three distinct conditions:
Condition "Random". This is the passive learning condition where the human subject cannot
select the queries, and is instead presented sequentially with examples {Xi}_{i=1}^n sampled uniformly
at random from [0, 1], and their noisy labels {Yi}_{i=1}^n. The subject is regularly asked to guess the
boundary from these observations (without feedback). As in (1), the expected estimation error
|θ̂_n − θ| of an optimal machine learning algorithm decreases at the rate 1/n. If humans are capable
of learning from passive observation of random samples, their boundary estimates should approach
the true boundary with this polynomial rate too.
Condition "Human-Active". This is the active learning condition where the human subject, at
iteration i, selects a query Xi based on her previous queries and their noisy labels {(Xj, Yj)}_{j=1}^{i−1}.
She then receives a subsequent noisy label Yi . If humans are making good use of previously collected
examples by selecting informative queries then the rate of error decrease should be exponential,
following (2).
Condition "Machine-Yoked". This is a hybrid human-machine-learning condition in which the
human passively observes samples selected by the active learning algorithm in [2], observes the
noisy label generated in response to each query, and is regularly asked to guess, without feedback,
where the boundary is, as though the machine is teaching the human. It is motivated by question
Q3: Can machine learning assist human category learning?
Materials. Each sample X is a novel artificial 3D shape displayed to the subject on a computer
screen. The shapes change with X smoothly in several aspects simultaneously. Figure 3 shows a few
shapes and their X values. A difference of 0.06 in X value corresponds roughly to the psychological
"Just Noticeable Difference" determined by a pilot study. For implementation reasons our shapes
are discretized to a resolution of about 0.003 in X values, beyond which the visual difference is too
small to be of interest.
Participants. Participants were 33 university students, participating voluntarily or for partial course
credit. They were told that the 3D shapes are alien eggs. Spiky eggs (X close to 0) most likely hatch
alien snakes (category zero), and smooth eggs (X close to 1) most likely hatch alien birds (category
one), but there could be exceptions (label noise). Their task was to identify as precisely as possible
the egg shape (decision boundary) at which it switches from most likely snakes to most likely birds.
Procedure. Each participant was assigned one of the three conditions: Random (13 subjects),
Human-Active (14 subjects), Machine-Yoked (6 subjects). Machine-Yoked received approximately
half the number of subjects of the other groups, as pilot studies indicated that performance was much less variable in
this condition. In all conditions, subjects were explicitly informed of the one dimensional nature of
the task. The participant first completed a short practice session to familiarize her with the computer
interface and basic task, followed by 5 longer sessions of 45 iterations each. The noise level ε, which
determines the difficulty of the learning task, varied across sessions, taking the values 0, 0.05, 0.1,
0.2, 0.4 with order determined randomly for each participant. For each session and participant the
true decision boundary θ was randomly set in [1/16, 15/16] to avoid dependencies on the location
of the true boundary. The experiment thus involved one between-subject factor (learning condition)
and one within-subjects factor (noise level ε).
At iteration i of the learning task, a single shape at Xi was displayed on a CRT monitor at a normal
viewing distance. In the Human-Active condition, the participant then used a computer mouse
wheel to scroll through the range of shapes. Once the participant found the shape she wished to
query (Xi+1), she clicked a "hatch" button and observed the outcome (bird or snake, corresponding
to the noisy label), followed by a "Continue" button to move on to the next query. In the Random
and Machine-Yoked conditions, each sample Xi+1 was generated by the computer with no user
intervention, and a short animation was displayed showing shapes smoothly transitioning from Xi
to Xi+1 in order to match the visual experience in the Human-Active condition. Once the transition
was completed, the outcome (label) for Xi+1 was observed, and participants clicked a "Continue"
button to observe the next sample and outcome. In all conditions, the computer generated the noisy
label Yi+1 according to the true boundary θ and noise level ε, and displayed it to the participant with
either a snake picture (Yi+1 = 0) or a bird picture (Yi+1 = 1). The display was reset to the initial
shape after every 3 queries to ensure that participants paid attention to the precise shape corresponding
to their estimate of the boundary location rather than simply searching locally around the current
shape (total 15 re-starts over 45 queries; 45 re-starts would be too tedious for the subjects).
The participant was asked to guess the decision boundary (θ̂) after every three iterations. In these
"boundary queries," the computer began by displaying the shape at X = 1/2, and the participant
used the mouse wheel to change the shape until it matched her current best guess about the boundary
shape. Once satisfied, she clicked a "submit boundary" button. We thus collect θ̂_3, θ̂_6, θ̂_9, . . . , θ̂_45
for each session. These boundary estimates allowed us to compute mean (across subjects) human
estimation errors |θ̂_n − θ| for different n, under different conditions and different noise levels. We
compare these means (i) across the different experimental conditions and (ii) to the theoretical predictions in (1)(2).
4 Experimental Results
Figure 4 shows, for each condition and noise level, how every participant's boundary guesses approach the true boundary θ. Qualitatively, human active learning (Human-Active) appears better
than passive learning (Random) because the curves are more concentrated around zero. Machine-assisted human learning (Machine-Yoked) seems even better. As the task becomes harder (larger
noise ε), performance suffers in all conditions, though less so for the Machine-Yoked learners. These
conclusions are further supported by our quantitative analysis below.
It is worth noting that the behavior of a few participants stands out in Figure 4. For example, one
subject's boundary guesses shift considerably within a session, resulting in a rather zigzagged curve
in (Human-Active, ε = 0.1). All participants, however, perform relatively well in at least some
noise settings, suggesting that they took the experiment seriously. Any strange-looking behavior
likely reflects genuine difficulties in the task, and for this reason we have not removed any apparent
outliers in the following analyses. We now answer questions Q1-Q4 raised in Section 1.
[Q1] Do humans perform better when they can actively select samples for labeling compared
to passive observation of randomly-selected samples?
[A1] Yes, at least for low noise levels. For higher noise the two are similar.
To support our answer, we show that the human estimation error |θ̂_n − θ| is smaller in the Human-Active condition than in the Random condition. This is plotted in Figure 5, with ±1 standard error bars.
When noise is low, the Human-Active curve is well below the Random curve throughout the session.
[Figure 4 panels: rows Random, Human Active, Machine Yoked; columns noise ε = 0, 0.05, 0.1, 0.2, 0.4; each panel plots θ̂_n − θ (y-axis, −1 to 1) against iteration n (x-axis, 10 to 40).]
Figure 4: Overview of experiment results. The x-axis is iteration n, the y-axis is the (signed) difference
between human boundary guess and true boundary, θ̂_n − θ. Each curve shows performance from one
human subject (though they overlap, it is sufficient to note the trends). Overall, human active learning (Human-Active) is better than passive learning (Random), and machine-assisted human learning
(Machine-Yoked) is even better. As the task becomes harder (larger noise ε), all performances suffer.
[Figure 5 panels: noise ε = 0.10, 0.20, 0.40; each plots estimation error (y-axis, 0 to 0.3) against iteration n (x-axis, 10 to 40) for Human Active, Random, and Machine Yoked.]
Figure 5: Human estimate error |θ̂_n − θ| under different conditions and noise levels. The x-axis is
iteration n. The error bars are ±1 standard error. Human-Active is better than Random when noise
is low; Machine-Yoked is better than Human-Active when noise is high.
That is, with active learning the subjects quickly come up with better guesses and maintain this advantage till the end. Human-Active performance deteriorates with higher noise levels, however, and
at the highest noise levels it appears indistinguishable from performance in the Random condition.
[Q2] Can humans achieve the full benefit of active learning suggested by learning theory?
[A2] Human active learning does have exponential convergence, but with slower decay constants than the upper bound in (2). Human passive learning, on the other hand, sometimes
does not even achieve polynomial convergence as predicted in (1), and in no condition does the
rate approach optimal performance.
To support these conclusions, consider that, for active learning, the theoretical estimation error
bound in (2) has the form 2e^{−λn} and decays exponentially with n. The decay constant
λ = −(1/2) log(1/2 + √(ε(1 − ε))) is determined by the noise level ε. The larger the decay constant, the faster the error approaches zero. If one plots the log of the bound vs. n, it would be a line with
slope −λ. To determine whether human error decays exponentially as predicted, and with a comparable slope, one can similarly plot the logarithm of human active learning estimation error vs. n. If
human active learning decreases error exponentially (which is desirable), this relationship is linear,
as Figure 6 (Upper) shows it to be. This exponential decay of error offers further evidence that human active learning exceeds passive learning performance, where error can only decay polynomially
(Figure 6, Lower). The speed (decay constant) of the exponential decay in human active learning is,
however, slower than the theoretical upper bound (2). To see this, we fit one line per noise level in
[Figure 6 panels: two rows of five (noise ε = 0.00, 0.05, 0.10, 0.20, 0.40); the upper row plots log error (y-axis, −5 to −1) against n (x-axis, 10 to 40), the lower row against log(n) (x-axis, 0 to 4).]
Figure 6: (Upper) Human active learning decreases error exponentially, as indicated by the linear
distribution of log(|θ̂_n − θ|) (the y-axis) versus n (the x-axis). (Lower) Human passive learning in
the Random condition is slower than O(1/n), since the slopes are shallower than −1 on log(|θ̂_n − θ|)
(the y-axis) versus log(n) (the x-axis).
noise level ε    0       0.05    0.1     0.2     0.4
Human-Active     0.031   0.042   0.037   0.030   0.005
bound (2)        0.347   0.166   0.112   0.053   0.005

Table 1: The exponential decay constants of human active learning are smaller than predicted by
statistical learning theory for lower noise levels.
Figure 6 and use the negative slope of the fitted lines as the estimate of the decay constant in human
active learning. For comparison, we computed the decay constant in the theoretical bound. Table 1
compares these decay constants under different noise levels. It is clear that human active learning's
error decays at a slower rate, especially when the noise is low.
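Both rows of Table 1 can be made concrete. A sketch, assuming the per-iteration mean errors are available in an array (the function and variable names are ours):

```python
import math
import numpy as np

def bound_decay_constant(eps):
    """Decay constant implied by bound (2): -(1/2) log(1/2 + sqrt(eps(1-eps)))."""
    return -0.5 * math.log(0.5 + math.sqrt(eps * (1.0 - eps)))

def fitted_decay_constant(iterations, mean_errors):
    """Empirical decay constant: the negative slope of a least-squares line
    fit to log(error) versus iteration number n."""
    slope, _ = np.polyfit(iterations, np.log(mean_errors), 1)
    return -slope

# reproduces the "bound (2)" row of Table 1:
for eps in [0.0, 0.05, 0.1, 0.2, 0.4]:
    print(eps, round(bound_decay_constant(eps), 3))
# 0.0 0.347, 0.05 0.166, 0.1 0.112, 0.2 0.053, 0.4 0.005
```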
For passive learning, the minimax lower bound (1) has a polynomial decay of O(1/n), which is a
line with slope −1 on a plot of log(|θ̂_n − θ|) vs. log(n). As shown in Figure 6 (Lower), the analogous
log-log plot from human passive learning in the Random condition does seem to fit a line, but the
slope is much shallower than -1. Indeed, for 2 of the 5 noise levels (0.1 and 0.2), the estimated slope
is not significantly different from zero! These results suggest that humans either fail to learn or learn
at a much lower rate than formal analysis suggests is possible.
[Q3] Can machine learning be used to enhance human learning?
[A3] Apparently in high noise levels ? But what really happened?
As shown in Figure 5, the Machine-Yoked curve is no different than Human-Active in low noise
levels, but substantially better in high noise levels. It is important to remember that Machine-Yoked
is human performance, not that of the machine learning algorithm. The results seem to indicate that
humans can utilize the training data chosen by a machine active learning algorithm to enhance their
performance in settings where humans are not generally performing well. Upon closer inspection,
however, we noticed that almost all subjects in the Machine-Yoked condition used the following
strategy. They quickly learned that the computer was generating training examples that soon converge to the true boundary. They then simply placed their boundary guess at (or near) the latest
training example generated by the machine. This "memorizing" strategy worked very well in our
setting, but it is difficult to believe that the subjects were really "learning" the decision boundary.
Instead, they likely learned to trust and depend upon the computer. In view of this, we consider
Q3 inconclusive, but hope these observations provoke thoughts on how to actually improve human
learning.
[Q4] Do answers to the above questions depend upon the difficulty of the learning task?
[A4] One form of difficulty, the label noise level ε, has profound effects on human learning.
Specifically, the advantage of active learning diminishes with noise; and at high noise levels active
learning arguably has no advantage over passive learning for humans in this setting. Formal analysis
suggests that the advantage of active over passive sampling should diminish with increasing noise;
but it also suggests that some benefit to active sampling should always be obtained. An important
goal for future research, then, is to understand why human performance is so adversely affected by
noise.
5 Conclusions and Future Work
We have conducted behavioral experiments to compare active versus passive learning by humans in a
simple classification task, and compared human performance to that predicted by statistical learning
theory. In short, humans are able to actively select queries and use them to achieve faster category
learning; but the advantages of active learning diminish under higher noise conditions and do not
approach theoretical bounds. One important conclusion from this work is that passive learning may
not be a very good model for how human beings learn to categorize. Our research also raises several
interesting further questions, including how the current conclusions extend to more realistic learning
scenarios. The benefit of the current work is that it capitalizes on a simple learning task for which
passive and active performance has been formally characterized. The drawback is that the task is
not especially natural. In future work we plan to extend the current approach to learning situations
more similar to those faced by people in their day-to-day lives.
Acknowledgments: This work is supported in part by the Wisconsin Alumni Research Foundation,
and NSF Grant 0745423 from Developmental Learning Sciences.
References
[1] N. Balcan, S. Hanneke, and J. Wortman. The true sample complexity of active learning. To appear in COLT 2008, Helsinki, Finland, 2008.
[2] M. V. Burnashev and K. Sh. Zigangirov. An interval estimation problem for controlled observations. Problems in Information Transmission, 10:223-231, 1974.
[3] S. Carey. Conceptual change in childhood. MIT Press, 1985.
[4] R. Castro and R. Nowak. Minimax bounds for active learning. IEEE Transactions on Information Theory, 54(5):2339-2353, 2008.
[5] D. Cohn, L. Atlas, and R. Ladner. Improving generalization with active learning. Machine Learning, 15(2):201-221, 1994.
[6] R. Gelman and E. M. Williams. Handbook of child psychology, chapter Enabling constraints for cognitive development and learning: A domain-specific epigenetic theory. John Wiley and Sons, 1998.
[7] G. Gigerenzer and R. Selten. Bounded rationality: The adaptive toolbox. The MIT Press, 2001.
[8] M. Horstein. Sequential decoding using noiseless feedback. IEEE Trans. Info. Theory, 9(3):136-143, 1963.
[9] F. Keil. Concepts, kinds, and cognitive development. MIT Press, 1989.
[10] J. K. Kruschke. Bayesian approaches to associative learning: From passive to active learning. Learning & Behavior, 36(3):210-226, 2008.
[11] P. A. Laughlin. Focusing strategy in concept attainment as a function of instructions and task complexity. Journal of Experimental Psychology, 98(2):320-327, May 1973.
[12] C. R. Mynatt, M. E. Doherty, and R. D. Tweney. Confirmation bias in a simulated research environment: An experimental study of scientific inference. The Quarterly Journal of Experimental Psychology, 29(1):85-95, Feb 1977.
[13] J. Nelson. Finding useful questions: On Bayesian diagnosticity, probability, impact, and information gain. Psychological Review, 112(4):979-999, 2005.
[14] M. Oaksford and N. Chater. Bayesian rationality: The probabilistic approach to human reasoning. Oxford University Press, 2007.
[15] L. E. Schulz, T. Kushnir, and A. Gopnik. Causal Learning: Psychology, Philosophy and Computation, chapter Learning from doing: Interventions and causal inference. Oxford University Press, 2007.
[16] D. Sobel and T. Kushnir. Interventions do not solely benefit causal learning: Being told what to do results in worse learning than doing it yourself. In Proceedings of the 25th Annual Meeting of the Cognitive Science Society, 2003.
[17] M. Steyvers, J. Tenenbaum, E. Wagenmakers, and B. Blum. Inferring causal networks from observations and interventions. Cognitive Science, 27:453-489, 2003.
[18] A. B. Tsybakov. Introduction à l'estimation non-paramétrique. Mathématiques et Applications, 41. Springer, 2004.
[19] G. Tur, D. Hakkani-Tür, and R. E. Schapire. Combining active and semi-supervised learning for spoken language understanding. Speech Communication, 45:171-186, 2005.
[20] P. C. Wason and P. N. Johnson-Laird. Psychology of reasoning: Structure and content. Harvard University Press, 1972.
[21] X. Zhu, T. Rogers, R. Qian, and C. Kalish. Humans perform semi-supervised classification too. In Twenty-Second AAAI Conference on Artificial Intelligence, 2007.
| 3456 |@word polynomial:4 achievable:2 seems:1 scroll:1 tedious:1 instruction:1 p0:1 q1:4 paid:1 dramatic:2 harder:2 initial:2 series:1 selecting:2 offering:1 seriously:1 rightmost:1 past:2 current:6 comparing:2 discretization:1 must:1 john:1 subsequent:1 realistic:1 informative:2 shape:16 designed:1 plot:4 update:1 atlas:1 v:3 half:1 selected:2 guess:9 intelligence:1 inspection:1 capitalizes:1 short:3 provides:2 math:1 location:10 along:1 profound:1 incorrect:3 consists:1 behavioral:2 theoretically:2 indeed:1 expected:3 roughly:1 p1:3 frequently:1 behavior:4 discretized:1 inspired:1 little:1 inappropriate:1 equipped:1 increasing:1 becomes:3 begin:1 estimating:1 clicked:3 matched:1 bounded:1 mass:2 what:3 kind:1 substantially:1 q2:2 informed:1 spoken:1 finding:1 horstein:2 quantitative:3 every:3 remember:1 control:1 grant:1 intervention:4 appear:1 arguably:1 before:1 engineering:2 understood:3 oxford:2 solely:1 approximately:2 signed:1 bird:4 therein:1 collect:1 shaded:1 suggests:3 range:1 statistically:1 practical:1 acknowledgment:1 yj:2 practice:2 procedure:4 diagnosticity:1 area:1 empirical:1 significantly:2 thought:1 word:2 suggest:1 cannot:2 close:2 selection:2 wheel:2 gelman:1 faulty:1 context:1 applying:1 deterministic:2 send:1 latest:1 attention:2 williams:1 kruschke:1 resolution:1 qian:1 estimator:2 rule:1 steyvers:1 searching:1 analogous:1 suppose:3 rationality:2 user:1 hypothesis:1 harvard:1 trend:1 particularly:1 labeled:3 observed:6 electrical:2 childhood:1 decrease:5 contemporary:1 removed:1 observes:3 highest:1 voluntarily:1 tur:1 environment:5 predictable:1 developmental:1 complexity:2 asked:3 depend:2 raise:1 gigerenzer:1 ror:1 upon:4 learner:9 accelerate:1 easily:1 represented:1 chapter:2 distinct:2 describe:1 query:18 artificial:2 labeling:3 outcome:3 quite:1 apparent:1 larger:4 posed:1 otherwise:3 ability:2 noisy:8 laird:1 final:1 associative:1 kalish:1 advantage:6 indication:1 took:1 reset:1 relevant:1 combining:1 rapidly:1 till:1 achieve:4 description:1 bed:1 participating:1 convergence:4 interacted:1 transmission:1 assessing:3 categorization:1 generating:1 object:3 depending:2 wished:1 noticeable:1 job:1 strong:1 predicted:5 c:1 indicate:2 involves:1 quantify:1 come:1 drawback:1 correct:2 gopnik:1 human:89 crt:1 viewing:1 material:1 rogers:1 argued:2 generalization:1 really:2 pessimistic:1 assisted:1 around:2 credit:1 diminish:2 normal:1 presumably:1 cognition:1 claim:1 major:2 vary:1 early:1 a2:1 finland:1 purpose:1 failing:1 estimation:10 diminishes:1 yoked:13 label:24 bisects:1 hope:1 mit:3 clearly:1 always:1 modified:1 rather:2 avoid:1 chater:1 q3:4 focus:1 improvement:1 properly:1 she:4 likelihood:2 selten:1 alien:3 contrast:2 inference:6 cri:1 prt:2 zigangirov:1 typically:1 snake:4 her:3 schulz:1 interested:1 selects:1 overall:1 classification:8 colt:1 proposes:1 plan:1 raised:1 development:2 equal:2 once:3 genuine:1 sampling:3 familiarize:1 nearly:1 future:4 stimulus:1 intelligent:1 few:3 randomly:4 simultaneously:1 recognize:1 delicate:1 maintain:1 epigenetic:1 interest:2 investigate:2 certainly:2 sh:1 sobel:1 implication:3 amenable:1 capable:2 partial:1 closer:1 experience:2 nowak:1 intense:1 conduct:1 old:1 logarithm:1 re:2 plotted:1 causal:4 theoretical:11 fitted:1 psychological:3 instance:2 earlier:1 jerryzhu:1 examining:1 wortman:1 conducted:1 johnson:1 too:4 dependency:1 answer:8 considerably:1 density:2 probabilistic:4 told:2 decoding:1 enhance:3 mouse:2 quickly:2 reflect:1 satisfied:1 aaai:1 opposed:1 choose:4 possibly:1 worse:1 cognitive:5 
adversely:1 actively:5 account:1 suggesting:1 student:1 provoke:1 explicitly:1 view:2 lot:1 try:1 doing:3 sup:2 analyze:1 start:3 bayes:1 decaying:1 participant:16 apparently:1 slope:7 carey:1 ass:1 construed:1 ni:3 accuracy:1 who:2 characteristic:1 yield:1 clarifies:1 identify:1 yes:1 generalize:1 bayesian:4 accurately:1 produced:1 bisection:5 worth:1 researcher:1 hanneke:1 j6:1 suffers:1 against:1 involved:1 obvious:1 sampled:1 pilot:2 proved:1 gain:1 ask:1 logical:1 knowledge:2 actually:1 appears:2 focusing:1 alexandre:1 higher:3 supervised:3 day:2 methodology:1 response:1 though:3 furthermore:5 just:2 spiky:1 until:2 hand:3 receives:2 trust:1 cohn:1 infimum:1 perhaps:1 indicated:2 scientific:2 believe:1 effect:1 concept:3 y2:1 true:9 alumnus:1 assigned:1 indistinguishable:1 during:1 noted:1 leftmost:1 demonstrate:1 doherty:1 interface:2 passive:45 balcan:1 reasoning:2 novel:1 charles:1 began:1 superior:2 overview:1 exponentially:7 analog:1 extend:2 significant:1 session:7 teaching:1 similarly:1 language:1 specification:1 longer:1 feb:1 halved:1 own:1 posterior:4 perspective:1 inf:1 scenario:2 certain:2 binary:4 continue:3 life:1 meeting:1 yi:12 determine:2 paradigm:2 converge:1 semi:3 ii:1 full:2 desirable:1 infer:1 stem:1 smooth:1 exceeds:1 faster:4 match:1 characterized:1 offer:1 concerning:1 a1:1 controlled:1 impact:1 prediction:2 hakkani:1 basic:1 circumstance:1 metric:1 essentially:2 noiseless:1 iteration:6 sometimes:1 achieved:1 justified:1 interval:4 median:2 probably:1 subject:18 regularly:2 seem:2 near:1 presence:1 ideal:2 noting:1 switch:1 xj:1 fit:2 psychology:6 regarding:3 idea:1 toddler:1 shift:1 whether:1 motivated:1 assist:1 suffer:1 speech:1 york:1 burnashev:1 dramatically:1 ignored:1 iterating:1 clear:4 generally:1 useful:1 tsybakov:1 locally:1 concentrated:1 category:20 schapire:1 nsf:1 happened:1 deteriorates:1 estimated:1 per:1 affected:1 group:1 key:1 nevertheless:1 blum:1 monitor:1 wisc:1 utilize:1 button:4 year:1 uncertainty:2 throughout:1 almost:1 strange:1 decision:6 comparable:1 bound:15 pay:1 followed:2 display:1 correspondence:1 annual:1 constraint:2 precisely:1 worked:1 x2:7 helsinki:1 aspect:1 speed:1 anyone:1 extremely:2 passively:1 performing:1 relatively:3 department:2 according:1 beneficial:1 slightly:1 across:3 smaller:1 son:1 wi:1 ur:1 wason:1 making:3 castro:1 memorizing:1 outlier:1 taken:1 previously:3 turn:1 fail:1 end:1 studying:1 observe:4 quarterly:1 away:1 encounter:1 ematiques:1 slower:5 ensure:1 completed:2 a4:1 opportunity:2 madison:2 somewhere:1 exploit:1 especially:2 establish:1 society:1 wagenmakers:1 move:1 noticed:1 question:11 strategy:5 distance:1 simulated:1 capacity:1 evenly:1 topic:2 nelson:1 collected:2 tutoring:1 reason:2 length:1 relationship:1 difficult:2 robert:1 debate:1 info:1 negative:1 design:1 implementation:1 proper:1 kushnir:2 unknown:2 perform:5 shallower:2 upper:4 ladner:1 observation:12 twenty:1 enabling:1 keil:1 displayed:4 situation:1 ever:2 precise:1 looking:1 y1:9 communication:1 varied:1 introduced:1 cast:1 pair:1 specified:1 namely:1 toolbox:1 distinction:1 conflicting:1 learned:2 trans:1 adult:1 address:1 suggested:3 beyond:1 below:2 bar:2 able:1 param:1 including:1 hatch:3 unrealistic:1 suitable:1 overlap:1 difficulty:5 hybrid:2 natural:2 zhu:1 minimax:3 improve:1 oaksford:1 picture:2 axis:7 xiaojin:1 columbia:1 faced:1 prior:1 literature:2 understanding:3 review:1 wisconsin:2 interesting:1 querying:3 versus:4 foundation:1 sufficient:1 article:1 displaying:1 famously:1 course:1 supported:2 placed:1 soon:1 
formal:3 bias:1 understand:1 laughlin:1 taking:2 benefit:10 distributed:3 boundary:26 feedback:4 dimension:1 world:2 transition:1 rich:1 exceedingly:1 curve:6 author:1 qualitatively:1 stand:1 adaptive:1 polynomially:2 transaction:1 active:78 sequentially:2 q4:4 conceptual:1 handbook:1 conclude:1 xi:18 why:1 table:2 learn:5 nature:1 confirmation:1 attainment:2 improving:1 excellent:1 domain:1 submit:1 noise:56 animation:1 child:2 allowed:1 x1:6 screen:1 fashion:1 egg:4 ny:1 aid:1 wiley:1 inferring:1 exponential:5 lie:2 theorem:1 transitioning:1 specific:1 showing:1 er:1 decay:19 evidence:1 a3:1 inconclusive:1 sequential:2 gained:1 magnitude:1 illustrates:2 rui:1 smoothly:2 timothy:1 simply:4 likely:7 explore:1 visual:3 springer:1 corresponds:1 determines:1 goal:3 exposition:1 content:1 hard:1 change:3 determined:3 specifically:1 contrasted:1 uniformly:2 yourself:1 called:1 total:1 partly:1 experimental:5 attempted:1 exception:1 select:7 formally:1 people:3 support:2 categorize:1 philosophy:1 |
2,709 | 3,457 | On the Generalization Ability of
Online Strongly Convex Programming Algorithms
Sham M. Kakade
TTI Chicago
Chicago, IL 60637
[email protected]
Ambuj Tewari
TTI Chicago
Chicago, IL 60637
[email protected]
Abstract
This paper examines the generalization properties of online convex programming
algorithms when the loss function is Lipschitz and strongly convex. Our main
result is a sharp bound, that holds with high probability, on the excess risk of the
output of an online algorithm in terms of the average regret. This allows one to
use recent algorithms with logarithmic cumulative regret guarantees to achieve
fast convergence rates for the excess risk with high probability. As a corollary, we
characterize the convergence rate of PEGASOS (with high probability), a recently
proposed method for solving the SVM optimization problem.
1 Introduction
Online regret minimizing algorithms provide some of the most successful algorithms for many machine learning problems, both in terms of the speed of optimization and the quality of generalization.
Notable examples include efficient learning algorithms for structured prediction [Collins, 2002] (an
algorithm now widely used) and for ranking problems [Crammer et al., 2006] (providing competitive
results with a fast implementation).
Online convex optimization is a sequential paradigm in which at each round, the learner predicts a
vector w_t ∈ S ⊆ R^n, nature responds with a convex loss function, ℓ_t, and the learner suffers loss
ℓ_t(w_t). In this setting, the goal of the learner is to minimize the regret:

  Σ_{t=1}^T ℓ_t(w_t) − min_{w∈S} Σ_{t=1}^T ℓ_t(w)
which is the difference between his cumulative loss and the cumulative loss of the optimal fixed
vector.
Typically, these algorithms are used to train a learning algorithm incrementally, by sequentially
feeding the algorithm a data sequence, (X1, Y1), . . . , (XT, YT) (generated in an i.i.d. manner). In
essence, the loss function used in the above paradigm at time t is ℓ(w; (X_t, Y_t)), and this leads to a
guaranteed bound on the regret:

  Reg_T = Σ_{t=1}^T ℓ(w_t; (X_t, Y_t)) − min_{w∈S} Σ_{t=1}^T ℓ(w; (X_t, Y_t))
However, in the batch setting, we are typically interested in finding a parameter ŵ with good generalization ability, i.e. we would like:

  R(ŵ) − min_{w∈S} R(w)

to be small, where R(w) := E[ℓ(w; (X, Y))] is the risk.
Intuitively, it seems plausible that low regret on an i.i.d. sequence should imply good generalization performance. In fact, for most of the empirically successful online algorithms, we have a set of
techniques to understand the generalization performance of these algorithms on new data via "online
to batch" conversions; the conversions relate the regret of the algorithm (on past data) to the generalization performance (on future data). These include cases which are tailored to general convex
functions [Cesa-Bianchi et al., 2004] (whose regret is O(√T)) and mistake bound settings [Cesa-Bianchi and Gentile, 2008] (where the regret could be O(1) under separability assumptions).
In these conversions, we typically choose ŵ to be the average of the w_t produced by our online
algorithm.
Recently, there has been a growing body of work providing online algorithms for strongly convex
loss functions (i.e. ℓ_t is strongly convex), with regret guarantees that are merely O(ln T). Such
algorithms have the potential to be highly applicable since many machine learning optimization
problems are in fact strongly convex: either with strongly convex loss functions (e.g. log loss,
square loss) or, indirectly, via strongly convex regularizers (e.g. L2 or KL based regularization).
Note that in the latter case, the loss function itself may only be just convex but a strongly convex regularizer effectively makes this a strongly convex optimization problem; e.g. the SVM optimization
problem uses the hinge loss with L2 regularization. In fact, for this case, the PEGASOS algorithm
of Shalev-Shwartz et al. [2007], based on the online strongly convex programming algorithm of
Hazan et al. [2006], is a state-of-the-art SVM solver. Also, Ratliff et al. [2007] provide a similar
subgradient method for max-margin based structured prediction, which also has favorable empirical
performance.
The aim of this paper is to examine the generalization properties of online convex programming
algorithms when the loss function is strongly convex (where strong convexity can be defined in a
general sense, with respect to some arbitrary norm ‖·‖). Suppose we have an online algorithm
which has some guaranteed cumulative regret bound Reg_T (e.g. say Reg_T ≤ ln T with T samples).
Then a corollary of our main result shows that with probability greater than 1 − δ ln T, we obtain a
parameter ŵ from our online algorithm such that:
  R(ŵ) − min_w R(w) ≤ Reg_T/T + O( √(ln(1/δ) · Reg_T)/T + ln(1/δ)/T ) .
Here, the constants hidden in the O-notation are determined by the Lipschitz constant and the strong
convexity parameter of the loss ℓ. Importantly, note that the correction term is of lower order than
the regret: if the regret is ln T then the additional penalty is O(√(ln T)/T). If one naively uses the
Hoeffding-Azuma methods in Cesa-Bianchi et al. [2004], one would obtain a significantly worse
penalty of O(1/√T).
This result solves an open problem in Shalev-Shwartz et al. [2007], which was on characterizing the
convergence rate of the PEGASOS algorithm, with high probability. PEGASOS is an online strongly
convex programming algorithm for the SVM objective function: it repeatedly (and randomly)
subsamples the training set in order to minimize the empirical SVM objective function. A corollary
to this work essentially shows that the convergence rate of PEGASOS (as a randomized optimization
algorithm) is concentrated rather sharply.
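For concreteness, a minimal sketch of a PEGASOS-style solver: at each step it picks one training example at random, takes a subgradient step with step size 1/(λt) on the λ-regularized hinge loss, and projects onto a ball. This is our simplified rendering (returning an averaged iterate, in the spirit of the conversions studied here), not the authors' exact pseudocode.

```python
import numpy as np

def pegasos(X, y, lam, T, seed=0):
    """Stochastic subgradient descent on the SVM objective
    (lam/2)*||w||^2 + (1/n) * sum_i max(0, 1 - y_i * <w, x_i>)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    w_sum = np.zeros(d)
    for t in range(1, T + 1):
        i = rng.integers(n)                # random training example
        eta = 1.0 / (lam * t)              # step size 1/(lam*t)
        grad = lam * w
        if y[i] * X[i].dot(w) < 1.0:       # hinge loss is active
            grad = grad - y[i] * X[i]
        w = w - eta * grad
        norm = np.linalg.norm(w)           # optional projection onto the
        if norm > 1.0 / np.sqrt(lam):      # ball of radius 1/sqrt(lam)
            w = w / (norm * np.sqrt(lam))
        w_sum += w
    return w_sum / T
```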
Ratliff et al. [2007] also provide an online algorithm (based on Hazan et al. [2006]) for max-margin
based structured prediction. Our results are also directly applicable in providing a sharper concentration result in their setting (in particular, see the regret bound in Equation 15, to which our results
can be applied).
This paper continues the line of research initiated by several researchers [Littlestone, 1989, Cesa-Bianchi et al., 2004, Zhang, 2005, Cesa-Bianchi and Gentile, 2008] which looks at how to convert
online algorithms into batch algorithms with provable guarantees. Cesa-Bianchi and Gentile [2008]
prove faster rates in the case when the cumulative loss of the online algorithm is small. Here,
we are interested in the case where the cumulative regret is small. The work of Zhang [2005] is
closest to ours. Zhang [2005] explicitly goes via the exponential moment method to derive sharper
concentration results. In particular, for the regression problem with squared loss, Zhang [2005] gives
a result similar to ours (see Theorem 8 therein). The present work can also be seen as generalizing
his result to the case where we have strong convexity with respect to a general norm. Coupled with
recent advances in low regret algorithms in this setting, we are able to provide a result that holds
more generally.
Our key technical tool is a probabilistic inequality due to Freedman [Freedman, 1975]. This, combined with a variance bound (Lemma 1) that follows from our assumptions about the loss function,
allows us to derive our main result (Theorem 2). We then apply it to statistical learning with bounded
loss, and to PEGASOS in Section 4.
2 Setting
Fix a compact convex subset S of some space equipped with a norm ‖·‖. Let ‖·‖_* be the dual norm
defined by ‖v‖_* := sup_{w : ‖w‖≤1} v · w. Let Z be a random variable taking values in some space
Z. Our goal is to minimize F(w) := E[f(w; Z)] over w ∈ S. Here, f : S × Z → [0, B] is some
function satisfying the following assumption.

Assumption LIST. (LIpschitz and STrongly convex assumption) For all z ∈ Z, the function
f_z(w) = f(w; z) is convex in w and satisfies:

1. f_z has Lipschitz constant L w.r.t. the norm ‖·‖, i.e. ∀w ∈ S, ∀∇ ∈ ∂f_z(w) (∂f_z denotes
the subdifferential of f_z), ‖∇‖_* ≤ L. Note that this assumption implies ∀w, w′ ∈ S,
|f_z(w) − f_z(w′)| ≤ L‖w − w′‖.

2. f_z is ρ-strongly convex w.r.t. ‖·‖, i.e. ∀θ ∈ [0, 1], ∀w, w′ ∈ S,

  f_z(θw + (1 − θ)w′) ≤ θ f_z(w) + (1 − θ) f_z(w′) − (ρ/2) θ(1 − θ) ‖w − w′‖² .
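As a quick sanity check of these two conditions, consider the scalar example f(w; z) = (w − z)²/2 on S = [0, 1] with z ∈ [0, 1], which is 1-Lipschitz and 1-strongly convex there; the sketch below (our own, not from the paper) verifies both inequalities at random points:

```python
import random

def f(w, z):                 # example loss: 0.5*(w - z)^2 on S = [0, 1]
    return 0.5 * (w - z) ** 2

L, rho = 1.0, 1.0            # Lipschitz and strong convexity constants
for _ in range(1000):
    z = random.random()
    w, w2, th = random.random(), random.random(), random.random()
    # Lipschitz condition: |f(w) - f(w')| <= L * |w - w'|
    assert abs(f(w, z) - f(w2, z)) <= L * abs(w - w2) + 1e-12
    # rho-strong convexity at mixing weight th
    lhs = f(th * w + (1 - th) * w2, z)
    rhs = (th * f(w, z) + (1 - th) * f(w2, z)
           - 0.5 * rho * th * (1 - th) * (w - w2) ** 2)
    assert lhs <= rhs + 1e-12
```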
Denote the minimizer of F by w*, w* := arg min_{w∈S} F(w). We consider an online setting in
which independent (but not necessarily identically distributed) random variables Z1, . . . , ZT become available to us in that order. These have the property that

  ∀t, ∀w ∈ S, E[f(w; Z_t)] = F(w) .

Now consider an algorithm that starts out with some w1 and at time t, having seen Z_t, updates the
parameter w_t to w_{t+1}. Let E_{t−1}[·] denote conditional expectation w.r.t. Z1, . . . , Z_{t−1}. Note that
w_t is measurable w.r.t. Z1, . . . , Z_{t−1} and hence E_{t−1}[f(w_t; Z_t)] = F(w_t).
Define the statistics,

  Reg_T := Σ_{t=1}^T f(w_t; Z_t) − min_{w∈S} Σ_{t=1}^T f(w; Z_t) ,

  Diff_T := Σ_{t=1}^T (F(w_t) − F(w*)) = Σ_{t=1}^T F(w_t) − T · F(w*) .
Define the sequence of random variables

  ξ_t := F(w_t) − F(w*) − (f(w_t; Z_t) − f(w*; Z_t)) .   (1)

Since E_{t−1}[f(w_t; Z_t)] = F(w_t) and E_{t−1}[f(w*; Z_t)] = F(w*), ξ_t is a martingale difference
sequence. This definition needs some explanation as it is important to look at the right martingale
difference sequence to derive the results we want. Even under assumption LIST, (1/T) Σ_t f(w_t; Z_t)
and (1/T) Σ_t f(w*; Z_t) will not be concentrated around (1/T) Σ_t F(w_t) and F(w*) respectively at a
rate better than O(1/√T) in general. But if we look at the difference, we are able to get sharper
concentration.
3 A General Online to Batch Conversion
The following simple lemma is crucial for us. It says that under assumption LIST, the variance of the increment in the regret, $f(w_t; Z_t) - f(w^*; Z_t)$, is bounded by its (conditional) expectation $F(w_t) - F(w^*)$. Such a control on the variance is often the main ingredient in obtaining sharper concentration results.
Lemma 1. Suppose assumption LIST holds and let $\xi_t$ be the martingale difference sequence defined in (1). Let
$$\mathrm{Var}_{t-1}\,\xi_t := \mathbb{E}_{t-1}\big[\xi_t^2\big]$$
be the conditional variance of $\xi_t$ given $Z_1, \ldots, Z_{t-1}$. Then, under assumption LIST, we have,
$$\mathrm{Var}_{t-1}\,\xi_t \le \frac{4L^2}{\lambda}\big(F(w_t) - F(w^*)\big).$$
The variance bound given by the above lemma allows us to prove our main theorem.
Theorem 2. Under assumption LIST, we have, with probability at least $1 - 4\ln(T)\,\delta$,
$$\frac{1}{T}\sum_{t=1}^{T} F(w_t) - F(w^*) \;\le\; \frac{\mathrm{Reg}_T}{T} + \frac{4}{T}\sqrt{\frac{L^2 \ln(1/\delta)}{\lambda}\,\mathrm{Reg}_T} + \max\left\{\frac{16L^2}{\lambda},\, 6B\right\}\frac{\ln(1/\delta)}{T}.$$
Further, using Jensen's inequality, $\frac{1}{T}\sum_t F(w_t)$ can be replaced by $F(\bar{w})$, where $\bar{w} := \frac{1}{T}\sum_t w_t$.
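A small helper (ours, with hypothetical constants) makes it easy to evaluate the right-hand side of Theorem 2 for given problem parameters; once $\mathrm{Reg}_T$ is logarithmic, the $\mathrm{Reg}_T/T$ term dominates.

```python
import numpy as np

# Helper (ours) evaluating the right-hand side of Theorem 2.
def theorem2_bound(reg_T, T, L, lam, B, delta):
    term1 = reg_T / T
    term2 = 4.0 * np.sqrt(L ** 2 * np.log(1.0 / delta) * reg_T / lam) / T
    term3 = max(16.0 * L ** 2 / lam, 6.0 * B) * np.log(1.0 / delta) / T
    return term1 + term2 + term3        # holds w.p. >= 1 - 4 ln(T) delta

L, lam, B, delta = 2.0, 0.1, 1.0, 1e-3  # hypothetical constants
for T in (10 ** 3, 10 ** 4, 10 ** 5):
    reg = L ** 2 * np.log(T) / lam      # logarithmic regret (cf. Section 4)
    print(T, theorem2_bound(reg, T, L, lam, B, delta))
```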
3.1 Proofs
Proof of Lemma 1. We have,
$$\mathrm{Var}_{t-1}\,\xi_t \le \mathbb{E}_{t-1}\Big[\big(f(w_t; Z_t) - f(w^*; Z_t)\big)^2\Big] \le \mathbb{E}_{t-1}\big[L^2 \|w_t - w^*\|^2\big] = L^2 \|w_t - w^*\|^2, \tag{2}$$
where the second inequality uses part 1 of assumption LIST.
On the other hand, using part 2 of assumption LIST, we also have for any $w, w' \in S$,
$$\frac{f(w; Z) + f(w'; Z)}{2} \ge f\Big(\frac{w + w'}{2};\, Z\Big) + \frac{\lambda}{8}\|w - w'\|^2.$$
Taking expectation this gives, for any $w, w' \in S$,
$$\frac{F(w) + F(w')}{2} \ge F\Big(\frac{w + w'}{2}\Big) + \frac{\lambda}{8}\|w - w'\|^2.$$
Now using this with $w = w_t$, $w' = w^*$, we get
$$\frac{F(w_t) + F(w^*)}{2} \ge F\Big(\frac{w_t + w^*}{2}\Big) + \frac{\lambda}{8}\|w_t - w^*\|^2 \ge F(w^*) + \frac{\lambda}{8}\|w_t - w^*\|^2,$$
where the last step holds because $w^*$ minimizes $F$. This implies that
$$\|w_t - w^*\|^2 \le \frac{4\big(F(w_t) - F(w^*)\big)}{\lambda}. \tag{3}$$
Combining (2) and (3) we get,
$$\mathrm{Var}_{t-1}\,\xi_t \le \frac{4L^2}{\lambda}\big(F(w_t) - F(w^*)\big).$$
The proof of Theorem 2 relies on the following inequality for martingales, which is an easy consequence of Freedman's inequality [Freedman, 1975, Theorem 1.6]. The proof of this lemma can be found in the appendix.
Lemma 3. Suppose $X_1, \ldots, X_T$ is a martingale difference sequence with $|X_t| \le b$. Let
$$\mathrm{Var}_t\,X_t = \mathrm{Var}\big(X_t \mid X_1, \ldots, X_{t-1}\big).$$
Let $V = \sum_{t=1}^{T} \mathrm{Var}_t\,X_t$ be the sum of conditional variances of the $X_t$'s. Further, let $\sigma = \sqrt{V}$. Then we have, for any $\delta < 1/e$ and $T \ge 3$,
$$\mathrm{Prob}\left(\sum_{t=1}^{T} X_t > \max\Big\{2\sigma,\; 3b\sqrt{\ln(1/\delta)}\Big\}\sqrt{\ln(1/\delta)}\right) \le 4\ln(T)\,\delta.$$
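As a quick plausibility check (ours, not from the paper), one can simulate a simple bounded martingale difference sequence and compare the empirical tail probability with the $4\ln(T)\,\delta$ bound of Lemma 3:

```python
import numpy as np

# Monte Carlo check (ours) of Lemma 3 for X_t = eps_t * sig_t, where eps_t are
# fair +/-1 signs and sig_t is a fixed scale sequence, so Var_t X_t = sig_t^2
# and |X_t| <= b.
rng = np.random.default_rng(2)
T, b, delta, trials = 200, 1.0, 0.01, 20000
sig = np.linspace(0.1, b, T)                   # deterministic conditional scales
sigma = np.sqrt((sig ** 2).sum())              # sigma = sqrt(V)
ln = np.log(1.0 / delta)
thresh = max(2.0 * sigma, 3.0 * b * np.sqrt(ln)) * np.sqrt(ln)

sums = (np.where(rng.random((trials, T)) < 0.5, 1.0, -1.0) * sig).sum(axis=1)
print("empirical:", (sums > thresh).mean(), " bound:", 4.0 * np.log(T) * delta)
```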
Proof of Theorem 2. By Lemma 1, we have $\sigma := \sqrt{\sum_{t=1}^{T} \mathrm{Var}_{t-1}\,\xi_t} \le \sqrt{\frac{4L^2}{\lambda}\,\mathrm{Diff}_T}$. Note that $|\xi_t| \le 2B$ because our $f$ has range $[0, B]$. Therefore, Lemma 3 gives us that with probability at least $1 - 4\ln(T)\,\delta$, we have
$$\sum_{t=1}^{T} \xi_t \le \max\Big\{2\sigma,\; 6B\sqrt{\ln(1/\delta)}\Big\}\sqrt{\ln(1/\delta)}.$$
By definition of $\mathrm{Reg}_T$,
$$\mathrm{Diff}_T - \mathrm{Reg}_T \le \sum_{t=1}^{T} \xi_t$$
and therefore, with probability $1 - 4\ln(T)\,\delta$, we have
$$\mathrm{Diff}_T - \mathrm{Reg}_T \le \max\left\{4\sqrt{\frac{L^2}{\lambda}\,\mathrm{Diff}_T},\; 6B\sqrt{\ln(1/\delta)}\right\}\sqrt{\ln(1/\delta)}.$$
Using Lemma 4 below to solve the above quadratic inequality for $\mathrm{Diff}_T$ gives
$$\frac{\sum_{t=1}^{T} F(w_t)}{T} - F(w^*) \;\le\; \frac{\mathrm{Reg}_T}{T} + \frac{4}{T}\sqrt{\frac{L^2 \ln(1/\delta)}{\lambda}\,\mathrm{Reg}_T} + \max\left\{\frac{16L^2}{\lambda},\, 6B\right\}\frac{\ln(1/\delta)}{T}.$$
The following elementary lemma was required to solve a recursive inequality in the proof of the
above theorem. Its proof can be found in the appendix.
Lemma 4. Suppose $s, r, d, b, \Delta \ge 0$ and we have
$$s - r \le \max\big\{4\sqrt{d s},\; 6b\Delta\big\}\,\Delta.$$
Then, it follows that
$$s \le r + 4\sqrt{d r}\,\Delta + \max\{16d,\, 6b\}\,\Delta^2.$$
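Since Lemma 4 is purely elementary, it is easy to spot-check on random instances; the sketch below (ours) does exactly that.

```python
import numpy as np

# Brute-force check (ours) of Lemma 4: whenever the hypothesis holds, so does
# the conclusion.
rng = np.random.default_rng(3)
for _ in range(100000):
    s, r, d, b, Delta = rng.uniform(0, 10, size=5)
    if s - r <= max(4 * np.sqrt(d * s), 6 * b * Delta) * Delta:
        assert s <= r + 4 * np.sqrt(d * r) * Delta + max(16 * d, 6 * b) * Delta ** 2 + 1e-9
print("Lemma 4 verified on random instances")
```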
4 Applications

4.1 Online to Batch Conversion for Learning with Bounded Loss
Suppose $(X_1, Y_1), \ldots, (X_T, Y_T)$ are drawn i.i.d. from a distribution. The pairs $(X_i, Y_i)$ belong to $\mathcal{X} \times \mathcal{Y}$ and our algorithm is allowed to make predictions in a space $\mathcal{D} \supseteq \mathcal{Y}$. A loss function $\ell : \mathcal{D} \times \mathcal{Y} \to [0, 1]$ measures the quality of predictions. Fix a convex set $S$ of some normed space and a function $h : \mathcal{X} \times S \to \mathcal{D}$. Let our hypothesis class be $\{x \mapsto h(x; w) \mid w \in S\}$.

On input $x$, the hypothesis parameterized by $w$ predicts $h(x; w)$ and incurs loss $\ell(h(x; w), y)$ if the correct prediction is $y$. The risk of $w$ is defined by
$$R(w) := \mathbb{E}\big[\ell(h(X; w), Y)\big]$$
and let $w^* := \arg\min_{w \in S} R(w)$ denote the (parameter for the) hypothesis with minimum risk. It is easy to see that this setting falls under the general framework given above by thinking of the pair $(X, Y)$ as $Z$ and setting $f(w; Z) = f(w; (X, Y))$ to be $\ell(h(X; w), Y)$. Note that $F(w)$ becomes the risk $R(w)$. The range of $f$ is $[0, 1]$ by our assumption about the loss functions, so $B = 1$.
Suppose we run an online algorithm on our data that generates a sequence of hypotheses $w_1, \ldots, w_T$ such that $w_t$ is measurable w.r.t. $X_{<t}, Y_{<t}$. Define the statistics,
$$\mathrm{Reg}_T := \sum_{t=1}^{T} \ell(h(X_t; w_t), Y_t) - \min_{w \in S} \sum_{t=1}^{T} \ell(h(X_t; w), Y_t),$$
$$\mathrm{Diff}_T := \sum_{t=1}^{T} \big(R(w_t) - R(w^*)\big) = \sum_{t=1}^{T} R(w_t) - T\,R(w^*).$$
At the end, we output $\bar{w} := \big(\sum_{t=1}^{T} w_t\big)/T$. The following corollary then follows immediately from Theorem 2. It bounds the excess risk $R(\bar{w}) - R(w^*)$.
Corollary 5. Suppose assumption LIST is satisfied for $f(w; (x, y)) := \ell(h(x; w), y)$. Then we have, with probability at least $1 - 4\ln(T)\,\delta$,
$$R(\bar{w}) - R(w^*) \;\le\; \frac{\mathrm{Reg}_T}{T} + \frac{4}{T}\sqrt{\frac{L^2 \ln(1/\delta)}{\lambda}\,\mathrm{Reg}_T} + \max\left\{\frac{16L^2}{\lambda},\, 6\right\}\frac{\ln(1/\delta)}{T}.$$
Recently, it has been proved [Kakade and Shalev-Shwartz, 2008] that if assumption LIST is satisfied for $w \mapsto \ell(h(x; w), y)$ then there is an online algorithm that generates $w_1, \ldots, w_T$ such that
$$\mathrm{Reg}_T \le \frac{L^2 (1 + \ln T)}{2\lambda}.$$
Plugging it in the corollary above gives the following result.

Corollary 6. Suppose assumption LIST is satisfied for $f(w; (x, y)) := \ell(h(x; w), y)$. Then there is an online algorithm that generates $w_1, \ldots, w_T$ and in the end outputs $\bar{w}$ such that, with probability at least $1 - 4\ln(T)\,\delta$,
$$R(\bar{w}) - R(w^*) \;\le\; \frac{L^2 \ln T}{\lambda T} + \frac{4L^2}{\lambda T}\sqrt{\ln T\,\ln(1/\delta)} + \max\left\{\frac{16L^2}{\lambda},\, 6\right\}\frac{\ln(1/\delta)}{T},$$
for any $T \ge 3$.
4.2 High Probability Bound for Pegasos
Pegasos [Shalev-Shwartz et al., 2007] is a recently proposed method for solving the primal SVM problem. Recall that in the SVM optimization problem we are given $m$ example-label pairs $(x_i, y_i) \in \mathbb{R}^d \times \{\pm 1\}$. Assume that $\|x_i\| \le R$ for all $i$, where $\|\cdot\|$ is the standard $L_2$ norm. Let
$$F(w) = \frac{\lambda}{2}\|w\|^2 + \frac{1}{m}\sum_{i=1}^{m} \ell(w; (x_i, y_i)) \tag{4}$$
be the SVM objective function. The loss function $\ell(w; (x, y)) = [1 - y(w \cdot x)]_+$ is the hinge loss. At time $t$, Pegasos takes a (random) approximation
$$f(w; Z_t) = \frac{\lambda}{2}\|w\|^2 + \frac{1}{k}\sum_{(x, y) \in Z_t} \ell(w; (x, y))$$
of the SVM objective function to estimate the gradient and updates the current weight vector $w_t$ to
$w_{t+1}$. Here $Z_t$ is a random subset of the data set of size $k$. Note that $F(w)$ can be written as
$$F(w) = \mathbb{E}\left[\frac{\lambda}{2}\|w\|^2 + \ell(w; Z)\right]$$
where $Z$ is an example $(x_i, y_i)$ drawn uniformly at random from the $m$ data points. It is also easy to verify that
$$\forall w,\quad \mathbb{E}[f(w; Z_t)] = F(w).$$
It can be shown that $w^* := \arg\min F(w)$ will satisfy $\|w^*\| \le 1/\sqrt{\lambda}$, so we set
$$S = \left\{w \in \mathbb{R}^d : \|w\| \le \frac{1}{\sqrt{\lambda}}\right\}.$$
For any $z$ that is a subset of the data set, the function
$$w \mapsto f(w; z) = \frac{\lambda}{2}\|w\|^2 + \frac{1}{|z|}\sum_{(x, y) \in z} \ell(w; (x, y))$$
is Lipschitz on $S$ with Lipschitz constant $L = \sqrt{\lambda} + R$ and is $\lambda$-strongly convex. Also, $f(w; z) \in [0,\, 3/2 + R/\sqrt{\lambda}]$. So, the Pegasos setting falls under our general framework and satisfies assumption LIST.
Theorem 1 in Shalev-Shwartz et al. [2007] says, for any $w$ and $T \ge 3$,
$$\sum_{t=1}^{T} f(w_t; Z_t) \le \sum_{t=1}^{T} f(w; Z_t) + \frac{L^2 \ln T}{\lambda}, \tag{5}$$
where $L = \sqrt{\lambda} + R$. It was noted in that paper that plugging in $w = w^*$ and taking expectations, we easily get
$$\mathbb{E}_{Z_1, \ldots, Z_T}\left[\sum_{t=1}^{T} F(w_t)\right] \le T\,F(w^*) + \frac{L^2 \ln T}{\lambda}.$$
Here we use Theorem 2 to prove an inequality that holds with high probability, not just in expectation.
Corollary 7. Let $F$ be the SVM objective function defined in (4) and $w_1, \ldots, w_T$ be the sequence of weight vectors generated by the Pegasos algorithm. Further, let $w^*$ denote the minimizer of the SVM objective. Then, with probability $1 - 4\delta\ln(T)$, we have
$$\sum_{t=1}^{T} F(w_t) - T\,F(w^*) \;\le\; \frac{L^2 \ln T}{\lambda} + \frac{4L^2}{\lambda}\sqrt{\ln T\,\ln\frac{1}{\delta}} + \max\left\{\frac{16L^2}{\lambda},\; 9 + \frac{6R}{\sqrt{\lambda}}\right\}\ln\frac{1}{\delta}, \tag{6}$$
for any $T \ge 3$. Therefore, assuming $R = 1$, we have, for $\lambda$ small enough, with probability at least $1 - \delta$,
$$\frac{1}{T}\sum_{t=1}^{T} F(w_t) - F(w^*) = O\!\left(\frac{\ln\frac{T}{\delta}}{\lambda T}\right).$$
Proof. Note that (5) implies that $\mathrm{Reg}_T \le \frac{L^2 \ln T}{\lambda}$. The corollary then follows immediately from Theorem 2 by plugging in $\lambda$ as the strong convexity parameter and $B = 3/2 + R/\sqrt{\lambda}$.
References
N. Cesa-Bianchi and C. Gentile. Improved risk tail bounds for on-line algorithms. IEEE Transactions on Information Theory, 54(1):286-390, 2008.
N. Cesa-Bianchi, A. Conconi, and C. Gentile. On the generalization ability of on-line learning algorithms. IEEE Transactions on Information Theory, 50(9):2050-2057, September 2004.
M. Collins. Discriminative training methods for hidden Markov models: Theory and experiments with perceptron algorithms. In Conference on Empirical Methods in Natural Language Processing, 2002.
K. Crammer, O. Dekel, J. Keshet, S. Shalev-Shwartz, and Y. Singer. Online passive-aggressive algorithms. Journal of Machine Learning Research, 7:551-585, March 2006.
David A. Freedman. On tail probabilities for martingales. The Annals of Probability, 3(1):100-118, February 1975.
E. Hazan, A. Kalai, S. Kale, and A. Agarwal. Logarithmic regret algorithms for online convex optimization. In
Proceedings of the Nineteenth Annual Conference on Computational Learning Theory, 2006.
S. Kakade and S. Shalev-Shwartz. Mind the duality gap: Logarithmic regret algorithms for online optimization.
Advances in Neural Information Processing Systems, 2008.
N. Littlestone. Mistake bounds and logarithmic linear-threshold learning algorithms. PhD thesis, U. C. Santa
Cruz, March 1989.
Nathan Ratliff, James (Drew) Bagnell, and Martin Zinkevich. (online) subgradient methods for structured
prediction. In Eleventh International Conference on Artificial Intelligence and Statistics (AIStats), March
2007.
Shai Shalev-Shwartz, Yoram Singer, and Nathan Srebro. Pegasos: Primal Estimated sub-GrAdient SOlver for
SVM. In Proceedings of the Twenty-Fourth International Conference on Machine Learning (ICML), pages
807-814, 2007.
T. Zhang. Data dependent concentration bounds for sequential prediction algorithms. In Proceedings of the Eighteenth Annual Conference on Computational Learning Theory, pages 173-187, 2005.
Appendix

Proof of Lemma 3. Note that a crude upper bound on $\mathrm{Var}_t\,X_t$ is $b^2$. Thus, $\sigma \le b\sqrt{T}$. We choose a discretization $0 = \alpha_{-1} < \alpha_0 < \ldots < \alpha_l$ such that $\alpha_{i+1} = r\alpha_i$ for $i \ge 0$ and $\alpha_l \ge b\sqrt{T}$. We will specify the choice of $\alpha_0$ and $r$ shortly. We then have, for any $c > 0$,
$$\mathrm{Prob}\left(\sum_t X_t > c \max\{r\sigma, \alpha_0\}\sqrt{\ln(1/\delta)}\right)$$
$$= \sum_{j=0}^{l} \mathrm{Prob}\left(\sum_t X_t > c \max\{r\sigma, \alpha_0\}\sqrt{\ln(1/\delta)}\ \ \&\ \ \alpha_{j-1} < \sigma \le \alpha_j\right)$$
$$\le \sum_{j=0}^{l} \mathrm{Prob}\left(\sum_t X_t > c\,\alpha_j \sqrt{\ln(1/\delta)}\ \ \&\ \ \alpha_{j-1}^2 < V \le \alpha_j^2\right)$$
$$\le \sum_{j=0}^{l} \mathrm{Prob}\left(\sum_t X_t > c\,\alpha_j \sqrt{\ln(1/\delta)}\ \ \&\ \ V \le \alpha_j^2\right)$$
$$\stackrel{(*)}{\le} \sum_{j=0}^{l} \exp\left(-\frac{c^2 \alpha_j^2 \ln(1/\delta)}{2\alpha_j^2 + \frac{2}{3}\,c\,\alpha_j \sqrt{\ln(1/\delta)}\,b}\right),$$
where the inequality $(*)$ follows from Freedman's inequality. If we now choose $\alpha_0 = bc\sqrt{\ln(1/\delta)}$ then $\alpha_j \ge bc\sqrt{\ln(1/\delta)}$ for all $j$ and hence every term in the above summation is bounded by $\exp\left(-\frac{c^2 \ln(1/\delta)}{2 + 2/3}\right)$, which is less than $\delta$ if we choose $c = 5/3$. Set $r = 2/c = 6/5$. We want $\alpha_0 r^l \ge b\sqrt{T}$. Since $c\sqrt{\ln(1/\delta)} \ge 1$, choosing $l = \log_r(\sqrt{T})$ ensures that. Thus we have
$$\mathrm{Prob}\left(\sum_{t=1}^{T} X_t > \max\Big\{\tfrac{5}{3} \cdot \tfrac{6}{5}\,\sigma,\ \tfrac{5}{3} \cdot \tfrac{5}{3}\,b\sqrt{\ln(1/\delta)}\Big\}\sqrt{\ln(1/\delta)}\right)$$
$$= \mathrm{Prob}\left(\sum_t X_t > c \max\{r\sigma, \alpha_0\}\sqrt{\ln(1/\delta)}\right)$$
$$\le (l + 1)\,\delta = \big(\log_{6/5}(\sqrt{T}) + 1\big)\,\delta \le \big(6\ln(\sqrt{T}) + 1\big)\,\delta \le 4\ln(T)\,\delta,$$
where the last inequality uses $T \ge 3$. Since $\frac{5}{3} \cdot \frac{6}{5} = 2$ and $\frac{25}{9} \le 3$, this implies the bound stated in the lemma.
Proof of Lemma 4. The assumption of the lemma implies that one of the following inequalities holds:
$$s - r \le 6b\Delta^2 \qquad \text{or} \qquad s - r \le 4\sqrt{d s}\,\Delta. \tag{7}$$
In the second case, we have
$$s - \big(4\sqrt{d}\,\Delta\big)\sqrt{s} - r \le 0,$$
which means that $\sqrt{s}$ should be smaller than the larger root of the above quadratic. This gives us,
$$\sqrt{s} \le 2\sqrt{d}\,\Delta + \sqrt{4d\Delta^2 + r},$$
so that
$$s = (\sqrt{s})^2 \le 4d\Delta^2 + 4d\Delta^2 + r + 4\sqrt{4d^2\Delta^4 + d\Delta^2 r}$$
$$\le 8d\Delta^2 + r + 8d\Delta^2 + 4\sqrt{d r}\,\Delta \qquad [\because \sqrt{x + y} \le \sqrt{x} + \sqrt{y}]$$
$$\le r + 4\sqrt{d r}\,\Delta + 16d\Delta^2. \tag{8}$$
Combining (7) and (8) finishes the proof.
2,710 | 3,458 | Model selection and velocity estimation using novel priors for motion patterns
Hongjing Lu
Department of Psychology
UCLA, Los Angeles, CA 90095
[email protected]

Shuang Wu
Department of Statistics
UCLA, Los Angeles, CA 90095
[email protected]

Alan Yuille
Department of Statistics
UCLA, Los Angeles, CA 90095
[email protected]
Abstract
Psychophysical experiments show that humans are better at perceiving rotation
and expansion than translation. These findings are inconsistent with standard
models of motion integration which predict best performance for translation [6].
To explain this discrepancy, our theory formulates motion perception at two levels of inference: we first perform model selection between the competing models
(e.g. translation, rotation, and expansion) and then estimate the velocity using the
selected model. We define novel prior models for smooth rotation and expansion
using techniques similar to those in the slow-and-smooth model [17] (e.g. Green
functions of differential operators). The theory gives good agreement with the
trends observed in human experiments.
1 Introduction
As an observer moves through the environment, the retinal image changes over time to create multiple complex motion flows, including translational, circular and radial motion. Human observers
are able to process different motion patterns and infer ego motion and global structure of the world.
However, the inherent ambiguity of local motion signals requires the visual system to employ an efficient integration strategy to combine many local measurements in order to perceive global motion.
Psychophysical experiments have identified a variety of phenomena, such as motion capture and
motion cooperativity [11], which appear to be consequences of such integration. A number of computational Bayesian models have been proposed to explain these effects based on prior assumptions
about motion. In particular, it has been shown that a slow-and-smooth prior, and related models, can
qualitatively account for a range of experimental results [17, 15, 16] and can quantitatively account
for others [7, 12].
However, the integration strategy modeled by the slow-and-smooth prior may not generalize to more
complex motion types, such as circular and radial motion, which are critically important for estimating ego motion. In this paper we are concerned with two questions. (1) What integration priors
should be used for a particular motion input? (2) How can local motion measurements be combined
with the proper priors to estimate motion flow? Within the framework of Bayesian inference, the
answers to these two questions are respectively based on model selection and parameter estimation.
In the field of motion perception, most work has focused on the second question, using parameter estimation to estimate motion flow. However, Stocker and Simoncelli [13] recently proposed a
conditioned Bayesian model in which strong biases in precise motion direction estimates arise as a
consequence of a preceding decision about a particular hypothesis (left vs. right motion).
The goal of this paper is to provide a computational explanation for both of the above questions
using Bayesian inference. To address the first question, we develop new prior models for smooth
rotation and expansion motion. To address the second, we propose that the human visual system has
available multiple models of motion integration appropriate for different motion patterns. The visual
system decides the best integration strategy based upon the perceived motion information, and this
choice in turn affects the estimation of motion flow.
In this paper, we first present a computational theory in section (3) that includes three different integration strategies, all derived within the same framework. We test this theory in sections (4,5) by
comparing its predictions with human performance in psychophysical experiments, in which subjects were asked to discriminate motion direction in translational, rotational, and expanding stimuli.
We employ two commonly used stimuli, random dot patterns and moving gratings, to show that the
model can apply to a variety of inputs.
2 Background
There is an enormous literature on visual motion phenomena and there is only room to summarize
the work most relevant to this paper. Our computational model relates most closely to work [17, 15,
7] that formulates motion perception as Bayesian inference with a prior probability biasing towards
slow-and-smooth motion. But psychophysical [4, 8, 1, 6], physiological [14, 3] and fMRI data [9]
suggest that humans are sensitive to a variety of motion patterns including translation, rotation, and
expansion. In particular, Lee et al [6] demonstrated that human performance on discrimination tasks
for translation, rotation, and expansion motion was inconsistent with the predictions of the slow-andsmooth theory (our simulations independently verify this result). Instead, we propose that human
motion perception is performed at two levels of inference: (i) model selection, and (ii) estimating
the velocity with the selected model. The concept of model selection has been described in the
literature, see [5], but has only recently been applied to model motion phenomena [13]. Our new
motion models for rotation and expansion are formulated very similarly to the original slow-andsmooth model [17] and similar mathematical analysis [2] is used to obtain the forms of the solutions
in terms of Greens functions of the differential operators used in the priors.
3 Model Formulation

3.1 Bayesian Framework
We formulate motion perception as a problem of Bayesian inference with two parts. The first part
selects a model that best explains the observed motion pattern. The second part estimates motion
properties using the selected model.
The velocity field $\{\vec{v}\}$ is estimated from velocity measurements $\{\vec{u}\}$ at discrete positions $\{\vec{r}_i,\ i = 1, \ldots, N\}$ by maximizing
$$p(\{\vec{v}\}|\{\vec{u}\}, M) = \frac{p(\{\vec{u}\}|\{\vec{v}\})\,p(\{\vec{v}\}|M)}{p(\{\vec{u}\}|M)}. \tag{1}$$
The prior
$$p(\{\vec{v}\}|M) = \exp(-E(\{\vec{v}\}|M)/T) \tag{2}$$
differs for different models $M$ and is discussed in section 3.2. The likelihood function
$$p(\{\vec{u}\}|\{\vec{v}\}) = \exp(-E(\{\vec{u}\}|\{\vec{v}\})/T) \tag{3}$$
depends on the measurement process and is discussed in section 3.3.

The best model that explains measurement $\{\vec{u}\}$ is chosen by maximizing the model evidence
$$p(\{\vec{u}\}|M) = \int p(\{\vec{u}\}|\{\vec{v}\})\,p(\{\vec{v}\}|M)\,d\{\vec{v}\}, \tag{4}$$
which is equivalent to maximizing the posterior probability of the model $M$ (assuming a uniform prior on the models):
$$M^* = \arg\max_M P(M|\{\vec{u}\}) = \arg\max_M \frac{P(\{\vec{u}\}|M)P(M)}{P(\{\vec{u}\})} = \arg\max_M P(\{\vec{u}\}|M). \tag{5}$$
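Operationally, Eq. (5) amounts to comparing log-evidence values across the candidate models; a minimal sketch (ours, with made-up numbers) is:

```python
import numpy as np

# Schematic model selection (ours): given log p({u}|M) for the competing
# models, pick M* and, under a uniform model prior, form posterior model
# probabilities as in Eq. (5). The numbers here are hypothetical.
log_evidence = {"translation": -512.3, "rotation": -498.7, "expansion": -505.1}

names = list(log_evidence)
logs = np.array([log_evidence[n] for n in names])
post = np.exp(logs - logs.max())
post /= post.sum()                      # p(M | {u}) with uniform p(M)
best = names[int(np.argmax(post))]
print(best, dict(zip(names, post.round(3))))
```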
3.2 The Priors
We define three priors corresponding to the three different types of motion: translation, rotation, and expansion. For each motion type, we encourage slowness and smoothness. The prior for translation is very similar to the slow-and-smooth prior [17], except we drop the higher-order derivative terms and introduce an extra parameter (to ensure that all three models have similar degrees of freedom). We define the priors by their energy functions $E(\{\vec{v}\}|M)$, see equation (2). We label the models by $M \in \{t, r, e\}$, where $t, r, e$ denote translation, rotation, and expansion respectively. (We note that the prior for expansion will also account for contraction.)
1. slow-and-smooth-translation:
$$E(\{\vec{v}\}|M = t) = \int \lambda\big(|\vec{v}|^2 + \eta|\nabla\vec{v}|^2 + \kappa|\nabla^2\vec{v}|^2\big)\,d\vec{r} \tag{6}$$

2. slow-and-smooth-rotation:
$$E(\{\vec{v}\}|M = r) = \int \lambda\Big\{|\vec{v}|^2 + \eta\Big[\Big(\frac{\partial v_x}{\partial x}\Big)^2 + \Big(\frac{\partial v_y}{\partial y}\Big)^2 + \Big(\frac{\partial v_x}{\partial y} + \frac{\partial v_y}{\partial x}\Big)^2\Big] + \kappa|\nabla^2\vec{v}|^2\Big\}\,d\vec{r} \tag{7}$$

3. slow-and-smooth-expansion:
$$E(\{\vec{v}\}|M = e) = \int \lambda\Big\{|\vec{v}|^2 + \eta\Big[\Big(\frac{\partial v_x}{\partial y}\Big)^2 + \Big(\frac{\partial v_y}{\partial x}\Big)^2 + \Big(\frac{\partial v_x}{\partial x} - \frac{\partial v_y}{\partial y}\Big)^2\Big] + \kappa|\nabla^2\vec{v}|^2\Big\}\,d\vec{r} \tag{8}$$
These models are motivated as follows. The $|\vec{v}|^2$ and $|\nabla^2\vec{v}|^2$ terms bias towards slowness and smoothness and are common to all models. The first-derivative term gives the differences among the models. The translation model prefers constant translational motion with $\vec{v}$ constant, since $\nabla\vec{v} = 0$ for this type of motion. The rotation and expansion models prefer rigid rotation and expansion, respectively, of ideal form
$$\{v_x = -\omega(y - y_0),\ v_y = \omega(x - x_0)\}, \qquad \{v_x = e(x - x_0),\ v_y = e(y - y_0)\}, \tag{9}$$
where $(x_0, y_0)$ are the (unknown) centers, $\omega$ is the angular speed and $e$ is the expansion rate. These forms of motion are preferred by the two models since, for the first type of motion (rotation), we have $\{\frac{\partial v_x}{\partial y} + \frac{\partial v_y}{\partial x} = 0,\ \frac{\partial v_x}{\partial x} = \frac{\partial v_y}{\partial y} = 0\}$ (independent of $(x_0, y_0)$ and $\omega$). Similarly, the second type of motion is preferred by the expansion (or contraction) model since $\{\frac{\partial v_x}{\partial x} - \frac{\partial v_y}{\partial y} = 0,\ \frac{\partial v_x}{\partial y} = \frac{\partial v_y}{\partial x} = 0\}$ (again independent of $(x_0, y_0)$ and $e$).
The translation model is similar to the first three terms of the slow-and-smooth energy function [17], but with a restriction on the set of parameters. Formally, $\int \lambda\big(|\vec{v}|^2 + \frac{\sigma^2}{2}|\nabla\vec{v}|^2 + \frac{\sigma^4}{8}|\nabla^2\vec{v}|^2\big)\,d\vec{r} \approx \lambda \int \sum_{m=0}^{\infty} \frac{\sigma^{2m}}{m!\,2^m}|D^m\vec{v}|^2\,d\vec{r}$. Our computer simulations showed that the translation model performs similarly to the slow-and-smooth model.
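On a discrete grid, the three prior energies (6)-(8) can be approximated with finite differences. The sketch below (ours, not from the paper) uses numpy.gradient and a 5-point Laplacian as stand-ins for the continuous operators, with the parameter values quoted in Section 4, and checks that a rigid rotation is cheapest under the rotation prior.

```python
import numpy as np

# Discrete sketch (ours) of the prior energies (6)-(8). First derivatives use
# numpy.gradient; the second-order term uses a 5-point Laplacian with periodic
# boundary, purely for simplicity.
def prior_energy(vx, vy, model, lam=0.001, eta=12.5, kappa=78.125):
    dvx_dy, dvx_dx = np.gradient(vx)            # axis 0 is y, axis 1 is x
    dvy_dy, dvy_dx = np.gradient(vy)
    lap = lambda f: (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
                     np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4.0 * f)
    slow = vx ** 2 + vy ** 2
    smooth2 = lap(vx) ** 2 + lap(vy) ** 2
    if model == "t":      # penalizes any spatial change: prefers pure translation
        first = dvx_dx ** 2 + dvx_dy ** 2 + dvy_dx ** 2 + dvy_dy ** 2
    elif model == "r":    # vanishes on rigid rotation
        first = dvx_dx ** 2 + dvy_dy ** 2 + (dvx_dy + dvy_dx) ** 2
    else:                 # "e": vanishes on pure expansion/contraction
        first = dvx_dy ** 2 + dvy_dx ** 2 + (dvx_dx - dvy_dy) ** 2
    return lam * (slow + eta * first + kappa * smooth2).sum()

n = 21
ys, xs = np.mgrid[0:n, 0:n] - n // 2
vx, vy = -0.05 * ys, 0.05 * xs                  # rigid rotation about the center
print({m: round(prior_energy(vx, vy, m), 4) for m in "tre"})
```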
3.3 The Likelihood Functions
The likelihood function differs for the two classes of stimuli we examined: (i) for the moving dot stimuli, as used in [4], there is enough information to estimate the local velocity $\vec{u}$; (ii) for the gratings stimuli [10], there is only enough information to estimate one component of the velocity field.
For the dot stimuli, the energy term in the likelihood function is set to be
$$E(\{\vec{u}\}|\{\vec{v}\}) = \sum_{i=1}^{N} |\vec{v}(\vec{r}_i) - \vec{u}(\vec{r}_i)|^2. \tag{10}$$
For the gratings stimuli (see section 5.1), the likelihood function uses the energy function
$$E_n(\{\vec{u}\}|\{\vec{v}\}) = \sum_{i=1}^{N} \big|\vec{v}(\vec{r}_i) \cdot \hat{u}(\vec{r}_i) - |\vec{u}(\vec{r}_i)|\big|^2, \tag{11}$$
where $\hat{u}(\vec{r}_i)$ is the unit vector in the direction of $\vec{u}(\vec{r}_i)$; normally this is the direction of the local image gradient.
3.4 MAP estimator of velocities
The MAP estimate of the velocities for each model is obtained by solving
$$\vec{v}^* = \arg\max_{\vec{v}} p(\{\vec{v}\}|\{\vec{u}\}, M) = \arg\min_{\vec{v}} \big\{E(\{\vec{u}\}|\{\vec{v}\}) + E(\{\vec{v}\}|M)\big\}. \tag{12}$$
For the slow-and-smooth model [17], it was shown using regularization analysis [2] that this solution
can be expressed in terms of a linear combination of the Green function G of the differential operator
which imposes the slow-and-smoothness constraint (the precise form of this constraint was chosen
so that G was a Gaussian).
We can obtain similar results for the three types of models $M \in \{t, r, e\}$ we have introduced in this paper. The main difference is that the models require two vector-valued Green functions $\vec{G}_1^M = (G_{1x}^M, G_{1y}^M)$ and $\vec{G}_2^M = (G_{2x}^M, G_{2y}^M)$, with the constraint that $G_{1y}^M = G_{2x}^M$ and $G_{2y}^M = G_{1x}^M$. These vector-valued Green functions are required to perform the coupling between the different velocity components required for rotation and expansion, see figure (1). For the translation model there is no coupling required, and so $G_{2x}^M = G_{1y}^M = 0$.
Figure 1: The vector-valued Green functions $\vec{G} = (G_1, G_2)$. Top panel, left-to-right: $G_{1x}^{M=t}$, $G_{1x}^{M=r}$, $G_{1x}^{M=e}$ for the translation, rotation and expansion models. Bottom panel, left-to-right: $G_{2x}^{M=t}$, $G_{2x}^{M=r}$, $G_{2x}^{M=e}$ for the translation, rotation, and expansion models. Observe that the $G_{1x}^M$ are similar for all models, $G_{2x}^{M=t}$ vanishes for the translation model (i.e. no coupling between velocity components), and $G_{2x}^{M=r}$ and $G_{2x}^{M=e}$ both have two peaks which correspond to the two directions of rotation and expansion. Recall that $G_{1y}^M = G_{2x}^M$ and $G_{2y}^M = G_{1x}^M$.
The estimated velocity for model $M$ is of the form:
$$\vec{v}(\vec{r}) = \sum_{i=1}^{N} \big[\alpha_i\,\vec{G}_1^M(\vec{r} - \vec{r}_i) + \beta_i\,\vec{G}_2^M(\vec{r} - \vec{r}_i)\big]. \tag{13}$$
For the dot stimuli, the $\{\alpha\}, \{\beta\}$ are obtained by solving the linear equations:
$$\sum_{j=1}^{N} \big[\alpha_j\,\vec{G}_1^M(\vec{r}_i - \vec{r}_j) + \beta_j\,\vec{G}_2^M(\vec{r}_i - \vec{r}_j)\big] + \alpha_i\vec{e}_1 + \beta_i\vec{e}_2 = \vec{u}(\vec{r}_i), \quad i = 1, \ldots, N, \tag{14}$$
where $\vec{e}_1, \vec{e}_2$ denote the (orthogonal) coordinate axes. If we express the $\{\alpha\}, \{\beta\}$ as two $N$-dimensional vectors $A$ and $B$, the $\{u_x\}$ and $\{u_y\}$ as vectors $U = (U_x, U_y)^T$, and define $N \times N$ matrices $g_{1x}^M$, $g_{2x}^M$, $g_{1y}^M$, $g_{2y}^M$ to have components $G_{1x}^M(\vec{r}_i - \vec{r}_j)$, $G_{2x}^M(\vec{r}_i - \vec{r}_j)$, $G_{1y}^M(\vec{r}_i - \vec{r}_j)$, $G_{2y}^M(\vec{r}_i - \vec{r}_j)$ respectively, then we can express these linear equations as:
$$\begin{pmatrix} g_{1x}^M + I & g_{2x}^M \\ g_{1y}^M & g_{2y}^M + I \end{pmatrix} \begin{pmatrix} A \\ B \end{pmatrix} = \begin{pmatrix} U_x \\ U_y \end{pmatrix}. \tag{15}$$
Similarly for the gratings stimuli,
$$\begin{pmatrix} \hat{g}_{1x}^M + I & \hat{g}_{2x}^M \\ \hat{g}_{1y}^M & \hat{g}_{2y}^M + I \end{pmatrix} \begin{pmatrix} A \\ B \end{pmatrix} = \begin{pmatrix} U_x \\ U_y \end{pmatrix}, \tag{16}$$
in which $\hat{g}_{1x}^M$ is the matrix with components $\hat{G}_{1x}^M(\vec{r}_i - \vec{r}_j) = \big[\vec{G}_1^M(\vec{r}_i - \vec{r}_j) \cdot \hat{u}(\vec{r}_i)\big]\hat{u}_x(\vec{r}_i)$, and similarly for $\hat{g}_{1y}^M$, $\hat{g}_{2x}^M$ and $\hat{g}_{2y}^M$.
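The following sketch (ours) illustrates the linear-algebra step of Eq. (15) for the translation model, where the coupling blocks vanish and the system decouples into two $N \times N$ solves. A Gaussian kernel is used as a stand-in for the Green function, whose true form is determined by the prior's differential operator.

```python
import numpy as np

# Sketch (ours) of Eq. (15) for the translation model (g_2x = g_1y = 0).
rng = np.random.default_rng(5)
N, sigma = 40, 5.0
r = rng.uniform(0, 20, size=(N, 2))                 # measurement positions
u = np.c_[np.ones(N), 0.2 * np.ones(N)] + 0.05 * rng.normal(size=(N, 2))

D2 = ((r[:, None, :] - r[None, :, :]) ** 2).sum(-1)
g = np.exp(-D2 / (2.0 * sigma ** 2))                # stand-in kernel g_1x = g_2y

A = np.linalg.solve(g + np.eye(N), u[:, 0])         # decoupled block solves
B = np.linalg.solve(g + np.eye(N), u[:, 1])
v = np.c_[g @ A, g @ B]                             # MAP flow at the data points
print("mean estimated flow:", v.mean(axis=0))       # close to (1.0, 0.2), mildly shrunk
```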
3.5 Model Selection
We re-express the model evidence $p(\{\vec{u}\}|M)$ in terms of $(A, B)$:
$$p(\{\vec{u}\}|M) = \int p(\{\vec{u}\}|A, B, M)\,p(A, B)\,dA\,dB. \tag{17}$$
We introduce new notation in the form of $2N \times 2N$ matrices: $g^M = \begin{pmatrix} g_{1x}^M & g_{2x}^M \\ g_{1y}^M & g_{2y}^M \end{pmatrix}$, and similarly for $\hat{g}^M$.
The model evidence for the dot stimuli can be computed analytically (exploiting properties of multidimensional Gaussians) to obtain:
$$p(\{\vec{u}\}|M) = \frac{1}{\sqrt{(\pi T)^N \det(g^M + I)}} \exp\Big[-\frac{1}{T}\Big(U^T U - U^T g^M (g^M + I)^{-1} U\Big)\Big]. \tag{18}$$
Similarly, for the gratings stimuli we obtain:
$$p(\{\vec{u}\}|M) = \frac{\sqrt{\det(g^M)}}{\sqrt{(\pi T)^N \det(\hat{\Lambda})}} \exp\Big[-\frac{1}{T}\Big(U^T U - U^T \hat{g}^M \hat{\Lambda}^{-1} (\hat{g}^M)^T U\Big)\Big], \tag{19}$$
where $\hat{\Lambda} = (\hat{g}^M)^T \hat{g}^M + g^M$.
4 Results on random dot motion
We first investigate motion perception with the moving dot stimuli used by Freeman and Harris [4], as shown in figure (2). The stimuli consist of 128 moving dots in a random spatial pattern. All the dots have the same speed in all three motion patterns, including translation, rotation and expansion. Our simulations first select the correct model for each stimulus and then estimate the speed threshold of detection for each type of motion. The parameter values used are $\lambda = 0.001$, $\eta = 12.5$, $\kappa = 78.125$ and $T = 0.0054$.
Figure 2: Moving random dot stimuli. Left panel: translation; middle panel: rotation; right panel: expansion.
4.1 Model selection
Model selection results are shown in figure (3). As speed increases in the range of 0.05 to 0.1, the model evidence decreases for all models. This is due to the slowness term in all model priors. Nevertheless, the correct model is always selected over the entire range of speeds, and for all three types of motion stimuli.
Figure 3: Model selection results with random dot motion. Plots show the log probability of the model as a function of speed for each type of stimuli. Left: translation stimuli; middle: rotation stimuli; right: expansion stimuli. Green curves with crosses are from the translation model. Red curves with circles are from the rotation model. Blue curves with squares are from the expansion model.
4.2 Speed threshold of detection
As reported in [4], humans have a lower speed threshold for detecting rotation/expansion than translation motion. The experiment is formulated as a model selection task with an additional 'static' motion prior. The 'static' motion prior is modeled as a translation prior with $\eta = 0$ and $\lambda$ significantly large to emphasize slowness. In the simulation, $\lambda = 0.3$ for this 'static' model, while $\lambda = 0.001$ for all other models.

At low speed, the 'static' model is favored due to its stronger bias towards slowness; as stimulus speed increases, it loses its advantage to the other models. The speed thresholds of detection for different motion patterns can be seen from the model evidence plots in figure (4), and they are lower for rotation/expansion than for translation. The threshold values are about 0.05 for rotation and expansion and 0.1 for translation. This is consistent with the experimental result in [4].
Figure 4: Speed threshold of detection. Upper left panel: model evidence plot for translation stimuli. Upper right panel: model evidence plot for rotation stimuli. Lower left panel: model evidence plot for expansion stimuli. Lower right panel: bar graph of speed thresholds.
5 Results on randomly oriented gratings

5.1 Stimuli
When randomly oriented grating elements drift behind apertures, the perceived direction of motion
is heavily biased by the orientation of the gratings, as well as by the shape and contrast of the apertures. Recently, Nishida and his colleagues developed a novel global motion stimulus consisting of
a number of grating elements, each with a randomly assigned orientation [10]. A coherent motion is perceived when the drifting velocities of all elements are consistent with a given velocity. Examples of the stimuli used in these psychophysical experiments are shown in the left panel of figure (6).
The stimuli consisted of 728 gratings (drifting sine-wave gratings windowed by stationary Gaussians). The orientations of the gratings were randomly assigned, and their drifting velocities were
determined by a specified global motion flow pattern. The motions of signal grating elements were
consistent with global motion, but the motions of noise grating elements were randomized. The
task was to identify the global motion direction as one of two alternatives: left/right for translation,
clockwise/counterclockwise for rotation, and inward/outward for expansion. Motion sensitivity was
measured by the coherence threshold, defined as the proportion of signal elements that yielded a
performance level of 75% correct.
Similar stimuli with 328 gratings were generated to test our computational models. The input for
the models is the velocity component perpendicular to the assigned orientation for each grating, as
illustrated in the upper two panels of figure (5).
Figure 5: Randomly-oriented grating stimuli and estimated motion flow. Upper left panel: rotation stimulus (with 75% coherence ratio). Upper right panel: expansion stimulus (with 75% coherence ratio). Lower left panel: motion flow estimated from the stimulus in the first panel with the rotation model. Lower right panel: motion flow estimated from the stimulus in the second panel with the expansion model.
5.2 Result
The results of psychophysical experiments (middle panel of figure 6) showed worse performance
for perceiving translation than rotation/expansion motion [6]. Clearly, as shown in the third panel
of the same figure, the model performs best for rotation and expansion, and is worst for translation.
This finding agrees with human performance in psychophysical experiments.
6 Conclusion
Figure 6: Stimulus and results. Left panel: illustration of the grating stimulus. Blue arrows indicate the drifting velocity of each grating. Middle panel: human coherence thresholds for different motion stimuli. Right panel: model predictions of coherence thresholds, which are consistent with human trends.

Human motion sensitivities depend on the motion pattern (translation/rotation/expansion). We propose a computational model in which different prior motions compete to fit the data by levels
of inference. This analysis involves formulating two new prior models, for rotation and expansion, and deriving their properties. This competitive prior approach gives good fits to the empirical data and accounts for the dominant trends reported in [4, 6].
Our current work aims to extend these findings to a range of different motions (e.g. affine motion)
and to use increasingly naturalistic appearance/intensity models. It is also important to determine
if motion patterns to which humans are sensitive correspond to those appearing regularly in natural
motion sequences.
References
[1] J.F. Barraza and N.M. Grzywacz. Measurement of angular velocity in the perception of rotation. Vision Research, 42, 2002.
[2] J. Duchon. Lecture Notes in Math. 571, (eds Schempp, W. and Zeller, K.) 85-100. Springer-Verlag, Berlin, 1979.
[3] C. J. Duffy, and R. H. Wurtz. Sensitivity of MST neurons to optic flow stimuli. I. A continuum of response selectivity to large field
stimuli. Journal of Neurophysiology. 65, 1329-1345. 1991.
[4] T. Freeman, and M. Harris. Human sensitivity to expanding and rotating motion: effect of complementary masking and directional
structure. Vision research, 32, 1992.
[5] D. Knill and W. Richards (Eds). Perception as Bayesian Inference. Cambridge University Press, 1996.
[6] A. Lee, A. Yuille, and H. Lu. Superior perception of circular/radial than translational motion cannot be explained by generic priors. VSS
2008.
[7] H. Lu and A.L. Yuille. Ideal Observers for Detecting Motion: Correspondence Noise. NIPS 2005.
[8] M. C. Morrone, D. C. Burr, and L. Vaina. Two stages of visual processing for radial and circular motion. Nature, 376, 507-509. 1995.
[9] M. Morrone, M. Tosetti, D. Montanaro, A. Fiorentini, G. Cioni, and D. C. Burr. A cortical area that responds specifically to optic flow
revealed by fMRI. Nature Neuroscience, 3, 1322 -1328. 2000.
[10] S. Nishida, K. Amano, M. Edwards, and D.R. Badcock. Global motion with multiple Gabors - A tool to investigate motion integration
across orientation and space. VSS 2006.
[11] R. Sekuler, S.N.J. Watamaniuk and R. Blake. Perception of Visual Motion. In Steven?s Handbook of Experimental Psychology. Third
edition. H. Pashler, series editor. S. Yantis, volume editor. J. Wiley Publishers. New York. 2002.
[12] A.A. Stocker and E.P. Simoncelli. Noise characteristics and prior expectations in human visual speed perception. Nature Neuroscience, vol. 9(4), pp. 578-585, April 2006.
[13] A.A. Stocker, and E. Simoncelli. A Bayesian model of conditioned perception. Proceedings of Neural Information Processing Systems.
2007.
[14] K. Tanaka, Y. Fukada, and H. Saito. Underlying mechanisms of the response specificity of expansion/contraction and rotation cells in the
dorsal part of the MST area of the macaque monkey. Journal of Neurophysiology. 62, 642-656. 1989.
[15] Y. Weiss, and E.H. Adelson. Slow and smooth: A Bayesian theory for the combination of local motion signals in human vision Technical
Report 1624. Massachusetts Institute of Technology. 1998.
[16] Y. Weiss, E.P. Simoncelli, and E.H. Adelson. Motion illusions as optimal percepts. Nature Neuroscience, 5, 598-604. 2002.
[17] A.L. Yuille and N.M. Grzywacz. A computational theory for the perception of coherent visual motion. Nature, 333, 71-74. 1988.
2,711 | 3,459 | Multi-Agent Filtering with Infinitely Nested Beliefs
Luke S. Zettlemoyer
MIT CSAIL
Cambridge, MA 02139
[email protected]
Brian Milch*
Google Inc.
Mountain View, CA 94043
[email protected]
Leslie Pack Kaelbling
MIT CSAIL
Cambridge, MA 02139
[email protected]
Abstract
In partially observable worlds with many agents, nested beliefs are formed when
agents simultaneously reason about the unknown state of the world and the beliefs
of the other agents. The multi-agent filtering problem is to efficiently represent
and update these beliefs through time as the agents act in the world. In this paper, we formally define an infinite sequence of nested beliefs about the state of
the world at the current time t, and present a filtering algorithm that maintains a
finite representation which can be used to generate these beliefs. In some cases,
this representation can be updated exactly in constant time; we also present a simple approximation scheme to compact beliefs if they become too complex. In
experiments, we demonstrate efficient filtering in a range of multi-agent domains.
1
Introduction
The existence of nested beliefs is one of the defining characteristics of a multi-agent world. As an
agent acts, it often needs to reason about what other agents believe. For instance, a teacher must
consider what a student knows to decide how to explain important concepts. A poker agent must
think about what cards other players might have (and what cards they might think it has) in
order to bet effectively. In this paper, we assume a cooperative setting where all the agents have
predetermined, commonly-known policies expressed as functions of their beliefs; we focus on the
problem of efficient belief update, or filtering.
We consider the nested filtering problem in multi-agent, partially-observable worlds [6, 1, 9]. In
this setting, agents receive separate observations and independently execute actions, which jointly
change the hidden state of the world. Since each agent does not get to see the others' observations
and actions, there is a natural notion of nested beliefs. Given its observations and actions, an agent
can reason not only about the state of the external world, but also about the other agents? observations
and actions. It can also condition on what others might have seen and done to compute their beliefs
at the next level of nesting. This pattern can be repeated to arbitrary depth.
The multi-agent filtering problem is to efficiently represent and update these nested beliefs through
time. In general, an agent's beliefs depend on its entire history of actions and observations. One
approach to computing these beliefs would be to remember the entire history, and perform inference
to compute whatever probabilities are needed at each time step. But the time required for this
computation would grow with the history length. Instead, we maintain a belief state that is sufficient
for predicting future beliefs and can be approximated to achieve constant-time belief updates.
We begin by defining an infinite sequence of nested beliefs about the current state st , and showing
that it is sufficient for predicting future beliefs. We then present a multi-agent filtering algorithm that
maintains a compact representation sufficient for generating this sequence. Although in the worst
case this representation grows exponentially in the history length, we show that its size remains
constant for several interesting problems. We also describe an approximate algorithm that always
* This work was done while the second author was at MIT CSAIL.
maintains a constant representation size (and constant-time updates), possibly at the cost of accuracy.
In experiments, we demonstrate efficient and accurate filtering in a range of multi-agent domains.
2 Related Work
In existing research on partially observable stochastic games (POSGs) and Decentralized POMDPs
(DEC-POMDPs) [6, 1, 9], policies are represented as direct mappings from observation histories to
actions. That approach removes the need for the agents to perform any kind of filtering, but requires
the specification of some particular class of policies that return actions for arbitrarily long histories.
In contrast, many successful algorithms for single-agent POMDPs represent policies as functions on
belief states [7], which abstract over the specifics of particular observation histories. Gmytrasiewicz
and Doshi [5] consider filtering in interactive POMDPs. Their approach maintains finitely nested
beliefs that are derived from a world model as well as hand-specified models of how each agent
reasons about the other agents. In this paper, all of the nested reasoning is derived from a single
world model, which eliminates the need for any agent-specific models.
To the best of our knowledge, our work is the first to focus on filtering of infinitely nested beliefs.
There has been significant work on infinitely nested beliefs in game theory, where Brandenburger
and Dekel [2] introduced the notion of an infinite sequence of finitely nested beliefs. However,
they do not describe any method for computing these beliefs from a world model or updating them
over time. Another long-standing line of related work is in the epistemic logic community. Fagin
and Halpern [3] define labeled graphs called probabilistic Kripke structures, and show how a graph
with finitely many nodes can define an infinite sequence of nested beliefs. Building on this idea,
algorithms have been proposed for answering queries on probabilistic Kripke structures [10] and on
influence diagrams that define such structures [8]. However, these algorithms have not addressed
the fact that as agents interact with the world over time, the set of observation sequences they could
have received (and possibly the set of beliefs they could arrive at) grows exponentially.
3 Nested Filtering
In this section, we describe the world model and define the multi-agent filtering problem. We then
present a detailed example where a simple problem leads to a complex pattern of nested reasoning.
3.1 Partially observable worlds with many agents
We will perform filtering given a multi-agent, decision-theoretic model for acting in a partially observable world.¹ Agents receive separate observations and independently execute actions, which jointly change the state of the world. There is a finite set of states $S$, but the current state $s \in S$ cannot be observed directly by any of the agents. Each agent $j$ has a finite set of observations $O^j$ that it can receive and a finite set of actions $A^j$ that it can execute. Throughout this paper, we will use superscripts and vector notation to name agents and subscripts to indicate time. For example, $a_t^j \in A^j$ is the action for agent $j$ at time $t$; $\vec{a}_t = \langle a_t^1, \ldots, a_t^n \rangle$ is a vector with actions for each of the $n$ agents; and $a_{0:t}^j = (a_0^j, \ldots, a_t^j)$ is a sequence of actions for agent $j$ at time steps $0 \ldots t$.
The state dynamics is defined by a distribution $p_0(s)$ over initial states and a transition distribution $p(s_t|s_{t-1}, \vec{a}_{t-1})$ that is conditioned on the previous state $s_{t-1}$ and the action vector $\vec{a}_{t-1}$. For each agent $j$, observations are generated from a distribution $p(o_t^j|s_t, \vec{a}_{t-1})$ conditioned on the current state and the previous joint action. Each agent $j$ sees only its own actions and observations. To record this information, it is useful to define a history $h_{0:t}^j = (a_{0:t-1}^j, o_{1:t}^j)$ for agent $j$ at time $t$. A policy is a distribution $\pi^j(a_t^j|h_{0:t}^j)$ over the actions agent $j$ will take given this history. Together, these distributions define the joint world model:
$$p(s_{0:t}, \vec{h}_{0:t}) = p_0(s_0) \prod_{i=0}^{t-1} \vec{\pi}(\vec{a}_i|\vec{h}_{0:i})\,p(s_{i+1}|s_i, \vec{a}_i)\,p(\vec{o}_{i+1}|s_{i+1}, \vec{a}_i), \tag{1}$$
where $\vec{\pi}(\vec{a}_t|\vec{h}_{0:t}) = \prod_j \pi^j(a_t^j|h_{0:t}^j)$ and $p(\vec{o}_{t+1}|s_{t+1}, \vec{a}_t) = \prod_j p(o_{t+1}^j|s_{t+1}, \vec{a}_t)$.
¹ This is the same type of world model that is used to define POSGs and DEC-POMDPs. Since we focus on filtering instead of planning, we do not need to define reward functions for the agents.
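For concreteness, the sketch below (ours, not from the paper) samples states, joint actions, and private observation histories from a tiny instance of this joint model, with uniform random policies standing in for the belief-based policies introduced later.

```python
import numpy as np

# Generative sketch (ours) of the joint model in Eq. (1) for a tiny world:
# 2 states, 2 agents, 2 actions and 2 observations each. All distributions
# are explicit arrays; policies here are uniform just to exercise the sampler.
rng = np.random.default_rng(6)
nS, nA, nO, n_agents, T = 2, 2, 2, 2, 5

p0 = np.array([0.5, 0.5])
trans = rng.dirichlet(np.ones(nS), size=(nS, nA, nA))           # p(s'|s, a^1, a^2)
obs = rng.dirichlet(np.ones(nO), size=(n_agents, nS, nA, nA))   # p(o^j|s', a^1, a^2)

s = rng.choice(nS, p=p0)
hists = [[] for _ in range(n_agents)]        # each agent's private (a, o) history
for t in range(T):
    a = tuple(rng.integers(nA) for _ in range(n_agents))        # uniform policies
    s = rng.choice(nS, p=trans[s, a[0], a[1]])
    for j in range(n_agents):
        o = rng.choice(nO, p=obs[j, s, a[0], a[1]])
        hists[j].append((a[j], o))
print("agent 0 history h^0:", hists[0])
```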
3.2 The nested filtering problem
In this section, we describe how to compute infinitely nested beliefs about the state at time t. We then
define a class of policies that are functions of these beliefs. Finally, we show that the current nested
belief for an agent i contains all of the information required to compute future beliefs. Throughout
the rest of this paper, we use a minus notation to define tuples indexed by all but one agent. For example, $h_{0:t}^{-i}$ and $\pi^{-i}$ are tuples of histories and policies for all agents $k \ne i$.
We define infinitely nested beliefs by presenting an infinite sequence of finitely nested beliefs. For each agent $i$ and nesting level $n$, the belief function $B^{i,n} : h_{0:t}^i \mapsto b_t^{i,n}$ maps the agent's history to its $n$th-level beliefs at time $t$. The agent's zeroth-level belief function $B^{i,0}(h_{0:t}^i)$ returns the posterior distribution $b_t^{i,0} = p(s_t|h_{0:t}^i)$ over states given the input history, which can be computed from Eq. 1:
$$B^{i,0}(h_{0:t}^i) = p(s_t|h_{0:t}^i) \propto \sum_{s_{0:t-1},\, h_{0:t}^{-i}} p(s_{0:t}, \vec{h}_{0:t}).$$
Agent $i$'s first-level belief function $B^{i,1}(h_{0:t}^i)$ returns a joint distribution on $s_t$ and the zeroth-level beliefs of all the other agents (what the other agents believe about the state of the world). We can compute the tuple of zeroth-level beliefs $b_t^{-i,0} = B^{-i,0}(h_{0:t}^{-i})$ for all agents $k \ne i$ by summing the probabilities of all histories $h_{0:t}^{-i}$ that lead to these beliefs (that is, such that $b_t^{-i,0} = B^{-i,0}(h_{0:t}^{-i})$):
$$B^{i,1}(h_{0:t}^i) = p(s_t, b_t^{-i,0}|h_{0:t}^i) \propto \sum_{s_{0:t-1},\, h_{0:t}^{-i}} p(s_{0:t}, \vec{h}_{0:t})\,\delta\big(b_t^{-i,0},\, B^{-i,0}(h_{0:t}^{-i})\big).$$
The delta function $\delta(\cdot, \cdot)$ returns one when its arguments are equal and zero otherwise.
For level $n$, $B^{i,n}(h_{0:t}^i)$ returns a distribution over states and level $n-1$ beliefs for the other agents. For example, at level 2, the function returns a joint distribution over: the state, what the other agents believe about the state, and what they believe others believe. Again, these beliefs are computed by summing over histories for the other agents that lead to the appropriate level $n-1$ beliefs:
$$B^{i,n}(h_{0:t}^i) = p(s_t, b_t^{-i,n-1}|h_{0:t}^i) \propto \sum_{s_{0:t-1},\, h_{0:t}^{-i}} p(s_{0:t}, \vec{h}_{0:t})\,\delta\big(b_t^{-i,n-1},\, B^{-i,n-1}(h_{0:t}^{-i})\big).$$
Note that for all nesting levels $n$, $B^{i,n}(h_{0:t}^i)$ is a discrete distribution. There are only finitely many beliefs each agent $k$ could hold at time $t$, each arising from one of the possible histories $h_{0:t}^k$.

Define $b_t^{i,\infty} = B^{i,\infty}(h_{0:t}^i)$ to be the infinite sequence of nested beliefs generated by computing $B^{i,n}(h_{0:t}^i)$ for $n = 0, 1, \ldots$. We can think of $b_t^{i,\infty}$ as a belief state for agent $i$, although not one that can be used directly by a filtering algorithm. We will assume that the policies $\pi^i$ are represented as functions of these belief states: that is, $\pi^i(a_t^i|b_t^{i,\infty})$ can be thought of as a procedure that looks at arbitrary parts of the infinite sequence $b_t^{i,\infty}$ and returns a distribution over actions. We will see examples of this type of policy in the next section. Under this assumption, $b_t^{i,\infty}$ is a sufficient statistic for predicting future beliefs in the following sense:
Proposition 1 In a model with policies $\pi^j(a_t^j|b_t^{j,\infty})$ for each agent $j$, there exists a belief estimation function $BE$ s.t. $\forall a_{0:t-1}^i, o_{1:t}^i, a_t^i, o_{t+1}^i$: $B^{i,\infty}(a_{0:t}^i, o_{1:t+1}^i) = BE\big(B^{i,\infty}(a_{0:t-1}^i, o_{1:t}^i),\, a_t^i,\, o_{t+1}^i\big)$.
To prove this result, we need to demonstrate a procedure that correctly computes the new belief
given only the old belief and the new action and observation. The filtering algorithm we will present
in Sec. 4 achieves this goal by representing the nested belief with a finite structure that can be used
to generate the infinite sequence, and showing how these structures are updated over time.
3.3 Extended Example: The Tiger Communication World
We now describe a simple two-agent 'tiger world' where the optimal policies require the agents to coordinate their actions. In this world there are two doors: behind one randomly chosen door is a hungry tiger, and behind the other is a pile of gold. Each agent has unique abilities. Agent $l$ (the tiger listener) can hear the tiger roar, which is a noisy indication of its location, but cannot open the doors. Agent $d$ (the door opener) can open doors but cannot hear the roars. To facilitate communication, agent $l$ has two actions, signal left and signal right, which each produce a unique observation for agent $d$. When a door is opened, the world resets and the tiger is placed behind a randomly chosen door. To act optimally, agent $l$ must listen to the tiger's roars until it is confident about the tiger's location and then send the appropriate signal to agent $d$. Agent $d$ must wait for this
[Figure 1 policy table:]
  π^l: if b^{l,0}(TL) > 0.8, a^l = SL (prob. 1.0); if b^{l,0}(TR) > 0.8, a^l = SR (1.0); otherwise a^l = L (1.0).
  π^d: if b^{d,0}(TL) > 0.8, a^d = OR (1.0); if b^{d,0}(TR) > 0.8, a^d = OL (1.0); otherwise a^d = L (1.0).
Figure 1: Deterministic policies for the tiger world that depend on each agent's beliefs about the physical state,
where the tiger can be on the left (TL) or the right (TR). The tiger listener, agent l, will signal left (SL) or
right (SR) if it is confident of the tiger's location. The door opener, agent d, will open the appropriate door when
it is confident about the tiger's location. Otherwise both agents listen (to the tiger or for a signal).
signal and then open the appropriate door. Fig. 1 shows a pair of policies that achieve this desired
interaction and depend only on each agent's level-zero beliefs about the state of the world. However,
as we will see, the agents cannot maintain their level-zero beliefs in isolation. To correctly update
these beliefs, each agent must reason about the unseen actions and observations of the other agent.
Consider the beliefs that each agent must maintain to execute its policies during a typical scenario.
Assume the tiger starts behind the left door. Initially, both agents have uniform beliefs about the
location of the tiger. As agent d waits for a signal, it does not gain any information about the tiger's
location. However, it maintains a representation of the possible beliefs for agent l and knows that l is
receiving observations that correlate with the state of the tiger. In this case, the most likely outcome
is that agent l will hear enough roars on the left to do a "signal left" action. This action produces an
observation for agent d which allows it to gain information about l's beliefs. Because agent d has
maintained the correspondence between the true state and agent l's beliefs, it can now infer that the
tiger is more likely to be on the left (it is unlikely that l could have come to believe the tiger was
on the left if that were not true). This inference makes agent d confident enough about the tiger's
location to open the right door and reset the world. Agent l must also represent agent d's beliefs,
because it never receives any observations that indicate what actions agent d is taking. It must track
agent d's belief updates to know that d will wait for a signal and then immediately open a door.
Without this information, l cannot predict when the world will be reset, and thus when it should
disregard past observations about the location of the tiger.
Even in this simple tiger world, we see a complicated reasoning pattern: the agents must track each
other's beliefs. To update its belief about the external world, each agent must infer what actions the
other agent has taken, which requires maintaining that agent's beliefs about the world. Moreover,
updating the other agent's beliefs requires maintaining what it believes you believe. Continuing
this reasoning to deeper levels leads to the infinitely nested beliefs defined in Sec. 3.2. However,
we will never explicitly construct these infinite beliefs. Instead, we maintain a finite structure that
is sufficient to recreate them to arbitrary depth, and only expand as necessary to compute action
probabilities.
4 Efficient Filtering
In this section, we present an algorithm for performing belief updates b^{i,∞}_t = BE(b^{i,∞}_{t-1}, a^i_{t-1}, o^i_t)
on nested beliefs. This algorithm is applicable in the cooperative setting where there are commonly
known policies π^j(a^j_t | b^{j,∞}_t) for each agent j. The approach, which we call the SDS filter, maintains
a set of Sparse Distributions over Sequences of past states, actions, and observations.
Sequence distributions. The SDS filter deals with two kinds of sequences: histories h^j_{0:t} =
(a^j_{0:t-1}, o^j_{1:t}) and trajectories x_{0:t} = (s_{0:t}, \vec a_{0:t-1}). A history represents what agent j knows before acting at time t; a trajectory is a trace of the states and joint actions through time t. The
filter for agent i maintains the following sequence sets: a set X of trajectories that might have
occurred so far, and for each agent j (including i itself), a set H^j of possible histories. One of
the elements of H^i is marked as being the history that i has actually experienced. The SDS filter
maintains belief information in the form of sequence distributions ρ^j(x_{0:t} | h^j_{0:t}) = p(x_{0:t} | h^j_{0:t}) and
σ^j(h^j_{0:t} | x_{0:t}) = p(h^j_{0:t} | x_{0:t}) for all agents j, histories h^j_{0:t} ∈ H^j, and trajectories x_{0:t} ∈ X.2 The
ρ^j distributions represent what agent j would believe about the possible sequences of states and
other agents' actions given h^j_{0:t}. The σ^j distributions represent the probability of j receiving the
observations in h^j_{0:t} if the trajectory x_{0:t} had actually happened.
2 Actions are included in both histories and trajectories; when x_{0:t} and h^j_{0:t} specify different actions, both
ρ^j(x_{0:t} | h^j_{0:t}) and σ^j(h^j_{0:t} | x_{0:t}) are zero.
The insight behind the SDS filter is that these sequence distributions can be used to compute the
nested belief functions B^{i,n}(h^i_{0:t}) from Sec. 3.2 to arbitrary depth. The main challenge is that sets
of possible histories and trajectories grow exponentially with the time t. To avoid this blow-up,
the SDS filter does not maintain the complete set of possible sequences. We will see that some
sequences can be discarded without affecting the results of the belief computations. If this pruning
is insufficient, the SDS filter can drop low-probability sequences and perform approximate filtering.
A second challenge is that if we represent each sequence explicitly, the space required grows linearly
with t. However, the belief computations do not require the details of each trajectory and history. To
compute beliefs about current and future states, it suffices to maintain the sequence distributions ρ^j
and σ^j defined above, along with the final state s_t in each trajectory. The SDS filter maintains only
this information.3 For clarity, we will continue to use full sequence notation in the paper.
In the rest of this section, we first show how the sequence distributions can be used to compute nested
beliefs of arbitrary depth. Then, we show how to maintain the sequence distributions. Finally, we
present an algorithm that computes these distributions while maintaining small sequence sets.
The nested beliefs from Sec. 3.2 can be written in terms of the sequence distributions as follows:

  B^{j,0}(h^j_{0:t})(s) = Σ_{x_{0:t} ∈ X : x_t = s} ρ^j(x_{0:t} | h^j_{0:t})   (2)

  B^{j,n}(h^j_{0:t})(s, b^{-j,n-1}) = Σ_{x_{0:t} ∈ X : x_t = s} ρ^j(x_{0:t} | h^j_{0:t}) Π_{k≠j} Σ_{h^k_{0:t} ∈ H^k} σ^k(h^k_{0:t} | x_{0:t}) δ(b^{k,n-1}, B^{k,n-1}(h^k_{0:t}))   (3)
At level zero, we sum over the probabilities according to agent j of all trajectories with the correct
final state. At level n, we perform the same outer sum, but for each trajectory we sum the probabilities of the histories for agents k ≠ j that would lead to the beliefs we are interested in. Thus,
the sequence distributions at time t are sufficient for computing any desired element of the infinite
belief sequence B^{j,∞}(h^j_{0:t}) for any agent j and history h^j_{0:t}.
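To make Eq. 2 concrete, here is a minimal Python sketch (the dictionary-based data layout is a hypothetical illustration, not the authors' implementation) of the level-zero computation: summing ρ^j over all trajectories that are consistent with a history and end in a given state.

```python
# A minimal sketch (hypothetical data structures) of Eq. 2: agent j's level-0
# belief assigns to each state s the total rho^j-probability of trajectories
# that are consistent with the history and end in s.

def level0_belief(history, rho):
    belief = {}
    for (h, traj), p in rho.items():
        if h == history:
            s_t = traj[-1]                          # final state of trajectory
            belief[s_t] = belief.get(s_t, 0.0) + p
    return belief

# Toy: two equally likely trajectories consistent with the empty history.
rho = {((), ("TL",)): 0.5, ((), ("TR",)): 0.5}
print(level0_belief((), rho))                       # {'TL': 0.5, 'TR': 0.5}
```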
Updating the distributions. The sequence distributions are updated at each time step t as follows.
For each agent j, trajectory x_{0:t} = (s_{0:t}, \vec a_{0:t-1}) and history h^j_{0:t} = (a^j_{0:t-1}, o^j_{1:t}):

  σ^j(h^j_{0:t} | x_{0:t}) = σ^j(h^j_{0:t-1} | x_{0:t-1}) p(o^j_t | s_t, \vec a_{t-1})   (4)
  ρ^j(x_{0:t} | h^j_{0:t}) = ρ^j(x_{0:t-1} | h^j_{0:t-1}) p(\vec a_{t-1} | x_{0:t-1}) p(s_t | s_{t-1}, o^j_t, \vec a_{t-1})   (5)
The values of σ^j on length-t histories are computed from existing σ^j values by multiplying in the
probability of the most recent observation. To extend ρ^j to length-t trajectories, we multiply in the
probability of the state transition and the probability of the agents' actions given the past trajectory:

  p(\vec a_{t-1} | x_{0:t-1}) = Π_k Σ_{h^k_{0:t-1}} σ^k(h^k_{0:t-1} | x_{0:t-1}) π^k(a^k_{t-1} | B^{k,∞}(h^k_{0:t-1}))   (6)
Here, to predict the actions for agent k, we take an expectation over its possible histories h^k_{0:t-1}
(according to the σ^k distribution from the previous time step) of the probability of each action
a^k_{t-1} given the beliefs B^{k,∞}(h^k_{0:t-1}) induced by the history. In practice, only some of the entries in
B^{k,∞}(h^k_{0:t-1}) will be needed to compute k's action; for example, in the tiger world, the policies are
functions of the zero-level beliefs. The necessary entries are computed from the previous ρ and
σ distributions as described in Eqs. 2 and 3. This computation is not prohibitive because, as we will
see later, we only consider a small subset of the possible histories.
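A small runnable sketch of the action-prediction step, Eq. 6, follows; the data layout and the toy policy are assumptions for illustration only.

```python
# A runnable sketch (toy data layout, not the authors' code) of Eq. 6: the
# probability of a joint action is a product over agents k of the expectation,
# under sigma^k(h | x), of the policy probability pi^k(a^k | beliefs(h)).

def joint_action_prob(a_vec, x, sigma, policies, agents):
    prob = 1.0
    for k in agents:
        total = 0.0
        for h, w in sigma[k].get(x, {}).items():     # sigma^k(h | x)
            total += w * policies[k](a_vec[k], h)    # pi^k reads beliefs from h
        prob *= total
    return prob

# Toy: one trajectory; the tiger listener is equally likely to have heard a
# roar on the left (hL) or on the right (hR), and signals what it heard.
sigma = {"l": {"x0": {"hL": 0.5, "hR": 0.5}}}
policies = {"l": lambda a, h: 1.0 if (a == "SL") == (h == "hL") else 0.0}
print(joint_action_prob({"l": "SL"}, "x0", sigma, policies, ["l"]))  # 0.5
```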
Returning to the example tiger world, we can see that maintaining these sequence distributions will
allow us to achieve the desired interactions described in Sec. 3.3. For example, when the door opener
receives a "signal left" observation, it will infer that the tiger is on the left because it has done the
reasoning in Eq. 6 and determined that, with high probability, the trajectories that would have led
the tiger listener to take this action are the ones where the tiger is actually on the left.
3 This data structure is closely related to probabilistic Kripke structures [3] which are known to be sufficient
for recreating nested beliefs. We are not aware of previous work that guarantees compactness through time.
Initialization. Input: Distribution p(s) over states.
1. Initialize trajectories and histories: X = {((s), ()) | s ∈ S}, H^j = {((), ())}.
2. Initialize distributions: ∀x = ((s), ()) ∈ X, j, h^j ∈ H^j: ρ^j(x | h^j) = p(s) and σ^j(h^j | x) = 1.
Filtering. Input: Action a^i_{t-1} and observation o^i_t.
1. Compute new sequence sets X and H^j, for all agents j, by adding all possible states, actions, and
   observations to sequences in the previous sets. Compute new sequence distributions ρ^j and σ^j, for
   all agents j, as described in Eqs. 5, 4, and 6. Mark the observed history h^i_{0:t} ∈ H^i.
2. Merge and drop sequences:
   (a) Drop trajectories and histories that are commonly known to be impossible:
       • ∀x_{0:t} ∈ X s.t. ∀j, h^j_{0:t} ∈ H^j: ρ^j(x_{0:t} | h^j_{0:t}) = 0: Set X = X \ {x_{0:t}}.
       • ∀j, h^j_{0:t} ∈ H^j s.t. ∀x_{0:t} ∈ X: σ^j(h^j_{0:t} | x_{0:t}) = 0: Set H^j = H^j \ {h^j_{0:t}}.
   (b) Merge histories that lead to the same beliefs:
       • ∀j, h^j_{0:t}, h'^j_{0:t} ∈ H^j s.t. ∀x_{0:t} ∈ X: ρ^j(x_{0:t} | h^j_{0:t}) = ρ^j(x_{0:t} | h'^j_{0:t}):
         Set H^j = H^j \ {h'^j_{0:t}} and σ^j(h^j_{0:t} | x_{0:t}) = σ^j(h^j_{0:t} | x_{0:t}) + σ^j(h'^j_{0:t} | x_{0:t}) for all x_{0:t}.
   (c) Reset when the marginal of s_t is common knowledge:
       • If ∀j, k, h^j_{0:t} ∈ H^j, h^k_{0:t} ∈ H^k, s_t: ρ^j(s_t | h^j_{0:t}) = ρ^k(s_t | h^k_{0:t}):
         Reinitialize the filter using the distribution ρ^j(s_t | h^j_{0:t}) instead of the prior p_0(s).
3. Prune: For all ρ^j or σ^j with m > N non-zero entries:
   Remove the m − N lowest-probability sequences and renormalize.

Figure 2: The SDS filter for agent i. At all times t, the filter maintains sequence sets X and H^j, for all agents
j, along with the sequence distributions ρ^j and σ^j for all agents j. Agent i's actual observed history is marked
as a distinguished element h^i_{0:t} ∈ H^i and used to compute its beliefs B^{i,∞}(h^i_{0:t}).
Filtering algorithm. We now consider the challenge of maintaining small sequence sets. Fig. 2
provides a detailed description of the SDS filtering algorithm for agent i. The filter is initialized with
empty histories for each agent and trajectories with single states that are distributed according to the
prior. At each time t, Step 1 extends the sequence sets, computes the sequence distributions, and
records agent i's history. Running a filter with only this step would generate all possible sequences.
Step 2 introduces three operations that reduce the size of the sequence sets while guaranteeing that
Eqs. 2 and 3 still produce the correct nested beliefs at time t. Step 2(a) removes trajectories and
histories when all the agents agree that they are impossible; there is no reason to track them. For
example, in the tiger communication world, the policies are such that for the first few time steps each
agent will always listen (to the tiger or for signals). During this period all the trajectories where other
actions are taken are known to be impossible and can be ignored. Step 2(b) merges histories for an
agent j that lead to the same beliefs. This is achieved by arbitrarily selecting one history to be
deleted and adding its σ^j probability to the other's σ^j. For example, as the tiger listener hears roars,
any two observation sequences with the same numbers of roars on the left and right provide the same
information about the tiger and can be merged. Step 2(c) resets the filter if the marginal over states
at time t has become commonly known to all the agents. For example, when both agents know that a
door has been opened, this implies that the world has reset and all previous trajectories and histories
can be discarded. This type of agreement is not limited to cases where the state of the world is reset.
It occurs with any distribution over states that the agents agree on, for example when they localize
and both know the true state, even if they disagree about the trajectory of past states.
Together, these three operators can significantly reduce the size of the sequence sets. We will see
in the experiments (Sec. 5) that they enable the SDS filter to exactly track the tiger communication
world extremely efficiently. However, in general, there is no guarantee that these operators will be
enough to maintain small sets of trajectories and histories. Step 3 introduces an approximation by
removing low-probability sequences and normalizing the belief distributions. This does guarantee
that we will maintain small sequence sets, possibly at the cost of accuracy. In many domains we can
ignore unlikely histories and trajectories without significantly changing the current beliefs.
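Step 3 of Fig. 2 is the only approximate operation; a minimal sketch of it (assuming distributions stored as plain dictionaries, which is an illustrative choice) is:

```python
# A sketch of Step 3 of Fig. 2: keep only the N highest-probability sequences
# in a distribution and renormalize the survivors.

def prune(dist, N):
    if len(dist) <= N:
        return dist
    kept = dict(sorted(dist.items(), key=lambda kv: kv[1], reverse=True)[:N])
    Z = sum(kept.values())
    return {seq: p / Z for seq, p in kept.items()}

print(prune({"x1": 0.5, "x2": 0.3, "x3": 0.2}, 2))  # {'x1': 0.625, 'x2': 0.375}
```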
5 Evaluation
In this section, we describe the performance of the SDS algorithm on three nested filtering problems.
[Figure 3 panels: (a) Tiger world: running time (seconds) per time step for SDS and for variants with
simplification steps disabled (SDS -c; SDS -b,-c; SDS -a,-b,-c). (b) Box pushing: running time (seconds)
per time step for pruning levels N = 10, 50, 100, ∞. (c) Box pushing: empirical variational distance per
time step for N = 10, 50, 100.]
Figure 3: Time per filtering step, and error, for the SDS algorithm on two domains.
Tiger Communication World. The tiger communication world was described in detail in Sec. 3.3.
Fig. 3(a) shows the average computation time used for filtering at each time step. The full algorithm
(SDS) maintains a compact, exact representation without any pruning and takes only a fraction of a
second to do each update. The graph also shows the results of disabling different parts of Step 2(a-c)
of the algorithm (for example, SDS -a,-b,-c does not do any simplifications from Step 2). Without
these steps, the algorithm runs in exponential time. Each simplification allows the algorithm to
perform better, but all are required for constant-time performance. Since the SDS filter runs without
the pruning in Step 3, we know that it computes the correct beliefs; there is no approximation error.4
Box Pushing. The DEC-POMDP literature includes several multi-agent domains; we evaluate
SDS on the largest of them, known as the box-pushing domain [9]. In this scenario, two agents
interact in a 3x4 grid world where they must coordinate their actions to move a large box and then
independently push two small boxes. The state encodes the positions and orientations of the robots,
as well as the locations of the three boxes. The agents can move forward, rotate left and right, or
stay still. These actions fail with probability 0.1, leaving the state unchanged. Each agent receives
deterministic observations about what is in the location in front of it (empty space, a robot, etc.).
We implemented policies for each agent that consist of a set of 20 rules specifying actions given its
zeroth-level beliefs about the world state. While executing their policies, the agents first coordinate
to move the large box and then independently move the two small boxes. The policies are such that,
with high probability, the agents will always move the boxes. There is uncertainty about when this
will happen, since actions can fail. We observed, in practice, that it rarely took more than 20 steps.
Fig. 3(b) shows the running time of the SDS filter on this domain, with various pruning parameters
(N = 10, 50, 100, ∞ in Step 3). Without pruning (N = ∞), the costs are too high for the filter
to move beyond time step five. With pruning, however, the cost remains reasonable. Fig. 3(c)
shows the error incurred with various degrees of pruning, in terms of the difference between the
estimated zeroth-level beliefs for the agents and the true posterior over physical states given their
observations.5 Note that in order to accurately maintain each agent's beliefs about the physical
state (which includes the position of the other robot), the filter must assign accurate probabilities
to unobserved actions by the other agent, which depend on its beliefs. This is the same reasoning
pattern we saw in the tiger world where we are required to maintain infinitely nested beliefs. As
expected, we see that more pruning leads to faster running time but decreased accuracy. We also
find that the problem is most challenging around time step ten and becomes easier in the limit, as the
world moves towards the absorbing state where both agents have finished their tasks. With N = 100,
we get high-quality estimates in an acceptable amount of time.
Noisy Muddy Children. The muddy children problem is a classic puzzle often discussed by researchers in epistemic logic [4]. There are n agents and 2^n possible states. Each agent's forehead
can be either muddy or clean, but it does not get any direct observations about this fact. Initially, it is
commonly known that at least one agent has a muddy forehead. As time progresses, the agents follow a policy of raising their hand if they know that their forehead is muddy; they must come to this
conclusion given only observations about the cleanliness of the other agents' foreheads and who has
4 The exact version of SDS also runs in constant time on the broadcast channel domain of Hansen et al. [6].
5 Because the box-pushing problem is too large for beliefs to be computed exactly, we compare the filter's
performance to empirical distributions obtained by generating 10,000 sequences of trajectories and histories.
We group the runs by the history h^i_{0:t}; for all histories that appear at least ten times, we compare the empirical
distribution b̂_t of states occurring after that history to the filter's computed beliefs b̂^{i,0}_t, using the variational
distance VD(b̂_t, b̂^{i,0}_t) = Σ_s |b̂_t(s) − b̂^{i,0}_t(s)|.
raised their hands (this yields 2^{2n} possible observations for each agent). This puzzle is represented
in our framework as follows. The initial knowledge is encoded with a prior that is uniform over all
states in which at least one agent is muddy. The state of the world never changes. Observations
about the muddiness of the other agents are only correct with probability ε, and each agent raises its
hand if it assigns probability at least 0.8 to being muddy.
When there is no noise, ε = 1.0, the agents behave as follows. With m ≤ n muddy agents, everyone
waits m time steps and then all of the muddy agents simultaneously raise their hands.6 The SDS
filter exhibits exactly this behavior and runs in reasonable time, using only a few seconds per filtering
step, for problem instances with up to 10 agents without pruning. We also ran the filter on instances
with noise (ε = 0.9) and up to 5 agents. This required pruning histories to cope with the extremely
large number of possible but unlikely observation sequences. The observed behavior is similar to
the deterministic case: eventually, all of the m muddy agents raise their hands. In expectation, this
happens at a time step greater than m, since the agents must receive multiple observations before
they are confident about each other's cleanliness. If one agent raises its hand before the others, this
provides more information to the uncertain agents, who usually raise their hands soon after.
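The noiseless (ε = 1.0) behavior described above can be reproduced with a small common-knowledge simulation; this sketch filters the commonly possible worlds by the publicly announced actions (it checks the induction argument of footnote 6, and is emphatically not the SDS filter itself).

```python
# A common-knowledge simulation sketch of the noiseless puzzle: C is the set of
# worlds consistent with the announcements so far; an agent raises its hand iff
# every world it considers possible makes it muddy.

from itertools import product

def simulate(true_world):
    n = len(true_world)
    C = [w for w in product((0, 1), repeat=n) if any(w)]   # someone is muddy

    def raises(i, w):
        poss = [v for v in C if all(v[j] == w[j] for j in range(n) if j != i)]
        return all(v[i] == 1 for v in poss)

    for t in range(1, n + 1):
        announced = tuple(raises(i, true_world) for i in range(n))
        if any(announced):
            return t, announced
        # Common update: discard worlds that would have produced a raise.
        C = [w for w in C if tuple(raises(i, w) for i in range(n)) == announced]

print(simulate((1, 1, 0, 1)))   # m = 3 muddy agents raise at round 3
```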
6 Conclusions
We have considered the problem of efficient belief update in multi-agent scenarios. We introduced
the SDS algorithm, which maintains a finite belief representation that can be used to compute an
infinite sequence of nested beliefs about the physical world and the beliefs of other agents. We
demonstrated that on some problems, SDS can maintain this representation exactly in constant time
per filtering step. On more difficult examples, SDS maintains constant-time filtering by pruning
low-probability trajectories, yielding acceptable levels of approximation error.
These results show that efficient filtering is possible in multi-agent scenarios where the agents'
policies are expressed as functions of their beliefs, rather than their entire observation histories.
These belief-based policies are independent of the current time step, and have the potential to be
more compact than history-based policies. In the single-agent setting, many successful POMDP
planning algorithms construct belief-based policies; we plan to investigate how to do similar beliefbased planning in the multi-agent case.
References
[1] D. S. Bernstein, E. Hansen, and S. Zilberstein. Bounded policy iteration for decentralized POMDPs. In
Proc. of the 19th International Joint Conference on Artificial Intelligence (IJCAI), 2005.
[2] A. Brandenburger and E. Dekel. Hierarchies of beliefs and common knowledge. Journal of Economic
Theory, 59:189-198, 1993.
[3] R. Fagin and J. Y. Halpern. Reasoning about knowledge and probability. Journal of the ACM, 41(2):340-367, 1994.
[4] R. Fagin, J. Y. Halpern, Y. Moses, and M. Y. Vardi. Reasoning About Knowledge. The MIT Press, 1995.
[5] P. J. Gmytrasiewicz and P. Doshi. A framework for sequential planning in multi-agent settings. Journal
of Artificial Intelligence Research, 24:49-79, 2005.
[6] E. A. Hansen, D. S. Bernstein, and S. Zilberstein. Dynamic programming for partially observable stochastic games. In Proc. of the 19th National Conf. on Artificial Intelligence (AAAI), 2004.
[7] L. P. Kaelbling, M. L. Littman, and A. R. Cassandra. Planning and acting in partially observable stochastic
domains. Artificial Intelligence, 101:99-134, 1998.
[8] B. Milch and D. Koller. Probabilistic models for agents' beliefs and decisions. In Proc. 16th Conference
on Uncertainty in Artificial Intelligence (UAI), 2000.
[9] S. Seuken and S. Zilberstein. Improved memory-bounded dynamic programming for decentralized
POMDPs. In Proc. of the 23rd Conference on Uncertainty in Artificial Intelligence (UAI), 2007.
[10] A. Shirazi and E. Amir. Probabilistic modal logic. In Proc. of the 22nd National Conference on Artificial
Intelligence (AAAI), 2007.
6 This behavior can be verified by induction. If there is one muddy agent, it will see that the others are clean
and raise its hand immediately. This implies that if no one raises their hand in the first round, there must be
at least two muddy agents. At time two, they will both see only one other muddy agent and infer that they are
muddy. The pattern follows for larger m.
2,712 | 346 | Connection Topology and Dynamics
in Lateral Inhibition Networks
C. M. Marcus, F. R. Waugh, and R. M. Westervelt
Department of Physics and Division of Applied Sciences, Harvard University
Cambridge, MA 02138
ABSTRACT
We show analytically how the stability of two-dimensional lateral
inhibition neural networks depends on the local connection topology.
For various network topologies, we calculate the critical time delay for
the onset of oscillation in continuous-time networks and present
analytic phase diagrams characterizing the dynamics of discrete-time
networks.
1 INTRODUCTION
Mutual inhibition in an array of neurons is a common feature of sensory systems
including vision, olfaction, and audition in organisms ranging from invertebrates to man.
A well-studied instance of this configuration is lateral inhibition between neighboring
photosensitive neurons in the retina (Dowling, 1987). Inhibition serves in this case to
enhance the perception of edges and to broaden the dynamic range by setting a local
reference point for measuring intensity variations. Lateral inhibition thus constitutes the
first stage of visual information processing. Many artificial vision systems also take
advantage of the computational power of lateral inhibition by directly wiring inhibition
into the photodetecting electronic hardware (Mead, 1989).
Lateral inhibition may create extensive feedback paths, leading to network-wide collective
oscillations. Sustained oscillations arising from lateral inhibition have been observed in
biological visual systems, specifically in the compound eye of the horseshoe crab
Limulus (Barlow and Fraioli, 1978; Coleman and Renninger, 1978), as well as in
artificial vision systems, for instance plaguing an early version of the electronic retina
chip built by Mead et al. (Wyatt and Standley, 1988; Mead, 1989).
In this paper we study the dynamics of simple neural network models of lateral inhibition
in a variety of two-dimensional connection schemes. The lattice structures we study are
shown in Fig. 1. Two-dimensional lattices are of particular importance to artificial
vision systems because they allow an efficient mapping of an image onto a network and
because they are well-suited for implementation in VLSI circuitry. We show that the
98
Connection Topology and Dynamics in Lateral Inhibition Networks
stability of these networks depends sensitively on such design considerations as local
connection topology, neuron self-coupling, the steepness or gain of the neuron transfer
function, and details of the network dynamics such as connection delays for continuous-time dynamics or update rule for discrete-time dynamics.
Figure 1: Connection schemes for two-dimensional lateral inhibition networks
considered in this paper: (a) nearest-neighbor connections on a square lattice; (b)
nearest-neighbor connections on a triangular lattice; (c) 8-neighbor connections
on a square lattice; and (d) 12-neighbor connections on a square lattice.
The paper is organized as follows. Section 2 introduces the dynamical equations
describing continuous-time and discrete-time lateral inhibition networks . Section 3
discusses the relationship between lattice topology and critical time delay for the onset of
oscillation in the continuous-time case. Section 4 presents analytic phase diagrams
characterizing the dynamics of discrete-time lateral inhibition networks as neuron gain,
neuron self-coupling, and lattice structure are varied. Our conclusions are presented in
Section 5.
2 NETWORK DYNAMICS
We begin by considering a general neural network model defined by the set of electronic
circuit equations

  C_i du_i(t')/dt' = -u_i(t')/R_i + Σ_j T_{ij} f_j(u_j(t' - τ_{ij})) + I_i,   i = 1, ..., N,   (1)

where u_i is the voltage, C_i the capacitance, and R_i^{-1} = Σ_j |T_{ij}| the total conductance at
the input of neuron i. Input to the network is through the applied currents I_i. The
nonlinear transfer function f_i is taken to be sigmoidal with odd symmetry and maximum
slope at the origin. A time delay τ_{ij} in the communication from neuron i to neuron j
has been explicitly included. Such a delay could arise from the finite operating speed of
the elements (neurons or amplifiers) or from the finite propagation speed of the
interconnections. For the case of lateral inhibition networks with self-coupling, the
connection matrix is given by
  T_{ij} = {  r,   for i = j
             -1,   for i, j connected neighbors
              0,   otherwise },   (2)

which makes R_i^{-1} = |r| + z for all i, where z is the number of connected neighbors.
For simplicity, we take all neurons to have the same delay and characteristic relaxation
time (τ_{ij} = t_delay and R_i C_i = t_relax for all i, j) and identical transfer functions. With these
assumptions, Eq. (1) can be rescaled and written in terms of the neuron outputs x_i(t) as

  dx_i(t)/dt = -x_i(t) + F( Σ_j T_{ij} x_j(t - τ) + I_i ),   i = 1, ..., N,   (3)

where the odd, sigmoidal function F now appears outside the sum. The function F is
characterized by a maximum slope β (> 0), and its saturation amplitude can be set to ±1
without loss of generality. The commonly used form F(h) = tanh(βh) satisfies these
requirements; we will continue to use F to emphasize generality. As a result of
rescaling, the delay time τ is now measured in units of network relaxation time (i.e.
τ = t_delay/t_relax), and the connection matrix is normalized such that Σ_j |T_{ij}| = 1 for
all i. Stability of Eq. (3) against coherent oscillation will be discussed in Section 3.
The discrete-time iterated map,

  x_i(t+1) = F( Σ_j T_{ij} x_j(t) + I_i ),   i = 1, ..., N,   (4)

with parallel updating of neuron states x_i(t), corresponds to the long-delay limit of Eq.
(3) (care must be taken in considering this limit; not all aspects of the delay system carry
over to the map (Mallet-Paret and Nussbaum, 1986)).
is particularly useful for implementing fast, parallel networks using conventional
computer clocking techniques. The speed advantage of parallel dynamics, however, comes
at a price: the parallel-update network may oscillate even when the corresponding
sequential update network is stable. Section 4 gives phase diagrams based on global
stability analysis which explicitly define the oscillation-free operating region of Eq. (4)
and its generalization to a multistep updating rule.
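As a concrete illustration of the parallel-update rule, Eq. (4), the following sketch iterates the map on an n × n square lattice with nearest-neighbor inhibition, periodic boundaries, and the normalization Σ_j |T_{ij}| = 1 used above; the lattice size, gain, and random seed are arbitrary choices.

```python
# A simulation sketch of the parallel-update map, Eq. (4): nearest-neighbor
# inhibition on a square lattice with periodic boundaries, self-connection r,
# and the normalization sum_j |T_ij| = 1 (so |r| + 4 divides each entry).

import numpy as np

def step(x, r, beta, I=0.0):
    neigh = (np.roll(x, 1, 0) + np.roll(x, -1, 0) +
             np.roll(x, 1, 1) + np.roll(x, -1, 1))
    h = (r * x - neigh) / (abs(r) + 4.0)   # (T x)_i for the n.n. square lattice
    return np.tanh(beta * (h + I))         # F with maximum slope beta

rng = np.random.default_rng(0)
x = 0.1 * rng.standard_normal((16, 16))
for _ in range(200):
    x = step(x, r=0.0, beta=2.0)
print(float(np.abs(x).max()))   # inspect settled activity; vary r and beta
```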
3 STABILITY OF LATTICES WITH DELAYED INHIBITION
In the absence of delay (τ = 0) the continuous-time lateral inhibition network, Eq. (3),
always converges to a fixed point attractor. This follows from the famous stability
criterion based on a Liapunov (or "energy") function (Cohen and Grossberg, 1983;
Hopfield, 1984), and relies on the symmetry of the lateral inhibitory connections (i.e.
T_{ij} = T_{ji} for all connection schemes in Fig. 1). This guarantee of convergence does not
hold for nonzero delay, however, and it is known that adding delay can induce sustained,
coherent oscillation in a variety of symmetrically connected network configurations
(Marcus and Westervelt, 1989a). Previously we have shown that certain delay networks
of the form of Eq. (3), including lateral inhibition networks, will oscillate coherently,
that is with all neurons oscillating in phase, for sufficiently large delay. As the delay is
reduced, however, the oscillatory mode becomes unstable, leaving only fixed point
attractors. A critical value of delay τ_crit below which sustained oscillation vanishes for
any value of neuron gain β is given by

  τ_crit = -ln(1 + λ_max/λ_min)   (0 < λ_max < -λ_min)   (5)

where λ_max and λ_min are the extremal eigenvalues of the connection matrix T_{ij}. The
analysis leading to (5) is based on a local stability analysis of the coherent oscillatory
mode. Though this local analysis lacks the rigor of a global analysis (which can be done
for τ = 0 and for the discrete-time case, Eq. (4)), the result agrees well with experiments
and numerical simulations (Marcus and Westervelt, 1989a).
Connection Topology and Dynamics in Lateral Inhibition Networks
It is straightforward to find the spectrum of eigenvalues for the lattices in Fig. 1.
Assuming periodic boundary conditions, one can expand the eigenvalue equation Tx = A x
in terms of periodic functions x) = Xo exp(i q. Rj ) ,where Rj is the 2D vector position of
neuron j and q is the reciprocal lattice vector characterizing a particular eigenmode. In
the large network limit, this expansion leads to the following results for the square and
triangular lattices with nearest neighbor connections and self-connection r [see next
section for a table of eigenvalues]:
'rcrit
'rcrit
~ In( 1/2 - 2/ r )
(-4<r<0)
[n.n. square lattice, Fig. l(a)] ,
~ In[(r- 6)/(2r- 3)] (-3 < r< 3/2) [n.n. triangular lattice, Fig. 1(b)].
(6a)
(6b)
Curves showing τ_crit as a function of self-connection r are given in Fig. 2. These
reveal the surprising result that the triangular lattice is much more prone to delay-induced
oscillation than the square lattice. For instance, with no self connection (r = 0), the
square lattice does not show sustained oscillation for any finite delay, while the triangular
lattice oscillates for τ > ln 2 ≈ 0.693.
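The eigenvalue expressions behind Eq. (6) can be checked numerically; this sketch evaluates Eq. (5) with the extremal eigenvalues of the two nearest-neighbor lattices (the eigenvalue formulas are taken from the table in Section 4).

```python
# A numerical check sketch of Eqs. (5)-(6), using the extremal eigenvalues of
# the normalized connection matrix from the table in Section 4.

import math

def tau_crit(lam_max, lam_min):              # Eq. (5), 0 < lam_max < -lam_min
    return -math.log(1.0 + lam_max / lam_min)

def square_nn(r):                            # Fig. 1(a)
    return (r + 4) / (abs(r) + 4), (r - 4) / (abs(r) + 4)

def triangle_nn(r):                          # Fig. 1(b)
    return (r + 3) / (abs(r) + 6), (r - 6) / (abs(r) + 6)

print(tau_crit(*triangle_nn(0.0)))           # ln 2 ~ 0.693, matching Eq. (6b)
print(tau_crit(*square_nn(-2.0)),            # matches ln(1/2 - 2/r) at r = -2
      math.log(0.5 - 2.0 / -2.0))
```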
[Figure 2 plot omitted: curves of τ_crit versus r for the two lattices.]
Figure 2: Critical delay τ_crit as a function of self-connection r, from Eq. (6).
Note that for r = 0 only the triangular lattice oscillates at finite delay. The
analysis does not apply at exactly τ = 0, where both networks are stable for all
values of r.
The important difference between these two lattices, and the quality which accounts for
their dissimilar stability properties, is not simply the number of neighbors, but is the
presence of frustration in the triangular lattice but not in the square lattice. Lateral
inhibition, like antiferromagnetism, forms closed loops in the triangular lattice which do
not allow all of the connections to be satisfied by any arrangement of neuron states. In
contrast, lateral inhibition on the square lattice is not frustrated, and is, in fact, exactly
equivalent to lateral excitation via a gauge transformation. We note that a similar
situation exists in 2D magnetic models: while models of 2D ferromagnetism on square
and triangular lattices behave nearly identically (both are nonfrustrated), the corresponding
2D antiferromagnets are quite different, due to the presence of frustration in the triangular
lattice, but not the square lattice (Wannier, 1950).
4 LATTICES WITH ITERATED-MAP DYNAMICS
Next we consider lateral inhibition networks with discrete-time dynamics where all neuron
states are updated in parallel. The standard parallel dynamics formulation was given above
as Eq. (4), but here we will consider a generalized updating rule which offers some
important practical advantages. The generalized system we consider updates the neuron
states based on an average over M previous time steps, rather than just using a single
previous state to generate the next. This multistep rule is somewhat like including time
delay, but as we will see, increasing M actually makes the system more stable compared
to standard parallel updating. This update rule also differs from the delay-differential
system in permitting a rigorous global stability analysis. The dynamical system we
consider is defined by the following set of coupled iterated maps:

  x_i(t+1) = F( Σ_j T_{ij} z_j(t) + I_i ),   z_j(t) = M^{-1} Σ_{τ=0}^{M-1} x_j(t - τ),   (7)

where i, j = 1, ..., N and M ∈ {1, 2, 3, ...}. The standard parallel updating rule, Eq. (4), is
recovered by setting M = 1.
A global analysis of the dynamics of Eq. (7) for any symmetric T_{ij} is given in (Marcus
and Westervelt, 1990), and for M = 1 in (Marcus and Westervelt, 1989b). It is found that
for any M, if all eigenvalues λ satisfy β|λ| < 1 then there is a single attractor which
depends only on the inputs I_i. For I_i = 0, this attractor is the origin, i.e. all neurons at
zero output. Whenever β|λ| > 1 for one or more eigenvalues, multiple fixed points as
well as periodic attractors may exist. There is, in addition, a remarkably simple global
stability criterion associated with Eq. (7): satisfying the condition 1/β > -λ_min(T)/M
insures that no periodic attractors exist, though there may be a multiplicity of fixed point
attractors. As in the previous section, λ_min is the most negative eigenvalue of T_{ij}. If
T_{ij} has no negative eigenvalues, then λ_min is the smallest positive eigenvalue, and the
stability criterion is satisfied trivially since β is defined to be positive.
These stability results may be used to compute analytic phase diagrams for the various
connection schemes shown in Fig. 1 and defined in Eq. (3). The extremal eigenvalues of
Tij are calculated using the Fourier expansion described above. In the limit of large
lattice size and assuming periodic boundary conditions, we find the following:
             square n.n.       triangle n.n.     square 8-n.       square 12-n.
  λ_max:   (r+4)/(|r|+4)   (r+3)/(|r|+6)   (r+4)/(|r|+8)   (r+13/3)/(|r|+12)
  λ_min:   (r-4)/(|r|+4)   (r-6)/(|r|+6)   (r-8)/(|r|+8)   (r-12)/(|r|+12)
The resulting phase diagrams characterizing regions with different dynamic properties are
shown in Fig. 3. The four regions indicated in the diagrams are characterized as follows:
(1) orig: low gain regime where a unique fixed point attractor exists (that attractor is the
origin for I_i = 0); (2) fp: for some inputs I_i multiple fixed point attractors may exist,
each with an attracting basin, but no oscillatory attractors exist in this region (i.e. no
attractors with period >1); (3) osc: at most one fixed point attractor, but one or more
oscillatory modes also may exist; (4) fp + osc: multiple fixed points as well as
oscillatory attractors may exist.
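These boundaries can be turned into a small classifier for the (β, r) plane; note that the multistep no-oscillation condition used below, β(-λ_min) < M, is our reading of the garbled criterion quoted earlier and should be treated as an assumption.

```python
# A sketch classifying the (beta, r) regions of Fig. 3: beta*lam_max > 1 admits
# multiple fixed points; beta*(-lam_min) > M admits periodic attractors (the
# M-dependence is our reconstruction of the criterion, i.e. an assumption).

def region(beta, lam_max, lam_min, M=1):
    many_fp = beta * lam_max > 1.0
    osc = beta * (-lam_min) > float(M)
    if not many_fp and not osc:
        return "orig"
    if many_fp and not osc:
        return "fp"
    if osc and not many_fp:
        return "osc"
    return "fp+osc"

# n.n. square lattice at r = 0: lam_max = 1, lam_min = -1 (table above).
print(region(beta=2.0, lam_max=1.0, lam_min=-1.0, M=1))   # fp+osc
print(region(beta=2.0, lam_max=1.0, lam_min=-1.0, M=3))   # fp
```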
[Figure 3 phase-diagram panels (a)-(f) omitted: only axis residue (gain β horizontal, self-connection r vertical, with region labels orig, fp, osc, fp+osc) survives the extraction.]
Figure 3: Phase diagrams based on global analysis for lateral inhibition networks with discrete-time parallel
dynamics [Eq. (7)] as a function of neuron gain β and self-connection r. Regions orig, fp, osc, and
fp+osc are defined in the text. (a) Nearest-neighbor connections on a square lattice and single-step updating (M=1);
(b) nearest-neighbor connections on a triangular lattice, M=1; (c) 8-neighbor connections
on a square lattice, M=1; (d) 12-neighbor connections on a square lattice, M=1; (e) nearest-neighbor connections on a
square lattice, M=3; (f) nearest-neighbor connections on a triangular lattice, M=3.
5 CONCLUSIONS
We have shown analytically how the dynamics of two-dimensional neural network models
of lateral inhibition depends on both single-neuron properties (such as the slope of the
sigmoidal transfer function, delayed response, and the strength of self-connection) and
also on the topological properties of the network.
The design rules implied by the analysis are in some instances what would be expected
intuitively. For example, the phase diagrams in Fig. 3 show that in order to eliminate
oscillations one can either include a positive self-connection term or decrease the gain of
the neuron. It is also not surprising that reducing the time delay in a delay-differential
system eliminates oscillation. Less intuitive is the observation that for discrete-time
dynamics using a multistep update rule greatly expands the region of oscillation-free
operation (compare, for example, Figs. 3(a) and 3(e)).
that seems quite counterintuitive is the dramatic effect of connection topology, which
persists even in the limit of large lattice size. This point was illustrated in a comparison
of networks with delayed inhibition on square and triangular lattices, where it was found
that in the absence of self-connection, only the triangular lattices will show sustained
oscillation.
Finally, we note that it is not clear to us how to generalize our results to other network
models, for example to models with asymmetric connections which allow for directionselective motion detection. Such questions remain interesting challenges for future work.
Acknowledgments
We thank Bob Meade and Cornelia Kappler for informative discussions. One of us
(C.M.M.) acknowledges support as an IBM Postdoctoral Fellow, and one (F.R.W.) from
the Army Research Office as a JSEP Graduate Fellow. This work was supported in part
by ONR contract N00014-89-J-1592, JSEP contract N00014-89-J-1D23, and DARPA
contract AFOSR-89-0506.
References
Barlow, R. B. and A. J. Fraioli (1978), J. Gen. Physiol., 71, 699.
Cohen, M. A., and S. Grossberg (1983), IEEE Trans. SMC-13, 815.
Coleman, B. D. and G.H. Renninger (1978), Math. Biosc. 38, 123.
Dowling, J. E. (1987), The Retina: An Approachable Part of the Brain (Harvard
University Press, Cambridge, MA).
Hopfield, J. J. (1984), Proc. Nat. Acad. Sci. USA 81, 3008.
Mallet-Paret, J. and R. D. Nussbaum (1986) in Chaotic Dynamics and Fractals, edited
by M. F. Barnsley and S. G. Demko, (Academic Press, Orlando) p. 263.
Marcus, C. M. and R. M. Westervelt (1989a), Phys. Rev. A 39, 347.
Marcus, C. M. and R. M. Westervelt (1989b), Phys. Rev. A 40, 501.
Marcus, C. M. and R. M. Westervelt (1990), Phys. Rev. A 42, 2410.
Mead, Carver A. (1989), Analog VLSI and Neural Systems (Addison-Wesley, Reading,
MA).
Wyatt, Jr., J. L., and D. L. Standley (1988), in Neural Information Processing Systems,
Denver CO, 1987, edited by D. Z. Anderson (AIP, New York), p. 860.
Wannier, G. M. (1950), Phys. Rev. 79, 357.
2,713 | 3,460 | Learning with Consistency between Inductive
Inductive Functions and Kernels
Haixuan Yang^{1,2}   Irwin King^1   Michael R. Lyu^1
^1 Department of Computer Science & Engineering, The Chinese University of Hong Kong, {hxyang,king,lyu}@cse.cuhk.edu.hk
^2 Department of Computer Science, Royal Holloway University of London, [email protected]
Abstract
Regularized Least Squares (RLS) algorithms have the ability to avoid over-fitting
problems and to express solutions as kernel expansions. However, we observe
that the current RLS algorithms cannot provide a satisfactory interpretation even
on the penalty of a constant function. Based on the intuition that a good kernel-based inductive function should be consistent with both the data and the kernel, a
novel learning scheme is proposed. The advantages of this scheme lie in its corresponding Representer Theorem, its strong interpretation ability about what kind
of functions should not be penalized, and its promising accuracy improvements
shown in a number of experiments. Furthermore, we provide a detailed technical description about heat kernels, which serves as an example for the readers to
apply similar techniques for other kernels. Our work provides a preliminary step
in a new direction to explore the varying consistency between inductive functions
and kernels under various distributions.
1 Introduction
Regularized Least Squares (RLS) algorithms have been drawing people's attention since they were
proposed due to their ability to avoid over-fitting problems and to express solutions as kernel expansions in terms of the training data [4, 9, 12, 13]. Various modifications of RLS are made to
improve its performance either from the viewpoint of manifold [1] or in a more generalized form
[7, 11]. However, despite these modifications, problems still remain. We observe that the previous
RLS-related work has the following problem:
Over Penalization. For a constant function f = c, a nonzero term ||f||_K is penalized in both
RLS and LapRLS [1]. As a result, for a distribution generated by a nonzero constant function,
the resulting regression function by both RLS and LapRLS is not a constant, as illustrated in the left
diagram in Fig. 1. For such situations, there is an over-penalization.
In this work, we aim to provide a new viewpoint for supervised or semi-supervised learning problems. By such a viewpoint we can provide a general condition under which constant functions
should not be penalized. The basic idea is that, if a learning algorithm can learn an inductive function f(x) from examples generated by a joint probability distribution P on X × R, then the learned
function f(x) and the marginal P_X represent a new distribution on X × R, from which there is a
re-learned function r(x). The re-learned function should be consistent with the learned function in
the sense that the expected difference on distribution P_X is small. Because the re-learned function
depends on the underlying kernel, the difference f(x) − r(x) depends on f(x) and the kernel, and
from this point of view, we name this work.
[Figure 1 panels: left ("RLS"): the ideal function, labeled data, and RLS fits for λ = 0.1, 0.01, 0.005, 0.
Middle ("The Re-learned Function and the Residual"): f(x), r(x), and f(x) − r(x).
Right ("RLS vs PRLS"): RLS with λ = 0.005 against PRLS with λ = 1000, 1, 0.001, 0.]
Figure 1: Illustration for over penalization. Left diagram: The training set contains 20 points, whose
x is randomly drawn from the interval [0, 1], whereas the test set contains another 20 points, and
y is generated by 1 + 0.005ε, ε ∼ N(0, 1). The over-penalized constant functions in the term
||f||_K cause the phenomenon that smaller λ can achieve better results. On the other hand, the over-fitting phenomenon when λ = 0 suggests the necessity of the regularization term. Based on these
observations, an appropriate penalization on a function is expected. Middle diagram: r(x) is very
smooth, and f(x) − r(x) remains the uneven part of f(x); therefore f(x) − r(x) should be penalized,
while f is over-penalized in ||f||_K. Right diagram: the proposed model has a stable property so that
a large variation of λ results in small changes of the curves, suggesting a right way of penalizing
functions.
2 Background
The RKHS Theory enables us to express solutions of RLS as kernel expansions in terms of the
training data. Here we give a brief description of the concepts. For a complete discussion, see [2].
Let X be a compact domain or manifold, ν be a Borel measure on X, and K : X × X → R be
a Mercer kernel; then there is an associated Hilbert space (the RKHS) H_K of functions X → R with
the corresponding norm ||·||_K. H_K satisfies the reproducing property, i.e., for all f ∈ H_K,
f(x) = ⟨K_x, f⟩, where K_x is the function K(x, ·). Moreover, an operator L_K can be defined on
H_K as: (L_K f)(x) = ∫_X f(y) K(x, y) dν(y), where L²_ν(X) is the Hilbert space of square integrable
functions on X with the scalar product ⟨f, g⟩_ν = ∫_X f(x) g(x) dν(x).
Given a Mercer kernel and a set of labeled examples (xi , yi ) (i = 1, ..., l), there are two popular
inductive learning algorithms: RLS [12, 13] and the Nadaraya-Watson Formula [5, 8, 14]. By the
standard Tikhonov regularization, RLS is a special case of the following functional extreme problem:
  f* = arg min_{f ∈ H_K} (1/l) Σ_{i=1}^{l} V(x_i, y_i, f) + λ ||f||²_K,   (1)

where V is some loss function.
The classical Representer Theorem states that the solution to this minimization problem exists in $\mathcal{H}_K$ and can be written as
$$f^*(x) = \sum_{i=1}^{l} \alpha_i K(x_i, x). \qquad (2)$$
This Representer Theorem is general in that it plays an important role both in RLS, where $V(x, y, f) = (y - f(x))^2$, and in the SVM, where $V(x, y, f) = \max(0, 1 - yf(x))$.
The Nadaraya-Watson formula is based on local weighted averaging, and it comes in closed form:
$$r(x) = \sum_{i=1}^{l} y_i K(x, x_i) \Big/ \sum_{i=1}^{l} K(x, x_i). \qquad (3)$$
The formula has a similar appearance to Eq. (2), but it plays an important role in this paper because it can be written in an integral form that makes our idea technically feasible, as follows. Let p(x) be a probability density function over X, P(x) be the corresponding cumulative distribution function, and f(x) be an inductive function. We observe that, if $(x_i, f(x_i))$ $(i = 1, 2, \ldots, l)$ are sampled from the function y = f(x), then the re-learned function can be expressed as
$$r(x) = \lim_{l \to \infty} \frac{\sum_{i=1}^{l} f(x_i)K(x, x_i)}{\sum_{i=1}^{l} K(x, x_i)} = \frac{\int_X f(\xi)K(x, \xi)\,dP(\xi)}{\int_X K(x, \xi)\,dP(\xi)} = \frac{L_K(f)}{\int_X K(x, \xi)\,dP(\xi)}, \qquad (4)$$
based on f(x) and P(x). From this form, we make two points: (1) If r(x) = f(x), then f(x) is completely predicted by itself through the Nadaraya-Watson formula, and so f(x) is considered to be completely consistent with the kernel K(x, y); if $r(x) \neq f(x)$, then the difference $\|f(x) - r(x)\|_K$ measures how badly f(x) is inconsistent with the kernel K(x, y). (2) Intuitively, r(x) can also be understood as the smoothed version of f(x) under the kernel K. Consequently, $f(x) - r(x)$ represents the intrinsically uneven part of f(x), which we will penalize. This intuition is illustrated in the middle diagram of Fig. 1.
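To make this concrete, the following is a minimal sketch (our own illustration, not code from the paper; all variable names are ours) of computing the Nadaraya-Watson re-learned function r(x) from samples of an inductive function and inspecting the residual f(x) - r(x):

```python
import numpy as np

def heat_kernel(x, y, t=0.05):
    """Gaussian RBF heat kernel K_t(x, y) on the real line (the form used in Sec. 4)."""
    return np.exp(-(x - y) ** 2 / (4.0 * t))

def relearn(x, xs, fs, t=0.05):
    """Nadaraya-Watson re-learned function r(x), Eqs. (3)-(4)."""
    w = heat_kernel(x, xs, t)
    return np.dot(w, fs) / np.sum(w)

xs = np.linspace(0.0, 1.0, 200)          # sample locations x_i
f_vals = np.sin(2 * np.pi * xs) + 1.0    # samples f(x_i) of an inductive function

r_vals = np.array([relearn(x, xs, f_vals) for x in xs])
print("max |f - r|:", np.abs(f_vals - r_vals).max())   # the 'uneven' part of f

const = np.ones_like(xs)                 # a constant function is reproduced exactly
r_const = np.array([relearn(x, xs, const) for x in xs])
print("max |c - r(c)|:", np.abs(const - r_const).max())  # zero: never penalized
```

Note how the normalized weighted average returns any constant exactly, which previews why constants lie in the null space of the penalty below.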
Throughout this paper, we assume that $\int_X K(x,\xi)\,dP(\xi)$ is a constant, and for simplicity all kernels are normalized by $K / \int_X K(x,\xi)\,dP(\xi)$, so that $r(x) = L_K(f)$. Moreover, we assume that X is compact and that the measure ν is specified as P(x).
3 Partially-penalized Regularization
For a given kernel K and an inductive function f, $L_K(f)$ is the prediction function produced by K through the Nadaraya-Watson formula. Based on Eq. (1), penalizing the inconsistent part $f(x) - L_K(f)$ leads to the following partially-penalized regularization problem:
$$f^* = \arg\min_{f \in \mathcal{H}_K} \frac{1}{l}\sum_{i=1}^{l} V(x_i, y_i, f) + \lambda\|f - L_K(f)\|_K^2. \qquad (5)$$
To obtain a Representer Theorem, we need one assumption.
Assumption 1. Let $f_1, f_2 \in \mathcal{H}_K$. If $\langle f_1, f_2\rangle_K = 0$, then $\|f_1 - L_K(f_1) + f_2 - L_K(f_2)\|_K^2 = \|f_1 - L_K(f_1)\|_K^2 + \|f_2 - L_K(f_2)\|_K^2$.
It is well known that the operator $L_K$ is compact, self-adjoint, and positive with respect to $L^2_\nu(X)$, and by the Spectral Theorem [2, 3] its eigenfunctions $e_1(x), e_2(x), \ldots$ form an orthogonal basis of $L^2_\nu(X)$; the corresponding eigenvalues $\lambda_1 \geq \lambda_2 \geq \cdots$ are either finitely many and nonzero, or infinitely many, in which case $\lambda_k \to 0$. Let $f_1 = \sum_i a_i e_i(x)$ and $f_2 = \sum_i b_i e_i(x)$. Then
$$f_1 - L_K(f_1) = \sum_i a_i e_i(x) - L_K\Big(\sum_i a_i e_i(x)\Big) = \sum_i a_i e_i(x) - \sum_i \lambda_i a_i e_i(x) = \sum_i (1-\lambda_i) a_i e_i(x),$$
and similarly $f_2 - L_K(f_2) = \sum_i (1-\lambda_i) b_i e_i(x)$. By the discussion in [1], we have $\langle e_i, e_j\rangle_\nu = 0$ if $i \neq j$ and $\langle e_i, e_i\rangle_\nu = 1$; $\langle e_i, e_j\rangle_K = 0$ if $i \neq j$, and $\langle e_i, e_i\rangle_K = \frac{1}{\lambda_i}$. If we consider the situation where $a_i, b_i \geq 0$ for all $i \geq 1$, then $\langle f_1, f_2\rangle_K = 0$ implies that $a_i b_i = 0$ for all $i \geq 1$, and consequently
$$\langle f_1 - L_K(f_1), f_2 - L_K(f_2)\rangle_K = \sum_i (1-\lambda_i)^2 a_i b_i \langle e_i(x), e_i(x)\rangle_K = 0.$$
Therefore, under some constraints, this assumption holds as a fact. Under this assumption, we have a Representer Theorem.
Theorem 2. Let $\phi_j(x)$ be a basis of the null space $H_0$ of the operator $I - L_K$, i.e., $H_0 = \{f \in \mathcal{H}_K \mid f - L_K(f) = 0\}$. Under Assumption 1, the minimizer of the optimization problem in Eq. (5) is
$$f^*(x) = \sum_{j=1}^{o} \beta_j \phi_j(x) + \sum_{i=1}^{l} \alpha_i K(x_i, x). \qquad (6)$$
Proof of the Representer Theorem. Any function $f \in \mathcal{H}_K$ can be uniquely decomposed into a component $f_\parallel$ in the linear subspace spanned by the kernel functions $\{K(x_i,\cdot)\}_{i=1}^{l}$ and a component $f_\perp$ orthogonal to it. Thus, $f = f_\parallel + f_\perp = \sum_{i=1}^{l}\alpha_i K(x_i,\cdot) + f_\perp$. By the reproducing property and the fact that $\langle f_\perp, K(x_i,\cdot)\rangle = 0$ for $1 \leq i \leq l$, we have
$$f(x_j) = \langle f, K(x_j,\cdot)\rangle = \Big\langle \sum_{i=1}^{l}\alpha_i K(x_i,\cdot), K(x_j,\cdot)\Big\rangle + \langle f_\perp, K(x_j,\cdot)\rangle = \Big\langle \sum_{i=1}^{l}\alpha_i K(x_i,\cdot), K(x_j,\cdot)\Big\rangle.$$
Thus the empirical terms involving the loss function in Eq. (5) depend only on the values of the coefficients $\{\alpha_i\}_{i=1}^{l}$ and the Gram matrix of the kernel function. By Assumption 1, we have
$$\|f - L_K(f)\|_K^2 = \Big\|\sum_{i=1}^{l}\alpha_i K(x_i,\cdot) - L_K\Big(\sum_{i=1}^{l}\alpha_i K(x_i,\cdot)\Big)\Big\|_K^2 + \|f_\perp - L_K(f_\perp)\|_K^2 \geq \Big\|\sum_{i=1}^{l}\alpha_i K(x_i,\cdot) - L_K\Big(\sum_{i=1}^{l}\alpha_i K(x_i,\cdot)\Big)\Big\|_K^2.$$
It follows that the minimizer of Eq. (5) must have $\|f_\perp - L_K(f_\perp)\|_K^2 = 0$, and therefore admits a representation $f^*(x) = f_\perp + \sum_{i=1}^{l}\alpha_i K(x_i, x) = \sum_{j=1}^{o}\beta_j\phi_j(x) + \sum_{i=1}^{l}\alpha_i K(x_i, x)$.
3.1 Partially-penalized Regularized Least Squares (PRLS) Algorithm
In this section we focus on the case $V(x_i, y_i, f) = (y_i - f(x_i))^2$, i.e., the Regularized Least Squares algorithm. In our setting, we aim to solve
$$\min_{f \in \mathcal{H}_K} \frac{1}{l}\sum_{i=1}^{l} (y_i - f(x_i))^2 + \lambda\|f - L_K(f)\|_K^2. \qquad (7)$$
By the Representer Theorem, the solution to Eq. (7) is of the form
$$f^*(x) = \sum_{j=1}^{o} \beta_j \phi_j(x) + \sum_{i=1}^{l} \alpha_i K(x_i, x). \qquad (8)$$
By the proof of Theorem 2, we have $f_\perp = \sum_{j=1}^{o}\beta_j\phi_j(x)$ and $\langle f_\perp, \sum_{i=1}^{l}\alpha_i K(x_i, x)\rangle_K = 0$. By Assumption 1 and the fact that $f_\perp$ belongs to the null space $H_0$ of the operator $I - L_K$, we have
$$\|f^* - L_K(f^*)\|_K^2 = \|f_\perp - L_K(f_\perp)\|_K^2 + \Big\|\sum_{i=1}^{l}\alpha_i K(x_i, x) - L_K\Big(\sum_{i=1}^{l}\alpha_i K(x_i, x)\Big)\Big\|_K^2 = \Big\|\sum_{i=1}^{l}\alpha_i K(x_i, x) - \sum_{i=1}^{l}\alpha_i L_K(K(x_i, x))\Big\|_K^2 = \alpha^T(K - 2K' + K'')\alpha, \qquad (9)$$
where $\alpha = [\alpha_1, \alpha_2, \ldots, \alpha_l]^T$, K is the $l \times l$ Gram matrix $K_{ij} = K(x_i, x_j)$, and $K'$ and $K''$ are the reconstructed $l \times l$ matrices $K'_{ij} = \langle K(x_i, x), L_K(K(x_j, x))\rangle_K$ and $K''_{ij} = \langle L_K(K(x_i, x)), L_K(K(x_j, x))\rangle_K$. Substituting Eq. (8) and Eq. (9) into the problem in Eq. (7), we arrive at the following quadratic objective function of the l-dimensional variable α and the o-dimensional variable $\beta = [\beta_1, \beta_2, \ldots, \beta_o]^T$:
$$[\alpha^*, \beta^*] = \arg\min \frac{1}{l}(Y - K\alpha - \Phi\beta)^T(Y - K\alpha - \Phi\beta) + \lambda\,\alpha^T(K - 2K' + K'')\alpha, \qquad (10)$$
where Φ is an $l \times o$ matrix with $\Phi_{ij} = \phi_j(x_i)$, and $Y = [y_1, y_2, \ldots, y_l]^T$. Taking derivatives with respect to α and β, and using the fact that the derivative of the objective function vanishes at the minimizer, we obtain
$$(\lambda l(K - 2K' + K'') + K^2)\alpha + K\Phi\beta = KY, \qquad \Phi^T(Y - K\alpha - \Phi\beta) = 0. \qquad (11)$$
In the term $\|f - L_K(f)\|$, $L_K(f)$ is subtracted from f, so f is only partially penalized. For this reason, the resulting algorithm is referred to as the Partially-penalized Regularized Least Squares (PRLS) algorithm.
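To illustrate, here is a small sketch (ours, not the authors' implementation) that assembles and solves the linear system (11) on a toy problem, taking the null-space basis to be the constant function $\phi_1 \equiv 1$ and using $K' = K_{2t}$, $K'' = K_{3t}$ for the Gaussian RBF heat kernel, anticipating Eq. (16) in Section 4.1:

```python
import numpy as np

def heat_kernel(X, Y, t):
    """Gaussian RBF heat kernel on R (normalization constants omitted)."""
    return np.exp(-(X[:, None] - Y[None, :]) ** 2 / (4.0 * t))

def prls_fit(x, y, t=0.05, lam=0.01):
    """Solve the PRLS equations (11) with H0 spanned by the constant function."""
    l = len(x)
    K = heat_kernel(x, x, t)
    M = K - 2 * heat_kernel(x, x, 2 * t) + heat_kernel(x, x, 3 * t)  # K - 2K' + K''
    Phi = np.ones((l, 1))                                            # phi_1(x) = 1
    A = np.block([[lam * l * M + K @ K, K @ Phi],
                  [Phi.T @ K, Phi.T @ Phi]])
    b = np.concatenate([K @ y, Phi.T @ y])
    sol = np.linalg.solve(A, b)
    alpha, beta = sol[:l], sol[l:]
    return lambda z: heat_kernel(np.atleast_1d(z), x, t) @ alpha + beta[0]

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, 20)
y = 1.0 + 0.005 * rng.standard_normal(20)   # the toy data of Fig. 1 (left)
f = prls_fit(x, y)
print(f(np.array([0.25, 0.5, 0.75])))       # stays near the constant 1
```

Because the constant basis function is left outside the penalty, the fitted curve can track a constant target regardless of the value of λ, which is the stability seen in the right diagram of Fig. 1.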
3.2 The PLapRLS Algorithm
The idea of the previous section can also be extended to LapRLS in the manifold regularization framework [1]. In the manifold setting, smoothness over the data adjacency graph should be considered, and Eq. (5) is modified as
$$f^* = \arg\min_{f \in \mathcal{H}_K} \frac{1}{l}\sum_{i=1}^{l} V(x_i, y_i, f) + \gamma_A\|f - L_K(f)\|_K^2 + \frac{\gamma_I}{(u+l)^2}\sum_{i,j=1}^{l+u} (f(x_i) - f(x_j))^2 W_{ij}, \qquad (12)$$
where $W_{ij}$ are edge weights in the data adjacency graph. From W, the graph Laplacian L is given by $L = D - W$, where D is the diagonal matrix with $D_{ii} = \sum_{j=1}^{l+u} W_{ij}$. For this optimization problem, the result in Theorem 2 can be modified slightly as:
Theorem 3. Under Assumption 1, the minimizer of the optimization problem in Eq. (12) admits an expansion
$$f^*(x) = \sum_{j=1}^{o} \beta_j \phi_j(x) + \sum_{i=1}^{l+u} \alpha_i K(x_i, x). \qquad (13)$$
Following Eq. (13), we continue to optimize the $(l+u)$-dimensional variable $\alpha = [\alpha_1, \alpha_2, \ldots, \alpha_{l+u}]^T$ and the o-dimensional variable $\beta = [\beta_1, \beta_2, \ldots, \beta_o]^T$. In a similar way to the previous section and to LapRLS in [1], α and β are determined by the following linear system:
$$(KJK + \lambda_1(K - 2K' + K'') + \lambda_2 KLK)\alpha + (KJ\Phi + \lambda_2 KL\Phi)\beta = KJY,$$
$$(\Phi^T JK - \lambda_2 \Phi^T LK)\alpha + (\Phi^T\Phi - \lambda_2 \Phi^T L\Phi)\beta = \Phi^T JY, \qquad (14)$$
where $K, K', K''$ are the $(l+u) \times (l+u)$ Gram matrices over labeled and unlabeled points; Y is an $(l+u)$-dimensional label vector given by $Y = [y_1, y_2, \ldots, y_l, 0, \ldots, 0]$; J is the $(l+u) \times (l+u)$ diagonal matrix $J = \mathrm{diag}(1, 1, \ldots, 1, 0, \ldots, 0)$ with the first l diagonal entries equal to 1 and the rest 0; and Φ is an $(l+u) \times o$ matrix with $\Phi_{ij} = \phi_j(x_i)$.
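A sketch of how this system can be assembled numerically (our own transcription of (14), keeping the signs as printed; W is any symmetric adjacency matrix, e.g. from a k-NN graph):

```python
import numpy as np

def plaprls_solve(K, Kp, Kpp, Phi, W, y_labeled, lam1, lam2):
    """Solve the PLapRLS linear system (14); rows 0..l-1 are the labeled points."""
    n = K.shape[0]                       # n = l + u
    l = len(y_labeled)
    Y = np.concatenate([y_labeled, np.zeros(n - l)])
    J = np.diag(np.r_[np.ones(l), np.zeros(n - l)])
    L = np.diag(W.sum(axis=1)) - W       # graph Laplacian L = D - W
    M = K - 2 * Kp + Kpp                 # K - 2K' + K''
    A = np.block([
        [K @ J @ K + lam1 * M + lam2 * K @ L @ K,
         K @ J @ Phi + lam2 * K @ L @ Phi],
        [Phi.T @ J @ K - lam2 * Phi.T @ L @ K,
         Phi.T @ Phi - lam2 * Phi.T @ L @ Phi],
    ])
    b = np.concatenate([K @ J @ Y, Phi.T @ J @ Y])
    sol = np.linalg.solve(A, b)
    return sol[:n], sol[n:]              # (alpha, beta)
```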
4 Discussions
4.1 Heat Kernels and the Computation of K′ and K″
In this section we illustrate the computation of K′ and K″ in the case of heat kernels. The basic facts about heat kernels are excerpted from [6]; for more material, see [10].
Given a manifold $\mathcal{M}$ and points x and y, the heat kernel $K_t(x, y)$ is a special solution to the heat equation with a special initial condition called the delta function $\delta(x - y)$. More specifically, $\delta(x - y)$ describes a unit heat source at position y with no heat at other positions: $\delta(x - y) = 0$ for $x \neq y$ and $\int_{-\infty}^{+\infty} \delta(x - y)\,dx = 1$. If we let $f_0(x, 0) = \delta(x - y)$, then $K_t(x, y)$ is a solution to the following differential equation on the manifold $\mathcal{M}$:
$$\frac{\partial f}{\partial t} - \mathcal{L}f = 0, \qquad f(x, 0) = f_0(x), \qquad (15)$$
where f(x, t) is the temperature at location x at time t, beginning with an initial distribution $f_0(x)$ at time zero, and $\mathcal{L}$ is the Laplace-Beltrami operator. Equation (15) describes heat flow through a geometric manifold with given initial conditions.
Theorem 4. Let $\mathcal{M}$ be a complete Riemannian manifold. Then there exists a function $K \in C^\infty(\mathbb{R}^+ \times \mathcal{M} \times \mathcal{M})$, called the heat kernel, which satisfies the following properties for all $x, y \in \mathcal{M}$, with $K_t(x, y) = K(t, x, y)$: (1) $K_t(x, y)$ defines a Mercer kernel. (2) $K_t(x, y) = \int_{\mathcal{M}} K_{t-s}(x, z)K_s(z, y)\,dz$ for any $s > 0$. (3) The solution to Eq. (15) is $f(x, t) = \int_{\mathcal{M}} K_t(x, y)f_0(y)\,dy$. (4) $1 = \int_{\mathcal{M}} K_t(x, y)\cdot 1\,dy$. (5) When $\mathcal{M} = \mathbb{R}^m$, $\mathcal{L}f$ simplifies to $\sum_i \frac{\partial^2 f}{\partial x_i^2}$, and the heat kernel takes the Gaussian RBF form $K_t(x, y) = (4\pi t)^{-m/2}\,e^{-\frac{\|x-y\|^2}{4t}}$.
K′ and K″ can be computed as follows:
$$K'_{ij} = \langle K_t(x_i, x), L_K(K_t(x_j, x))\rangle_K \quad \text{(by definition)}$$
$$= L_K(K_t(x_j, x))\big|_{x = x_i} \quad \text{(by the reproducing property of a Mercer kernel)}$$
$$= \int_X K_t(x_j, y)K_t(x_i, y)\,d\nu(y) \quad \text{(by the definition of } L_K\text{)}$$
$$= K_{2t}(x_i, x_j) \quad \text{(by Property 2 in Theorem 4).} \qquad (16)$$
Based on the fact that $L_K$ is self-adjoint, we can similarly derive $K''_{ij} = K_{3t}(x_i, x_j)$. For other kernels, K′ and K″ can also be computed.
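The identity $K'_{ij} = K_{2t}(x_i, x_j)$ is easy to verify numerically on $\mathbb{R}$; a sketch (ours), approximating the integral defining $K'$ by a Riemann sum, with the Lebesgue measure standing in for ν:

```python
import numpy as np

def Kt(x, y, t):
    """Heat kernel on R^1: (4*pi*t)^(-1/2) * exp(-(x - y)^2 / (4t))."""
    return (4 * np.pi * t) ** -0.5 * np.exp(-(x - y) ** 2 / (4 * t))

t, xi, xj = 0.02, 0.3, 0.7
grid = np.linspace(-5.0, 5.0, 200001)       # quadrature nodes approximating R
dy = grid[1] - grid[0]

lhs = np.sum(Kt(xi, grid, t) * Kt(xj, grid, t)) * dy  # integral form of K'_ij
rhs = Kt(xi, xj, 2 * t)                               # K_{2t}(x_i, x_j)
print(lhs, rhs)                                       # agree to quadrature accuracy
```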
4.2 What should not be penalized?
From Theorem 2, we know that the functions in the null space $H_0 = \{f \in \mathcal{H}_K \mid f - L_K(f) = 0\}$ should not be penalized. Although there may be looser assumptions that guarantee the validity of the result in Theorem 2, two assumptions are made in this work: X is compact, and $\int_X K(x,\xi)\,dP(\xi)$ in Eq. (4) is a constant. Next we discuss constant functions and linear functions.
Should constant functions be penalized? Under the two assumptions, a constant function c should not be penalized, because $c = \int_X cK(x,\xi)p(\xi)\,d\xi \big/ \int_X K(x,\xi)p(\xi)\,d\xi$, i.e., $c \in H_0$. For heat kernels, if P(x) is uniformly distributed on $\mathcal{M}$, then by Property 4 in Theorem 4, $\int_X K(x,\xi)\,dP(\xi)$ is a constant, and so c should not be penalized.
For polynomial kernels, the theory cannot guarantee that constant functions should not be penalized, even with a uniform distribution P(x). For example, for the polynomial kernel $xy + 1$ on the interval X = [0, 1] with the uniform distribution on X, $\int_X (xy+1)\,dP(y) = \int_0^1 (xy+1)\,dy = x/2 + 1$, which is not a constant. As a counterexample, we will show in Section 5.3 that not penalizing constant functions with polynomial kernels results in much worse accuracy. The reason for this phenomenon is that constant functions may not be smooth in the feature space produced by the polynomial kernel under some distributions. The reader can deduce an example of p(x) such that $\int_0^1 (xy+1)\,dP(y)$ happens to be a constant.
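This can be checked directly (a small sketch of ours): applying $L_K$ to the constant c = 1 under the polynomial kernel and the uniform distribution returns x/2 + 1 rather than 1, so the constant does not lie in the null space:

```python
import numpy as np

ys = np.linspace(0.0, 1.0, 100001)    # quadrature nodes for P uniform on [0, 1]

def L_K_of_constant(x, c=1.0):
    """(L_K c)(x) = integral of c * (x*y + 1) dP(y) for the kernel x*y + 1."""
    return np.mean(c * (x * ys + 1.0))

for x in (0.0, 0.5, 1.0):
    print(x, L_K_of_constant(x))      # prints ~ x/2 + 1, not the constant 1
```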
Should the linear function $a^T x$ be penalized? In the case where X is a closed ball $B_r$ with radius r, P(x) is uniformly distributed over $B_r$, and K is the Gaussian RBF kernel, $a^T x$ should not be penalized when r is big enough.¹ Since r is big enough, we have $\int_{\mathbb{R}^n} \cdot\,dx \approx \int_{B_r} \cdot\,dx$ and $\int_{B_r} K_t(x, y)\,dy \approx 1$, and so $a^T x = \int_{\mathbb{R}^n} K_t(x, y)\,a^T y\,dy \approx \int_{B_r} K_t(x, y)\,a^T y\,dy \approx L_K(a^T x)$. Consequently $\|a^T x - L_K(a^T x)\|_K$ will be small enough, and so the linear function $a^T x$ need not be penalized. For other kernels, other spaces, or other $P_X$, the conclusion may not hold.
5 Experiments
In this section, we evaluate the proposed algorithms PRLS and PLapRLS on a toy dataset (size: 40), a medium-sized dataset (size: 3,119), and a large-sized dataset (size: 20,000), and provide a counterexample for constant functions on another dataset (size: 9,298). We use Gaussian RBF kernels on the first three datasets, and use polynomial kernels to provide the counterexample on the last dataset. Without any prior knowledge about the data distribution, we assume that the examples are uniformly distributed, and so constant functions are considered to be in $H_0$ for the Gaussian RBF kernel; linear functions are not considered to be in $H_0$, since it is rare for data to be distributed uniformly on a large ball. The data and results for the toy dataset are illustrated in the left and right diagrams of Fig. 1.
5.1 UCI Dataset Isolet: Spoken Letter Recognition
We follow the same semi-supervised setting as in [1] to compare RLS with PRLS, and to compare LapRLS with PLapRLS, on the Isolet database. The dataset contains utterances of 150 subjects who
¹ Note that a subset of $\mathbb{R}^n$ is compact if and only if it is closed and bounded. Since $\mathbb{R}^n$ is not bounded, it is not compact, and so the Representer Theorem cannot be established on it. This is why we cannot treat $\mathbb{R}^n$ directly.
[Figure 2 panels: error rates as a function of the number of labeled speakers, on the unlabeled set (top row) and on the test set (bottom row), comparing RLS with PRLS (left column) and LapRLS with PLapRLS (right column).]
Figure 2: Isolet Experiment
pronounced the name of each letter of the English alphabet twice. The speakers were grouped into
5 sets of 30 speakers each. The data of the first 30 speakers forms a training set of 1,560 examples,
and that of the last 29 speakers forms the test set. The task is to distinguish the first 13 letters from
the last 13. To simulate a real-world situation, we created 30 binary classification problems corresponding to 30 splits of the training data, in each of which all 52 utterances of one speaker were labeled and all the rest were
left unlabeled. All the algorithms use Gaussian RBF kernels. For RLS and LapRLS, the results were obtained with width $\sigma = 10$, $\lambda l = 0.05$, and $\gamma_A l = \gamma_I l/(u+l)^2 = 0.005$. For PRLS and PLapRLS, the results were obtained with width $\sigma = 4$, $\lambda l = 0.01$, and $\gamma_A l = \gamma_I l/(u+l)^2 = 0.01$. In Fig. 2,
we can see that both PRLS and PLapRLS make significant performance improvements over their
corresponding counterparts on both unlabeled data and test set.
5.2 UCI Dataset Letter: Printed Letter Recognition
In Dataset Letter, there are 16 features for each example, and there are 26 classes representing the
upper case printed letters. The first 400 examples were taken to form the training set. The remaining
19,600 examples form the test set. The parameters are set as follows: $\sigma = 1$, $\lambda l = \gamma_A(l+u) = 0.25$, and $\gamma_I l/(u+l)^2 = 0.05$. For each of the four algorithms RLS, PRLS, LapRLS, and PLapRLS, for
each of the 26 one-versus-all binary classification tasks, and for each of 10 runs, two examples for
each class were randomly labeled. For each algorithm, the averages over all the 260 one-versus-all
binary classification error rates on the 398 unlabeled examples and on the test set are listed as follows: (5.79%, 5.23%) for RLS, (5.12%, 4.77%) for PRLS, (0%, 2.96%) for LapRLS, and (0%, 3.15%) for PLapRLS. From the results, we can see that RLS is improved on both
unlabeled examples and test set. The fact that there is no error in the total 260 tasks for LapRLS
and PLapRLS on unlabeled examples suggests that the data is distributed in a curved manifold. On
a curved manifold, the heat kernels do not take the Gaussian RBF form, and so PLapRLS using the
Gaussian RBF form cannot achieve its best. This is the reason why we can observe that PLapRLS
is slightly worse than LapRLS on the test set. This suggests the need for further investigation of heat kernels on manifolds.
5.3 A Counterexample in Handwritten Digit Recognition
Note that polynomial kernels of degree 3 were used on the USPS dataset in [1], with 2 images per class randomly labeled. We follow the same experimental setting as in [1]. For RLS, if we use Eq. (2), the averages of the 45 pairwise binary classification error rates are 8.83% and 8.41% for the 398 unlabeled images and the 8,898 test images, respectively. If constant functions are not penalized, then we should use $f^*(x) = \sum_{i=1}^{l}\alpha_i K(x_i, x) + a$, and the corresponding error rates are 9.75% and 9.09%, respectively. This example shows that leaving constant functions outside the regularization term is dangerous; fortunately, we have a theory to guide this choice in Section 4: if X is compact and $\int_X K(x,\xi)\,dP(\xi)$ in Eq. (4) is a constant, then constant functions should not be penalized.
6 Conclusion
A novel learning scheme is proposed, based on a new viewpoint of penalizing the part of an inductive function that is inconsistent with the kernel. On the theoretical side, we make three claims: (1) on a compact domain or manifold, if the denominator in Eq. (4) is a constant, then there is a new Representer Theorem; (2) the same conditions form a sufficient condition under which constant functions should not be penalized; and (3) under the same conditions, a function belongs to the null space if and only if it should not be penalized. Empirically, we claim that the novel learning scheme achieves accuracy improvements in practical applications.
Acknowledgments
The work described in this paper was supported by two grants from the Research Grants Council of the Hong Kong Special Administrative Region, China (Project No. CUHK4150/07E and Project No. CUHK4235/04E). The first author would like to thank Hao Ma for his helpful suggestions, thank Kun Zhang and Wenye Li for useful discussions, and thank Alberto Paccanaro for his support.
References
[1] Mikhail Belkin, Partha Niyogi, and Vikas Sindhwani. Manifold regularization: A geometric framework for learning from labeled and unlabeled examples. Journal of Machine Learning Research, 7:2399–2434, 2006.
[2] F. Cucker and S. Smale. On the mathematical foundations of learning. Bulletin (New Series) of the American Mathematical Society, 39(1):1–49, 2002.
[3] Lokenath Debnath and Piotr Mikusinski. Introduction to Hilbert Spaces with Applications. Academic Press, San Diego, second edition, 1999.
[4] T. Evgeniou, M. Pontil, and T. Poggio. Regularization networks and support vector machines. Advances in Computational Mathematics, 13:1–50, 2000.
[5] T. Hastie and C. Loader. Local regression: Automatic kernel carpentry. Statistical Science, 8(1):120–129, 1993.
[6] John Lafferty and Guy Lebanon. Diffusion kernels on statistical manifolds. Journal of Machine Learning Research, 6:129–163, 2005.
[7] Wenye Li, Kin-Hong Lee, and Kwong-Sak Leung. Generalized regularized least-squares learning with predefined features in a Hilbert space. In NIPS, 2006.
[8] E. A. Nadaraya. On estimating regression. Theory of Probability and Its Applications, 9(1):141–142, 1964.
[9] R. M. Rifkin and R. A. Lippert. Notes on regularized least-squares. Technical Report 2007-019, Massachusetts Institute of Technology, 2007.
[10] S. Rosenberg. The Laplacian on a Riemannian Manifold. Cambridge University Press, 1997.
[11] Bernhard Schölkopf, Ralf Herbrich, and Alex J. Smola. A generalized representer theorem. In COLT, 2001.
[12] I. Schönberg. Spline functions and the problem of graduation. Proc. Nat. Acad. Sci. USA, 52:947–950, 1964.
[13] A. N. Tikhonov and V. Y. Arsenin. Solutions of Ill-posed Problems. W. H. Winston, 1977.
[14] G. S. Watson. Smooth regression analysis. Sankhyā, Series A, 26:359–372, 1964.
An improved estimator of Variance Explained in the
presence of noise
Ralf M. Haefner*
Laboratory for Sensorimotor Research
National Eye Institute, NIH
Bethesda, MD 20892
[email protected]
Bruce G. Cumming
Laboratory for Sensorimotor Research
National Eye Institute, NIH
Bethesda, MD 20892
[email protected]
Abstract
A crucial part of developing mathematical models of information processing in the
brain is the quantification of their success. One of the most widely-used metrics
yields the percentage of the variance in the data that is explained by the model.
Unfortunately, this metric is biased due to the intrinsic variability in the data.
We derive a simple analytical modification of the traditional formula that significantly improves its accuracy (as measured by bias) with similar or better precision
(as measured by mean-square error) in estimating the true underlying Variance
Explained by the model class. Our estimator advances on previous work by a)
accounting for overfitting due to free model parameters mitigating the need for a
separate validation data set, b) adjusting for the uncertainty in the noise estimate
and c) adding a conditioning term. We apply our new estimator to binocular disparity tuning curves of a set of macaque V1 neurons and find that on a population
level almost all of the variance unexplained by Gabor functions is attributable to
noise.
1 Introduction
Constructing models of biological systems, e.g. in systems neuroscience, mostly aims at providing functional descriptions, not fundamental physical laws. It seems likely that any parametric model of signal processing in single neurons can be ruled out given a sufficient amount of data. Rather than only testing the statistical validity of a particular mathematical formulation against data, e.g. by using a $\chi^2$-test, it is equally important to know how much of the signal, or variance, in the data is explained by the model. This is commonly measured by Variance Explained (VE), the coefficient of determination, or the $r^2$ statistic. A fundamental problem of the traditional estimator for VE is its
bias in the presence of noise in the data. This noise may be due to measurement error or sampling
noise owing to the high intrinsic variability in the underlying data. This is especially important when
trying to model cortical neurons where variability is ubiquitous. Either kind of noise is in principle
unexplainable by the model and hence needs to be accounted for when evaluating the quality of the
model. Since the total variance in the data consists of the true underlying variance plus that due to
noise, the traditional estimator yields a systematic underestimation of the true VE of the model in
the absence of noise [1][2][3].
This has been noted by several authors before us; David & Gallant compute the traditional measure
at several noise levels and extrapolate it to the noise-free condition [1]. This method relies on many
repeats of the same stimulus and is therefore often impractical. Sahani & Linden add an analytical
correction to the traditional formula in order to reduce its bias [2]. A number of subsequent studies
have used their corrections to evaluate their models (e.g. [4][5][6]). We further improve on Sahani
* Corresponding author (ralf.haefner@gmail.com)
& Linden's formula in three ways: 1) most importantly by accounting for the number of parameters
in the model, 2) adding a correction term for the uncertainty in the noise estimation, and 3) including
a conditioning term to improve the performance in the presence of excessive noise. We propose a
principled method to choose the conditioning term in order to selectively minimize either the bias or
the mean-square-error (MSE) of the estimator.
In numerical simulations we find that the analytical correction alone is capable of drastically reducing the bias at moderate and high noise levels while maintaining a mean-square-error about as good
as the traditional formula. Only for very high levels of noise is it advantageous to make use of the
conditioning term. We test the effect of our improved formula on a data set of disparity selective
macaque V1 neurons and find that for many cells noise accounts for most of the unexplained variance. On a population level we find that after adjusting for the noise, Gabor functions can explain
about 98% of the underlying response variance.
2 Derivation of an improved estimator
2.1 Traditional Variance Explained
Given a set of N measurements $d_i$ of a process D and given the model predictions $m_i$, the traditional Variance Explained $\hat{v}$ is computed as the difference between the total variance $\mathrm{var}(d_i)$ and the variance of the residuals of the model, $\mathrm{var}(d_i - m_i)$. It is usually reported as a fraction of the total variance:
$$\hat{v} = \frac{\mathrm{var}(d_i) - \mathrm{var}(d_i - m_i)}{\mathrm{var}(d_i)} = 1 - \frac{\mathrm{var}(d_i - m_i)}{\mathrm{var}(d_i)} = 1 - \frac{\sum_{i=1}^{N}(d_i - m_i)^2}{\sum_{i=1}^{N}(d_i - \bar{d})^2}. \qquad (1)$$
In most cases, the $d_i$ are themselves averages of individual measurements and subject to sampling error. Since the variances of independent random variables add, this measurement noise leads to additive noise terms in both the numerator and the denominator of equation (1). Below we show that as the noise level increases, $\hat{v} \to (n-1)/(N-1)$, with n being the number of model parameters (see equation 8). The consequence is a systematic misestimation of the true Variance Explained (typically an underestimation, since $(n-1)/(N-1)$ is usually smaller than the true VE). The effect of this can be seen in Figure 1 for two example simulations. In each simulation we fit a model to simulated noisy data sampled from a different but known underlying function. This allows us to compare the estimated VE to the true one in the absence of noise. The average bias (estimated minus true VE) of the traditional Variance Explained is shown for 2000 instantiations of each simulation (triangles). As we simulate an increase in sampling noise, the Variance Explained decreases significantly, underestimating the true VE by up to 30% in our examples.
2.2 Noise bias
Let $\bar{d}_i = \frac{1}{R_i}\sum_{j=1}^{R_i} d_{ij}$, where $R_i$ is the number of observations for each variable i. We further assume that the measured $d_{ij}$ are drawn from a Gaussian distribution around the true means $D_i$ with a variance of $R_i\sigma_i^2$; then the $\bar{d}_i$ are drawn from $N[D_i; \sigma_i^2]$. To simplify the presentation we assume that the variables have been transformed to equalize all $\sigma \equiv \sigma_i$ and that $R \equiv R_i$. It follows that $\hat{\sigma}^2 = \frac{1}{RN(R-1)}\sum_{i=1}^{N}\sum_{j=1}^{R}(d_{ij} - \bar{d}_i)^2$ is an estimate of $\sigma^2$ based on measurements with $N_\sigma = N(R-1)$ degrees of freedom. In the terms of Sahani & Linden [2], $\sigma^2$ is the noise power. Our estimator, however, is more direct and accurate, especially for small N and R.
Let $M_i$ be the best fit of a given model class with n parameters to $D_i$. Then the Variance Explained in the absence of noise becomes
$$v_0 = 1 - \frac{\mathrm{var}(M_i - D_i)}{\mathrm{var}(D_i)} = 1 - \frac{\sum_{i=1}^{N}(D_i - M_i)^2}{\sum_{i=1}^{N}(D_i - \bar{D})^2}, \qquad (2)$$
where $\bar{D} = \frac{1}{N}\sum_{i=1}^{N} D_i$. Then $v_0$ is the true value of the Variance Explained that one would like to know: it is based on the best fit of the model class to the underlying data in the absence of any measurement or sampling noise. Of course, $v_0$ is unknown, and the values obtained by (1) are drawn from a probability distribution around the true Variance Explained.
Normalizing both the denominator and the numerator of formula (1) by $\hat{\sigma}^2$ leaves $\hat{v}$ unchanged. However, it becomes clear that the resulting denominator is drawn from a noncentral F-distribution:
$$\frac{1}{N-1}\sum_{i=1}^{N}\frac{(\bar{d}_i - \bar{d})^2}{\hat{\sigma}^2} = \frac{\frac{1}{N-1}\sum_{i=1}^{N}(\bar{d}_i - \bar{d})^2/\sigma^2}{\frac{1}{N_\sigma}\sum_{i=1}^{N}\sum_{j=1}^{R}(d_{ij} - \bar{d}_i)^2/(R\sigma^2)} \sim \frac{\chi^2_{N-1}(\lambda_{DD})/(N-1)}{\chi^2_{N_\sigma}/N_\sigma} \qquad (3)$$
with $N-1$ and $N_\sigma = N(R-1)$ degrees of freedom, noncentrality parameter $\lambda_{DD} = \sum_{i=1}^{N}(D_i - \bar{D})^2/\sigma^2$, and $\bar{d} = \frac{1}{N}\sum_{i=1}^{N}\bar{d}_i$. For $N > 2$ the mean of this distribution is given by
$$E\left[\frac{1}{N-1}\sum_{i=1}^{N}\frac{(\bar{d}_i - \bar{d})^2}{\hat{\sigma}^2}\right] = \frac{N_\sigma(N - 1 + \lambda_{DD})}{(N-1)(N_\sigma - 2)}. \qquad (4)$$
Hence, an unbiased estimator of $\sum_{i=1}^{N}(D_i - \bar{D})^2/\sigma^2 = \lambda_{DD}$ is given by
$$\hat{\lambda}_{DD} = \frac{N_\sigma - 2}{N_\sigma}\sum_{i=1}^{N}\frac{(\bar{d}_i - \bar{d})^2}{\hat{\sigma}^2} - (N-1). \qquad (5)$$
N
?2N ?n (?DD )/(N ? n)
1 X (di ? mi )2
?
N ? n i=1
?2
?2N? /N?
(6)
follows a noncentral F -distribution with N ? n and N? degrees of freedom and the noncentrality
PN
PN
parameter ?DM = i=1 (Di ? Mi )2 /?2 . Hence, an unbiased estimator of i=1 (Di ? Mi )2 /?2 =
?DM is given by
N
?DM =
N? ? 2 X (di ? mi )2
? (N ? n)
N? i=1
?2
(7)
Combining (5) and (7) yields an estimator for $v_0$ whose numerator and denominator are individually unbiased:
$$\hat{v}[v_0] = 1 - \frac{\displaystyle\sum_{i=1}^{N}\left(\frac{\bar{d}_i - m_i}{\hat{\sigma}}\right)^2 - \frac{N_\sigma(N-n)}{N_\sigma - 2}}{\displaystyle\sum_{i=1}^{N}\left(\frac{\bar{d}_i - \bar{d}}{\hat{\sigma}}\right)^2 - \frac{N_\sigma(N-1)}{N_\sigma - 2}}. \qquad (8)$$
is contained in ours as a special case, becoming identical when there is no uncertainty in the noise
estimate (N? ? ?) and testing a model with no free parameters (n = 0). N? ? ? is an excellent
approximation in their case of fitting receptive fields to long series of data, but less so in the case
of fitting tuning curves with a limited number of data points. However, the fact that their noiseterm does not account for overfitting due to free parameters in the model means that their formula
overestimates the true Variance Explained. Hence, it requires a separate validation data set which
might be costly to obtain.
At this point we wish to note that (5), (7) and (8) readily generalize to cases where the noise level
?i and the number of observations Ri on which the means d?i are based (and therefore N?i ) differ
between those data points.
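A direct transcription of the estimator into code may be helpful; the following sketch is ours (not the authors' code) and assumes the equal-σ, equal-R setting, with d an N×R matrix of raw observations and m the model prediction per condition:

```python
import numpy as np

def ve_traditional(d, m):
    """Traditional Variance Explained, Eq. (1), applied to the condition means."""
    dbar = d.mean(axis=1)
    return 1.0 - np.sum((dbar - m) ** 2) / np.sum((dbar - dbar.mean()) ** 2)

def ve_unbiased(d, m, n_params):
    """Noise- and parameter-corrected Variance Explained, Eq. (8)."""
    N, R = d.shape
    dbar = d.mean(axis=1)
    sigma2 = np.sum((d - dbar[:, None]) ** 2) / (R * N * (R - 1))  # noise power
    Nsig = N * (R - 1)                                             # its dof
    num = np.sum((dbar - m) ** 2) / sigma2 - Nsig * (N - n_params) / (Nsig - 2)
    den = np.sum((dbar - dbar.mean()) ** 2) / sigma2 - Nsig * (N - 1) / (Nsig - 2)
    return 1.0 - num / den
```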
2.3 Conditioning term
First it is important to note that while both the numerator and the denominator in formula (8) are now unbiased, the ratio is generally not. In fact, the ratio is not even well-defined for arbitrary measurements, since the denominator can become zero or negative. In practice this is avoided by implicit or explicit selection criteria imposed by the experimenter requiring a minimum SNR in the data before further analysis. An example would be a criterion based on the significance level $p_{ANOVA}$ of the modulation in the data as assessed by a 1-way ANOVA test. (Any criterion can be used in the context of the framework described here, as long as it is used consistently.) The effect of such a criterion is to cut off the lower tail of the distribution from which the denominator is drawn, to exclude zero. This introduces a bias to the denominator, the size of which depends on the amount of noise and the strictness of the criterion used. We recognize that both biases are strongest when the data are such that the ratio is close to singular, and therefore propose an additive conditioning term C in the denominator of (8):
$$\hat{v}(C) = 1 - \left[\sum_{i=1}^{N}\left(\frac{\bar{d}_i - m_i}{\hat{\sigma}}\right)^2 - \frac{N_\sigma(N-n)}{N_\sigma - 2}\right] \Bigg/ \left[\sum_{i=1}^{N}\left(\frac{\bar{d}_i - \bar{d}}{\hat{\sigma}}\right)^2 - \frac{N_\sigma(N-1)}{N_\sigma - 2} + C\right]. \qquad (9)$$
Depending on the application, the optimal C can be chosen either to minimize the mean-square error (MSE) $E[(\hat{v}(C) - v_0)^2]$ or the bias $|E[\hat{v}(C)] - v_0|$ of the estimator. Generally, the optimal levels of conditioning for the two scenarios are different, i.e., unbiasedness comes at the expense of an increased MSE and vice versa. For individual estimates a small bias can be acceptable in order to improve accuracy (and hence minimize MSE). When averaging over a large number of estimates, e.g. from a population of neurons, it becomes important that the estimator is unbiased.
$C = C(N, n, N_\sigma, \lambda_{DM}, \lambda_{DD}; p_{ANOVA})$ is itself a function of a number of variables, only two of which, $\lambda_{DM}$ and $\lambda_{DD}$, are unknown a priori. We approximate them by our estimates from equations (5) and (7). The optimal C can then be determined in each case by a simple minimization across a large number of random samples drawn from the appropriate distributions (compare equations (3) and (6)):
$$C_{bias}: \min_C \big|E[\hat{v}(C)] - (1 - \lambda_{DM}/\lambda_{DD})\big|, \text{ and therefore:} \qquad (10)$$
$$C_{bias}: \min_C \left|E\left[\frac{\chi^2_{N-n}(\lambda_{DM})/\chi^2_{N_\sigma} - (N-n)/(N_\sigma - 2)}{\chi^2_{N-1}(\lambda_{DD})/\chi^2_{N_\sigma} - (N-1)/(N_\sigma - 2) + C/N_\sigma}\right] - \frac{\lambda_{DM}}{\lambda_{DD}}\right| \qquad (11)$$
$$C_{MSE}: \min_C E\left[\left(\frac{\chi^2_{N-n}(\lambda_{DM})/\chi^2_{N_\sigma} - (N-n)/(N_\sigma - 2)}{\chi^2_{N-1}(\lambda_{DD})/\chi^2_{N_\sigma} - (N-1)/(N_\sigma - 2) + C/N_\sigma} - \frac{\lambda_{DM}}{\lambda_{DD}}\right)^2\right] \qquad (12)$$
Note that the $\chi^2_{N_\sigma}$ variables in numerator and denominator, sampling over varying estimates of the underlying noise $\sigma^2$, are shared in both formulas since $\hat{\sigma}^2$ is shared. These two minimization problems can easily be solved by Monte-Carlo sampling of the probability distributions and subsequently finding the minimum of the MSE or the bias, respectively, across all samples.
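A minimal Monte-Carlo search of this kind (our sketch; the grid of candidate C values and the sample size are arbitrary choices) for the MSE criterion of Eq. (12):

```python
import numpy as np

def optimal_C_mse(N, n, Nsig, lam_DM, lam_DD, C_grid, n_samples=20000, seed=0):
    """Return the C in C_grid minimizing the Monte-Carlo estimate of Eq. (12)."""
    rng = np.random.default_rng(seed)
    chi_num = rng.noncentral_chisquare(N - n, lam_DM, n_samples)
    chi_den = rng.noncentral_chisquare(N - 1, lam_DD, n_samples)
    chi_sig = rng.chisquare(Nsig, n_samples)   # one shared noise estimate per draw
    target = lam_DM / lam_DD
    mses = []
    for C in C_grid:
        ratio = (chi_num / chi_sig - (N - n) / (Nsig - 2)) / \
                (chi_den / chi_sig - (N - 1) / (Nsig - 2) + C / Nsig)
        mses.append(np.mean((ratio - target) ** 2))
    return C_grid[int(np.argmin(mses))]
```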
2.4 Application to simulated data
Figure 1 demonstrates the performance of various estimators of VE for three synthetic examples. In the left column we show the results of testing a model consisting of a 3rd-degree polynomial fit to noisy data sampled from a Gaussian distribution around an underlying sine function. Over the domain studied here, the true VE of the model fit to the data in the noiseless condition would be 77%. The center and right columns show the case of a Gabor function fit to noisy data sampled around a difference-of-Gaussians "reality". Here the true VE is 90%. The center column simulates Gaussian noise and the right column Gamma noise (Fano factor of 2).
We confirm that the traditional VE measure (triangles) has an increasingly negative bias with increasing noise level σ. Applying the Sahani-Linden correction (squares) turns this negative bias into a positive one, since the overfitting of noise due to the free parameters in the model is not taken into consideration. This leads to an overestimation of the true VE when the correction is applied to the fitting data instead of to a separate validation data set. Accounting for the number of parameters greatly reduces the bias to close to zero across a large range of noise levels (dots). The bias becomes notable only
Figure 1: Simulation results. Left column: a 3rd-degree polynomial is fit to noisy data drawn from an underlying sine function. Center & right columns: a Gabor function is fit to noisy data around a linear combination of three Gaussians (two "excitatory" and one "inhibitory"). Left & center: Gaussian noise; right: Gamma-distributed noise (Fano factor of 2). First row: data (stars) and model (lines) are shown in the noise-free condition; their true VE is 77% and 90%, respectively. Rows 2-5: bias (defined as estimated minus true VE) and RMSE are shown as a function of noise σ. The traditional estimator is shown by triangles, the Sahani-Linden correction by squares, and our estimator from Eq. (8) by dots. Rows 4 & 5: we enforce our prior knowledge that $0 \leq \hat{v} \leq 1$. Estimators with the conditioning term C (Eq. 9) optimized for bias (+) and MSE (x), both dashed, are shown. Restricting VE to $0 \leq \hat{v} \leq 1$ is the reason for the plateau in the bias of the Sahani-Linden estimator (right column, fourth panel from the top). In all panels, data samples with insignificant variation in the data ($p_{ANOVA} > 0.05$) were excluded from the analysis. Note the different scales in each panel.
Figure 2: Tradeoff between the number of conditions N and the number of repetitions R at each condition. Traditional measure: triangles; unbiased estimate: dots. The total number of measurements was fixed at $N \times R = 120$, while the number of different conditions N is varied along the abscissa.
at the highest noise levels (at which a large number of data samples does not pass the ANOVA test for significant modulation), while still remaining smaller than that of the traditional estimator.
The reason for the decreasing bias of the Sahani-Linden estimator at very high noise levels is the
coincidental cancellation of two bias terms: the negative bias at high noise levels also seen in our
estimator for Gabor-fits to differences of Gaussians, and their general positive bias due to not taking
the over-fitting of parameters into account. Comparing the MSE (shown as root-mean-square error, RMSE) of the different estimators shows that they are similar in the case of fitting a polynomial (left column) and significantly improved in the case of fitting a Gabor function (center & right columns; note the different y-axis scales among all columns).¹
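The simulations just described are easy to reproduce in outline (our sketch; it assumes the `ve_traditional` and `ve_unbiased` functions from the earlier sketch are in scope):

```python
import numpy as np
rng = np.random.default_rng(1)

xs = np.linspace(0.0, 1.0, 20)            # N = 20 conditions
truth = np.sin(2 * np.pi * xs)            # underlying tuning curve
sigma, R, n_params = 0.3, 5, 4            # noise, repeats, 3rd-degree polynomial

d = truth[:, None] + sigma * rng.standard_normal((20, R))
m = np.polyval(np.polyfit(xs, d.mean(axis=1), 3), xs)  # model fit to noisy means

m0 = np.polyval(np.polyfit(xs, truth, 3), xs)          # noise-free fit
v0 = 1 - np.sum((truth - m0) ** 2) / np.sum((truth - truth.mean()) ** 2)
print("true VE:", v0)
print("traditional:", ve_traditional(d, m))            # biased downward
print("corrected:", ve_unbiased(d, m, n_params))       # close to v0 on average
```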
The bottom two rows simulate the situation where our prior knowledge that $0 \leq \mathrm{VE} \leq 1$ is explicitly
enforced. Since the numerator in our unbiased estimator (eq.8) yields values around its noiseless
value that can be positive and negative, the estimator can be negative or greater than one. Restricting
our estimator to [0..1] interferes with its unbiasedness. We test whether a conditioning term can
improve the performance of our estimator and find that this is the case for the Gabor fit, but not the
polynomial fit. In the case of the Gabor fit, the improvement due to the conditioning term is greatest
at the highest noise levels as expected. The bias is decreased at the highest three noise levels tested
and the MSE is slightly decreased (at the highest noise level) or the same as with conditioning.
Where the purely analytical formula outperforms the one with conditioning, it is because the approximations we have to make in determining the optimal C are larger than the inaccuracy of the analytical formula at those noise levels. This is especially true in the 3rd column, where the strongly
non-Gaussian noise is incompatible with the Gaussian assumption in our computation of C. We
conclude that unless one has to estimate VE in the presence of extremely high noise, and has confirmed that conditioning provides an improvement for the particular situation under consideration,
our analytical estimator is preferable. (Note the different y-axis scales across the 2nd and 4th rows.)
Using an estimator that accounts for the amount of noise has another major benefit. Because the total
number of measurements N ? R one can make is usually limited, there is a tradeoff between number
of conditions N and number of repeats R. Everything else being equal the result from the traditional
estimator for VE will depend strongly on that choice: the more conditions and the fewer repeats,
the higher the standard error of the means ? (noise) and hence the lower the estimated VE will be
? regardless of the model. Figure 2 demonstrates this behavior in the case of fitting a Gabor to a
difference-of-Gaussians exactly as in Figure 1. Keeping the total number of measurements constant,
the traditional VE (triangles) decreases drastically as the number of conditions N is increased. The
new unbiased estimator (dots) in comparison has a much reduced bias and depends only weakly
on R. This means that relatively few repeats (but at least 2) are necessary, allowing many more
conditions to be tested than previously, hence increasing resolution.
¹ It is not surprising that the precise behavior of the respective estimators varies between examples. Two approximations were made in the analytical derivation: (1) the model is approximately linear in its parameters, and (2) unbiasing the denominator is not the same as unbiasing the ratio. Both approximations are accurate in the small-noise regime. However, as noise levels increase they introduce biases that interact, depending on the situation.
Figure 3: Disparity tuning curves of V1 neurons fit with a Gabor function. A: Data from an example neuron shown with standard error of the mean (SEM) error bars. The estimate of VE by the Gabor fit (solid line) changes from 85% to 93% when noise is adjusted for. B: Data from a second example neuron. The VE of the Gabor fit changes from 94% to 95%; a $\chi^2$-test on the compatibility of the data with the model gives $p_{\chi^2} = 4 \times 10^{-4}$. C: Unbiased VE as a function of signal-to-noise power. One outlier at (0.93; 4.0) is not shown. D: Traditional VE estimate vs. unbiased VE with conditioning to minimize the MSE. VE values are limited to the range [0, 1]. C & D: Filled symbols denote cells whose responses are incompatible with the Gabor model, as evaluated by a $\chi^2$-test ($p_{\chi^2} < 0.05$).
3 Application to experimental data
3.1 Methods
The data were recorded extracellularly from isolated V1 neurons in two awake, fixating rhesus macaque monkeys and have been published previously [7]. The stimulus consisted of dynamic random dots (RDS) with a binocular disparity applied perpendicular to the preferred orientation of the cell. We only included neurons that were significantly modulated by binocular disparity, as evaluated by a one-way ANOVA test; 109 neurons passed the test with $p_{ANOVA} < 0.05$. Since neuronal spike counts are approximately Poisson distributed, we perform all subsequent analysis on the square root of the spike rates to approximately equalize variances. We fit a Gabor function with six parameters to the spike rates of each cell and perform a $\chi^2$-test on the residuals. The minimum number of different conditions was $N_{min} = 13$ and the median number of repeats was $\mathrm{median}(R) = 15$.
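For concreteness, the residual test can be written as follows (a sketch of ours, not the analysis code used in the paper; `dbar` are the mean square-rooted spike rates, `sem` their standard errors, and `m` the fitted Gabor):

```python
import numpy as np
from scipy.stats import chi2

def chi2_pvalue(dbar, sem, m, n_params=6):
    """Goodness-of-fit p-value for a fitted tuning curve; small p rejects the model."""
    stat = np.sum(((dbar - m) / sem) ** 2)   # chi-square statistic of the residuals
    dof = len(dbar) - n_params               # conditions minus fitted parameters
    return chi2.sf(stat, dof)
```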
3.2 Results
Most disparity tuning curves in V1 are reasonably well described by Gabor functions, which explain more than 90% of the variance in two thirds of the neurons [8]. Whether the remaining third reflects a failure of the model, or is merely a consequence of noise in the data, has been an open question. Panels A & B in Figure 3 show the responses of two example cells together with their best-fitting Gabor functions. The traditional VE in panel A is only 82%, even though the data are not significantly different from the model ($p_{\chi^2} = 0.64$). After adjusting for noise, the unbiased VE becomes 92%, i.e., more than half of the unexplained variance can be attributed to the response variability of each measurement. Panel B shows the opposite situation: 94% of the variance is explained according to the traditional measure, and only an additional 1% can be attributed to noise. However, despite
this high VE, since the measurement error is relatively small, the model is rejected with high significance ($p_{\chi^2} = 4 \times 10^{-4}$).
Panel C shows the unbiased estimate of the VE for the entire population of neurons as a function of their noise power relative to the signal power. At high relative noise levels there is a wide spread of values, and for decreasing noise the VE values asymptote near 1. In fact, the overall population mean for the unbiased VE is 98%, compared with the traditional estimate of 82%. This means that, for the entire population, most of the variance previously deemed unexplained by the model can in fact be accounted for by our uncertainty about the data. 22 out of 109 cells, or 20%, reject the model ($p_{\chi^2} < 0.05$) and are denoted by filled circles. Panel D demonstrates the effect of the new measure on each individual cell. For the estimation of the true VE for each neuron individually, we incorporate our knowledge of the bounds $0 \leq v_0 \leq 1$ and optimize the conditioning term for minimum MSE. With the exception of two neurons, the new estimate of the true VE is greater than the traditional one. On average, 40% of the unexplained variance in each individual neuron can be accounted for by noise.
4 Conclusions
We have derived a new estimator of the variance explained by models describing noisy data. This
estimator improves on previous work in three ways: 1) by accounting for overfitting due to free
model parameters, 2) by adjusting for the uncertainty in our estimate of the noise and 3) by describing a way to add an appropriate level of conditioning in cases of very low signal-to-noise in the
data or other imposed constraints. Furthermore, our estimator does not rely on a large number of
repetitions of the same stimulus in order to perform an extrapolation to zero noise. In numerical simulations with Gaussian and strongly skewed noise we have confirmed that our correction is capable
of accounting for most noise levels and provides an estimate with greatly improved bias compared
to previous estimators. We note that where the results from the two simulations differ, it is the more
realistic simulation where the new estimator performs best.
Another important benefit of our new estimator is that it addresses the classical experimenter's dilemma of a tradeoff between the number of conditions N and the number of repeats R at each condition. While the results from the traditional estimator quickly deteriorate with increasing N and decreasing R, the new estimator is much closer to invariant with respect to both, allowing the experimenter to choose a greater N for higher resolution.
When applying the new VE estimator to a data set of macaque V1 disparity tuning curves we find
that almost all of the variance previously unaccounted for by Gabor fits can be attributed to sampling
noise. For our population of 109 neurons we find that 98% of the variance can be explained by a
Gabor model. This is much higher than previous estimates precisely because they did not account
for the variability in their data, illustrating the importance of this correction especially in cases where
the model is good. The improvement we present is not limited to neuronal tuning curves but will be
valuable to any model testing where noise is an important factor.
Acknowledgments
We thank Christian Quaia and Stephen David for helpful discussions.
References
[1] S.V. David, and J.L. Gallant, Network 16, 239 (2005).
[2] M. Sahani, and J.F. Linden, Advances in Neural Information Processing Systems 15, 109 (2003).
[3] A. Hsu, A. Borst, and F.E. Theunissen, Network 15, 91 (2004).
[4] C.K. Machens, M.S. Wehr, and A.M. Zador, J Neurosci 24, 1089 (2004).
[5] I. Nauhaus, A. Benucci, M. Carandini, and D.L. Ringach, Neuron 57, 673 (2008).
[6] V. Mante, V. Bonin, and M. Carandini, Neuron 58, 625 (2008).
[7] R.M. Haefner and B.G. Cumming, Neuron 57, 147 (2008).
[8] S.J. Prince, A.D. Pointon, B.G. Cumming, and A.J. Parker, J Neurophysiol 87, 191 (2002).
2,715 | 3,462 | Posterior Consistency of the Silverman g-prior in
Bayesian Model Choice
Zhihua Zhang
School of Computer Science & Technology
Zhejiang University, Hangzhou, China
Michael I. Jordan
Departments of EECS and Statistics
University of California, Berkeley, CA, USA
Dit-Yan Yeung
Department of Computer Science & Engineering
HKUST, Hong Kong, China
Abstract
Kernel supervised learning methods can be unified by utilizing the tools from regularization theory. The duality between regularization and prior leads to interpreting regularization methods in terms of maximum a posteriori estimation and has motivated Bayesian interpretations of kernel methods. In this paper we pursue a Bayesian interpretation of sparsity in the kernel setting by making use of a mixture of a point-mass distribution and a prior that we refer to as "Silverman's g-prior." We provide a theoretical analysis of the posterior consistency of a Bayesian model choice procedure based on this prior. We also establish the asymptotic relationship between this procedure and the Bayesian information criterion.
1 Introduction

We address a supervised learning problem over a set of training data $\{x_i, y_i\}_{i=1}^n$, where $x_i \in \mathcal{X} \subset \mathbb{R}^p$ is a $p$-dimensional input vector and $y_i$ is a univariate response. Using the theory of reproducing kernels, we seek to find a predictive function $f(x)$ from the training data.
Suppose $f = u + h \in (\{1\} + \mathcal{H}_K)$ where $\mathcal{H}_K$ is a reproducing kernel Hilbert space (RKHS). The estimation of $f(x)$ is then formulated as a regularization problem of the form
$$\min_{f\in\mathcal{H}_K}\ \frac{1}{n}\sum_{i=1}^n L(y_i, f(x_i)) + \frac{g}{2}\|h\|_{\mathcal{H}_K}^2, \qquad (1)$$
where $L(y, f(x))$ is a loss function, $\|h\|_{\mathcal{H}_K}^2$ is the RKHS norm and $g > 0$ is the regularization
parameter. By the representer theorem [7], the solution for (1) is of the form
$$f(x) = u + \sum_{j=1}^n \alpha_j K(x, x_j), \qquad (2)$$
where $K(\cdot, \cdot)$ is the kernel function. Noticing that $\|h\|_{\mathcal{H}_K}^2 = \sum_{i,j=1}^n K(x_i, x_j)\alpha_i\alpha_j$ and substituting (2) into (1), we obtain the minimization problem with respect to (w.r.t.) the $\alpha_i$ as
$$\min_{u,\boldsymbol{\alpha}}\ \frac{1}{n}\sum_{i=1}^n L(y_i, f(x_i)) + \frac{g}{2}\boldsymbol{\alpha}'K\boldsymbol{\alpha}, \qquad (3)$$
where $K = [K(x_i, x_j)]$ is the $n\times n$ kernel matrix and $\boldsymbol{\alpha} = (\alpha_1, \ldots, \alpha_n)'$ is the vector of regression coefficients.
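For concreteness, the following is a minimal numerical sketch of (3) for the squared loss $L(y, f(x)) = (y - f(x))^2$, in which case the minimizer has a closed form. The RBF kernel, the centering step, and all variable names are illustrative assumptions, not prescriptions from this paper.

```python
import numpy as np

def rbf_kernel(X, gamma=1.0):
    # K[i, j] = exp(-gamma * ||x_i - x_j||^2)
    sq = np.sum(X**2, axis=1)
    return np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))

def fit_kernel_ridge(K, y, g):
    # Squared-loss instance of (3): minimize (1/n)||y - u 1 - K a||^2 + (g/2) a' K a.
    # The gradient in a is (2/n) K [(K + (g n / 2) I) a - (y - u 1)], so setting the
    # bracket to zero gives a solution; with a centered kernel (1' K = 0) the
    # intercept is simply the sample mean of y.
    n = len(y)
    u = y.mean()
    alpha = np.linalg.solve(K + 0.5 * g * n * np.eye(n), y - u)
    return u, alpha

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 2))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=50)
K = rbf_kernel(X)
K = K - K.mean(axis=0) - K.mean(axis=1)[:, None] + K.mean()  # double-center so 1'K = 0
u, alpha = fit_kernel_ridge(K, y, g=1.0 / 50)
print(u, np.abs(y - u - K @ alpha).mean())
```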
From the Bayesian standpoint, the role of the regularization term $\frac{g}{2}\boldsymbol{\alpha}'K\boldsymbol{\alpha}$ can be captured by assigning a design-dependent prior $N_n(0, g^{-1}K^{-1})$ to the regression vector $\boldsymbol{\alpha}$. The prior $N_n(0, K^{-1})$ for $\boldsymbol{\alpha}$ was first proposed by [5] in his Bayesian formulation of spline smoothing. Here we refer to the prior $\boldsymbol{\alpha} \sim N_n(0, g^{-1}K^{-1})$ as the Silverman g-prior, by analogy to the Zellner g-prior [9]. When $K$ is singular, by analogy to the generalized singular g-prior (gsg-prior) [8], we call $N_n(0, g^{-1}K^{-1})$ a generalized Silverman g-prior.
Given the high dimensionality generally associated with RKHS methods, sparseness has emerged as a significant theme, particularly when computational concerns are taken into account. For example, the number of support vectors in the support vector machine (SVM) is equal to the number of nonzero components of $\boldsymbol{\alpha}$. That is, if $\alpha_j = 0$, the $j$th input vector is excluded from the basis expansion in (2); otherwise the $j$th input vector is a support vector. We are thus interested in a prior for $\boldsymbol{\alpha}$ which allows some components of $\boldsymbol{\alpha}$ to be zero. To specify such a prior we first introduce an indicator vector $\boldsymbol{\gamma} = (\gamma_1, \ldots, \gamma_n)'$ such that $\gamma_j = 1$ if $x_j$ is a support vector and $\gamma_j = 0$ if it is not. Let $n_\gamma = \sum_{j=1}^n \gamma_j$ be the number of support vectors, let $K_\gamma$ be the $n\times n_\gamma$ submatrix of $K$ consisting of those columns of $K$ for which $\gamma_j = 1$, and let $\boldsymbol{\alpha}_\gamma$ be the corresponding subvector of $\boldsymbol{\alpha}$. Accordingly, we let $\boldsymbol{\alpha}_\gamma \sim N_{n_\gamma}(0, g^{-1}K_{\gamma\gamma}^{-1})$, where $K_{\gamma\gamma}$ is the $n_\gamma\times n_\gamma$ submatrix of $K_\gamma$ consisting of those rows of $K_\gamma$ for which $\gamma_j = 1$.
We thus have a Bayesian model choice problem in which a family of models is indexed by an indicator vector $\boldsymbol{\gamma}$. Within the Bayesian framework we can use Bayes factors to choose among these models [3]. In this paper we provide a frequentist theoretical analysis of this Bayesian procedure. In particular, motivated by the work of [1] on the consistency of the Zellner g-prior, we investigate the consistency for model choice of the Silverman g-prior for sparse kernel-based regression.
2 Main Results
Our analysis is based on the following regression model $M_\gamma$:
$$\mathbf{y} = u\mathbf{1}_n + K_\gamma\boldsymbol{\alpha}_\gamma + \boldsymbol{\varepsilon}, \qquad \boldsymbol{\varepsilon} \sim N_n(0, \sigma^2 I_n), \qquad \boldsymbol{\alpha}_\gamma|\boldsymbol{\gamma} \sim N_{n_\gamma}\big(0, \sigma^2(g_\gamma K_{\gamma\gamma})^{-1}\big), \qquad (4)$$
where $\mathbf{y} = (y_1, \ldots, y_n)'$. Here and later, $\mathbf{1}_m$ denotes the $m\times 1$ vector of ones and $I_m$ denotes the $m\times m$ identity matrix. We compare each model $M_\gamma$ with the null model $M_0$, formulating the model choice problem via the hypotheses $H_0: \boldsymbol{\alpha} = \mathbf{0}$ and $H_\gamma: \boldsymbol{\alpha}_\gamma \in \mathbb{R}^{n_\gamma}$.
Throughout this paper, for any $n_\gamma$, it is always assumed to take a finite value even though $n \to \infty$. Let $\widetilde{K}_\gamma = [\mathbf{1}_n, K_\gamma]$. The following condition is also assumed:
$$\text{For a fixed } n_\gamma < n,\ \tfrac{1}{n}\widetilde{K}_\gamma'\widetilde{K}_\gamma \text{ is positive definite and converges to a positive definite matrix as } n \to \infty. \qquad (5)$$
Suppose that the sample $\mathbf{y}$ is generated by model $M_\gamma$ with parameter values $u$, $\boldsymbol{\alpha}_\gamma$ and $\sigma$. We formalize the problem of consistency for model choice as follows [1]:
$$\mathrm{plim}_{n\to\infty}\, p(M_\gamma|\mathbf{y}) = 1 \quad\text{and}\quad \mathrm{plim}_{n\to\infty}\, p(M_\delta|\mathbf{y}) = 0 \ \text{ for all } M_\delta \neq M_\gamma, \qquad (6)$$
where "plim" denotes convergence in probability and the limit is taken w.r.t. the sampling distribution under the true model $M_\gamma$.
2.1 A Noninformative Prior for $(u, \sigma^2)$

We first consider the case when $(u, \sigma^2)$ is assigned the following noninformative prior:
$$p(u, \sigma^2) \propto 1/\sigma^2. \qquad (7)$$
Moreover, we assume $\mathbf{1}_n'K = \mathbf{0}'$. In this case, we have $\mathbf{1}_n'K_\gamma = \mathbf{0}'$ so that the intercept $u$ may be regarded as a common parameter for both $M_\gamma$ and $M_0$.
After some calculations the marginal likelihood is found to be
$$p(\mathbf{y}|M_\gamma) = \frac{\Gamma\!\big(\frac{n-1}{2}\big)}{\pi^{\frac{n-1}{2}}\sqrt{n}}\,\|\mathbf{y}-\bar{y}\mathbf{1}_n\|^{-n+1}\,|Q_\gamma|^{-\frac{1}{2}}\,(1-F_\gamma^2)^{-\frac{n-1}{2}}, \qquad (8)$$
where $\bar{y} = \frac{1}{n}\sum_{i=1}^n y_i$, $Q_\gamma = I_n + g_\gamma^{-1}K_\gamma K_{\gamma\gamma}^{-1}K_\gamma'$ and
$$F_\gamma^2 = \frac{\mathbf{y}'K_\gamma\big(g_\gamma K_{\gamma\gamma} + K_\gamma'K_\gamma\big)^{-1}K_\gamma'\mathbf{y}}{\|\mathbf{y}-\bar{y}\mathbf{1}_n\|^2}.$$
Let $\mathrm{RSS}_\gamma = (1 - R_\gamma^2)\|\mathbf{y}-\bar{y}\mathbf{1}_n\|^2$ be the residual sum of squares. Here,
$$R_\gamma^2 = \frac{\mathbf{y}'K_\gamma(K_\gamma'K_\gamma)^{-1}K_\gamma'\mathbf{y}}{\|\mathbf{y}-\bar{y}\mathbf{1}_n\|^2} = \frac{(\mathbf{y}-\bar{y}\mathbf{1}_n)'K_\gamma(K_\gamma'K_\gamma)^{-1}K_\gamma'(\mathbf{y}-\bar{y}\mathbf{1}_n)}{\|\mathbf{y}-\bar{y}\mathbf{1}_n\|^2}.$$
It is easily proven that for fixed $n$, $\mathrm{plim}_{g_\gamma\to 0} F_\gamma^2 = R_\gamma^2$ and $\mathrm{plim}_{g_\gamma\to 0}(1 - F_\gamma^2)\|\mathbf{y}-\bar{y}\mathbf{1}_n\|^2 = \mathrm{RSS}_\gamma$, where $\mathrm{RSS}_\gamma = \mathbf{y}'(I_n - \widetilde{H}_\gamma)\mathbf{y}$ and $\widetilde{H}_\gamma = \widetilde{K}_\gamma(\widetilde{K}_\gamma'\widetilde{K}_\gamma)^{-1}\widetilde{K}_\gamma'$. As a special case of (8), it is also immediate to obtain the marginal distribution of the null model as
immediate to obtain the marginal distribution of the null model as
p(y|M0 ) =
?( n?1
2 )
?1n k?n+1 .
n?1 ? ky ? y
? 2 n
Then the Bayes factor for M? versus M0 is
1
BF?0 = |Q? |? 2 (1 ? F?2 )?
n?1
2
.
In the limiting case when $g_\gamma \to 0$ and both $n$ and $n_\gamma$ are fixed, $\mathrm{BF}_{\gamma 0}$ tends to 0. This implies that a large spread of the prior forces the Bayes factor to favor the null model. Thus, as in the case of the Zellner g-prior [4], Bartlett's paradox arises for the Silverman g-prior.
The Bayes factor for $M_\gamma$ versus $M_\delta$ is given by
$$\mathrm{BF}_{\gamma\delta} = \frac{\mathrm{BF}_{\gamma 0}}{\mathrm{BF}_{\delta 0}} = \frac{|Q_\gamma|^{-\frac{1}{2}}(1-F_\gamma^2)^{-\frac{n-1}{2}}}{|Q_\delta|^{-\frac{1}{2}}(1-F_\delta^2)^{-\frac{n-1}{2}}}. \qquad (9)$$
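As a quick numerical check, $\ln\mathrm{BF}_{\gamma 0}$ can be evaluated directly from (8) using the determinant identity $|Q_\gamma| = |g_\gamma K_{\gamma\gamma} + K_\gamma'K_\gamma|/|g_\gamma K_{\gamma\gamma}|$. The sketch below assumes $K_{\gamma\gamma}$ is nonsingular; the function name and inputs are illustrative, not from the paper.

```python
import numpy as np

def log_bf_gamma0(y, K_gamma, K_gg, g):
    # log BF_{gamma,0} = -0.5 log|Q_gamma| - ((n-1)/2) log(1 - F_gamma^2),
    # with Q_gamma = I + g^{-1} K_gamma K_gg^{-1} K_gamma' and F_gamma^2 as in the text.
    n = len(y)
    yc = y - y.mean()
    M = g * K_gg + K_gamma.T @ K_gamma
    F2 = y @ K_gamma @ np.linalg.solve(M, K_gamma.T @ y) / (yc @ yc)
    # |Q_gamma| = |g K_gg + K_gamma' K_gamma| / |g K_gg| (matrix determinant lemma)
    _, logdet_num = np.linalg.slogdet(M)
    _, logdet_den = np.linalg.slogdet(g * K_gg)
    return -0.5 * (logdet_num - logdet_den) - 0.5 * (n - 1) * np.log(1.0 - F2)
```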
Based on the Bayes factor, we now explore the consistency of the Silverman g-prior. Suppose that the sample $\mathbf{y}$ is generated by model $M_\gamma$ with parameter values $u$, $\boldsymbol{\alpha}_\gamma$ and $\sigma^2$. Then the consistency property (6) is equivalent to
$$\mathrm{plim}_{n\to\infty}\,\mathrm{BF}_{\delta\gamma} = 0, \quad \text{for all } M_\delta \neq M_\gamma.$$
Assume that under any model $M_\delta$ that does not contain $M_\gamma$, i.e., $M_\delta \nsupseteq M_\gamma$,
$$\lim_{n\to\infty}\frac{\widetilde{\boldsymbol{\alpha}}_\gamma'\widetilde{K}_\gamma'(I_n - \widetilde{H}_\delta)\widetilde{K}_\gamma\widetilde{\boldsymbol{\alpha}}_\gamma}{n} = c_\delta \in (0, \infty), \qquad (10)$$
where $\widetilde{\boldsymbol{\alpha}}_\gamma' = (u, \boldsymbol{\alpha}_\gamma')$. Note that $I_n - \widetilde{H}_\delta$ is a symmetric idempotent matrix which projects onto the subspace of $\mathbb{R}^n$ orthogonal to the span of $\widetilde{K}_\delta$. Given that $(I_n - \widetilde{H}_\delta)\mathbf{1}_n = \mathbf{0}$ and $\mathbf{1}_n'K_\gamma = \mathbf{0}'$, condition (10) reduces to
$$\lim_{n\to\infty}\frac{\boldsymbol{\alpha}_\gamma'K_\gamma'(I_n - H_\delta)K_\gamma\boldsymbol{\alpha}_\gamma}{n} = c_\delta \in (0, \infty),$$
where $H_\delta = K_\delta(K_\delta'K_\delta)^{-1}K_\delta'$. We now have the following theorem, whose proof is given in Sec. 3.
Theorem 1 Consider the regression model (4) with the noninformative prior for $(u, \sigma^2)$ in (7). Assume that conditions (5) and (10) are satisfied and assume that $g_\gamma$ can be written in the form
$$g_\gamma = \frac{w_1(n_\gamma)}{w_2(n)} \quad\text{with}\quad \lim_{n\to\infty} w_2(n) = \infty \quad\text{and}\quad \lim_{n\to\infty}\frac{w_2'(n)}{w_2(n)} = 0 \qquad (11)$$
for particular choices of functions $w_1$ and $w_2$, where $w_2$ is differentiable and $w_2'(n)$ is its first derivative w.r.t. $n$. When the true model $M_\gamma$ is not the null model, i.e., $M_\gamma \neq M_0$, the posterior probabilities are consistent for model choice.
Theorem 1 can provide an empirical methodology for setting $g$. For example, it is clear that $g = 1/n$, with $w_1(n_\gamma) = 1$ and $w_2(n) = n$, satisfies condition (11).
where w1 (n? ) = 1 and w2 (n) = n satisfies condition (11).
It is interesting to consider the (asymptotic) relationship between the Bayes factor and Bayesian
information (or Schwartz) criterion (BIC) in our setting. Given two models M? and M? , the
difference between the BICs of these two models is given by
S?? =
n? ? n?
n RSS?
ln
+
ln(n).
2 RSS?
2
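(A one-line helper for this quantity, assuming the residual sums of squares have already been computed from least-squares fits; a sketch with illustrative names:)

```python
import numpy as np

def bic_difference(rss_delta, rss_gamma, n_delta, n_gamma, n):
    # S_{gamma,delta} = (n/2) ln(RSS_delta / RSS_gamma) + ((n_delta - n_gamma)/2) ln n
    return 0.5 * n * np.log(rss_delta / rss_gamma) + 0.5 * (n_delta - n_gamma) * np.log(n)
```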
We thus obtain the following asymptotic relationship (the proof is given in Sec. 3):

Theorem 2 Under the regression model and the conditions in Theorem 1, we have
$$\mathrm{plim}_{n\to\infty}\ \frac{\ln\mathrm{BF}_{\delta\gamma}}{S_{\delta\gamma} + \frac{n_\gamma-n_\delta}{2}\ln w_2(n)} = 1.$$
Furthermore, if $M_\gamma$ is not nested within $M_\delta$, then $\mathrm{plim}_{n\to\infty}\,\ln\mathrm{BF}_{\delta\gamma}/S_{\delta\gamma} = 1$. Here the probability limits are taken w.r.t. the model $M_\gamma$.

2.2 A Natural Conjugate Prior for $(u, \sigma^2)$
In this section, we analyze consistency for model choice under a different prior for $(u, \sigma^2)$, namely the standard conjugate prior:
$$p(u, \sigma^2) = N(u|0, \sigma^2\tau^{-1})\,\mathrm{Ga}(\sigma^{-2}|a_\sigma/2, b_\sigma/2), \qquad (12)$$
where $\mathrm{Ga}(u|a, b)$ is the Gamma distribution:
$$p(u) = \frac{b^a}{\Gamma(a)}\,u^{a-1}\exp(-bu), \quad a > 0,\ b > 0.$$
We further assume that $u$ and $\boldsymbol{\alpha}_\gamma$ are independent. Then
$$\widetilde{\boldsymbol{\alpha}}_\gamma \sim N_{n_\gamma+1}\big(0, \sigma^2\Sigma_\gamma^{-1}\big) \quad\text{with}\quad \Sigma_\gamma = \begin{bmatrix} \tau & \mathbf{0}' \\ \mathbf{0} & g_\gamma K_{\gamma\gamma} \end{bmatrix}. \qquad (13)$$
The marginal likelihood of model $M_\gamma$ is thus
$$p(\mathbf{y}|M_\gamma) = \frac{b_\sigma^{a_\sigma/2}\,\Gamma\!\big(\frac{n+a_\sigma}{2}\big)}{\pi^{n/2}\,\Gamma\!\big(\frac{a_\sigma}{2}\big)}\,|M_\gamma|^{-\frac{1}{2}}\big(b_\sigma + \mathbf{y}'M_\gamma^{-1}\mathbf{y}\big)^{-\frac{a_\sigma+n}{2}}, \qquad (14)$$
where $M_\gamma = I_n + \widetilde{K}_\gamma\Sigma_\gamma^{-1}\widetilde{K}_\gamma'$. The Bayes factor for $M_\gamma$ versus $M_\delta$ is given by
$$\mathrm{BF}_{\gamma\delta} = \left(\frac{|M_\delta|}{|M_\gamma|}\right)^{\!\frac{1}{2}}\left(\frac{b_\sigma + \mathbf{y}'M_\delta^{-1}\mathbf{y}}{b_\sigma + \mathbf{y}'M_\gamma^{-1}\mathbf{y}}\right)^{\!\frac{a_\sigma+n}{2}}.$$
Because $M_\gamma^{-1} = I_n - \widetilde{K}_\gamma\Omega_\gamma^{-1}\widetilde{K}_\gamma'$ and $|M_\gamma| = |\Omega_\gamma||\Sigma_\gamma|^{-1} = \tau^{-1}g_\gamma^{-n_\gamma}|K_{\gamma\gamma}|^{-1}|\Omega_\gamma|$, where $\Omega_\gamma = \widetilde{K}_\gamma'\widetilde{K}_\gamma + \Sigma_\gamma$, we have
$$\mathrm{BF}_{\gamma\delta} = \left(\frac{g_\gamma^{n_\gamma}\,|K_{\gamma\gamma}|\,|\Omega_\delta|}{g_\delta^{n_\delta}\,|K_{\delta\delta}|\,|\Omega_\gamma|}\right)^{\!\frac{1}{2}} \left(\frac{b_\sigma + \mathbf{y}'\big(I_n - \widetilde{K}_\delta\Omega_\delta^{-1}\widetilde{K}_\delta'\big)\mathbf{y}}{b_\sigma + \mathbf{y}'\big(I_n - \widetilde{K}_\gamma\Omega_\gamma^{-1}\widetilde{K}_\gamma'\big)\mathbf{y}}\right)^{\!\frac{a_\sigma+n}{2}}.$$
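The conjugate-prior marginal likelihood (14) is also straightforward to evaluate numerically through $\Omega_\gamma$, which avoids forming the $n\times n$ matrix $M_\gamma$. The sketch below drops the normalizing constant shared by all models; the function name and inputs are illustrative assumptions.

```python
import numpy as np

def log_marginal_conjugate(y, K_tilde, Sigma, a_sigma, b_sigma):
    # log p(y | M_gamma) under (12)-(14), up to the constant
    # log[b^{a/2} Gamma((n+a)/2) / (pi^{n/2} Gamma(a/2))] shared by all models.
    # Uses |M| = |Omega| / |Sigma| and, by Woodbury,
    # y' M^{-1} y = y'y - y' K~ Omega^{-1} K~' y, with Omega = K~'K~ + Sigma.
    n = len(y)
    Omega = K_tilde.T @ K_tilde + Sigma
    _, ld_Omega = np.linalg.slogdet(Omega)
    _, ld_Sigma = np.linalg.slogdet(Sigma)
    quad = y @ y - y @ K_tilde @ np.linalg.solve(Omega, K_tilde.T @ y)
    return -0.5 * (ld_Omega - ld_Sigma) - 0.5 * (a_sigma + n) * np.log(b_sigma + quad)
```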
Theorem 3 Consider the regression model (4) with the conjugate prior for $(u, \sigma^2)$ in (12). Assume that conditions (5) and (10) are satisfied and that $g_\gamma$ takes the form in (11) with $w_1(n_\gamma)$ being a decreasing function. When the true model $M_\gamma$ is not the null model, i.e., $M_\gamma \neq M_0$, the posterior probabilities are consistent for model choice.

Note the difference between Theorem 1 and Theorem 3: in the latter theorem $w_1(n_\gamma)$ is required to be a decreasing function of $n_\gamma$. Thanks to the fact that $g_\gamma = w_1(n_\gamma)/w_2(n)$, such a condition is equivalent to assuming that $g_\gamma$ is a decreasing function of $n_\gamma$. Again, $g_\gamma = 1/n$ satisfies these conditions. Similarly to Theorem 2, we also have
Theorem 4 Under the regression model and the conditions in Theorem 3, we have
$$\mathrm{plim}_{n\to\infty}\ \frac{\ln\mathrm{BF}_{\delta\gamma}}{S_{\delta\gamma} + \frac{n_\gamma-n_\delta}{2}\ln w_2(n)} = 1.$$
Furthermore, if $M_\gamma$ is not nested within $M_\delta$, then $\mathrm{plim}_{n\to\infty}\,\ln\mathrm{BF}_{\delta\gamma}/S_{\delta\gamma} = 1$. Here the probability limits are taken w.r.t. the model $M_\gamma$.

3 Proofs
In order to prove these theorems, we first give the following lemmas.
Lemma 1 Let $A = \begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix}$ be symmetric and positive definite, and let $B = \begin{bmatrix} A_{11}^{-1} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} \end{bmatrix}$ have the same size as $A$. Then $A^{-1} - B$ is positive semidefinite.
Proof The proof follows readily once we express $A^{-1}$ and $B$ as
$$A^{-1} = \begin{bmatrix} I & -A_{11}^{-1}A_{12} \\ 0 & I \end{bmatrix}\begin{bmatrix} A_{11}^{-1} & 0 \\ 0 & A_{22\cdot 1}^{-1} \end{bmatrix}\begin{bmatrix} I & 0 \\ -A_{21}A_{11}^{-1} & I \end{bmatrix},$$
$$B = \begin{bmatrix} I & -A_{11}^{-1}A_{12} \\ 0 & I \end{bmatrix}\begin{bmatrix} A_{11}^{-1} & 0 \\ 0 & 0 \end{bmatrix}\begin{bmatrix} I & 0 \\ -A_{21}A_{11}^{-1} & I \end{bmatrix},$$
where $A_{22\cdot 1} = A_{22} - A_{21}A_{11}^{-1}A_{12}$ is also positive definite.
The following two lemmas were presented by [1].
Lemma 2 Under the sampling model $M_\gamma$: (i) if $M_\gamma$ is nested within or equal to a model $M_\delta$, i.e., $M_\gamma \subseteq M_\delta$, then
$$\mathrm{plim}_{n\to\infty}\,\frac{\mathrm{RSS}_\delta}{n} = \sigma^2,$$
and (ii) for any model $M_\delta$ that does not contain $M_\gamma$, if (10) is satisfied, then
$$\mathrm{plim}_{n\to\infty}\,\frac{\mathrm{RSS}_\delta}{n} = \sigma^2 + c_\delta.$$
Lemma 3 Under the sampling model $M_\gamma$, if $M_\gamma$ is nested within a model $M_\delta$, i.e., $M_\gamma \subset M_\delta$, then $n\ln\frac{\mathrm{RSS}_\gamma}{\mathrm{RSS}_\delta} \xrightarrow{d} \chi^2_{n_\delta-n_\gamma}$ as $n \to \infty$, where $\xrightarrow{d}$ denotes convergence in distribution.
Lemma 4 Under the regression model (4), if $\lim_{n\to\infty} g_\gamma(n) = 0$ and condition (5) is satisfied, then
$$\mathrm{plim}_{n\to\infty}\Big[(1 - F_\gamma^2)\|\mathbf{y}-\bar{y}\mathbf{1}_n\|^2 - \mathrm{RSS}_\gamma\Big] = 0.$$
Proof It is easy to compute
$$\frac{(1-F_\gamma^2)\|\mathbf{y}-\bar{y}\mathbf{1}_n\|^2 - \mathrm{RSS}_\gamma}{\sigma^2} = \frac{\mathbf{y}'K_\gamma\big[(K_\gamma'K_\gamma)^{-1} - (K_\gamma'K_\gamma + g_\gamma(n)K_{\gamma\gamma})^{-1}\big]K_\gamma'\mathbf{y}}{\sigma^2}.$$
Since both $K_\gamma'K_\gamma/n$ and $K_{\gamma\gamma}$ are positive definite, there exist an $n_\gamma\times n_\gamma$ nonsingular matrix $A_n$ and an $n_\gamma\times n_\gamma$ positive diagonal matrix $\Lambda_{n_\gamma}$ such that $K_\gamma'K_\gamma/n = A_n'\Lambda_{n_\gamma}A_n$ and $K_{\gamma\gamma} = A_n'A_n$. Letting $\mathbf{z} = \sigma^{-1}(n\Lambda_{n_\gamma})^{-1/2}(A_n')^{-1}K_\gamma'\mathbf{y}$, we have
$$\mathbf{z} \sim N_{n_\gamma}\big(\sigma^{-1}(n\Lambda_{n_\gamma})^{1/2}A_n\boldsymbol{\alpha}_\gamma,\ I_{n_\gamma}\big)$$
and
$$f(\mathbf{z}) \triangleq \frac{(1-F_\gamma^2)\|\mathbf{y}-\bar{y}\mathbf{1}_n\|^2 - \mathrm{RSS}_\gamma}{\sigma^2} = \mathbf{z}'\mathbf{z} - \mathbf{z}'\,n\Lambda_{n_\gamma}\big(n\Lambda_{n_\gamma} + g_\gamma(n)I_{n_\gamma}\big)^{-1}\mathbf{z} = \sum_{j=1}^{n_\gamma}\frac{g_\gamma(n)}{n\lambda_j(n) + g_\gamma(n)}\,z_j^2.$$
Note that $z_j^2$ follows a noncentral chi-square distribution $\chi^2(1, v_j)$ with $v_j = n\lambda_j(n)\big(\mathbf{a}_j(n)'\boldsymbol{\alpha}_\gamma\big)^2/\sigma^2$, where $\lambda_j(n) > 0$ is the $j$th diagonal element of $\Lambda_{n_\gamma}$ and $\mathbf{a}_j(n)$ is the $j$th column of $A_n$. We thus have $E(z_j^2) = 1 + v_j$ and $\mathrm{Var}(z_j^2) = 2(1 + 2v_j)$. It follows from condition (5) that
$$\lim_{n\to\infty} K_\gamma'K_\gamma/n = \lim_{n\to\infty} A_n'\Lambda_{n_\gamma}A_n = A'\Lambda_\infty A,$$
where $A$ is nonsingular and $\Lambda_\infty$ is a diagonal matrix with positive diagonal elements, and both are independent of $n$. Hence,
$$\lim_{n\to\infty} E\Big[\frac{g_\gamma(n)}{n\lambda_j(n) + g_\gamma(n)}\,z_j^2\Big] = 0 \quad\text{and}\quad \lim_{n\to\infty}\mathrm{Var}\Big[\frac{g_\gamma(n)}{n\lambda_j(n) + g_\gamma(n)}\,z_j^2\Big] = 0.$$
We thus have $\mathrm{plim}_{n\to\infty} f(\mathbf{z}) = 0$. The proof is completed.
Lemma 5 Assume that $M_\gamma$ is nested within $M_\delta$ and $g_\gamma$ is a decreasing function of $n_\gamma$. Then
$$\mathbf{y}'\big(I_n - \widetilde{K}_\gamma\Omega_\gamma^{-1}\widetilde{K}_\gamma'\big)\mathbf{y} \geq \mathbf{y}'\big(I_n - \widetilde{K}_\delta\Omega_\delta^{-1}\widetilde{K}_\delta'\big)\mathbf{y}.$$
Proof Since $M_\gamma$ is nested within $M_\delta$, we express $\widetilde{K}_\delta = [\widetilde{K}_\gamma, K_2]$ without loss of generality. We now write
$$\Sigma_\delta = \begin{bmatrix} \Sigma_\delta^{11} & \Sigma_\delta^{12} \\ \Sigma_\delta^{21} & \Sigma_\delta^{22} \end{bmatrix},$$
where $\Sigma_\delta^{11}$ is of size $(n_\gamma+1)\times(n_\gamma+1)$. Hence, we have
$$\Omega_\delta^{-1} = \begin{bmatrix} \widetilde{K}_\gamma'\widetilde{K}_\gamma + \Sigma_\delta^{11} & \widetilde{K}_\gamma'K_2 + \Sigma_\delta^{12} \\ K_2'\widetilde{K}_\gamma + \Sigma_\delta^{21} & K_2'K_2 + \Sigma_\delta^{22} \end{bmatrix}^{-1}.$$
Because $0 < g_\delta \leq g_\gamma$, the matrix
$$\big(\widetilde{K}_\gamma'\widetilde{K}_\gamma + \Sigma_\gamma\big) - \big(\widetilde{K}_\gamma'\widetilde{K}_\gamma + \Sigma_\delta^{11}\big) = \begin{bmatrix} 0 & \mathbf{0}' \\ \mathbf{0} & (g_\gamma - g_\delta)K_{\gamma\gamma} \end{bmatrix}$$
is positive semidefinite. Consequently, $(\widetilde{K}_\gamma'\widetilde{K}_\gamma + \Sigma_\delta^{11})^{-1} - (\widetilde{K}_\gamma'\widetilde{K}_\gamma + \Sigma_\gamma)^{-1}$ is positive semidefinite. It follows from Lemma 1 that
$$\Omega_\delta^{-1} - \begin{bmatrix} (\widetilde{K}_\gamma'\widetilde{K}_\gamma + \Sigma_\gamma)^{-1} & \mathbf{0} \\ \mathbf{0}' & \mathbf{0} \end{bmatrix} = \Omega_\delta^{-1} - \begin{bmatrix} \Omega_\gamma^{-1} & \mathbf{0} \\ \mathbf{0}' & \mathbf{0} \end{bmatrix}$$
is also positive semidefinite. We thus have
$$\mathbf{y}'\big(I_n - \widetilde{K}_\gamma\Omega_\gamma^{-1}\widetilde{K}_\gamma'\big)\mathbf{y} - \mathbf{y}'\big(I_n - \widetilde{K}_\delta\Omega_\delta^{-1}\widetilde{K}_\delta'\big)\mathbf{y} = \mathbf{y}'\widetilde{K}_\delta\Big(\Omega_\delta^{-1} - \begin{bmatrix} \Omega_\gamma^{-1} & \mathbf{0} \\ \mathbf{0}' & \mathbf{0} \end{bmatrix}\Big)\widetilde{K}_\delta'\mathbf{y} \geq 0.$$

3.1 Proof of Theorem 1
We now prove Theorem 1. Consider that
$$\ln\mathrm{BF}_{\delta\gamma} = \frac{1}{2}\ln\frac{|Q_\gamma|}{|Q_\delta|} + \frac{n-1}{2}\ln\frac{1-F_\gamma^2}{1-F_\delta^2}.$$
Because
$$|Q_\gamma|^{-\frac{1}{2}} = \frac{g_\gamma^{\frac{n_\gamma}{2}}\,|K_{\gamma\gamma}|^{1/2}}{|g_\gamma K_{\gamma\gamma} + K_\gamma'K_\gamma|^{1/2}},$$
we have
$$\ln\frac{|Q_\gamma|}{|Q_\delta|} = \ln\frac{w_1(n_\delta)^{n_\delta}}{w_1(n_\gamma)^{n_\gamma}} + \ln\frac{|K_{\delta\delta}|}{|K_{\gamma\gamma}|} + \ln\frac{\big|\frac{w_1(n_\gamma)}{nw_2(n)}K_{\gamma\gamma} + \frac{1}{n}K_\gamma'K_\gamma\big|}{\big|\frac{w_1(n_\delta)}{nw_2(n)}K_{\delta\delta} + \frac{1}{n}K_\delta'K_\delta\big|} + (n_\gamma - n_\delta)\ln(nw_2(n)).$$
Because
$$\eta = \lim_{n\to\infty}\ln\frac{\big|\frac{w_1(n_\gamma)}{nw_2(n)}K_{\gamma\gamma} + \frac{1}{n}K_\gamma'K_\gamma\big|}{\big|\frac{w_1(n_\delta)}{nw_2(n)}K_{\delta\delta} + \frac{1}{n}K_\delta'K_\delta\big|} = \lim_{n\to\infty}\ln\frac{|\frac{1}{n}K_\gamma'K_\gamma|}{|\frac{1}{n}K_\delta'K_\delta|} \in (-\infty, \infty),$$
it is easily proven that
$$\lim_{n\to\infty}\frac{1}{2}\ln\frac{|Q_\gamma|}{|Q_\delta|} = \begin{cases} \infty & n_\delta < n_\gamma \\ -\infty & n_\delta > n_\gamma \\ \mathrm{const} & n_\delta = n_\gamma, \end{cases} \qquad (15)$$
where $\mathrm{const} = \frac{\eta}{2} + \frac{1}{2}\ln\frac{|K_{\delta\delta}|}{|K_{\gamma\gamma}|}$. According to Lemma 4, we also have
$$\mathrm{plim}_{n\to\infty}\frac{n-1}{2}\ln\frac{1-F_\gamma^2}{1-F_\delta^2} = \mathrm{plim}_{n\to\infty}\frac{n-1}{2}\ln\frac{(1-F_\gamma^2)\|\mathbf{y}-\bar{y}\mathbf{1}_n\|^2}{(1-F_\delta^2)\|\mathbf{y}-\bar{y}\mathbf{1}_n\|^2} = \mathrm{plim}_{n\to\infty}\frac{n-1}{2}\ln\frac{\mathrm{RSS}_\gamma}{\mathrm{RSS}_\delta}.$$
Now consider the following two cases:

(a) $M_\gamma$ is not nested within $M_\delta$: From Lemma 2, we obtain
$$\mathrm{plim}_{n\to\infty}\ln\frac{\mathrm{RSS}_\gamma}{\mathrm{RSS}_\delta} = \mathrm{plim}_{n\to\infty}\ln\frac{\mathrm{RSS}_\gamma/n}{\mathrm{RSS}_\delta/n} = \ln\frac{\sigma^2}{\sigma^2 + c_\delta}.$$
Moreover, we have the following limit
$$\lim_{n\to\infty}\frac{n-1}{2}\Big[\ln\frac{\sigma^2}{\sigma^2+c_\delta} + \frac{n_\gamma-n_\delta}{n-1}\ln(nw_2(n))\Big] = -\infty$$
due to $\lim_{n\to\infty}\frac{n_\gamma-n_\delta}{n-1}\ln(nw_2(n)) = \lim_{n\to\infty}(n_\gamma-n_\delta)\frac{w_2(n)+nw_2'(n)}{nw_2(n)} = 0$ and $\frac{\sigma^2}{\sigma^2+c_\delta} < 1$. This implies that $\lim_{n\to\infty}\ln\mathrm{BF}_{\delta\gamma} = -\infty$. Thus we obtain $\lim_{n\to\infty}\mathrm{BF}_{\delta\gamma} = 0$.

(b) $M_\gamma$ is nested within $M_\delta$: We always have $n_\delta > n_\gamma$. By Lemma 3, we have $(n-1)\ln(\mathrm{RSS}_\gamma/\mathrm{RSS}_\delta) \xrightarrow{d} \chi^2_{n_\delta-n_\gamma}$. Hence, $(\mathrm{RSS}_\gamma/\mathrm{RSS}_\delta)^{(n-1)/2} \xrightarrow{d} \exp(\chi^2_{n_\delta-n_\gamma}/2)$. Combining this result with (15) leads to a zero limit for $\mathrm{BF}_{\delta\gamma}$.

3.2 Proof of Theorem 2
Using the same notation as in Theorem 1, we have
$$C_{\delta\gamma} = \frac{\ln\mathrm{BF}_{\delta\gamma}}{S_{\delta\gamma} + \frac{n_\gamma-n_\delta}{2}\ln w_2(n)} = \frac{\frac{n-1}{n}\ln\frac{1-F_\gamma^2}{1-F_\delta^2} + \frac{n_\gamma-n_\delta}{n}\ln(nw_2(n)) + \frac{2}{n}\,\mathrm{Const}}{\ln\frac{\mathrm{RSS}_\gamma}{\mathrm{RSS}_\delta} + \frac{n_\gamma-n_\delta}{n}\ln(nw_2(n))},$$
where Const denotes the bounded part of $\frac{1}{2}\ln(|Q_\gamma|/|Q_\delta|)$.

(a) $M_\gamma$ is not nested within $M_\delta$: From Lemma 4, we obtain
$$\mathrm{plim}_{n\to\infty}\, C_{\delta\gamma} = \lim_{n\to\infty}\frac{\ln\frac{\sigma^2}{\sigma^2+c_\delta} + \frac{n_\gamma-n_\delta}{n}\ln(nw_2(n))}{\ln\frac{\sigma^2}{\sigma^2+c_\delta} + \frac{n_\gamma-n_\delta}{n}\ln(nw_2(n))} = 1.$$
In this case, we also have
$$\mathrm{plim}_{n\to\infty}\,\frac{\ln\mathrm{BF}_{\delta\gamma}}{S_{\delta\gamma}} = \lim_{n\to\infty}\frac{\ln\frac{\sigma^2}{\sigma^2+c_\delta} + \frac{n_\gamma-n_\delta}{n}\ln(nw_2(n))}{\ln\frac{\sigma^2}{\sigma^2+c_\delta} + \frac{n_\gamma-n_\delta}{n}\ln n} = 1.$$

(b) $M_\gamma$ is nested within $M_\delta$: We obtain
$$\mathrm{plim}_{n\to\infty}\, C_{\delta\gamma} = \mathrm{plim}_{n\to\infty}\frac{(n-1)\ln\frac{1-F_\gamma^2}{1-F_\delta^2} + (n_\gamma-n_\delta)\ln(nw_2(n)) + 2\,\mathrm{Const}}{n\ln\frac{\mathrm{RSS}_\gamma}{\mathrm{RSS}_\delta} + (n_\gamma-n_\delta)\ln(nw_2(n))} = 1$$
due to $n_\delta > n_\gamma$ and $n\ln(\mathrm{RSS}_\gamma/\mathrm{RSS}_\delta) \xrightarrow{d} \chi^2_{n_\delta-n_\gamma}$.

3.3 Proof of Theorem 3
We now sketch the proof of Theorem 3. For the case that $M_\gamma$ is not nested within $M_\delta$, the proof is similar to that of Theorem 1. When $M_\gamma$ is nested within $M_\delta$, Lemma 5 shows the following relationship
$$\ln\frac{b_\sigma + \mathbf{y}'\big(I_n - \widetilde{K}_\gamma\Omega_\gamma^{-1}\widetilde{K}_\gamma'\big)\mathbf{y}}{b_\sigma + \mathbf{y}'\big(I_n - \widetilde{K}_\delta\Omega_\delta^{-1}\widetilde{K}_\delta'\big)\mathbf{y}} \leq \ln\frac{\mathbf{y}'\big(I_n - \widetilde{K}_\gamma\Omega_\gamma^{-1}\widetilde{K}_\gamma'\big)\mathbf{y}}{\mathbf{y}'\big(I_n - \widetilde{K}_\delta\Omega_\delta^{-1}\widetilde{K}_\delta'\big)\mathbf{y}}.$$
We thus have
$$\mathrm{plim}_{n\to\infty}\frac{a_\sigma+n}{2}\ln\frac{b_\sigma + \mathbf{y}'\big(I_n - \widetilde{K}_\gamma\Omega_\gamma^{-1}\widetilde{K}_\gamma'\big)\mathbf{y}}{b_\sigma + \mathbf{y}'\big(I_n - \widetilde{K}_\delta\Omega_\delta^{-1}\widetilde{K}_\delta'\big)\mathbf{y}} \leq \mathrm{plim}_{n\to\infty}\frac{a_\sigma+n}{2}\ln\frac{\mathbf{y}'\big(I_n - \widetilde{K}_\gamma\Omega_\gamma^{-1}\widetilde{K}_\gamma'\big)\mathbf{y}}{\mathbf{y}'\big(I_n - \widetilde{K}_\delta\Omega_\delta^{-1}\widetilde{K}_\delta'\big)\mathbf{y}} = \mathrm{plim}_{n\to\infty}\frac{a_\sigma+n}{2}\ln\frac{\mathbf{y}'(I_n - \widetilde{H}_\gamma)\mathbf{y}}{\mathbf{y}'(I_n - \widetilde{H}_\delta)\mathbf{y}} \in (0, \infty).$$
From this result the proof follows readily.
4 Conclusions

In this paper we have presented a frequentist analysis of a Bayesian model choice procedure for sparse regression. We have captured sparsity by a particular choice of prior distribution which we have referred to as a "Silverman g-prior." This prior emerges naturally from the RKHS perspective. It is similar in spirit to the Zellner g-prior, which has been widely used for Bayesian variable selection and Bayesian model selection due to its computational tractability in the evaluation of marginal likelihoods [6, 2]. Our analysis provides a theoretical foundation for the Silverman g-prior and suggests that it can play a similarly wide-ranging role in the development of fully Bayesian kernel methods.
References
[1] C. Fernández, E. Ley, and M. F. J. Steel. Benchmark priors for Bayesian model averaging. Journal of Econometrics, 100:381–427, 2001.
[2] E. I. George and R. E. McCulloch. Approaches for Bayesian variable selection. Statistica Sinica, 7:339–374, 1997.
[3] R. E. Kass and A. E. Raftery. Bayes factors. Journal of the American Statistical Association, 90:773–795, 1995.
[4] F. Liang, R. Paulo, G. Molina, M. A. Clyde, and J. O. Berger. Mixtures of g-priors for Bayesian variable selection. Journal of the American Statistical Association, 103(481):410–423, 2008.
[5] B. W. Silverman. Some aspects of the spline smoothing approach to non-parametric regression curve fitting (with discussion). Journal of the Royal Statistical Society, B, 47(1):1–52, 1985.
[6] M. Smith and R. Kohn. Nonparametric regression using Bayesian variable selection. Journal of Econometrics, 75:317–344, 1996.
[7] G. Wahba. Spline Models for Observational Data. SIAM, Philadelphia, 1990.
[8] M. West. Bayesian factor regression models in the "large p, small n" paradigm. In J. M. Bernardo, M. J. Bayarri, J. O. Berger, A. P. Dawid, D. Heckerman, A. F. M. Smith, and M. West, editors, Bayesian Statistics 7, pages 723–732. Oxford University Press, 2003.
[9] A. Zellner. On assessing prior distributions and Bayesian regression analysis with g-prior distributions. In P. K. Goel and A. Zellner, editors, Bayesian Inference and Decision Techniques: Essays in Honor of Bruno de Finetti, pages 233–243. North-Holland, Amsterdam, 1986.
2,716 | 3,463 | Reconciling Real Scores with Binary Comparisons:
A Unified Logistic Model for Ranking
Nir Ailon
Google Research NY
111 8th Ave, 4th FL New York NY 10011 [email protected]
Abstract
The problem of ranking arises ubiquitously in almost every aspect of life, and
in particular in Machine Learning/Information Retrieval. A statistical model for
ranking predicts how humans rank subsets V of some universe U . In this work we
define a statistical model for ranking that satisfies certain desirable properties.
The model automatically gives rise to a logistic regression based approach to
learning how to rank, for which the score and comparison based approaches are
dual views. This offers a new generative approach to ranking which can be used
for IR.
There are two main contexts for this work. The first is the theory of econometrics
and study of statistical models explaining human choice of alternatives. In this
context, we will compare our model with other well known models. The second
context is the problem of ranking in machine learning, usually arising in the context of information retrieval. Here, much work has been done in the discriminative
setting, where different heuristics are used to define ranking risk functions.
Our model is built rigorously and axiomatically based on very simple desirable
properties defined locally for comparisons, and automatically implies the existence of a global score function serving as a natural model parameter which can
be efficiently fitted to pairwise comparison judgment data by solving a convex
optimization problem.
1 Introduction
Ranking is an important task in information sciences. The most notable application is information
retrieval (IR), where it is crucial to return results in a sorted order for the querier. The subject of
preference and ranking has been thoroughly studied in the context of statistics and econometric
theory [8, 7, 29, 36, 34, 31], combinatorial optimization [26, 37, 20, 3, 4, 14] and machine learning
[6, 9, 33, 21, 19, 35, 23, 22, 25, 16, 17, 1, 13, 15, 28, 18].
Recently Ailon and Mohri [5], following Balcan et al. [9], have made significant progress in reducing the task of learning ranking to the binary classification problem of learning preferences. This
comparison based approach is in contrast with a score based approach which tries to regress to a
score function on the elements we wish to rank, and sort the elements based on this score as a final
step.
The difference between the score based and comparison approaches is an example of "local vs. global" views: A comparison is local (how do two elements compare with each other), and a score
is global (how do we embed the universe on a scale). The score based approach seems reasonable
in cases where the score can be defined naturally in terms of measurable utility. In some real world
scenarios, either (i) an interpretable score is difficult to define (e.g. a relevance score in information
retrieval) and (ii) an interpretable score is easy to define (e.g. how much a random person is willing
to pay for product X in some population) but learning the score is difficult due to noisy or costly
label acquisition for scores on individual points [7].
A well known phenomenon in the psychological study of human choice seems to potentially offer an
elegant solution to the above difficulties: Human response to comparison questions is more stable in
the sense that it is not easily affected by irrelevant alternatives. This phenomenon makes acquisition
of comparison labels for learning tasks more appealing, but raises the question of how to go back
and fit a latent score function that explains the comparisons. Moreover, the score parameter fitting
must be computationally efficient. Much effort has been recently put in this subject from a machine
learning perspective [6, 9, 33, 21, 19, 35, 23, 22, 25, 16, 17, 1, 13, 15, 28, 18].
2 Ranking in Context
The study of ranking alternatives has not been introduced by ML/IR, and has been studied thoroughly
from the early years of the 20th century in the context of statistics and econometrics. We mention
work in ML/IR by Lebanon and Lafferty [27] and Cao et al. [12] who also draw from the classic
work for information retrieval purposes.
ML/IR is usually interested in the question of how a machine should correctly rank alternatives
based on experience from human feedback, whereas in statistics and econometrics the focus is on
the question of how a human chooses from alternatives (for the purpose of e.g. effective marketing
or policy making). Therefore, there are notable differences between the modern and classic foci.
Notwithstanding these differences, the classic foci are relevant to modern applications, and vice versa.
For example, any attempt to correctly choose from a set (predominantly asked in the classic context)
can be converted into a ranking algorithm by repeatedly choosing and removing from the set.
Definition 2.1 A ranking model for $U$ is a function $D$ mapping any finite subset $V \subseteq U$ to a
distribution on rankings of V . In other words, D(V ) is a probability distribution on the |V |! possible
orderings of V .
A Thurstonian model for ranking (so named after L. Thurstone [36]) is one in which an independent random real valued variable $Z_v$ is associated with each $v \in V$, and the ranking is obtained by sorting the elements of $V$ in decreasing order (assuming the value represents utility). Often the distributions governing the $Z_v$'s are members of a parametric family, with a location parameter representing an intrinsic "value". The source of variability in $Z_v$ is beyond the scope of this work. This model is related to the more general random utility model (RUM) approach studied in econometrics.
A purely comparison based model is due to Babington and Smith: The parameter of the model is a matrix $\{p_{uv}\}_{u,v\in U}$. Given items $u, v$, a subject would prefer $u$ over $v$ with probability $p_{uv} = 1 - p_{vu}$. Given a subset $V$, the subject flips a corresponding biased coin independently to decide on the preference of all pairs $u, v \in V$, and repeats the process until the set of preferences is transitive. This model is unwieldy in full generality, and more succinct representations were proposed. Mallows [30], following Bradley and Terry [11], proposed to take $p_{uv}$ as $\lambda(u)/(\lambda(u) + \lambda(v))$, where the $\lambda(v)$'s are constants attached to each element. Note that the marginal probability of $u$ being preferred over $v$ in the context of a set $V \supseteq \{u, v\}$ in the Babington-Smith model is in general not $p_{uv}$, even in Mallows's special case.
In distance based models it is assumed that there is a "modal" ranking of the set $V$, and the probability of any ranking decreases with its distance from the mode. Several definitions of distances between permutations have been proposed. Often the probability density itself is defined as an exponential model. We refer the reader to [31] for an in-depth analysis of such models.
The Plackett-Luce model. The classic model most related to this work is Plackett and Luce's [29, 34] multistage model for ranking. Each element $v \in U$ has an assigned "value" parameter $\lambda(v)$. At each stage a choice is made. Given a set $V$, item $u \in V$ wins with probability $\lambda(u)/\sum_{v\in V}\lambda(v)$.¹ The winner is removed from $V$ and the process is repeated for the remaining elements, until a ranking is obtained. Yellott [38] made the surprising observation that the Luce-Plackett model is exactly Thurstone's model where the $Z_u$'s are translated Gumbel (doubly-exponential) distributed variables. The underlying winner choice model satisfies Luce's choice axiom [29] which, roughly speaking, stipulates that the probability of an element $u$ winning in $V$ is the same as the product of the probability of the winner being contained in $V' \subseteq V$ and the probability of $u$ winning in $V'$. It turns out that this axiom (often used as criticism of the model) implies the underlying choice function of the Plackett-Luce model.

¹This choice function is known as the multinomial logit (MNL) and is equivalent to the standard (dichotomous) logit when only two alternatives are available.
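To make the multistage process concrete, here is a small sampler for the Plackett-Luce model (a sketch; the function name and the use of NumPy are our own choices, not the paper's):

```python
import numpy as np

def sample_plackett_luce(values, rng):
    # Sequentially choose winners with probability proportional to lambda(v),
    # remove the winner, and repeat; returns item indices in rank order.
    items = list(range(len(values)))
    lam = np.asarray(values, dtype=float)
    ranking = []
    while items:
        p = lam[items] / lam[items].sum()
        k = rng.choice(len(items), p=p)
        ranking.append(items.pop(k))
    return ranking

rng = np.random.default_rng(0)
print(sample_plackett_luce([4.0, 2.0, 1.0], rng))  # most likely ranking: [0, 1, 2]
```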
An interesting property of Plackett-Luce for our purpose is that it is asymmetric in the sense that it
is winner-centric and not loser-centric. The model cannot explain both ranking by successive loser
choice and successive winner choice simultaneously unless it is trivial (this point was noticed by
McCullagh [32]). It is clear however that breaking down the process of ranking by humans to an
iterated choice of winners ignores the process of elimination (placing alternatives at the bottom of
the list). In the following sections we propose a new symmetric model for ranking, in which the basic
discrete task is a comparison of pairs of elements, and not choice of an element from arbitrarily large
sets (as in Plackett-Luce).
3 An Axiomatic Approach for Defining a Pairwise-Stable Model for Ranking

For a ranking $\pi$ of some subset $V \subseteq U$, we use the notation $u \prec_\pi v$ to denote that $u$ precedes² $v$ according to $\pi$. We let $\pi(v) \in \{1, \ldots, n\}$ denote the rank of $v \in V$, where lower numbers designate precedence (hence $u \prec_\pi v$ if $\pi(u) < \pi(v)$). The inverse $\pi^{-1}(i)$ is the unique element $v$ of $V$ with $\pi(v) = i$. We overload notation and let $\pi(u, v)$ denote the indicator variable taking the value of 1 if $u \prec v$ and 0 otherwise.
Definition 3.1 A ranking model $D$ for $U$ satisfies pairwise stability if for any $u, v \in U$ and for any $V_1, V_2 \supseteq \{u, v\}$, $\Pr_{\pi\sim D(V_1)}[u \prec_\pi v] = \Pr_{\pi\sim D(V_2)}[u \prec_\pi v]$.

Pairwise stability means that the preference (or comparison) of $u, v$ is statistically independent of the context (subset) they are ranked in. Note that Plackett-Luce is pairwise stable (this follows from the fact that the model is Thurstonian) but Babington-Smith/Mallows is not. If a ranking model $D$ satisfies pairwise stability, then the probability $\Pr_D[u \prec v]$ is naturally defined and equals $\Pr_{\pi\sim D(V)}[u \prec_\pi v]$ for any $V \supseteq \{u, v\}$.
Pairwise stability is a weak property which permits a very wide family of ranking distributions. In particular, if the universe $U$ is a finite set then any distribution $\rho$ on rankings of the entire universe $U$ gives rise to a model $D_\rho$, with $D_\rho(V)$ defined as the restriction of $\rho$ to $V$. This model clearly satisfies pairwise stability but does not have a succinct description and is hence undesirable.
We strengthen the conditions on our model by considering triplets of elements. Assume that a model $D$ satisfies pairwise stability. Fix three elements $u, v, w$. Consider a process in which we randomly and independently decide how $u$ and $w$ should compare with $v$. What would be the induced distribution on the order of $u$ and $w$, conditioned on them being placed on opposite sides of $v$? If we sample from the distributions $D(\{u, v\})$ and $D(\{v, w\})$ to independently decide how to compare $u$ with $v$ and $w$ with $v$ (respectively), then we get
$$\Pr[u \prec w \mid (u \prec v \prec w) \vee (w \prec v \prec u)] = \frac{\Pr_D[u \prec v]\,\Pr_D[v \prec w]}{\Pr_D[u \prec v]\,\Pr_D[v \prec w] + \Pr_D[w \prec v]\,\Pr_D[v \prec u]}.$$
What happens if we force this to equal $\Pr_D[u \prec w]$? In words, this would mean that the comparison of $u$ with $w$, conditioned on the comparison being determined by pivoting around $v$, is distributed like $D(\{u, w\})$. We write this desired property as follows (the second line follows from the first):

²We choose in this work to use the convention that an element $u$ precedes $v$ if $u$ is in a more favorable position. When a score function is introduced later, the convention will be that higher scores correspond to more favorable positions. We will use the symbol $<$ (resp. $>$) to compare scores, which is semantically opposite to $\prec$ (resp. $\succ$) by our convention.
$$\Pr_D[u \prec w] = \frac{\Pr_D[u \prec v]\,\Pr_D[v \prec w]}{\Pr_D[u \prec v]\,\Pr_D[v \prec w] + \Pr_D[w \prec v]\,\Pr_D[v \prec u]}$$
$$\Pr_D[w \prec u] = \frac{\Pr_D[w \prec v]\,\Pr_D[v \prec u]}{\Pr_D[w \prec v]\,\Pr_D[v \prec u] + \Pr_D[u \prec v]\,\Pr_D[v \prec w]}. \qquad (1)$$
Definition 3.2 Assume $D$ is a ranking model for $U$ satisfying pairwise stability. For a pair $u, w \in U$ and another element $v \in U$ we say that $u$ and $w$ satisfy the pivot condition with respect to $v$ if (1) holds.
Dividing the two desired equalities in (1), we get (assuming the ratio exists):
$$\frac{\Pr_D[u \prec w]}{\Pr_D[w \prec u]} = \frac{\Pr_D[u \prec v]\,\Pr_D[v \prec w]}{\Pr_D[w \prec v]\,\Pr_D[v \prec u]}. \qquad (2)$$
If we denote by $\Delta_D(a, b)$ the "comparison logit³": $\Delta_D(a, b) = \log(\Pr_D[a \prec b]/\Pr_D[b \prec a])$, then (2) implies $\Delta_D(u, v) + \Delta_D(v, w) + \Delta_D(w, u) = 0$. This in turn implies that there exist numbers $s_1, s_2, s_3$ such that $\Delta(u, v) = s_1 - s_2$, $\Delta(v, w) = s_2 - s_3$ and $\Delta(u, w) = s_1 - s_3$. These numbers, defined up to an additive constant, should be called (additive) scores. We will see in what follows that the score function can be extended to a larger set by patching scores on triplets.

By symmetry it is now clear that the pivoting condition of $u$ and $w$ with respect to $v$ implies the pivoting condition of $u$ and $v$ with respect to $w$ and of $v$ and $w$ with respect to $u$. In other words, the pivoting condition is a property of the triplet $\{u, v, w\}$.
Definition 3.3 Assume a ranking model $D$ for $U$ satisfies pairwise stability, and let $\Delta_D : U \times U \to \mathbb{R}$ denote the comparison logit as defined above. A triplet $\{u, v, w\} \subseteq U$ is said to satisfy the pivot condition in $D$ if $\Delta_D(u, v) + \Delta_D(v, w) + \Delta_D(w, u) = 0$. We say that $U$ satisfies the pivot condition in $D$ if $\{u, v, w\}$ satisfies the pivot condition for all $\{u, v, w\} \subseteq U$.
Lemma 3.1 If $U$ satisfies the pivot condition in a pairwise stability model $D$ for $U$, then there exists a real valued score function $s : U \to \mathbb{R}$ such that for all $a, b \in U$, $\Delta_D(a, b) = s(a) - s(b)$.

Proof Fix some element $v \in U$ and set $s(v) = 0$. For every other element $u \in U \setminus \{v\}$ set $s(u) = \Delta_D(u, v)$. It is now immediate to verify that for all $a, b \in U$ one has $\Delta_D(a, b) = s(a) - s(b)$. Indeed, by construction $s(a) - s(b) = \Delta_D(a, v) - \Delta_D(b, v)$, but by the pivot property this equals exactly $\Delta_D(a, b)$, as required (remember that $\Delta_D(b, v) = -\Delta_D(v, b)$ by definition of $\Delta_D$).
By starting with local assumptions (pairwise stability and the pivoting property), we obtained a natural global score function $s$ on the universe of elements. The score function governs the probability of $u$ preceding $v$ via the difference $s(u) - s(v)$ passed through the inverse logit. Note that we used the assumption that the comparison logit is finite on all $u, v$ (equivalently, that $0 < \Pr_D(u \prec v) < 1$ for all $u, v$), but this assumption can be dropped if we allow the score function to take values in $\mathbb{R} + \omega\mathbb{Z}$, where $\omega$ is the limit ordinal of $\mathbb{R}$.

The Plackett-Luce model satisfies both pairwise stability and the pivot condition with $s(u) = \log\lambda(u)$. Hence our definitions are non-empty. Inspired by recent work on the QuickSort algorithm [24] as a random process [4, 3, 5, 37], we define a new symmetric model based on a series of comparisons rather than choices from sets.
4 The New Ranking Model

We define a model called $QS_s$ (short for QuickSort), parametrized by a score function $s : U \to \mathbb{R}$, as follows. Given a finite subset $V \subseteq U$:

1. Pick a "pivot element" $v$ uniformly at random from $V$.
2. For all $u \in V \setminus \{v\}$, place $u$ to the left of $v$ with probability $1/(1 + e^{s(v)-s(u)})$, and to the right with the remaining probability $1/(1 + e^{s(u)-s(v)})$, independently of all other choices.
3. Recurse on the left and on the right sides, and output the ranking of $V$ obtained by joining the results in an obvious way (left → pivot → right).

³The "logit of $p$" is standard shorthand for the log-odds, or $\log(p/(1-p))$.
(The function $1/(1 + e^{-x})$ is the inverse logit function.) We shall soon see that QuickSort gives us back all the desired statistical local properties of a ranking model. That the model $QS_s$ can be sampled efficiently is a simple consequence of the fact that QuickSort runs in expected time $O(n\log n)$ (some attention needs to be paid to the fact that unlike in the textbook proofs for QuickSort the pivoting process is randomized, but this is not difficult [5]).
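The following is a minimal sketch of one draw from $QS_s$ following steps 1-3 above (the variable names and the dictionary representation of $s$ are illustrative assumptions):

```python
import numpy as np

def sample_qs(items, s, rng):
    # One draw from QS_s: pick a uniform pivot, throw each other element left
    # or right with the inverse-logit probabilities of step 2, and recurse.
    if len(items) <= 1:
        return list(items)
    pivot = items[rng.integers(len(items))]
    left, right = [], []
    for u in items:
        if u == pivot:
            continue
        # place u to the left of pivot v with probability 1/(1 + e^{s(v)-s(u)})
        if rng.random() < 1.0 / (1.0 + np.exp(s[pivot] - s[u])):
            left.append(u)
        else:
            right.append(u)
    return sample_qs(left, s, rng) + [pivot] + sample_qs(right, s, rng)

rng = np.random.default_rng(1)
s = {"a": 2.0, "b": 0.5, "c": -1.0}
print(sample_qs(list(s), s, rng))  # mode of the distribution: ['a', 'b', 'c']
```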
Theorem 4.1 The ranking model $QS_s$ for $U$ satisfies both pairwise stability and the pivoting condition. Additionally, for any subset $V \subseteq U$ the mode of $QS_s(V)$ is any ranking $\pi^*$ satisfying $u \prec_{\pi^*} v$ whenever $s(u) > s(v)$.
Proof (of Theorem 4.1): First we note that if $QS_s$ satisfies pairwise stability, then the pivot property will be implied as well. Indeed, by taking $V = \{u, v\}$ we would get from the model that $\Pr_{QS_s}(u \prec v) = 1/(1 + e^{s(v)-s(u)})$, immediately implying the pivot property.

To see that $QS_s$ satisfies pairwise stability, we show that for any $u, v$ and $V \supseteq \{u, v\}$, the probability of the event $u \prec_\pi v$ is exactly $1/(1 + e^{s(v)-s(u)})$, where $\pi \sim QS_s(V)$. Indeed, the order of $u, v$ can be determined in one of two ways. (i) Directly: $u$ or $v$ is chosen as pivot when the other is present in the same recursive call. We call this event $E_{\{u,v\}}$. Conditioned on this event, clearly the probability that $u \prec_\pi v$ is exactly the required probability $1/(1 + e^{s(v)-s(u)})$ by step 2 of QuickSort (note that it doesn't matter which one of $v$ or $u$ is the pivot). (ii) Indirectly: A third element $w \in V$ is the pivot when both $u$ and $v$ are present in the recursive call, and $w$ sends $u$ and $v$ to opposite recursion sides. We denote this event by $E^*_{\{u,v\},w}$. Conditioned on this event, the probability that $u \prec_\pi v$ is exactly as required (by using the same logit calculus we used in Section 3).

To conclude the proof of pairwise stability, it remains to observe that the collection of events $\{E_{\{u,v\}}\} \cup \{E^*_{\{u,v\},w} : w \in V \setminus \{u, v\}\}$ is a pairwise disjoint cover of the probability space. This implies that $\Pr_{\pi\sim QS_s(V)}(u \prec_\pi v)$ is the desired quantity $1/(1 + e^{s(v)-s(u)})$, concluding the proof of pairwise stability.
We need to work harder to prove the intuitive mode argument. Let $\pi, \sigma$ be two permutations on $V$ such that
$$a_1 \prec_\pi a_2 \prec_\pi \cdots \prec_\pi a_k \prec_\pi u \prec_\pi v \prec_\pi a_{k+1} \prec_\pi \cdots \prec_\pi a_{n-2}$$
$$a_1 \prec_\sigma a_2 \prec_\sigma \cdots \prec_\sigma a_k \prec_\sigma v \prec_\sigma u \prec_\sigma a_{k+1} \prec_\sigma \cdots \prec_\sigma a_{n-2},$$
where $V = \{u, v\} \cup \{a_1, \ldots, a_{n-2}\}$. In words, $\pi$ and $\sigma$ differ on the order of exactly two consecutive elements $u, v$. Assume that $s(u) > s(v)$ (so $\pi$, placing $u$ in a more favorable position than $v$, is intuitively more "correct"). We will prove that the probability of getting $\pi$ is strictly higher than the probability of getting $\sigma$ from $QS_s$. Since $\pi^*$, the permutation sorting by $s$, can be obtained from any permutation by a sequence of swaps of incorrectly ordered (according to $s$) adjacent pairs, this would prove the theorem by a standard inductive argument.
Let $q_\pi = \Pr_{\tau\sim QS}[\tau = \pi]$, and similarly define $q_\sigma$. To prove that $q_\pi > q_\sigma$ we need extra notation. Our QuickSort generative model gives rise to a random integer node-labeled ordered binary tree⁴ implicitly constructed as an execution side effect. This tree records the final positions of the pivots chosen in each step as follows: The label $L$ of the root of the tree is the rank of the pivot in the final solution (which equals the size of the left recursion plus 1). The left subtree is the tree recursively constructed on the left, and the right subtree is the tree recursively constructed on the right with $L$ added to the labels of all the vertices. Clearly the resulting tree has exactly $n$ nodes with each label in $\{1, \ldots, n\}$ appearing exactly once. Let $q_{\rho,T}$ denote the probability that QuickSort outputs a permutation $\rho$ and (implicitly) constructs a pivot selection tree $T$. Let $\mathcal{T}$ denote the collection of all ordered labeled binary trees with node labels in $\{1, \ldots, n\}$. For $T \in \mathcal{T}$ and a node $x \in T$ let $\ell(x)$ denote the integer label on $x$. Let $T_x$ denote the subtree rooted at $x$ and let $\ell(T_x)$ denote the collection of labels on those nodes. By construction, if QuickSort outputted a ranking $\rho$ with an (implicitly constructed) tree $T$, then at some point the recursive call to QuickSort took $\rho^{-1}(\ell(T_x))$ as input and chose $\rho^{-1}(\ell(x))$ as pivot, for any node $x$ of $T$. By a standard probability argument (summing over a disjoint cover of events): $q_\pi = \sum_{T\in\mathcal{T}} q_{\pi,T}$ and $q_\sigma = \sum_{T\in\mathcal{T}} q_{\sigma,T}$. It suffices to show now that for any fixed $T \in \mathcal{T}$, $q_{\pi,T} > q_{\sigma,T}$. To compute $q_{\rho,T}$ for $\rho = \pi, \sigma$ we proceed as follows: At each node $x$ of $T$ we attach a number $P_\rho(x)$ which is the likelihood of the decisions made at that level, namely, the choice of the pivot itself and the separation of the rest of the elements to its right and left.

⁴By that we mean a tree in which each node has at most one left child node and at most one right child node, and the nodes are labeled with integers.
$$P_\rho(x) = \frac{1}{|T_x|}\,\prod_{y\in T_L(x)}\Pr_{QS}\big[\rho^{-1}(\ell(y)) \prec \rho^{-1}(\ell(x))\big]\cdot\prod_{y\in T_R(x)}\Pr_{QS}\big[\rho^{-1}(\ell(x)) \prec \rho^{-1}(\ell(y))\big],$$
where $|T_x|$ is the number of nodes in $T_x$, $T_L(x)$ is the set of vertices in the left subtree of $x$, and similarly $T_R(x)$ for the right subtree. The factor $1/|T_x|$ comes from the likelihood of uniformly at random having chosen the pivot $\rho^{-1}(\ell(x))$ from the set of nodes of $T_x$. The first product corresponds to the random comparison decisions made on the elements thrown to the left, and the second to those thrown to the right. By construction, $q_{\pi,T} = \prod_{x\in T} P_\pi(x)$ and similarly $q_{\sigma,T} = \prod_{x\in T} P_\sigma(x)$. Since $u, v$ are adjacent in both $\pi$ and $\sigma$, it is clear that the two nodes $x_1, x_2 \in T$ labeled $\sigma(u)$ and $\sigma(v)$ respectively have an ancestor-descendant relation in $T$ (otherwise their least common ancestor in $T$ would have been placed between them, violating the consecutiveness of $u$ and $v$ in our construction and implying $q_{\pi,T} = q_{\sigma,T} = 0$). Also recall that $\sigma(u) = \pi(v)$ and $\sigma(v) = \pi(u)$. By our assumption that $\pi$ and $\sigma$ differ only on the order of the adjacent elements $u, v$, $P_\pi(x)$ and $P_\sigma(x)$ could differ only on nodes $x$ on the path between $x_1$ and $x_2$. Assume w.l.o.g. that $x_1$ is an ancestor of $x_2$, and that $x_2$ is a node in the left subtree of $x_1$. By our construction, $x_2$ is the rightmost node⁵ in $T_L(x_1)$. Let $Y$ denote the set of nodes on the path from $x_1$ to $x_2$ (exclusive) in $T$. Let $W$ denote the set of nodes in the left (and only) subtree of $x_2$, and let $Z$ denote the set of remaining nodes in $T_L(x_1)$: $Z = T_L(x_1) \setminus (W \cup Y \cup \{x_2\})$. Since $\pi^{-1}(\ell(z)) = \sigma^{-1}(\ell(z))$ for all $z \in Z$ we can define $elt(z) = \pi^{-1}(\ell(z)) = \sigma^{-1}(\ell(z))$, and similarly we can correspond each $y \in Y$ with a single element $elt(y)$ and each $w \in W$ with a single element $elt(w)$ of $V$. As claimed above, we only need to compare between $P_\pi(x_1)$ and $P_\sigma(x_1)$, between $P_\pi(x_2)$ and $P_\sigma(x_2)$, and between $P_\pi(y)$ and $P_\sigma(y)$ for $y \in Y$. Carefully unfolding these products node by node, we see that it suffices to notice that for all $y \in Y$, the probability of throwing $elt(y)$ to the left of $u$ (pivoting on $u$) times the probability of throwing $v$ to the right of $elt(y)$ (pivoting on $elt(y)$), as it appears inside the product $P_\sigma(x_1)P_\sigma(y)$, is exactly the probability of throwing $elt(y)$ to the left of $v$ (pivoting on $v$) times the probability of throwing $u$ to the right of $elt(y)$ (pivoting on $elt(y)$), as it appears inside the product $P_\pi(x_1)P_\pi(y)$. Also for all $w \in W$ the probability of throwing $elt(w)$ to the left of $u$ (pivoting on $u$) times the probability of throwing $elt(w)$ to the left of $v$ (pivoting on $v$) appears exactly once in both $P_\pi(x_1)P_\pi(x_2)$ and $P_\sigma(x_1)P_\sigma(x_2)$ (though in reversed order). Following these observations one can verify the desired result of the theorem by noting that, in virtue of $s(u) > s(v)$: (i) $\Pr_{QS}[u \prec v] > \Pr_{QS}[v \prec u]$, and (ii) for all $z \in Z$, $\Pr_{QS}[elt(z) \prec v] > \Pr_{QS}[elt(z) \prec u]$.

⁵The rightmost node of $T$ is the root if it has no right descendant, or the rightmost node of its right subtree.
5 Comparison of Models

The stochastic QuickSort model as just defined and Plackett-Luce share much in common, but they are not identical for strictly more than 2 elements. Both satisfy the intuitive property that the mode of the distribution corresponding to a set $V$ is any ranking which sorts the elements of $V$ in decreasing $s(v) = \log\lambda(v)$ value. The stochastic QuickSort model, however, does not suffer from the asymmetry problem which is often stated as a criticism of Plackett-Luce. Indeed, the distribution $QS_s(V)$ has the following property: If we draw from $QS_s(V)$ and flip the resulting permutation, the resulting distribution is $QS_{-s}(V)$. This property does not hold in general for Plackett-Luce, and hence serves as proof of their nonequivalence.
Assume we want to fit $s$ in the MLE sense by drawing random permutations from $QS_s(V)$. This seems to be difficult due to the unknown choice of pivot. On the other hand, the log-likelihood function corresponding to Plackett-Luce is globally concave in the values of the function $s$ on $V$, and hence a global maximum can be efficiently found. This also holds true in a generalized linear model, in which $s(v)$ is given as the dot product of a feature vector $\Phi(v)$ with an unknown weight vector which we estimate (as done in [10] in the context of predicting demand for electric cars).
Hence, for the purpose of learning given full permutations of strictly more than two elements, the
Plackett-Luce model is easier to work with.
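For reference, here is a sketch of fitting Plackett-Luce from full permutations by maximizing the (concave) log-likelihood with plain gradient descent on its negative; the step size, iteration count, and toy data are illustrative assumptions.

```python
import numpy as np

def pl_negloglik_and_grad(s, rankings):
    # Negative log-likelihood of full rankings under Plackett-Luce with
    # s(v) = log lambda(v). Each stage contributes s[winner] - logsumexp(s[rest]).
    nll, grad = 0.0, np.zeros_like(s)
    for ranking in rankings:
        for i in range(len(ranking) - 1):
            rest = ranking[i:]
            logz = np.logaddexp.reduce(s[rest])
            nll -= s[ranking[i]] - logz
            grad[rest] += np.exp(s[rest] - logz)  # +p_v for every remaining item
            grad[ranking[i]] -= 1.0               # -1 for the stage winner
    return nll, grad

# toy fit by plain gradient descent
rankings = [np.array([0, 1, 2]), np.array([0, 2, 1]), np.array([1, 0, 2])]
s = np.zeros(3)
for _ in range(200):
    _, g = pl_negloglik_and_grad(s, rankings)
    s -= 0.1 * g
    s -= s.mean()  # scores are identified only up to an additive constant
print(s)
```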
In practical IR settings, however, it is rare that training data is obtained as full permutations: such
a task is tiresome. In most applications, the observables used for training are in the form of binary response vectors (either relevant or irrelevant for each alternative) or comparison of pairs of
alternatives (either A better or B better given A,B). For the latter, Plackett-Luce is identical to QuickSort, and hence efficient fitting of parameters is easy (using logistic regression). As for the former,
the process of generating a binary response vector can be viewed as the task performed at a single
QuickSort recursive level. It turns out that by defining a nuisance parameter to represent the value s
of an unknown pivot, MLE estimation can be performed efficiently and exactly [2].
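A sketch of the pairwise route: since $\Pr[u \prec w] = 1/(1 + e^{s(w)-s(u)})$, fitting $s$ from comparison labels is exactly logistic regression on difference features, for which simple gradient ascent suffices (the learning rate and toy data below are illustrative assumptions):

```python
import numpy as np

def fit_scores_from_comparisons(n_items, pairs, n_iter=500, lr=0.5):
    # pairs: list of (u, w) meaning "u was preferred to w".
    # Maximum likelihood under Pr[u < w] = 1/(1 + e^{s(w)-s(u)}) is logistic
    # regression on the difference features e_u - e_w.
    s = np.zeros(n_items)
    for _ in range(n_iter):
        grad = np.zeros(n_items)
        for u, w in pairs:
            p = 1.0 / (1.0 + np.exp(s[w] - s[u]))  # Pr[u precedes w]
            grad[u] += 1.0 - p
            grad[w] -= 1.0 - p
        s += lr * grad / max(len(pairs), 1)
        s -= s.mean()  # fix the additive-constant ambiguity
    return s

pairs = [(0, 1), (0, 2), (1, 2), (0, 1)]
print(fit_scores_from_comparisons(3, pairs))
```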
References
[1] Shivani Agarwal and Partha Niyogi. Stability and generalization of bipartite ranking algorithms. In COLT, pages 32–47, 2005.
[2] N. Ailon. A simple linear ranking algorithm using query dependent intercept variables. arXiv:0810.2764v1.
[3] Nir Ailon. Aggregation of partial rankings, p-ratings and top-m lists. In SODA, 2007.
[4] Nir Ailon, Moses Charikar, and Alantha Newman. Aggregating inconsistent information: ranking and clustering. In Proceedings of the 37th Annual ACM Symposium on Theory of Computing, Baltimore, MD, USA, May 22-24, 2005, pages 684–693. ACM, 2005.
[5] Nir Ailon and Mehryar Mohri. An efficient reduction of ranking to classification. In COLT, 2008.
[6] Erin L. Allwein, Robert E. Schapire, and Yoram Singer. Reducing multiclass to binary: A unifying approach for margin classifiers. Journal of Machine Learning Research, 1:113–141, 2000.
[7] D. Ariely, G. Loewenstein, and D. Prelec. Coherent arbitrariness: Stable demand curves without stable preferences. The Quarterly Journal of Economics, 118(1):73–105, 2008.
[8] K. J. Arrow. A difficulty in the concept of social welfare. Journal of Political Economy, 58(4):328–346, August 1950.
[9] Maria-Florina Balcan, Nikhil Bansal, Alina Beygelzimer, Don Coppersmith, John Langford, and Gregory B. Sorkin. Robust reductions from ranking to classification. In Nader H. Bshouty and Claudio Gentile, editors, COLT, volume 4539 of Lecture Notes in Computer Science, pages 604–619. Springer, 2007.
[10] S. Beggs and S. Cardell. Assessing the potential demand for electric cars. Journal of Econometrics, 17:1–19, 1981.
[11] R.A. Bradley and M.A. Terry. Rank analysis of incomplete block designs. Biometrika, 39:324–345, 1952.
[12] Zhe Cao, Tao Qin, Tie-Yan Liu, Ming-Feng Tsai, and Hang Li. Learning to rank: from pairwise approach to listwise approach. In ICML '07: Proceedings of the 24th International Conference on Machine Learning, pages 129–136, New York, NY, USA, 2007. ACM.
[13] William W. Cohen, Robert E. Schapire, and Yoram Singer. Learning to order things. J. Artif. Intell. Res. (JAIR), 10:243–270, 1999.
[14] D. Coppersmith, Lisa Fleischer, and Atri Rudra. Ordering by weighted number of wins gives a good ranking for weighted tournaments. In Proceedings of the 17th Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), 2006.
[15] Corinna Cortes and Mehryar Mohri. AUC optimization vs. error rate minimization. In Advances in Neural Information Processing Systems (NIPS 2003), volume 16, Vancouver, Canada, 2004. MIT Press.
[16] Corinna Cortes, Mehryar Mohri, and Ashish Rastogi. An alternative ranking problem for search engines. In Proceedings of the 6th Workshop on Experimental Algorithms (WEA 2007), volume 4525 of Lecture Notes in Computer Science, pages 1–21, Rome, Italy, June 2007. Springer-Verlag, Heidelberg, Germany.
[17] Corinna Cortes, Mehryar Mohri, and Ashish Rastogi. Magnitude-preserving ranking algorithms. In Proceedings of the Twenty-Fourth International Conference on Machine Learning (ICML 2007), Oregon State University, Corvallis, OR, June 2007.
[18] David Cossock and Tong Zhang. Subset ranking using regression. In COLT, pages 605–619, 2006.
[19] Koby Crammer and Yoram Singer. Pranking with ranking. In Thomas G. Dietterich, Suzanna Becker, and Zoubin Ghahramani, editors, Advances in Neural Information Processing Systems 14 [Neural Information Processing Systems: Natural and Synthetic, NIPS 2001, December 3-8, 2001, Vancouver, British Columbia, Canada], pages 641–647. MIT Press, 2001.
[20] Ronald Fagin, Ravi Kumar, Mohammad Mahdian, D. Sivakumar, and Erik Vee. Comparing and aggregating rankings with ties. In Alin Deutsch, editor, Proceedings of the Twenty-Third ACM SIGACT-SIGMOD-SIGART Symposium on Principles of Database Systems, June 14-16, 2004, Paris, France, pages 47–58. ACM, 2004.
[21] Yoav Freund, Raj D. Iyer, Robert E. Schapire, and Yoram Singer. An efficient boosting algorithm for combining preferences. Journal of Machine Learning Research, 4:933–969, 2003.
[22] Ralf Herbrich, Thore Graepel, Peter Bollmann-Sdorra, and Klaus Obermayer. Learning a preference relation in IR. In Proceedings of the Workshop on Text Categorization and Machine Learning, International Conference on Machine Learning, pages 80–84, 1998.
[23] Ralf Herbrich, Thore Graepel, and Klaus Obermayer. Large margin rank boundaries for ordinal regression. In Advances in Large Margin Classifiers, pages 115–132, 2000.
[24] C.A.R. Hoare. Quicksort: Algorithm 64. Comm. ACM, 4(7):321–322, 1961.
[25] Thorsten Joachims. Optimizing search engines using clickthrough data. In KDD '02: Proceedings of the Eighth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 133–142, New York, NY, USA, 2002. ACM Press.
[26] Claire Kenyon-Mathieu and Warren Schudy. How to rank with few errors. In STOC '07: Proceedings of the Thirty-Ninth Annual ACM Symposium on Theory of Computing, pages 95–103, New York, NY, USA, 2007. ACM Press.
[27] Guy Lebanon and John D. Lafferty. Cranking: Combining rankings using conditional probability models on permutations. In ICML '02: Proceedings of the Nineteenth International Conference on Machine Learning, pages 363–370, San Francisco, CA, USA, 2002. Morgan Kaufmann Publishers Inc.
[28] Erich L. Lehmann. Nonparametrics: Statistical Methods Based on Ranks. Holden-Day, San Francisco, California, 1975.
[29] R.D. Luce. Individual Choice Behaviour. Wiley, 1959.
[30] C.L. Mallows. Non-null ranking models. Biometrika, 44:113–130, 1957.
[31] John I. Marden. Analyzing and Modeling Rank Data. Chapman & Hall, 1995.
[32] P. McCullagh. Permutations and regression models. In Probability Models and Statistical Analyses for Ranking Data, pages 196–215, 1993.
[33] Mark H. Montague and Javed A. Aslam. Condorcet fusion for improved retrieval. In Proceedings of the 2002 ACM CIKM International Conference on Information and Knowledge Management, McLean, VA, USA, November 4-9, 2002, pages 538–548. ACM, 2002.
[34] R. L. Plackett. The analysis of permutations. Applied Statistics, 24:193–202.
[35] Cynthia Rudin, Corinna Cortes, Mehryar Mohri, and Robert E. Schapire. Margin-based ranking meets boosting in the middle. In Peter Auer and Ron Meir, editors, Learning Theory, 18th Annual Conference on Learning Theory, COLT 2005, Bertinoro, Italy, June 27-30, 2005, Proceedings, pages 63–78. Springer, 2005.
[36] L. L. Thurstone. A law of comparative judgement. Psychological Reviews, 34:273–286.
[37] David P. Williamson and Anke van Zuylen. Deterministic algorithms for rank aggregation and other ranking and clustering problems. In Proceedings of the 5th Workshop on Approximation and Online Algorithms (WAOA) (to appear), 2007.
[38] J. Yellott. The relationship between Luce's choice axiom, Thurstone's theory of comparative judgment, and the double exponential distribution. Journal of Mathematical Psychology, 15:109–144, 1977.
Robust Near-Isometric Matching via Structured
Learning of Graphical Models
Julian J. McAuley
NICTA/ANU
[email protected]
Tibério S. Caetano
NICTA/ANU
[email protected]
Alexander J. Smola
Yahoo! Research*
[email protected]
Abstract
Models for near-rigid shape matching are typically based on distance-related features, in order to infer matches that are consistent with the isometric assumption.
However, real shapes from image datasets, even when expected to be related by
"almost isometric" transformations, are actually subject not only to noise but also,
to some limited degree, to variations in appearance and scale. In this paper, we
introduce a graphical model that parameterises appearance, distance, and angle
features and we learn all of the involved parameters via structured prediction. The
outcome is a model for near-rigid shape matching which is robust in the sense that
it is able to capture the possibly limited but still important scale and appearance
variations. Our experimental results reveal substantial improvements upon recent
successful models, while maintaining similar running times.
1 Introduction
Matching shapes in images has many applications, including image retrieval, alignment, and registration [1, 2, 3, 4]. Typically, matching is approached by selecting features for a set of landmark
points in both images; a correspondence between the two is then chosen such that some distance
measure between these features is minimised. A great deal of attention has been devoted to defining
complex features which are robust to changes in rotation, scale etc. [5, 6].1
An important class of matching problems is that of near-isometric shape matching. In this setting,
it is assumed that shapes are defined up to an isometric transformation (allowing for some noise),
and therefore distance features are typically used to encode the shape. Recent work has shown how
the isometric constraint can be exploited by a particular type of graphical model whose topology
encodes the necessary properties for obtaining optimal matches in polynomial time [11].
Another line of work has focused on structured learning to optimize graph matching scores, however
no explicit exploitation of the geometrical constraints involved in shape modeling is made [12].
In this paper, we combine the best of these two approaches into a single model. We produce an
exact, efficient model to solve near-isometric shape matching problems using not only isometry-invariant features, but also appearance and scale-invariant features. By doing so we can learn the
relative importances of variations in appearance and scale with regard to variations in shape per
se. Therefore, even knowing that we are in a near-isometric setting, we will capture the eventual
variations in appearance and scale into our matching criterion in order to produce a robust near-isometric matcher. In terms of learning, we introduce a two-stage structured learning approach to
address the speed and memory efficiency of this model.
* Alexander J. Smola was with NICTA at the time of this work.
1 We restrict our attention to this type of approach, i.e. that of matching landmarks between images. Some notable approaches deviate from this norm; see (for example) [7, 8, 9, 10].
Figure 1: The graphical model introduced in [11].
2 Background
2.1 Shape Matching
"Shape matching" can mean many different things, depending on the precise type of query one is interested in. Here we study the case of identifying an instance of a template shape ($S \subseteq T$) in a target scene ($U$) [1].² We assume that we know S, i.e. the points in the template that we want to query in the scene. Typically both T and U correspond to a set of "landmark" points, taken from a pair of images (common approaches include [6, 13, 14]).
For each point $t \in T$ and $u \in U$, a certain set of unary features are extracted (here denoted by $\phi(t)$, $\phi(u)$), which contain local information about the image at that point [5, 6]. If $y : S \to U$ is a generic mapping representing a potential match, the goal is then to find a mapping $\hat{y}$ which minimises the aggregate distance between corresponding features, i.e.
$$\hat{y} = f(S, U) = \operatorname{argmin}_{y} \sum_{i=1}^{|S|} c_1(s_i, y(s_i)), \quad \text{where } c_1(s_i, y(s_i)) = \|\phi(s_i) - \phi(y(s_i))\|_2^2. \qquad (1)$$
(here $\|\cdot\|_2$ denotes the L2 norm). For injective y, eq. (1) is a linear assignment problem, efficiently
solvable in cubic time. In addition to unary or first-order features, pairwise or second-order features
can be induced from the locations of the unary features. In this case eq. (1) would be generalised
to minimise an aggregate distance between pairwise features. This however induces an NP-hard
problem (quadratic assignment). Discriminative structured learning has recently been applied to
models of both linear and quadratic assignment in [12].
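As a concrete illustration (ours, not the authors' code), eq. (1) can be solved with the Hungarian algorithm once the unary features are stacked into matrices; scipy's linear_sum_assignment handles the rectangular case:

```python
# A minimal sketch (ours, not the authors' code) of the linear-assignment
# matching in eq. (1). The cost matrix holds c1(s_i, u_j) = ||phi(s_i) -
# phi(u_j)||^2; the Hungarian algorithm then recovers the injective mapping
# y minimising the total cost, in cubic time.
import numpy as np
from scipy.optimize import linear_sum_assignment

def linear_assignment_match(phi_S, phi_U):
    """phi_S: (|S|, d) template features; phi_U: (|U|, d) target features."""
    # Pairwise squared Euclidean distances between feature vectors.
    cost = ((phi_S[:, None, :] - phi_U[None, :, :]) ** 2).sum(axis=2)
    rows, cols = linear_sum_assignment(cost)  # optimal injective assignment
    return cols  # cols[i] is the index in U of y(s_i)

# Toy usage: 3 template points, 5 candidate targets, 8-dimensional features.
y = linear_assignment_match(np.random.rand(3, 8), np.random.rand(5, 8))
```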
2.2 Graphical Models
In isometric matching settings, one may suspect that it may not be necessary to include all pairwise
relations in quadratic assignment. In fact a recent paper [11] has shown that if only the distances as
encoded by the graphical model depicted in figure 1 are taken into account (nodes represent points
in S and states represent points in U), exact probabilistic inference in such a model can solve the
isometric problem optimally. That is, an energy function of the following form is minimised:3
$$\sum_{i=1}^{|S|} c_2(s_i, s_{i+1}, y(s_i), y(s_{i+1})) + c_2(s_i, s_{i+2}, y(s_i), y(s_{i+2})). \qquad (2)$$
In [11], it is shown that loopy belief propagation using this model converges to the optimal assignment, and that the number of iterations required before convergence is small in practice.
We will extend this model by adding a unary term, $c_1(s_i, y(s_i))$ (as in eq. (1)), and a third-order term, $c_3(s_i, s_{i+1}, s_{i+2}, y(s_i), y(s_{i+1}), y(s_{i+2}))$. Note that the graph topology remains the same.
2 Here T is the set of all points in the template scene, whereas S corresponds to those points in which we are interested. It is also important to note that we treat S as an ordered object in our setting.
3 $s_{i+1}$ should be interpreted as $s_{(i+1) \bmod |S|}$ (i.e. the points form a loop).
2.3 Discriminative Structured Learning
In practice, feature vectors may be very high-dimensional, and which components are "important" will depend on the specific properties of the shapes being matched. Therefore, we introduce a parameter, θ, which controls the relative importances of the various feature components. Note that θ is parameterising the matching criterion itself. Hence our minimisation problem becomes
$$\hat{y} = f(S, U; \theta) = \operatorname{argmax}_{y} \langle h(S, U, y), \theta \rangle \qquad (3)$$
$$\text{where } h(S, U, y) = -\sum_{i=1}^{|S|} \Phi(s_i, s_{i+1}, s_{i+2}, y(s_i), y(s_{i+1}), y(s_{i+2})). \qquad (4)$$
(y is a mapping from S to U, and Φ is a third-order feature vector; our specific choice is shown in section 3).⁴ In order to measure the performance of a particular weight vector, we use a loss function, $\Delta(\hat{y}, y^i)$, which represents the cost incurred by choosing the assignment $\hat{y}$ when the correct assignment is $y^i$ (our specific choice of loss function is described in section 4). To avoid overfitting, we also desire that θ is sufficiently "smooth". Typically, one uses the squared L2 norm, $\|\theta\|_2^2$, to penalise non-smooth choices of θ [15].
Learning in this setting now becomes a matter of choosing θ such that the empirical risk (average loss on all training instances) is minimised, but which is also sufficiently "smooth" (to prevent overfitting). Specifically, if we have a set of training pairs, $S^1 \ldots S^N$, $U^1 \ldots U^N$, with labelled matches $y^1 \ldots y^N$, then we wish to minimise
$$\underbrace{\frac{1}{N}\sum_{i=1}^{N} \Delta(f(S^i, U^i; \theta), y^i)}_{\text{empirical risk}} + \underbrace{\frac{\lambda}{2}\|\theta\|_2^2}_{\text{regulariser}}. \qquad (5)$$
Here λ (the regularisation constant) controls the relative importance of minimising the empirical risk against the regulariser. In our case, we simply choose λ such that the empirical risk on our validation set is minimised.
Solving (eq. 5) exactly is an extremely difficult problem and in practice is not feasible, since the loss is piecewise constant on the parameter θ. Here we capitalise on recent advances in large-margin
structured estimation [15], which consist of obtaining convex relaxations of this problem. Without
going into the details of the solution (see, for example, [15, 16]), it can be shown that a convex
relaxation of this problem can be obtained, which is given by
$$\min_{\theta} \; \frac{1}{N}\sum_{i=1}^{N} \xi_i + \frac{\lambda}{2}\|\theta\|_2^2 \qquad \text{(6a)}$$
subject to
$$\langle h(S^i, U^i, y^i) - h(S^i, U^i, y), \theta \rangle \ge \Delta(y, y^i) - \xi_i \quad \text{for all } i \text{ and } y \in \mathcal{Y} \qquad \text{(6b)}$$
(where $\mathcal{Y}$ is the space of all possible mappings). It can be shown that for the solution of the above problem, we have that $\xi_i^* \ge \Delta(f(S^i, U^i; \theta), y^i)$. This means that we end up minimising an upper
bound on the loss, instead of the loss itself.
Solving (6) requires only that we are able, for any value of θ, to find
$$\operatorname{argmax}_{y} \left[ \langle h(S^i, U^i, y), \theta \rangle + \Delta(y, y^i) \right]. \qquad (7)$$
In other words, for each value of θ, we are able to identify the mapping which is consistent with the model (eq. 3), yet incurs a high loss. This process is known as "column generation" [15, 16]. As we will define our loss as a sum over the nodes, solving (eq. 7) is no more difficult than solving (eq. 3).
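To make the training loop concrete, here is a schematic sketch (ours; the authors use the solver of [16]) of stochastic subgradient descent on the upper bound (6), where `infer_worst` stands in for the column-generation step of eq. (7) and `feat` for the feature map h of eq. (4); both are hypothetical callables supplied by the caller.

```python
# A schematic sketch (ours; the authors use the solver of [16]) of learning
# theta by stochastic subgradient descent on the convex upper bound (6).
import numpy as np

def train(pairs, theta, infer_worst, feat, lam=1e-3, eta=1e-2, epochs=10):
    for _ in range(epochs):
        for S, U, y_true in pairs:
            y_star = infer_worst(S, U, y_true, theta)  # loss-augmented MAP, eq. (7)
            # Subgradient of the regularised hinge upper bound on the loss:
            grad = lam * theta - (feat(S, U, y_true) - feat(S, U, y_star))
            theta = theta - eta * grad
    return theta
```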
4 We have expressed (eq. 3) as a maximisation problem as a matter of convention; this is achieved simply by negating the cost function in (eq. 4).
Figure 2: Left: the (ordered) set of points in our template shape (S). Centre: connections between immediate neighbours. Right: connections between neighbour's neighbours (our graphical model).
3 Our Model
Although the model of [11] solves isometric matching problems optimally, it provides no guarantees
for near-isometric problems, as it only considers those compatibilities which form cliques in our
graphical model. However, we are often only interested in the boundary of the object: if we look at
the instance of the model depicted in figure 2, it seems to capture exactly the important dependencies;
adding additional dependencies between distant points (such as the duck's tail and head) would be
unlikely to contribute to this model.
With this in mind, we introduce three new features (for brevity we use the shorthand $y_i = y(s_i)$):
$\phi_1(s_1, s_2, y_1, y_2) = (d_1(s_1, s_2) - d_1(y_1, y_2))^2$, where $d_1(a, b)$ is the Euclidean distance between a and b, scaled according to the width of the target scene.
$\phi_2(s_1, s_2, s_3, y_1, y_2, y_3) = (d_2(s_1, s_2, s_3) - d_2(y_1, y_2, y_3))^2$, where $d_2(a, b, c)$ is the Euclidean distance between a and b scaled by the average of the distances between a, b, and c.
$\phi_3(s_1, s_2, s_3, y_1, y_2, y_3) = (\angle(s_1, s_2, s_3) - \angle(y_1, y_2, y_3))^2$, where $\angle(a, b, c)$ is the angle between a and c, w.r.t. b.⁵
We also include the unary features $\phi_0(s_1, y_1) = (\phi(s_1) - \phi(y_1))^2$ (i.e. the pointwise squared difference between $\phi(s_1)$ and $\phi(y_1)$). $\phi_1$ is exactly the feature used in [11], and is invariant to isometric transformations (rotation, reflection, and translation); $\phi_2$ and $\phi_3$ capture triangle similarity, and are thus also invariant to scale. In the context of (eq. 4), we have
$$\Phi(s_1, s_2, s_3, y_1, y_2, y_3) := \big( \phi_0(s_1, y_1),\; \phi_1(s_1, s_2, y_1, y_2) + \phi_1(s_1, s_3, y_1, y_3),\; \phi_2(s_1, s_2, s_3, y_1, y_2, y_3) + \phi_2(s_1, s_3, s_2, y_1, y_3, y_2),\; \phi_3(s_1, s_2, s_3, y_1, y_2, y_3) \big). \qquad (8)$$
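A sketch of these geometric features for 2-D landmarks follows (ours; the scene-width scaling of d1 is omitted for brevity):

```python
# A sketch (ours) of the three geometric features for 2-D landmark
# coordinates; the scene-width scaling of d1 is omitted for brevity.
import numpy as np

def dist(a, b):
    return np.linalg.norm(np.subtract(a, b))

def phi1(s1, s2, y1, y2):
    return (dist(s1, s2) - dist(y1, y2)) ** 2

def d2(a, b, c):
    # Distance between a and b, scaled by the mean pairwise distance of the
    # triangle (a, b, c) -- this is what makes the feature scale-invariant.
    mean_dist = np.mean([dist(a, b), dist(b, c), dist(a, c)])
    return dist(a, b) / mean_dist

def angle(a, b, c):
    # Angle between a and c, with respect to b.
    u, v = np.subtract(a, b), np.subtract(c, b)
    cosang = u.dot(v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.arccos(np.clip(cosang, -1.0, 1.0))

def phi2(s1, s2, s3, y1, y2, y3):
    return (d2(s1, s2, s3) - d2(y1, y2, y3)) ** 2

def phi3(s1, s2, s3, y1, y2, y3):
    return (angle(s1, s2, s3) - angle(y1, y2, y3)) ** 2
```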
In practice, landmark detectors often identify several hundred points [6, 17], which is clearly impractical for an $O(|S||U|^3)$ method (|U| is the number of landmarks in the target scene). To address this, we adopt a two stage learning approach: in the first stage, we learn only unary compatibilities, exactly as is done in [12]. During the second stage of learning, we collapse the first-order feature vector into a single term, namely
$$\phi_0'(s_1, y_1) = \langle \theta_0, \phi_0(s_1, y_1) \rangle \qquad (9)$$
($\theta_0$ is the weight vector learned during the first stage). We now perform learning for the third-order model, but consider only the p "most likely" matches for each node, where the likelihood is simply determined using $\phi_0'(s_1, y_1)$. This reduces the performance and memory requirements to $O(|S|p^3)$.
A consequence of using this approach is that we must now tune two regularisation constants; this is
not an issue in practice, as learning can be performed quickly using this approach.6
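A sketch of the pruning step (ours), treating the collapsed unary score of eq. (9) as a cost, so the p lowest-cost candidates are retained for each template point:

```python
# A sketch (ours) of the pruning step: the collapsed unary score of eq. (9)
# is treated as a cost, and the p lowest-cost candidates are retained for
# each template point, giving the O(|S| p^3) state space.
import numpy as np

def top_p_candidates(Phi0, theta0, p=20):
    """Phi0: (|S|, |U|, d) unary feature tensor; returns (|S|, p) indices."""
    scores = Phi0 @ theta0                     # collapsed unary costs
    return np.argsort(scores, axis=1)[:, :p]   # p best matches per point
```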
5 Using features of such different scales can be an issue for regularisation; in practice we adjusted these features to have roughly the same scale. For full details, our implementation is available at (not included for blind review).
6 In fact, even in those cases where a single stage approach was tractable (such as the experiment in section 4.1), we found that the two stage approach worked better. Typically, we required much less regularity during the second stage, possibly because the higher order features are heterogeneous.
Figure 3: Left: The adjacency structure of the graph (top); the boundary of our "shape" (centre);
the topology of our graphical model (bottom). Right: Example matches using linear assignment
(top, 6/30 mismatches), quadratic assignment (centre, 4/30 mismatches), and the proposed model
(bottom, no mismatches). The images shown are the 12th and 102nd frames in our sequence. Correct
matches are shown in green, incorrect matches in red. All matches are reported after learning.
4 Experiments
4.1 House Data
In our first experiment, we compare our method to those of [11] and [12]. Both papers report the
performance of their methods on the CMU "house" sequence, a sequence of 111 frames of a toy
house, with 30 landmarks identified in each frame.7 As in [12], we compute the Shape Context
features for each of the 30 points [5].
In addition to the unary model of [12], a model based on quadratic assignment is also presented, in
which pairwise features are determined using the adjacency structure of the graphs. Specifically, if a
pair of points (p1 , p2 ) in the template scene is to be matched to (q1 , q2 ) in the target, there is a feature
which is 1 if there is an edge between p1 and p2 in the template, and an edge between q1 and q2 in
the target (and 0 otherwise). We also use such a feature for this experiment, however our model only
considers matchings for which (p1 , p2 ) forms an edge in our graphical model (see figure 3, bottom
left). The adjacency structure of the graphs is determined using the Delaunay triangulation, (figure
3, top left).
As in [11], we compare pairs of images with a fixed baseline (separation between frames). For our
loss function, $\Delta(\hat{y}, y^i)$, we used the normalised Hamming loss, i.e. the proportion of mismatches.
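Concretely, the normalised Hamming loss is just the fraction of mismatched assignments; a minimal sketch (ours):

```python
# A minimal sketch (ours) of the normalised Hamming loss Delta(y_hat, y):
# the fraction of template points assigned to the wrong target point.
import numpy as np

def hamming_loss(y_hat, y_true):
    return np.mean(np.asarray(y_hat) != np.asarray(y_true))
```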
Figure 4 shows our performance on this dataset, as the baseline increases. On the left we show the
performance without learning, for which our model exhibits the best performance by a substantial
margin.8
Our method is also the best performing after learning; in fact, we achieve almost zero error for all
but the largest baselines (at which point our model assumptions become increasingly violated, and
we have less training data). In figure 5, we see that the running time of our method is similar to the
quadratic assignment method of [12]. To improve the running time, we also show our results with
p = 10, i.e. for each point in the template scene, we only consider the 10 "most likely" matches, using
the weights from the first stage of learning. This reduces the running time by more than an order of
7 http://vasc.ri.cmu.edu/idb/html/motion/house/index.html
8 Interestingly, the quadratic method of [12] performs worse than their unary method; this is likely because the relative scale of the unary and quadratic features is badly tuned before learning, and is indeed similar to what the authors report. Furthermore, the results we present for the method of [12] after learning are much better than what the authors report: in that paper, the unary features are scaled using a pointwise exponent ($\propto \exp(-|\phi_a - \phi_b|^2)$), whereas we found that scaling the features linearly ($|\phi_a - \phi_b|^2$) worked better.
[Figure 4 plots: normalised Hamming loss on the test set vs. baseline, for the house data without learning (left panel) and with learning (right panel).]
Figure 4: Comparison of our technique against that of [11] ("point matching"), and [12] ("linear", "quadratic"). The performance before learning is shown on the left, the performance after learning is shown on the right. Our method exhibits the best performance both before and after learning (note the different scales of the two plots). Error bars indicate standard error.
[Figure 5 plot: normalised Hamming loss on the test set vs. average running time (seconds, logarithmic scale), for the house data at baseline = 60.]
Figure 5: The running time and performance of our method, compared to those of [12] (note that the
method of [11] has running time identical to our method). Our method is run from 1 to 20 iterations
of belief propagation, although the method appears to converge in fewer than 5 iterations.
magnitude, bringing it closer to that of linear assignment; even this model achieves approximately
zero error up to a baseline of 50.
Finally, figure 6 (left) shows the weight vector of our model, for a baseline of 60. The first 60
weights are for the Shape Context features (determined during the first stage of learning), and the
final 5 show the weights from our second stage of learning (the weights correspond to the first-order
features, distances, adjacencies, scaled distances, and angles, respectively; see section 3). We can provide some explanation of the learned weights: the Shape Context features are separated into 5 radial and 12 angular bins; the fact that there are peaks around the 16th and 24th features indicates
that some particular radial bins are more important than the others; the fact that several consecutive
bins have low weight indicates that some radial bins are unimportant (etc.). It is much more difficult
to reason about the second stage of learning, as the features have different scales, and cannot be
compared directly; however, it appears that all of the higher-order features are important to our
model.
4.2 Bikes Data
For our second experiment, we used images of bicycles from the Caltech 256 Dataset [18]. Bicycles
are reasonably rigid objects, meaning that matching based on their shape is logical. Although the
images in this dataset are fairly well aligned, they are subject to reflections as well as some scaling
and shear. For each image in the dataset, we detected landmarks automatically, and six points on
the frame were hand-labelled (see figure 7). Only shapes in which these interest points were not
occluded were used, and we only included images that had a background; in total, we labelled 44
6
[Figure 6 plots: the learned first/higher order weight vectors (importance vs. index) for the house data (baseline = 60, left) and the bikes data (right).]
Figure 6: Left: The weight vector of our method after learning, for the "house" data. The first 60 weights are for the Shape Context features from the first stage of learning; the final 5 weights are for the second stage of learning. Right: The same plot, for the "bikes" data.
Figure 7: Top: A selection of our training images. Bottom: An example match from our test set.
Left: The template image (with the shape outlined in green, and landmark points marked in blue).
Centre: The target image, and the match (in red) using unary features with the affine invariant/SIFT
model of [17] after learning (endpoint error = 0.27). Right: the match using our model after learning
(endpoint error = 0.04).
images. The first image was used as the "template", the other 43 were used as targets. Thus we are
learning to match bicycles similar to the chosen template.
Initially, we used the SIFT landmarks and features as described in [6]. Since this approach typically
identifies several hundred landmarks, we set p = 20 for this experiment (i.e. we consider the 20
most likely points). Since we cannot hope to get exact matches, we use the endpoint error instead
of the normalised Hamming loss, i.e. we reward points which are close to the correct match.9 Table
1 reveals that the performance of this method is quite poor, even with the higher-order model, and
furthermore reveals no benefit from learning. This may be explained by the fact that although the
SIFT features are invariant to scale and rotation, they are not invariant to reflection.
In [17], the authors report that the SIFT features can provide good matches in such cases, as long as
landmarks are chosen which are locally invariant to affine transformations. They give a method for
identifying affine-invariant feature points, whose SIFT features are then computed.10 We achieve
much better performance using this method, and also observe a significant improvement after learning. Figure 7 shows an example match using both the unary and higher-order techniques.
Finally, figure 6 (right) shows the weights learned for this model. Interestingly, the first-order term
during the second stage of learning has almost zero weight. This must not be misinterpreted: during
the second stage, the response of each of the 20 candidate points is so similar that the first-order features are simply unable to convey any new information; yet they are still very useful in determining
the 20 candidate points.
9 Here the endpoint error is just the average Euclidean distance from the correct label, scaled according to the width of the image.
10 We used publicly available implementations of both methods.
Table 1: Performance on the "bikes" dataset. The endpoint error is reported, with standard errors in parentheses (note that the second-last column, "higher-order", uses the weights from the first stage of learning, but not the second).
Detector/descriptor: SIFT [6]
             unary          + learning     higher-order   + learning
Training:    0.335 (0.038)  0.319 (0.034)  0.234 (0.047)  0.182 (0.031)
Validation:  0.343 (0.027)  0.329 (0.019)  0.236 (0.031)  0.257 (0.033)
Testing:     0.351 (0.024)  0.312 (0.015)  0.302 (0.045)  0.311 (0.039)

Detector/descriptor: Affine invariant/SIFT [17]
             unary          + learning     higher-order   + learning
Training:    0.322 (0.018)  0.280 (0.016)  0.233 (0.042)  0.244 (0.042)
Validation:  0.337 (0.015)  0.298 (0.019)  0.245 (0.028)  0.229 (0.032)
Testing:     0.332 (0.024)  0.339 (0.028)  0.277 (0.035)  0.231 (0.034)
5 Conclusion
We have presented a model for near-isometric shape matching which is robust to typical additional
variations of the shape. This is achieved by performing structured learning in a graphical model that
encodes features with several different types of invariances, so that we can directly learn a ?compound invariance? instead of taking for granted the exclusive assumption of isometric invariance.
Our experiments revealed that structured learning with a principled graphical model that encodes
both the rigid shape as well as non-isometric variations gives substantial improvements, while still
maintaining competitive performance in terms of running time.
Acknowledgements: We thank Marconi Barbosa and James Petterson for proofreading. NICTA
is funded by the Australian Government's Backing Australia's Ability initiative, and the Australian Research Council's ICT Centre of Excellence program.
References
[1] Belongie, S., Malik, J., Puzicha, J.: Shape matching and object recognition using shape contexts. PAMI 24 (2002) 509-522
[2] Mori, G., Belongie, S., Malik, J.: Shape contexts enable efficient retrieval of similar shapes. In: CVPR. (2001) 723-730
[3] Mori, G., Malik, J.: Estimating human body configurations using shape context matching. In: ECCV. (2002) 666-680
[4] Frome, A., Huber, D., Kolluri, R., Bulow, T., Malik, J.: Recognizing objects in range data using regional point descriptors. In: ECCV. (2004)
[5] Belongie, S., Malik, J.: Matching with shape contexts. In: CBAIVL00. (2000) 20-26
[6] Lowe, D.G.: Object recognition from local scale-invariant features. In: ICCV. (1999) 1150-1157
[7] Felzenszwalb, P.F., Huttenlocher, D.P.: Pictorial structures for object recognition. IJCV 61 (2005) 55-79
[8] Felzenszwalb, P.F., Schwartz, J.D.: Hierarchical matching of deformable shapes. In: CVPR. (2007)
[9] LeCun, Y., Huang, F.J., Bottou, L.: Learning methods for generic object recognition with invariance to pose and lighting. CVPR (2004) 97-104
[10] Carmichael, O., Hebert, M.: Shape-based recognition of wiry objects. PAMI 26 (2004) 1537-1552
[11] McAuley, J.J., Caetano, T.S., Barbosa, M.S.: Graph rigidity, cyclic belief propagation and point pattern matching. PAMI 30 (2008) 2047-2054
[12] Caetano, T., Cheng, L., Le, Q., Smola, A.: Learning graph matching. In: ICCV. (2007) 1-8
[13] Canny, J.: A computational approach to edge detection. In: RCV. (1987) 184-203
[14] Smith, S.: A new class of corner finder. In: BMVC. (1992) 139-148
[15] Tsochantaridis, I., Hofmann, T., Joachims, T., Altun, Y.: Support vector machine learning for interdependent and structured output spaces. In: ICML. (2004)
[16] Teo, C., Le, Q., Smola, A., Vishwanathan, S.: A scalable modular convex solver for regularized risk minimization. In: KDD. (2007)
[17] Mikolajczyk, K., Schmid, C.: Scale and affine invariant interest point detectors. IJCV 60 (2004) 63-86
[18] Griffin, G., Holub, A., Perona, P.: Caltech-256 object category dataset. Technical Report 7694, California Institute of Technology (2007)
Optimal Response Initiation:
Why Recent Experience Matters
Matt Jones
Dept. of Psychology &
Institute of Cognitive Science
University of Colorado
Michael C. Mozer
Dept. of Computer Science &
Institute of Cognitive Science
University of Colorado
Sachiko Kinoshita
MACCS &
Dept. of Psychology
Macquarie University
[email protected]
[email protected]
[email protected]
Abstract
In most cognitive and motor tasks, speed-accuracy tradeoffs are observed: Individuals can respond slowly and accurately, or quickly yet be prone to errors.
Control mechanisms governing the initiation of behavioral responses are sensitive not only to task instructions and the stimulus being processed, but also to the
recent stimulus history. When stimuli can be characterized on an easy-hard dimension (e.g., word frequency in a naming task), items preceded by easy trials
are responded to more quickly, and with more errors, than items preceded by hard
trials. We propose a rationally motivated mathematical model of this sequential
adaptation of control, based on a diffusion model of the decision process in which
difficulty corresponds to the drift rate for the correct response. The model assumes that responding is based on the posterior distribution over which response
is correct, conditioned on the accumulated evidence. We derive this posterior as
a function of the drift rate, and show that higher estimates of the drift rate lead
to (normatively) faster responding. Trial-by-trial tracking of difficulty thus leads
to sequential effects in speed and accuracy. Simulations show the model explains
a variety of phenomena in human speeded decision making. We argue this passive statistical mechanism provides a more elegant and parsimonious account than
extant theories based on elaborate control structures.
1 Introduction
Consider the task of naming the sum of two numbers, e.g., 14+8. Given sufficient time, individuals
will presumably produce the correct answer. However, under speed pressure, mistakes occur. In
most cognitive and motor tasks, speed-accuracy tradeoffs are observed: Individuals can respond
accurately but slowly, or quickly but be prone to errors. Speed-accuracy tradeoffs are due to the fact
that evidence supporting the correct response accumulates gradually over time (Rabbitt & Vyas,
1970; Gold & Shadlen, 2002). Responses initiated earlier in time will be based on lower-quality
information, and hence less likely to be correct.
On what basis do motor systems make the decision to initiate a response? Recent theories have
cast response initiation in terms of optimality (Bogacz et al., 2006), where optimality might be
defined as maximizing reward per unit time, or minimizing a linear combination of latency and error
rate. Although optimality might be defined in various ways, all definitions require an estimate of
the probability that each candidate response will be correct. We argue that this estimate in turn
requires knowledge of the task difficulty, or specifically, the rate at which evidence supporting the
correct response accumulates over time. If a task is performed repeatedly, task difficulty can be
estimated over a series of trials, suggesting that optimal decision processes should show sequential
effects, in which performance on one trial depends on the difficulty of recent trials. We describe an
experimental paradigm that offers behavioral evidence of sequential effects in response initiation.
1
0
1
0.8
0.6
0.4
^
2
1
0.8
P(R* | X)
P(R*|X)
evidence
4
0.2
?2
0
50
100
0
0
50
time
100
0.6
0.4
0.2
0
0
time
50
100
time
Figure 1: An illustration of the MDM. Left panel: evidence accumulation for a 20-AFC task as a
function of time, with ?R? = .04, ?i6=R? = 0, ? = .15. Middle panel: the posterior over responses,
P (R? |X), with a = .04 and b = 0, based on the diffusion trace in the left panel. Right panel: the
? ? |X), assuming a
posterior over responses, P (R
? = .07 and ?b = .02 for the same diffusion trace.
We summarize key phenomena from this paradigm, and show that these phenomena are predicted
by a model of response initiation. Our work achieves two goals: (1) offering a better understanding
of and a computational characterization of control processes involved in response initiation, and (2)
offering a rational basis for sequential effects in simple stimulus-response tasks.
2 Models of Decision Making
Neurophysiological and psychological data (e.g., Gold & Shadlen, 2002; Ratcliff, Cherian, & Segraves, 2003) have provided converging evidence for a theory of cortical decision making, known as
the diffusion decision model or DDM (see recent review by Ratcliff & McKoon, 2007). The DDM is
formulated for two-alternative forced choice (2AFC) decisions. A noisy neural integrator accumulates evidence over time; positive evidence supports one response, negative evidence the other. The
model's dynamics are represented by a differential equation, $dx = \mu\,dt + w$, where x is the accumulated evidence over time t, μ is the relative rate of evidence supporting one response over the other (positive or negative, depending on the balance of evidence), and w is white noise, $w \sim N(0, \sigma^2 dt)$. The variables μ and σ are called the drift and diffusion rates. A response is initiated when the accumulated evidence reaches a positive or negative threshold, i.e., $x > \theta_+$ or $x < \theta_-$. The DDM
implements the optimal decision strategy under various criteria of optimality (Bogacz et al., 2006).
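As a concrete sketch (ours; the parameter values echo Figure 1's μ = .04, σ = .15, while the threshold and step size are arbitrary assumptions), the DDM dynamics can be simulated by Euler discretisation:

```python
# A minimal Euler-discretised simulation (ours) of the DDM dynamics above.
import numpy as np

def simulate_ddm(mu, sigma, theta, tau=1.0, max_t=10000, rng=None):
    rng = rng if rng is not None else np.random.default_rng()
    x, t = 0.0, 0.0
    while abs(x) < theta and t < max_t:
        x += mu * tau + rng.normal(0.0, sigma * np.sqrt(tau))  # dx = mu dt + w
        t += tau
    return (1 if x > 0 else -1), t  # chosen response and decision time

choice, rt = simulate_ddm(mu=0.04, sigma=0.15, theta=2.0)
```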
Tasks involving n alternative responses (nAFC) can be modeled by generalizing the DDM to have
one integrator per possible response (Bogacz & Gurney, 2007; Vickers, 1970). We refer to this
generalized class of models as multiresponse diffusion models or MDM. Consider one example of
an nAFC task: naming the color of a visually presented color patch. The visual system produces
a trickle of evidence for the correct or target response, R*. This evidence supports the target response via a positive drift rate, $\mu_{R^*}$, whereas the drift rates of the other possible color names, $\{\mu_i \mid i \ne R^*\}$, are zero. (We assume no similarity among the stimuli, e.g., an aqua patch provides no evidence for the response "blue", although our model could be extended in this way.) The left panel of Figure 1 illustrates typical dynamics of the MDM. The abscissa represents processing
time relative to the onset of the color patch, and each curve represents one integrator (color name).
2.1 A Decision Rule for the Multiresponse Diffusion Model
Although the DDM decision rule is optimal, no unique optimal decision rule exists for the multiple-response case (Bogacz & Gurney, 2007; Dragelin et al., 1999). Rules based on an evidence criterion (analogous to the DDM decision rule) turn out to be inadequate. Instead, candidate rules are based on the posterior probability that a particular response is correct given the observed evidence up to the current time, $P(R^* = r|X)$. In our notation, R* is the random variable denoting the target response, r is a candidate response among the n alternatives, and $X = \{x_i(j\tau) \mid i = 1...n,\; j = 0...T/\tau\}$ is a collection of discrete samples of the multivariate diffusion process observed up to the current time T. The simulations reported here use a decision rule that initiates responding when the accuracy of the response is above a threshold, π:
$$\text{If } \exists r \text{ such that } P(R^* = r|X) \ge \pi, \text{ then initiate response } r. \qquad (1)$$
This rule has been shown to minimize decision latency in the limit of π → 1 (Dragelin et al., 1999). However, our model's predictions are not tied to this particular rule. We emphasize that any sensible rule requires estimation of $P(R^* = r|X)$, and we focus on how the phenomena explained by our
model derive from the properties of this posterior distribution.
Baum and Veeravalli (1994; see also Bogacz & Gurney, 2007) derive $P(R^* = r|X)$ for the case where all nontargets have the same drift rate, $\mu_{\text{nontgt}}$, the target has drift rate $\mu_{\text{tgt}}$, and $\mu_{\text{nontgt}}$, $\mu_{\text{tgt}}$, and σ are known. (We introduce the $\mu_{\text{tgt}}$ and $\mu_{\text{nontgt}}$ notation to refer to these drift rates even in the absence of information about R*.) We extend the Baum and Veeravalli result to the case where $\mu_{\text{tgt}}$ is an unknown random variable that must be estimated by the observer. The diffusion rate of a random walk, $\sigma^2$, can be determined with arbitrary precision from a single observed trajectory, but the drift rate cannot (see Supplementary Material, available at http://matt.colorado.edu/papers.htm). Therefore, estimating statistics of $\mu_{\text{tgt}}$ is critical to achieving optimal performance.
Given a sequence of discrete observations from a diffusion process, $x = \{x(j\tau) \mid j = 0...T/\tau\}$, we can use the independence of increments to a diffusion process with known drift and diffusion rates, $x(t_2) - x(t_1) \sim N\big((t_2 - t_1)\mu, (t_2 - t_1)\sigma^2\big)$, to calculate the likelihood of x:
$$P(x|\mu, \sigma) \propto \exp\big( (\Delta x(T)\mu - \mu^2 T/2)/\sigma^2 \big), \qquad (2)$$
where $\Delta x(T) = x(T) - x(0)$ is a sufficient statistic for estimating μ.
Consider the case where the drift rate of the target is a random variable, $\mu_{\text{tgt}} \sim N(a, b^2)$, and the drift rate of all nontargets, $\mu_{\text{nontgt}}$, is zero. Using Equation 2 and integrating out $\mu_{\text{tgt}}$, the posterior over response alternatives can be determined (see Supplementary Material):
$$P(R^* = r|X, a, b) \propto \exp\left( \frac{b^2 \Delta x_r(T)^2 + 2a\sigma^2 \Delta x_r(T)}{2\sigma^2(\sigma^2 + T b^2)} \right). \qquad (3)$$
The middle panel of Figure 1 shows $P(R^*|X, a, b)$ as a function of processing time for the diffusion trace in the left panel, when the true drift rate is known ($a = \mu_{\text{tgt}}$ and b = 0).
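The posterior in eq. (3) is cheap to evaluate; the sketch below (ours; the function name is our own, and the log-domain normalisation is an addition for numerical stability) computes it from the accumulated evidence of each integrator:

```python
# A sketch (ours) of the response posterior in eq. (3), computed from the
# accumulated evidence of each of the n integrators.
import numpy as np

def response_posterior(dx, T, a, b, sigma):
    """dx: length-n array of Delta x_r(T); returns P(R* = r | X, a, b)."""
    log_p = (b**2 * dx**2 + 2 * a * sigma**2 * dx) \
            / (2 * sigma**2 * (sigma**2 + T * b**2))
    log_p -= log_p.max()        # shift log-probabilities before exponentiating
    p = np.exp(log_p)
    return p / p.sum()

# E.g., 20 alternatives; the target (index 0) has accumulated most evidence.
p = response_posterior(np.array([2.0] + [0.1] * 19),
                       T=50, a=.04, b=.02, sigma=.15)
```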
2.2 Estimating Drift
To recap, we have argued that optimal response initiation in nAFC tasks requires calculation of
the posterior response distribution, which in turn depends on assumptions about the drift rate of the
target response. We proposed a decision rule based on a probabilistic framework (Equations 1 and 3)
that permits uncertainty in the drift rate, but requires a characterization of the prior distribution of
this variable.
We assume that the parameters of this distribution, a and b, are unknown. Consequently, the observer cannot compute $P(R^*|X)$, but must use an approximation, $\hat{P}(R^*|X)$, based on estimates $\hat{a}$ and $\hat{b}$. When $\mu_{\text{tgt}}$ is not representative of the assumed distribution $N(\hat{a}, \hat{b}^2)$, performance of the model will be impaired, as illustrated by a comparison of the center and right panels of Figure 1. In the center panel, $\mu_{\text{tgt}} = .04$ is known; in the right panel, $\mu_{\text{tgt}}$ is not representative of the assumed distribution. The consequence of this mismatch is that, for the criterion indicated by the dashed horizontal line, the model chooses the wrong response.
We turn now to the estimation of the model's drift distribution parameters, $\hat{a}$ and $\hat{b}$. Consider a sequence of trials, k = 1...K, in which the same decision task is performed with different stimuli, and the drift rate of the target response on trial k is $\mu(k)$. Following each trial, the drift rate can also be estimated: $\hat{\mu}_{\text{tgt}}(k) = \Delta x_{R^*}(T_k)/T_k$, where $T_k$ is the time taken to respond on trial k. If the task environment changes slowly, the drift rates over trials will be autocorrelated, and the drift distribution parameters on trial k can be estimated from past trial history, $\{\hat{\mu}_{\text{tgt}}(1)...\hat{\mu}_{\text{tgt}}(k-1)\}$. The weighting of past history should be based on the strength of the autocorrelation. Using
maximum likelihood estimation of a and b with an exponential weighting on past history, one obtains
$$\hat{a}(k) = v_1(k)/v_0(k), \quad \text{and} \quad \hat{b}(k) = [v_2(k)/v_0(k) - \hat{a}(k)^2]^{0.5}, \qquad (4)$$
where k is an index over trials, and the $\{v_i(k)\}$ are moment statistics of the drift distribution, updated following each trial using an exponential weighting constant, $\lambda \in [0, 1]$:
$$v_i(k) = \lambda v_i(k-1) + \hat{\mu}_{\text{tgt}}(k-1)^i. \qquad (5)$$
This update rule is an efficient approximation to full hierarchical Bayesian inference of a and b.
When combined with Equations 1 and 3 it determines the model?s response on the current trial.
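A sketch of the trial-by-trial update of eqs. (4)-(5) follows (ours; the clamp of the variance term at zero and the choice of initial moments, e.g., seeded from practice trials, are our own assumptions):

```python
# A sketch (ours) of the trial-by-trial drift estimation in eqs. (4)-(5).
import numpy as np

def update_drift_estimate(v, dx_target, T, lam):
    """v = [v0, v1, v2]; dx_target = Delta x_{R*}(T) on the completed trial."""
    mu_hat = dx_target / T                        # per-trial drift estimate
    v = [lam * vi + mu_hat ** i for i, vi in enumerate(v)]  # eq. (5)
    a_hat = v[1] / v[0]                           # eq. (4)
    b_hat = np.sqrt(max(v[2] / v[0] - a_hat ** 2, 0.0))
    return v, a_hat, b_hat
```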
3 The Blocking Effect
The optimal decision framework we have proposed naturally leads to the prediction that performance
on the current trial is influenced by drift rates observed on recent trials. Because drift rates determine
the signal-to-noise ratio of the diffusion process, they reflect the difficulty of the task at hand. Thus,
the framework predicts that an optimal decision maker should show sequential effects based on
recent trial difficulty. We now turn to behavioral data consistent with this prediction.
In any behavioral task, some items are intrinsically easier than others, e.g., 10+3 is easier than 5+8,
whether due to practice or the number of cognitive operations required to determine the sum. By
definition, individuals have faster response times (RTs) and lower error rates to easy items. However,
the RTs and error rates are modulated by the composition of a trial block. Consider an experimental
paradigm consisting of three blocks: just easy items (pure easy), just hard items (pure hard), and
a mixture of both in random order (mixed). When presented in a mixed block, easy items slow
down relative to a pure block and hard items speed up. This phenomenon, known as the blocking
effect (not to be confused with blocking in associative learning), suggests that the response-initiation
processes use information not only from the current stimulus, but also from the stimulus environment
in which it is operating. Table 1 shows a typical blocking result for a word-reading task, where word
frequency is used to manipulate difficulty. We summarize the central, robust phenomena of the
blocking-effect literature (e.g., Kiger & Glass, 1981; Lupker, Brown & Columbo, 1997; Lupker,
Kinoshita, Coltheart, & Taylor, 2000; Taylor & Lupker, 2001).
P1. Blocking effects occur across diverse paradigms, including naming, arithmetic verification
and calculation, target search, and lexical decision. They are obtained when stimulus or response
characteristics alternate from trial to trial. Thus, the blocking effect is not associated with a specific
stimulus or response pathway, but rather is a general phenomenon of response initiation.
P2. A signature of the effect concerns the relative magnitudes of easy-item slowdown and hard-item
speedup. Typically, slowdown and speedup are of equal magnitude. Significantly more speedup
than slowdown is never observed. However, in some paradigms (e.g., lexical decision, priming)
significantly more slowdown than speedup can be observed.
P3. The RT difference bewteen easy and hard items does not fully disappear in mixed blocks. Thus,
RT depends on both the stimulus type and the composition of the block.
P4. Speed-accuracy tradeoffs are observed: A drop in error rate accompanies easy-item slowdown,
and a rise in error rate accompanies hard-item speedup.
P5. The effects of stimulus history are local, i.e., the variability in RT on trial k due to trial k − l
decreases rapidly with l. Dependencies for l > 2 are not statistically reliable (Taylor & Lupker,
2001), although the experiments may not have had sufficient power to detect weak dependencies.
P6. Overt responses are necessary for obtaining blocking effects, but overt errors are not.
4
Explanations for the Blocking Effect
The blocking effect demonstrates that the response time depends not only on information accruing
from the current stimulus, but also on recent stimuli in the trial history. Therefore, any explanation
of the blocking effect must specify the manner by which response initiation processes are sensitive
to the composition of a block. Various mechanisms of control adaptation have been proposed.
Domain-specific mechanisms. Many of the proposed mechanisms are domain-specific. For example,
Rastle and Coltheart (1999) describe a model with two routes to naming, one lexical and one nonlexical, and posit that the composition of a block affects the emphasis that is placed on the output of
one route versus the other. Because of the ubiquity of blocking effects across tasks, domain-specific
Table 1: RTs and Error Rates for Blocking study of Lupker, Brown, & Columbo (1997, Expt. 3)
        Pure Block       Mixed Block      Difference
Easy    488 ms (3.6%)    513 ms (1.8%)    +25 ms (-1.8%)
Hard    583 ms (12.0%)   559 ms (12.2%)   -24 ms (+0.2%)
accounts are not compelling. Parsimony is achieved only if the adaptation mechanism is localized
to a stage of response initiation common across stimulus-response tasks.
Rate of convergence. Kello and Plaut (2003) have proposed that control processes adjust a gain
parameter on units in a dynamical connectionist model. Increasing the gain results in more rapid
convergence, but also a higher error rate. Simulations of this model have explained the basic blocking effect, but not the complete set of phenomena we listed previously. Of greater concern is the
fact that the model predicts the time taken to utter the response (when the response mode is verbal)
decreases with increased speed pressure, which does not appear to be true (Damian, 2003).
Evidence criterion. A candidate mechanism with intuitive appeal is the trial-to-trial adjustment of an
evidence criterion in the MDM, such that the easier the previous trials are, the lower the criterion is
set. This strategy results in the lowest criterion in a pure-easy block, intermediate in a mixed block,
and highest in a pure-hard block. Because a higher criterion produces slower RTs and lower error
rates, this leads to slowdown of easy items and speedup of hard items in a mixed block. Nonetheless,
there are four reasons for being skeptical about an account of the blocking effect based on adjustment
of an evidence criterion. (1) From a purely computational perspective, the optimality (or even the behavioral robustness) of an MDM with an evidence criterion has not been established. (2) Taylor and Lupker (2001) illustrate that adaptation of an evidence criterion can, at least in some models,
yield incorrect predictions concerning the blocking effect. (3) Strayer and Kramer (1994) attempted
to model the blocking effect for a 2AFC task using an adaptive response criterion in the DDM. Their
account fit data, but had a critical shortcoming: They needed to allow different criteria for easy and
hard items in a mixed block, which makes no sense because the trial type was not known in advance,
and setting differential criteria depends on knowing the trial type. (4) On logical grounds, the relative
importance of speed versus accuracy should be determined by task instructions and payoffs. Item
difficulty is an independent and unrelated factor. Consistent with this logical argument is the finding
that manipulating instructions to emphasize speed versus accuracy does not produce the same pattern
of effects as altering the composition of a block (Dorfman & Glanzer, 1988).
5 Our Account: Sequential Estimation of Task Difficulty
Having argued that existing accounts of the blocking effect are inadequate, we return to our analysis
of nAFC tasks, and show that it provides a parsimonious account of blocking effects. Our account is
premised on the assumption that response initiation processes are in some sense optimal. Regardless
of the specific optimality criterion, optimal response initiation requires an estimate of accuracy,
specifically, the probability that a response will be correct conditioned on the evidence accumulated
thus far, P (R? = r|X). As we argue above, estimation of this probability requires knowledge of
the difficulty (drift) of the correct response, and recent trial history can provide this information.
The response posterior, $P(R^* = r|X)$, under our generative model of the task environment (Equation 3) predicts a blocking effect. To see this clearly, consider the special case where uncertainty in $\mu_{\text{tgt}}$ is negligible, i.e., $b \to 0$, which simplifies Equation 3 to $P(R^* = r|X) \propto \exp\big( a \Delta x_r(T)/\sigma^2 \big)$. This expression is a Gibbs distribution with temperature $\sigma^2/a$. As the temperature is lowered, the
entropy drops, and the probabilities become more extreme. Thus, larger values of a lead to faster responses, because the greater expected signal-to-noise ratio makes evidence more reliable. How does
this fact relate to the blocking effect? Easy items have, by definition, a higher mean drift than hard
items; therefore, the estimated drift in the easy condition will be greater than in the hard condition,
E[â_E] > E[â_H]. Any learning rule for â based on recent history will yield an estimated drift in
the mixed condition between those of the easy and hard conditions, i.e., E[â_E] > E[â_M] > E[â_H].
With response times related to â, an easy item will slow down in the mixed condition relative to the
pure, and a hard item will speed up.
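To make this mechanism concrete, here is a small simulation sketch; the race dynamics, parameter values, and names are our illustrative choices rather than the paper's actual code:

```python
import numpy as np

# Response posterior P(R* = r | X) proportional to exp(a_hat * x_r(T) / sigma^2):
# the estimated drift a_hat sets the inverse temperature. Underestimating it
# (an easy item in a mixed block) damps the posterior, so more evidence,
# and hence more time, is needed to reach the accuracy criterion theta.
def posterior(x, a_hat, sigma):
    z = a_hat * x / sigma ** 2
    z -= z.max()                      # numerical stability
    p = np.exp(z)
    return p / p.sum()

rng = np.random.default_rng(0)
n_alt, a_true, sigma, theta = 5, 0.05, 0.15, 0.95

for a_hat, label in [(0.05, "correct estimate"),
                     (0.03, "underestimate (easy item, mixed block)")]:
    x, t = np.zeros(n_alt), 0
    while posterior(x, a_hat, sigma).max() < theta:
        t += 1
        x[0] += a_true + sigma * rng.standard_normal()   # target accrues drift
        x[1:] += sigma * rng.standard_normal(n_alt - 1)  # distractors: zero drift
    print(f"{label}: response initiated at t = {t}")
```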
Although we could fit behavioral data (e.g., Table 1) quantitatively, such fits add no support for the
model beyond a qualitative fit. The reason lies in the mapping of model decision times to human
response latencies. An affine transform must be allowed, scaling time in the model to real-world
time, and also allowing for a fixed-duration stage of perceptual processing. A blocking effect of any
magnitude in the model could therefore be transformed to fit any pattern of data that had the right
qualitative features. We thus focus on qualitative performance of the model.
Figure 2: Simulation of the blocking paradigm with random parameter settings. (a) Scatterplot of
hard speedup vs. easy slowdown, where coloring of a cell reflects the log(frequency) with which
a given simulation outcome is obtained. (b) Histogram of percentage reduction in the difference
between easy and hard RTs as a result of intermixing. (c) Scatterplot of change in error rate between
pure and mixed conditions for easy and hard items.
The model has four internal parameters: σ (diffusion rate), λ (history decay), θ (accuracy criterion),
and n (number of response alternatives). In addition, to simulate the blocking effect, we must specify
the true drift distributions for easy and hard items, i.e., a_E, b_E, a_H, and b_H. (We might also allow
for nonzero drift rates for some or all of the distractor responses.) To explore the robustness of
the model, we performed 1200 replications of a blocking simulation, each with randomly drawn
values for the eight free parameters. Parameters were drawn as follows: σ ~ U(.05, .25), λ ~
1 − 1/(1 + U(1, 20)) (these values are uniform in the half-life of the exponential memory decay),
n ~ ⌊U(2, 100)⌋, θ ~ U(.95, .995), a_H ~ U(.01, .05), a_E ~ a_H + U(.002, .02), b_H ~ (a_E −
a_H)/U(3, 10), and b_E = b_H. Each replication involved simulating three conditions: pure easy, pure
hard, and mixed. The pure conditions were run for 5000 trials and the mixed condition for 10000
trials. Each condition began with an additional 25 practice trials which were discarded from our
analysis but were useful to eliminate the effects of initialization of â and b̂. The model parameters
were not adapted following error trials. For each replication and each condition, the median response
time (RT) and mean error rate were computed. We discarded from our analysis simulations in which
the error rates were grossly unlike those obtained in experimental studies, specifically, where the
mean error rate in any condition was above 20%, and where the error rates for easy and hard items
differed by more than a factor of 10.
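For concreteness, a sketch of the parameter-sampling scheme just described; the variable names and packaging are ours, and the draws transcribe the stated ranges only (no model dynamics):

```python
import numpy as np

# Random parameter draws for the robustness study, transcribing the ranges in
# the text (units are the model's internal time steps; nothing is fit to data).
rng = np.random.default_rng(0)

def draw_parameters():
    sigma = rng.uniform(0.05, 0.25)               # diffusion rate
    lam = 1 - 1 / (1 + rng.uniform(1, 20))        # history decay (uniform half-life)
    n = int(np.floor(rng.uniform(2, 100)))        # response alternatives
    theta = rng.uniform(0.95, 0.995)              # accuracy criterion
    a_H = rng.uniform(0.01, 0.05)                 # hard mean drift
    a_E = a_H + rng.uniform(0.002, 0.02)          # easy mean drift > hard
    b_H = (a_E - a_H) / rng.uniform(3, 10)        # drift spread
    return dict(sigma=sigma, lam=lam, n=n, theta=theta,
                a_E=a_E, b_E=b_H, a_H=a_H, b_H=b_H)

params = [draw_parameters() for _ in range(1200)]  # one per replication
print(params[0])
```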
Figure 2a shows a scatterplot comparing the speedup of hard items (from pure to mixed conditions)
to the slowdown of easy items. Units are in simulation time steps. The dashed diagonal line indicates
speedup comparable in magnitude to slowdown. Much of the scatter is due to sampling noise in the
median RTs. The model obtains a remarkably symmetric effect: 41% of replications yield speedup
> slowdown, 40% yield slowdown > speedup, and the remaining 19% yield exactly equal sized
effects. The slope of the regression line through the origin is 0.97. Thus, the model shows a key
signature of the behavioral data?symmetric blocking effects (Phenomenon P2).
Figure 2b shows a histogram of the percentage reduction in the difference between easy and hard
RTs as a result of intermixing. This percentage is 100 if easy RTs slow down and hard RTs speed
up to become equal; the percentage is 0 if there is no slowdown of easy RTs or speedup of hard
RTs. The simulation runs show a 10–30% reduction as a result of the blocking manipulation. This
percentage is unaffected by the affine transformation required to convert simulation RTs to human
RTs, and is thus directly comparable. Behavioral studies (e.g., Table 1) typically show 20–60%
effects. Thus, the model, with random parameter settings, tends to underpredict human results.
Nonetheless, the model shows the key property that easy RTs are still faster than hard RTs in the
mixed condition (Phenomenon P3).
Figure 2c shows a scatterplot of the change in error rate for easy items (from pure to mixed conditions) versus change in error rate for hard items. Consistent with the behavioral data (Phenomenon
P4), a speed-accuracy trade-off is observed: when easy items slow down in the mixed versus pure
conditions, error rates drop; when hard items speed up, error rates rise. This trade-off is expected,
because block composition affects only the stopping point of the model and not the model dynamics. Thus, any speedup should yield a higher error rate, and vice versa. Interestingly, the accuracy
Figure 3: Human (black) and simulation (white) RTs for easy and hard items in a mixed block,
conditional on the 0, 1, and 2 previous items (Taylor & Lupker, 2001). Last letter in the trial
sequence indicates the current trial and trial order is left to right. [Bar chart: Response Time
(540-620) by Trial Sequence: E, H, EE, HE, EH, HH, EEE, HEE, EHE, HHE, EEH, HEH, EHH,
HHH; legend: human, simulation.]
criterion is fixed across conditions in the model; the differences in error rates arise because of a
mismatch between the parameters a and b used to generate trials, and the parameters â and b̂
estimated from the trial sequence. Thus, although the criterion does not change across conditions,
and the criterion is expressed in terms of accuracy (Equation 1), the block composition nonetheless
affects the speed-accuracy trade-off.
Although the blocking effect is typically characterized by comparing performance of an item type
across blocks, sequential effects within a block have also been examined. Taylor and Lupker (2001,
Experiment 1) instructed participants to name high-frequency words (easy items) and nonwords
(hard items). Focusing on the mixed block, Taylor and Lupker analyzed RTs conditional on the
context: the 0, 1, and 2 preceding items. The black bars in Figure 3 show the RTs conditional
on the context. Trial k is most influenced by trial k − 1, but trial k − 2 modulates RTs as well.
This decreasing influence of previous trials (Phenomenon P5) is well characterized by the model
via the exponential-decay parameter, λ (Equation 5). To model the Taylor and Lupker data, we
ran a simulation with generic parameters which were not tuned to the data: a_E = .05, a_H = .04,
b_E = b_H = .002, σ = .15, λ = .99, θ = .5, and n = 5. We then scaled simulation RTs to human
RTs with an affine transform whose two free parameters were fit to the data. The result, shown by
the white bars in Figure 3, captures the important properties of the data.
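The decaying context dependence can be sketched in a few lines; note that we use a faster decay than the λ = .99 above purely to make the effect visible in print, and the update rule is our paraphrase of the Equation 5 mechanism:

```python
import numpy as np

# Exponentially decaying history estimate of the drift, the mechanism behind
# the context effects in Figure 3: a_hat after a trial is a lambda-weighted
# average of the drifts experienced on past trials.
lam = 0.7
a = {"E": 0.05, "H": 0.04}

def a_hat_after(context):
    """Drift estimate after observing a trial sequence such as 'EHE'."""
    est = np.mean(list(a.values()))               # neutral initialization
    for trial in context:
        est = lam * est + (1 - lam) * a[trial]    # Equation 5-style update
    return est

for seq in ["EE", "HE", "EH", "HH"]:
    print(seq, "-> a_hat =", round(a_hat_after(seq), 4))
```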
We have addressed all of the key phenomena of the blocking effect except two. Phenomenon P1
concerns the fact that the effect occurs across a variety of tasks and difficulty manipulations. The
ubiquity of the effect is completely consistent with our focus on general mechanisms of response
initiation. The model does not make any claims about the specific domain or the cause of variation
in drift rates. Phenomenon P6 states that overt responses are required to obtain the blocking effect.
Although the model cannot lay claims to distinctions between overt and covert responses, it does
require that a drift estimate, ν̂_tgt, be obtained on each trial in order to adjust â and b̂, which leads
to blocking effects. In turn, ν̂_tgt is determined at the point in the diffusion process when a response
would be initiated. Thus, the model claims that selecting a response on trial k is key to influencing
performance on trial k + 1.
6 Conclusions
We have argued that optimal response initiation in speeded choice tasks requires advance knowledge
about the difficulty of the current decision. Difficulty corresponds to the expected rate of evidence
accumulation for the target response relative to distractors. When difficulty is high, the signal-to-noise ratio of the evidence-accumulation process is low, and a rational observer will wait for more
evidence before initiating a response.
Our model assumes that difficulty in the current task environment is estimated from the difficulty of
recent trials, under an assumption of temporal autocorrelation. This is consistent with the empirically observed blocking effect, whereby responses are slower to easy items and faster to hard items
when those items are interleaved, compared to when item types are presented in separate blocks.
According to our model, mixed blocks induce estimates of local difficulty that are intermediate between those in pure easy and pure hard blocks. The resultant overestimation of difficulty for easy
items leads to increased decision times, while an opposite effect occurs for hard items.
We formalize these ideas in a multiresponse diffusion model of decision making. Evidence for each
response accrues in a random walk, with positive drift rate ν_tgt for the correct response and zero drift
for distractors. Analytical derivations show that conversion of evidence to a posterior distribution
over responses depends on ν_tgt, which acts as an inverse temperature in a Gibbs distribution. When
this parameter is uncertain, with a prior estimated from recent context, error in the estimate leads to
systematic bias in the response time. Underestimation of the drift rate, as with easy trials in a mixed
block, leads to damping of the computed posterior and response slowdown. Overestimation, as with
hard trials in a mixed block, leads to exaggeration of the posterior and response speedup.
The model successfully explains the full range of phenomena associated with the blocking effect,
including the effects on both RTs and errors, the patterns of slowdown of easy items and speedup
of hard items, and the detailed sequential effects of recent trials. Moreover, the model is robust
to parameter settings, as our random-replication simulation shows. The model is robust in other
respects as well: Its qualitative behavior does not depend on the number of response alternatives (we
have tried up to 1000), the decision rule (we have also tried a criterion based on the posterior ratio
between the most and next most probable responses), the estimation algorithm for a
? and ?b (we have
also tried a Kalman filter), and violations of assumptions of the generative model (e.g., nonzero drift
rates for some of the distractors, reflecting the similarity structure of perceptual representations).
The tradeoff between speed and accuracy in decision making is a paradigmatic problem of cognitive
control. Theories in cognitive science often hand the problem of control to a homunculus. When
control processes are specified, they generally involve explicit, active, and sophisticated mechanisms
(e.g., conflict detection; A.D. Jones et al., 2002). Our model achieves sequential adaptation of
control via a statistical mechanism that is passive and in a sense dumb; it essentially reestimates the
statistical structure of the environment by updating an expectation of task difficulty. Our belief is
that many aspects of cognitive control can be explained away by such passive statistical mechanisms,
eventually eliminating the homunculus from cognitive science.
Acknowledgments
This research was supported by NSF grants BCS-0339103, BCS-720375, SBE-0518699, and SBE-0542013,
and ARC Discovery Grant DP0556805. We thank the students in CSCI7222/CSCI4830/PSYC7782 for interesting discussions that led to this work.
References
Baum, C. W., & Veeravalli, V. (1994). A sequential procedure for multi-hypothesis testing. IEEE Trans. Inf.
Theory, 40, 1994–2007.
Bogacz, R., Brown, E., Moehlis, J., Holmes, P., & Cohen, J. D. (2006). The physics of optimal decision making: A
formal analysis of models of performance in two-alternative forced choice tasks. Psych. Rev., 113, 700–765.
Bogacz, R., & Gurney, K. (2007). The basal ganglia and cortex implement optimal decision making between
alternative actions. Neural Computation, 19, 442–477.
Damian, M. F. (2003). Articulatory duration in single word speech production. JEP: LMC, 29, 416–431.
Dorfman, D., & Glanzer, M. (1988). List composition effects in lexical decision and recognition memory. J.
Mem. & Lang., 27, 633–648.
Gold, J. I., & Shadlen, M. N. (2002). Banburismus and the brain: Decoding the relationship between sensory
stimuli, decisions and reward. Neuron, 36, 299–308.
Jones, A. D., Cho, R. Y., Nystrom, L. E., Cohen, J. D., & Braver, T. S. (2002). A computational model of
anterior cingulate function in speeded response tasks: Effects of frequency, sequence, and conflict. Cogn.,
Aff., & Beh. Neuro., 2, 300–317.
Kello, C. T., & Plaut, D. C. (2003). Strategic control over rate of processing in word reading: A computational
investigation. J. Mem. & Lang., 48, 207–232.
Kiger, J. I., & Glass, A. L. (1981). Context effects in sentence verification. JEP: HPP, 7, 688–700.
Lupker, S. J., Brown, P., & Colombo, L. (1997). Strategic control in a naming task: Changing routes or
changing deadlines? JEP: LMC, 23, 570–590.
Rabbitt, P. M. A., & Vyas, S. M. (1970). An elementary preliminary taxonomy for some errors in laboratory choice
RT tasks. Acta Psych., 33, 56–76.
Rastle, K., & Coltheart, M. (1999). Serial and strategic effects in reading aloud. JEP: HPP, 25, 482–503.
Ratcliff, R., & McKoon, G. (2007). The diffusion decision model: Theory and data for two-choice decision
tasks. Neural Computation, 20, 873–922.
Ratcliff, R., Cherian, A., & Segraves, M. (2003). A comparison of macaque behavior and superior colliculus
neuronal activity to predictions from models of two-choice decisions. J. Neurophys., 90, 1392–1407.
Taylor, T. E., & Lupker, S. J. (2001). Sequential effects in naming: A time-criterion account. Journal of
Experimental Psychology: Learning, Memory, and Cognition, 27, 117–138.
2,719 | 3,466 | Sparsity of SVMs that use the ε-insensitive loss
Ingo Steinwart
Information Sciences Group CCS-3
Los Alamos National Laboratory
Los Alamos, NM 87545, USA
[email protected]
Andreas Christmann
University of Bayreuth
Department of Mathematics
D-95440 Bayreuth
[email protected]
Abstract
In this paper lower and upper bounds for the number of support vectors are derived
for support vector machines (SVMs) based on the ε-insensitive loss function. It
turns out that these bounds are asymptotically tight under mild assumptions on the
data-generating distribution. Finally, we briefly discuss a trade-off between
sparsity and accuracy if the SVM is used to estimate the conditional median.
1 Introduction
Given a reproducing kernel Hilbert space (RKHS) H of a kernel k : X × X → ℝ and a training set
D := ((x_1, y_1), …, (x_n, y_n)) ∈ (X × ℝ)^n, the ε-insensitive SVM proposed by Vapnik and his
co-workers [10, 11] for regression tasks finds the unique minimizer f_{D,λ} ∈ H of the regularized
empirical risk

  λ‖f‖²_H + (1/n) Σ_{i=1}^n L_ε(y_i, f(x_i)),    (1)

where L_ε denotes the ε-insensitive loss defined by L_ε(y, t) := max{0, |y − t| − ε} for all y, t ∈ ℝ
and some fixed ε ≥ 0. It is well known, see e.g. [2, Proposition 6.21], that the solution is of the form

  f_{D,λ} = Σ_{i=1}^n α_i* k(x_i, ·),    (2)

where the coefficients α_i* are a solution of the optimization problem

  maximize   Σ_{i=1}^n y_i α_i − ε Σ_{i=1}^n |α_i| − (1/2) Σ_{i,j=1}^n α_i α_j k(x_i, x_j)    (3)
  subject to  −C ≤ α_i ≤ C  for all i = 1, …, n.    (4)

Here we set C := 1/(2λn). Note that the equality constraint Σ_{i=1}^n α_i = 0 needed in [2, Proposition
6.21] is superfluous since we do not include an offset term b in the primal problem (1). In the
following, we write SV(f_{D,λ}) := {i : α_i* ≠ 0} for the set of indices that belong to the support
vectors of f_{D,λ}. Furthermore, we write # for the counting measure, and hence #SV(f_{D,λ}) denotes
the number of support vectors of f_{D,λ}.

It is obvious from (2) that #SV(f_{D,λ}) has a crucial influence on the time needed to compute
f_{D,λ}(x). Due to this fact, the ε-insensitive loss was originally motivated by the goal to achieve
sparse decision functions, i.e., decision functions f_{D,λ} with #SV(f_{D,λ}) < n. Although empirically it is well-known that the ε-insensitive SVM achieves this sparsity, there is, so far, no theoretical
explanation in the sense of [5]. The goal of this work is to provide such an explanation by
establishing asymptotically tight lower and upper bounds for the number of support vectors. Based
on these bounds we then investigate the trade-off between sparsity and estimation accuracy of the
ε-insensitive SVM.
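As a concrete illustration of the sparsity mechanism in (3)-(4), the following minimal NumPy sketch solves the dual by projected proximal gradient on toy one-dimensional data and counts support vectors; the Gaussian kernel, all parameter values, and the solver are our illustrative choices, not part of the paper:

```python
import numpy as np

# Toy 1-D regression data: noisy sine.
rng = np.random.default_rng(0)
n = 200
X = np.sort(rng.uniform(-3, 3, n))
y = np.sin(X) + 0.2 * rng.standard_normal(n)

# Gaussian kernel matrix (bandwidth chosen ad hoc).
gamma = 2.0
K = np.exp(-gamma * (X[:, None] - X[None, :]) ** 2)

eps, lam = 0.2, 1e-3          # epsilon-insensitive width, regularization
C = 1.0 / (2 * lam * n)       # box constraint from the paper's parametrization

# Proximal gradient on:  minimize  0.5 a^T K a - y^T a + eps * ||a||_1
# over the box [-C, C]^n; prox = soft-threshold followed by clipping.
a = np.zeros(n)
eta = 1.0 / np.linalg.eigvalsh(K)[-1]     # step size 1 / ||K||
for _ in range(5000):
    z = a - eta * (K @ a - y)
    a = np.clip(np.sign(z) * np.maximum(np.abs(z) - eta * eps, 0.0), -C, C)

sv = np.flatnonzero(np.abs(a) > 1e-8)
print(f"support vectors: {len(sv)} of {n} ({len(sv)/n:.0%})")

# f_{D,lambda}(x_i) = (K a)_i; samples with zero coefficient should lie in
# the eps-tube around the fitted function.
resid = np.abs(K @ a - y)
print("max residual over non-SVs (should be <= eps):",
      resid[np.abs(a) <= 1e-8].max())
```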
2 Main results
Before we can formulate our main results we need to introduce some more notations. To this end,
let P be a probability measure on X × ℝ, where X is some measurable space. Given a measurable
f : X → ℝ, we then define the L_ε-risk of f by R_{L_ε,P}(f) := E_{(x,y)~P} L_ε(y, f(x)). Moreover, recall
that P can be split into the marginal distribution P_X on X and the regular conditional probability
P(·|x). Given a RKHS H of a bounded kernel k, [1] then showed that

  f_{P,λ} := arg inf_{f∈H} λ‖f‖²_H + R_{L_ε,P}(f)

exists and is uniquely determined whenever R_{L_ε,P}(0) < ∞. Let us write δ_{(x,y)} for the Dirac
measure at some (x, y) ∈ X × ℝ. By considering the empirical measure D := (1/n) Σ_{i=1}^n δ_{(x_i,y_i)} of a
training set D := ((x_1, y_1), …, (x_n, y_n)) ∈ (X × ℝ)^n, we then see that the corresponding f_{D,λ} is
the solution of (1). Finally, we need to introduce the sets

  A^δ_low(f) := {(x, y) ∈ X × ℝ : |f(x) − y| > ε + δ},
  A^δ_up(f)  := {(x, y) ∈ X × ℝ : |f(x) − y| ≥ ε − δ},

where f : X → ℝ is an arbitrary function and δ ∈ ℝ. Moreover, we use the short forms A_low(f) :=
A⁰_low(f) and A_up(f) := A⁰_up(f). Now we can formulate our first main result.
Theorem 2.1 Let P be a probability measure on X × ℝ and H be a separable RKHS with bounded
measurable kernel satisfying ‖k‖_∞ ≤ 1. Then, for all n ≥ 1, δ > 0, ρ > 0, and λ > 0 satisfying
λδ ≤ 4, we have

  P^n( D ∈ (X × ℝ)^n : #SV(f_{D,λ})/n > P(A^δ_low(f_{P,λ})) − ρ ) ≥ 1 − 3e^{−λ²δ²n/16} − e^{−2ρ²n}

and

  P^n( D ∈ (X × ℝ)^n : #SV(f_{D,λ})/n < P(A^δ_up(f_{P,λ})) + ρ ) ≥ 1 − 3e^{−λ²δ²n/16} − e^{−2ρ²n}.
Before we present our second main result, we briefly illustrate Theorem 2.1 for the case where we
fix the regularization parameter λ and let n → ∞.
Corollary 2.2 Let P be a probability measure on X × ℝ and H be a separable RKHS with bounded
measurable kernel satisfying ‖k‖_∞ ≤ 1. Then, for all λ > 0 and ρ > 0, we have

  lim_{n→∞} P^n( D ∈ (X × ℝ)^n : P(A_low(f_{P,λ})) − ρ ≤ #SV(f_{D,λ})/n ≤ P(A_up(f_{P,λ})) + ρ ) = 1.

Note that the above corollary exactly describes the asymptotic behavior of the fraction of support
vectors modulo the probability of the set

  A_up(f_{P,λ}) \ A_low(f_{P,λ}) = {(x, f_{P,λ}(x) − ε) : x ∈ X} ∪ {(x, f_{P,λ}(x) + ε) : x ∈ X}.

In particular, if the conditional distributions P(·|x), x ∈ X, have no discrete components, then the
above corollary gives an exact description.
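The corollary can be checked empirically; the sketch below fits ε-SVRs of growing sample size with scikit-learn and compares the support-vector fraction with the two probabilities, estimated on a fresh sample with the fitted function standing in for f_{P,λ}. The data, hyperparameters, and the identification of sklearn's C with 1/(2λn) are assumptions of this sketch:

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(1)
eps, lam = 0.3, 1e-3

def sample(n):
    x = rng.uniform(-3, 3, n)
    return x[:, None], np.sin(x) + 0.3 * rng.standard_normal(n)

for n in (100, 400, 1600):
    X, y = sample(n)
    svr = SVR(kernel="rbf", gamma=1.0, C=1.0 / (2 * lam * n), epsilon=eps).fit(X, y)
    frac_sv = len(svr.support_) / n
    Xt, yt = sample(50_000)                    # fresh sample ~ P
    resid = np.abs(svr.predict(Xt) - yt)
    print(f"n={n:5d}  #SV/n={frac_sv:.3f}  "
          f"P(|f-y|>eps)={np.mean(resid > eps):.3f}  "
          f"P(|f-y|>=eps)={np.mean(resid >= eps):.3f}")
```

With continuous noise the two probabilities coincide, matching the remark that distributions without discrete components make the description exact.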
Of course, in almost no situation it is realistic to assume that λ stays fixed if the sample size n grows.
Instead, it is well-known, see [1], that the regularization parameter should vanish in order to achieve
consistency. To investigate this case, we need to introduce some additional notations from [6] that
are related to the L_ε-risk. Let us begin by denoting the Bayes L_ε-risk by R*_{L_ε,P} := inf R_{L_ε,P}(f),
where P is a distribution and the infimum is taken over all measurable functions f : X → ℝ. In
addition, given a distribution Q on ℝ, [6] and [7, Chapter 3] defined the inner L_ε-risks by

  C_{L_ε,Q}(t) := ∫_ℝ L_ε(y, t) dQ(y),   t ∈ ℝ,

and the minimal inner L_ε-risks were denoted by C*_{L_ε,Q} := inf_{t∈ℝ} C_{L_ε,Q}(t). Obviously, we have

  R_{L_ε,P}(f) = ∫_X C_{L_ε,P(·|x)}(f(x)) dP_X(x),    (5)

and [6, Lemma 2.5], see also [7, Lemma 3.4], further established the intuitive formula R*_{L_ε,P} =
∫_X C*_{L_ε,P(·|x)} dP_X(x). Moreover, we need the sets of conditional minimizers

  M_ε(x) := {t ∈ ℝ : C_{L_ε,P(·|x)}(t) = C*_{L_ε,P(·|x)}}.

The following lemma collects some useful properties of these sets.
Lemma 2.3 Let P be a probability measure on X × ℝ with R*_{L_ε,P} < ∞. Then M_ε(x) is a nonempty
and compact interval for P_X-almost all x ∈ X.

Given a function f : X → ℝ, Lemma 2.3 shows that for P_X-almost all x ∈ X there exists a unique
t*(x) ∈ M_ε(x) such that

  |t*(x) − f(x)| ≤ |t − f(x)|   for all t ∈ M_ε(x).    (6)

In other words, t*(x) is the element in M_ε(x) that has the smallest distance to f(x). In the following,
we sometimes write t*_λ(x) := t*(x) if f = f_{P,λ} and we wish to emphasize the dependence of
t*(x) on λ. With the help of these elements, we finally introduce the sets

  M^δ_low(f) := {(x, y) ∈ X × ℝ : |t*(x) − y| > ε + δ},
  M^δ_up(f)  := {(x, y) ∈ X × ℝ : |t*(x) − y| ≥ ε − δ},

where δ ∈ ℝ. Moreover, we again use the short forms M_low(f) := M⁰_low(f) and M_up(f) :=
M⁰_up(f). Now we can formulate our second main result.
Theorem 2.4 Let P be a probability measure on X × ℝ and H be a separable RKHS with bounded
measurable kernel satisfying ‖k‖_∞ ≤ 1. Assume that R_{L_ε,P}(0) < ∞ and that H is dense in
L_1(P_X). Then, for all ρ > 0, there exist a λ_ρ > 0 and a κ_ρ > 0 such that for all λ ∈ (0, λ_ρ] and all
n ≥ 1 we have

  P^n( D ∈ (X × ℝ)^n : P(M_low(f_{P,λ})) − ρ ≤ #SV(f_{D,λ})/n ≤ P(M_up(f_{P,λ})) + ρ ) ≥ 1 − 8e^{−κ_ρ²λ²n}.
If we choose a sequence of regularization parameters λ_n such that λ_n → 0 and λ_n²n → ∞, then the
resulting SVM is L_ε-risk consistent under the assumptions of Theorem 2.4, see [1]. For this case,
the following obvious corollary of Theorem 2.4 establishes lower and upper bounds on the number
of support vectors.
Corollary 2.5 Let P be a probability measure on X × ℝ and H be a separable RKHS with bounded
measurable kernel satisfying ‖k‖_∞ ≤ 1. Assume that R_{L_ε,P}(0) < ∞ and that H is dense in
L_1(P_X). Furthermore, let (λ_n) ⊂ (0, ∞) be a sequence with λ_n → 0 and λ_n²n → ∞. Then, for all
ρ > 0, the probability P^n of D ∈ (X × ℝ)^n satisfying

  lim inf_{m→∞} P(M_low(f_{P,λ_m})) − ρ ≤ #SV(f_{D,λ_n})/n ≤ lim sup_{m→∞} P(M_up(f_{P,λ_m})) + ρ

converges to 1 for n → ∞.
In general, the probabilities of the sets M_low(f_{P,λ}) and M_up(f_{P,λ}) are hard to control since, e.g.,
for fixed x ∈ X and λ → 0 it seems difficult to show that f_{P,λ}(x) is not "flipping" from the left
hand side of M_ε(x) to the right hand side. Indeed, for general M_ε(x), such flipping would give
different values t*_λ(x) ∈ M_ε(x) for λ → 0, and hence would result in significantly different sets
M_low(f_{P,λ}) and M_up(f_{P,λ}). As a consequence, it seems hard to show that, for probability measures
P whose conditional distributions P(·|x), x ∈ X, have no discrete components, we always have

  lim inf_{λ→0} P(M_low(f_{P,λ})) = lim sup_{λ→0} P(M_up(f_{P,λ})).    (7)

However, there are situations in which this equality can easily be established. For example, assume
that the sets M_ε(x) are P_X-almost surely singletons. In this case, t*_λ(x) is in fact independent of λ,
and hence so are M_low(f_{P,λ}) and M_up(f_{P,λ}). Namely, in this case these sets contain the pairs (x, y)
for which y is not contained in the closed or open ε-tube around M_ε(x), respectively. Consequently,
(7) holds provided that the conditional distributions P(·|x), x ∈ X, have no discrete components,
and hence Corollary 2.5 gives a tight bound on the number of support vectors. Moreover, if in
this case we additionally assume ε = 0, i.e., we consider the absolute loss, then we easily find
P(M_low(f_{P,λ})) = P(M_up(f_{P,λ})) = 1, and hence Corollary 2.5 shows that the corresponding SVM
does not tend to produce sparse decision functions. Finally, recall that for this specific loss function,
M_ε(x) equals the median of P(·|x), and hence M_ε(x) is a singleton whenever the median of
P(·|x) is unique.
Let us now illustrate Corollary 2.5 for ε > 0. To this end, we assume in the following that the
conditional distributions P(·|x) are symmetric, i.e., for P_X-almost all x ∈ X there exists a conditional
center c(x) ∈ ℝ such that P(c(x) + A|x) = P(c(x) − A|x) for all measurable A ⊆ ℝ.
Note that by considering A := [0, ∞) it is easy to see that c(x) is a median of P(·|x). Furthermore,
the assumption R_{L_ε,P}(0) < ∞ imposed in the results above ensures that the conditional
mean f_P*(x) := E(Y|x) of P(·|x) exists P_X-almost surely, and from this it is easy to conclude that
c(x) = f_P*(x) for P_X-almost all x ∈ X. Moreover, from [8, Proposition 3.2 and Lemma 3.3] we
immediately obtain the following lemma.

Lemma 2.6 Let P be a probability measure on X × ℝ such that R_{L_ε,P}(0) < ∞. Assume that the
conditional distributions P(·|x), x ∈ X, are symmetric and that for P_X-almost all x ∈ X there
exists a δ(x) > 0 such that for all δ ∈ (0, δ(x)] we have

  P( f_P*(x) + [−δ, δ] | x ) > 0,    (8)
  P( f_P*(x) + [ε − δ, ε + δ] | x ) > 0.    (9)

Then, for P_X-almost all x ∈ X, we have M_ε(x) = {f_P*(x)} and f_P*(x) equals P_X-almost surely
the unique median of P(·|x).
Obviously, condition (8) means that the conditional distributions have some mass around their median
f_P*, whereas (9) means that the conditional distributions have some mass around f_P* ± ε. Moreover,
[8] showed that under the assumptions of Lemma 2.6, the corresponding ε-insensitive SVM can
be used to estimate the conditional median. Let us now illustrate how the value of ε influences both
the accuracy of this estimate and the sparsity. To this end, let us assume for the sake of simplicity
that the conditional distributions P(·|x) have continuous Lebesgue densities p(·|x) : ℝ → [0, ∞).
By the symmetry of the conditional distributions it is then easy to see that these densities are symmetric
around f_P*(x). Now, it follows from the continuity of the densities, that (8) is satisfied if
p(f_P*(x)|x) > 0, whereas (9) is satisfied if p(f_P*(x) + ε|x) > 0. Let us first consider the case where
the conditional distributions are equal modulo translations. In other words, we assume that there
exists a continuous Lebesgue density q : ℝ → [0, ∞) which is symmetric around 0 such that for
P_X-almost all x ∈ X we have

  q(y) = p(f_P*(x) + y | x),   y ∈ ℝ.

Note that this assumption is essentially identical to a classical "signal plus noise" assumption. In
the following we further assume that q is unimodal, i.e., q has its only local and global maximum
at 0. From this we easily see that (8) is satisfied, and (9) is satisfied if q(ε) > 0. By Lemma
2.6 and the discussion around (7) we then conclude that under the assumptions of Corollary 2.5
the fraction of support vectors asymptotically approaches 2Q([ε, ∞)), where Q is the probability
measure defined by q. This confirms the intuition that larger values of ε lead to sparser decision
functions. In particular, if Q([ε, ∞)) = 0, the corresponding SVM produces super sparse decision
functions, i.e., decision functions whose number of support vectors does not grow linearly in the
sample size. However, not surprisingly, there is a price to be paid for this sparsity. Indeed, [8,
Lemma 3.3] indicates that the size of q(ε) has a direct influence on the ability of f_{D,λ} to estimate
the conditional median f_P*. Let us describe this in a little more detail. To this end, we first find by
[8, Lemma 3.3] and the convexity of t ↦ C_{L_ε,Q}(t) that

  C_{L_ε,Q}(t) − C*_{L_ε,Q} ≥ q(ε) · { t²/2 if t ∈ [0, ε];  εt − ε²/2 if t ≥ ε }.

By a literal repetition of the proof of [8, Theorem 2.5] we then find the self-calibration inequality

  ‖f − f_P*‖_{L_1(P_X)} ≤ √(2/q(ε)) · √(R_{L_ε,P}(f) − R*_{L_ε,P}),    (10)
which holds for all f : X → ℝ with R_{L_ε,P}(f) − R*_{L_ε,P} ≤ ε²/2. Now, if we are in the situation
of Corollary 2.5, then we know that R_{L_ε,P}(f_{D,λ_n}) → R*_{L_ε,P} in probability for n → ∞, and thus
(10) shows that f_{D,λ_n} approximates the conditional median f_P* with respect to the L_1(P_X)-norm.
However, the guarantee for this approximation becomes worse the smaller q(ε) becomes, i.e., the
larger ε is. In other words, the sparsity of the decision functions may be paid for by less accurate
estimates of the conditional median. On the other hand, our results also show that moderate values
for ε can lead to both reasonable estimates of the conditional median and relatively sparse decision
functions. In this regard we further note that one can also use [8, Lemma 3.3] to establish
self-calibration inequalities that measure the distance of f to f_P* only up to ε. In this case, however, it
is obvious that such self-calibration inequalities are worse the larger ε is, and hence the informal
conclusions above remain unchanged.

Finally, we like to mention that, if the conditional distributions are not equal modulo translations,
then the situation may become more involved. In particular, if we are in a situation with
p(f_P*(x)|x) > 0 and p(f_P*(x) + ε|x) > 0 but inf_x p(f_P*(x)|x) = inf_x p(f_P*(x) + ε|x) = 0,
self-calibration inequalities of the form (10) are in general impossible, and weaker self-calibration
inequalities require additional assumptions on P. We refer to [8] where the case ε = 0 is considered.
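To see the trade-off numerically, the following sketch evaluates, for an assumed standard normal q, the asymptotic SV fraction 2Q([ε, ∞)) against the self-calibration constant √(2/q(ε)) from (10); the choice of q and the ε grid are ours:

```python
from math import erf, exp, sqrt, pi

def q(t):            # N(0,1) density (assumed noise model)
    return exp(-t * t / 2) / sqrt(2 * pi)

def tail(t):         # Q([t, inf)) for N(0,1)
    return 0.5 * (1 - erf(t / sqrt(2)))

for eps in (0.0, 0.5, 1.0, 1.5, 2.0):
    sv_frac = 2 * tail(eps)             # asymptotic fraction of support vectors
    cal = sqrt(2 / q(eps))              # constant in the self-calibration bound
    print(f"eps={eps:.1f}  asymptotic #SV/n ~ {sv_frac:.3f}  "
          f"sqrt(2/q(eps)) = {cal:.2f}")
```

At ε = 0 every sample is a support vector, while growing ε buys sparsity at the price of a looser median-estimation guarantee, exactly as argued above.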
3 Proofs
Setting C := 1/(2λn) and introducing slack variables, we can restate the optimization problem (1) as

  minimize   (1/2)‖f‖²_H + C Σ_{i=1}^n (ξ_i + ξ̂_i)    (11)
  subject to  f(x_i) − y_i ≤ ε + ξ_i,
              y_i − f(x_i) ≤ ε + ξ̂_i,
              ξ_i, ξ̂_i ≥ 0   for all i = 1, …, n.
In the following we denote the (unique) solution of (11) by (f*, ξ*, ξ̂*), where we note that we have
f* = f_{D,λ}. It is well-known, see e.g. [2, p. 117], that the dual optimization problem of (11) is

  maximize   Σ_{i=1}^n y_i(α̂_i − α_i) − ε Σ_{i=1}^n (α̂_i + α_i) − (1/2) Σ_{i,j=1}^n (α̂_i − α_i)(α̂_j − α_j) k(x_i, x_j)    (12)
  subject to  0 ≤ α_i, α̂_i ≤ C   for all i = 1, …, n,

where k is the kernel of the RKHS H. Furthermore, if (α_1*, α̂_1*, …, α_n*, α̂_n*) denotes a solution of
(12), then we can recover the primal solution (f*, ξ*, ξ̂*) by

  f* = Σ_{i=1}^n (α̂_i* − α_i*) k(x_i, ·),    (13)
  ξ_i* = max{0, f*(x_i) − y_i − ε},    (14)
  ξ̂_i* = max{0, y_i − f*(x_i) − ε},    (15)

for all i = 1, …, n. Moreover, the Karush-Kuhn-Tucker conditions of (12) are

  α_i* (f*(x_i) − y_i − ε − ξ_i*) = 0,    (16)
  α̂_i* (y_i − f*(x_i) − ε − ξ̂_i*) = 0,    (17)
  (α_i* − C) ξ_i* = 0,    (18)
  (α̂_i* − C) ξ̂_i* = 0,    (19)
  ξ_i* ξ̂_i* = 0,    (20)
  α_i* α̂_i* = 0,    (21)

where i = 1, …, n. Finally, note that by setting α_i := α̂_i − α_i the problem (12) can be simplified
to (3), and consequently, a solution α* of (3) is of the form α* = α̂* − α*. The following simple
lemma provides lower and upper bounds for the set of support vectors.
Lemma 3.1 Using the above notations we have

  {i : |f_{D,λ}(x_i) − y_i| > ε} ⊆ {i : α_i* ≠ 0} ⊆ {i : |f_{D,λ}(x_i) − y_i| ≥ ε}.

Proof: Let us first prove the inclusion on the left hand side. To this end, we begin by fixing an index
i with f_{D,λ}(x_i) − y_i > ε. By f_{D,λ} = f* and (14), we then find ξ_i* > 0, and hence (18) implies
α_i* = C. From (21) we conclude α̂_i* = 0 and hence we have α_i* = α̂_i* − α_i* = −C ≠ 0. The case
y_i − f_{D,λ}(x_i) > ε can be shown analogously, and hence we obtain the first inclusion. In order to
show the second inclusion we fix an index i with α_i* ≠ 0. By α_i* = α̂_i* − α_i* and (21) we then have
either α_i* ≠ 0 or α̂_i* ≠ 0. Let us first consider the case α_i* ≠ 0 and α̂_i* = 0. The KKT condition (16)
together with f_{D,λ} = f* implies f_{D,λ}(x_i) − y_i − ε = ξ_i* and since ξ_i* ≥ 0 we get f_{D,λ}(x_i) − y_i ≥ ε.
The second case α̂_i* ≠ 0 can be shown analogously.
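The two inclusions of Lemma 3.1 are easy to check numerically on a fitted ε-SVR; in the sketch below, scikit-learn's solver stands in for an exact dual solution of (12), and the data, hyperparameters, and tolerances are illustrative assumptions:

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(2)
X = rng.uniform(-2, 2, (300, 1))
y = np.cos(2 * X[:, 0]) + 0.25 * rng.standard_normal(300)

eps = 0.2
svr = SVR(kernel="rbf", gamma=1.0, C=10.0, epsilon=eps, tol=1e-6).fit(X, y)
resid = np.abs(svr.predict(X) - y)

is_sv = np.zeros(len(y), dtype=bool)
is_sv[svr.support_] = True

tol = 1e-4  # slack for the solver's finite precision
print("{|f-y| > eps} subset of SVs:", bool(np.all(is_sv[resid > eps + tol])))
print("SVs subset of {|f-y| >= eps}:", bool(np.all(resid[is_sv] >= eps - tol)))
```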
We further need the following Hilbert space version of Hoeffding's inequality from [12, Chapter 3],
see also [7, Chapter 6.2] for a slightly sharper inequality.

Theorem 3.2 Let (Ω, A, P) be a probability space and H be a separable Hilbert space. Moreover,
let ξ_1, …, ξ_n : Ω → H be independent random variables satisfying E_P ξ_i = 0 and ‖ξ_i‖_∞ ≤ 1 for
all i = 1, …, n. Then, for all τ ≥ 1 and all n ≥ τ, we have

  P( ‖(1/n) Σ_{i=1}^n ξ_i‖_H < 4√(τ/n) ) ≥ 1 − 3e^{−τ}.
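A quick Monte Carlo check of this bound in a finite-dimensional H; the dimension, distribution, and parameter values are arbitrary choices for illustration:

```python
import numpy as np

# Check Theorem 3.2 for H = R^d with bounded, centered random vectors.
rng = np.random.default_rng(3)
d, n, tau, reps = 20, 500, 2.0, 2000

hits = 0
for _ in range(reps):
    xi = rng.uniform(-1, 1, (n, d))          # centered by symmetry
    norms = np.linalg.norm(xi, axis=1, keepdims=True)
    xi /= np.maximum(norms, 1.0)             # enforce ||xi_i|| <= 1
    hits += np.linalg.norm(xi.mean(axis=0)) < 4 * np.sqrt(tau / n)

print(f"empirical P = {hits/reps:.3f}  >=  bound {1 - 3*np.exp(-tau):.3f}")
```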
Finally, we need the following theorem, see [7, Corollary 5.10], which was essentially shown by
[13, 5, 3].

Theorem 3.3 Let P be a probability measure on X × ℝ and H be a separable RKHS with bounded
measurable kernel satisfying ‖k‖_∞ ≤ 1. We write Φ : X → H for the canonical feature map of H,
i.e., Φ(x) := k(·, x), x ∈ X. Then for all λ > 0 there exists a function h : X × ℝ → [−1, 1] such
that for all n ≥ 1 and all D ∈ (X × ℝ)^n we have

  ‖f_{D,λ} − f_{P,λ}‖_H ≤ λ⁻¹ ‖E_D hΦ − E_P hΦ‖_H,

where E_D denotes the empirical average with respect to D.
Proof of Theorem 2.1: In order to show the first estimate we fix a δ > 0 and a ρ > 0 such that
λδ ≤ 4. Let τ := λ²δ²n/16, which implies n ≥ τ. Combining Theorems 3.2 and 3.3 we then obtain

  1 − 3e^{−τ} ≤ P^n( D ∈ (X × ℝ)^n : ‖E_D hΦ − E_P hΦ‖_H ≤ 4√(τ/n) )
             ≤ P^n( D ∈ (X × ℝ)^n : ‖f_{D,λ} − f_{P,λ}‖_H ≤ δ ).    (22)

Let us now assume that we have a training set D ∈ (X × ℝ)^n such that ‖f_{P,λ} − f_{D,λ}‖_H ≤ δ. Given
a pair (x, y) ∈ A^δ_low(f_{P,λ}), we then have

  ε + δ < |f_{P,λ}(x) − y| ≤ |f_{D,λ}(x) − y| + |f_{P,λ}(x) − f_{D,λ}(x)| ≤ |f_{D,λ}(x) − y| + δ

by the triangle inequality and ‖k‖_∞ ≤ 1, which implies ‖·‖_∞ ≤ ‖·‖_H. In other words, we have
A^δ_low(f_{P,λ}) ⊆ A_low(f_{D,λ}). Consequently, Lemma 3.1 yields

  #SV(f_{D,λ}) ≥ #{i : |f_{D,λ}(x_i) − y_i| > ε}
             ≥ #{i : |f_{P,λ}(x_i) − y_i| > ε + δ}
             = Σ_{i=1}^n 1_{A^δ_low(f_{P,λ})}(x_i, y_i).

Combining this estimate with (22) we then obtain

  P^n( D ∈ (X × ℝ)^n : #SV(f_{D,λ})/n ≥ (1/n) Σ_{i=1}^n 1_{A^δ_low(f_{P,λ})}(x_i, y_i) ) ≥ 1 − 3e^{−λ²δ²n/16}.

Moreover, Hoeffding's inequality, see, e.g., [4, Theorem 8.1], shows

  P^n( D ∈ (X × ℝ)^n : (1/n) Σ_{i=1}^n 1_{A^δ_low(f_{P,λ})}(x_i, y_i) > P(A^δ_low(f_{P,λ})) − ρ ) ≥ 1 − e^{−2ρ²n}

for all ρ > 0 and n ≥ 1. From these estimates and a union bound we conclude the first inequality.
In order to show the second estimate we first observe that for training sets D ∈ (X × ℝ)^n with
‖f_{P,λ} − f_{D,λ}‖_H ≤ δ we have A_up(f_{D,λ}) ⊆ A^δ_up(f_{P,λ}). Lemma 3.1 then shows

  #SV(f_{D,λ}) ≤ Σ_{i=1}^n 1_{A^δ_up(f_{P,λ})}(x_i, y_i),

and hence (22) yields

  P^n( D ∈ (X × ℝ)^n : #SV(f_{D,λ})/n ≤ (1/n) Σ_{i=1}^n 1_{A^δ_up(f_{P,λ})}(x_i, y_i) ) ≥ 1 − 3e^{−λ²δ²n/16}.

Using Hoeffding's inequality analogously to the proof of the first estimate we then obtain the second
estimate.
Proof of Corollary 2.2: We first observe that we have A^{δ'}_low(f_{P,λ}) ⊆ A^δ_low(f_{P,λ}) for 0 ≤ δ' ≤ δ.
Let us show

  ∪_{δ>0} A^δ_low(f_{P,λ}) = A_low(f_{P,λ}).    (23)

Obviously, the inclusion "⊆" directly follows from the above monotonicity. Conversely, for (x, y) ∈
A_low(f_{P,λ}) we have |f(x) − y| > ε and hence |f(x) − y| > ε + δ for some δ > 0, i.e., we have
shown (x, y) ∈ A^δ_low(f_{P,λ}). From (23) we now conclude

  lim_{δ↘0} P(A^δ_low(f_{P,λ})) = P(A_low(f_{P,λ})).    (24)

In addition, we have A^{δ'}_up(f_{P,λ}) ⊆ A^δ_up(f_{P,λ}) for 0 ≤ δ' ≤ δ, and it is easy to check that

  ∩_{δ>0} A^δ_up(f_{P,λ}) = A_up(f_{P,λ}).    (25)

Indeed, if (x, y) ∈ A^δ_up(f_{P,λ}) for all δ > 0 we have |f(x) − y| ≥ ε − δ for all δ > 0, from which
we conclude |f(x) − y| ≥ ε, i.e. (x, y) ∈ A_up(f_{P,λ}). Conversely, the inclusion "⊇" directly follows
from the above monotonicity of the sets A^δ_up. From (25) we then conclude

  lim_{δ↘0} P(A^δ_up(f_{P,λ})) = P(A_up(f_{P,λ})).    (26)

Let us now fix a decreasing sequence (δ_n) ⊂ (0, 1) with δ_n → 0 and δ_n²n → ∞. Combining (24)
and (26) with the estimates of Theorem 2.1, we then obtain the assertion.
Proof of Lemma 2.3: Since the loss function L_ε is Lipschitz continuous and convex in t, it is easy
to verify that t ↦ C_{L_ε,P(·|x)}(t) is Lipschitz continuous and convex for P_X-almost all x ∈ X, and
hence M_ε(x) is a closed interval. In order to prove the remaining assertions it suffices to show
that lim_{t→±∞} C_{L_ε,P(·|x)}(t) = ∞ for P_X-almost all x ∈ X. To this end, we first observe that
R*_{L_ε,P} < ∞ implies C*_{L_ε,P(·|x)} < ∞ for P_X-almost all x ∈ X. Let us fix such an x, a B > 0,
and a sequence (t_n) ⊂ ℝ with t_n → −∞. By the shape of L_ε, there then exists an r_0 > 0 such
that L_ε(y, t) ≥ 2B for all y, t ∈ ℝ with |y − t| ≥ r_0. Furthermore, there exists an M > 0 with
P([−M, M] | x) ≥ 1/2, and since t_n → −∞ there further exists an n_0 ≥ 1 such that t_n ≤ −M − r_0
for all n ≥ n_0. For y ∈ [−M, M] we thus have y − t_n ≥ r_0, and hence we finally find

  C_{L_ε,P(·|x)}(t_n) ≥ ∫_{[−M,M]} L_ε(y, t_n) dP(y|x) ≥ B

for all n ≥ n_0. The case t_n → ∞ can be shown analogously.
For the proof of Theorem 2.4 we need the following two intermediate results.

Theorem 3.4 Let P be a probability measure on X × ℝ and H be a separable RKHS with bounded
measurable kernel satisfying ‖k‖_∞ ≤ 1. Assume that R_{L_ε,P}(0) < ∞ and that H is dense in
L_1(P_X). Then, for all ρ > 0 and δ > 0, there exists a λ_0 > 0 such that for all λ ∈ (0, λ_0] we have

  P_X( {x ∈ X : |f_{P,λ}(x) − t| > δ for all t ∈ M_ε(x)} ) < ρ.

Proof: Since H is dense in L_1(P_X) we have inf_{f∈H} R_{L_ε,P}(f) = R*_{L_ε,P} by [9, Theorem 3], and
hence lim_{λ→0} R_{L_ε,P}(f_{P,λ}) = R*_{L_ε,P}. Now we obtain the assertion from [6, Theorem 3.16].
Lemma 3.5 Let P be a probability measure on X × ℝ and H be a separable RKHS with bounded
measurable kernel satisfying ‖k‖_∞ ≤ 1. Assume that R_{L_ε,P}(0) < ∞ and that H is dense in
L_1(P_X). Then, for all ρ > 0 and δ > 0, there exists a λ_0 > 0 such that for all λ ∈ (0, λ_0] we have

  P(M^{2δ}_low(f_{P,λ})) ≤ P(A^δ_low(f_{P,λ})) + ρ   and   P(M^{2δ}_up(f_{P,λ})) ≥ P(A^δ_up(f_{P,λ})) − ρ.

Proof: We write t*_λ(x) for the real number defined by (6) for f(x) := f_{P,λ}(x). Then we have

  M^{2δ}_low(f_{P,λ}) ⊆ ( M^{2δ}_low(f_{P,λ}) ∩ {(x, y) ∈ X × ℝ : |f_{P,λ}(x) − t*_λ(x)| ≤ δ} )
                      ∪ {(x, y) ∈ X × ℝ : |f_{P,λ}(x) − t(x)| > δ for all t(x) ∈ M_ε(x)}.

Moreover, given an (x, y) ∈ M^{2δ}_low(f_{P,λ}) ∩ {(x, y) ∈ X × ℝ : |f_{P,λ}(x) − t*_λ(x)| ≤ δ}, we find

  ε + 2δ < |t*_λ(x) − y| ≤ |f_{P,λ}(x) − t*_λ(x)| + |f_{P,λ}(x) − y| ≤ δ + |f_{P,λ}(x) − y|,

i.e., we have (x, y) ∈ A^δ_low(f_{P,λ}). Estimating the probability of the remaining set by Theorem 3.4
then yields the first assertion. In order to prove the second estimate we first observe that

  A^δ_up(f_{P,λ}) ⊆ ( A^δ_up(f_{P,λ}) ∩ {(x, y) ∈ X × ℝ : |f_{P,λ}(x) − t*_λ(x)| ≤ δ} )
                  ∪ {(x, y) ∈ X × ℝ : |f_{P,λ}(x) − t(x)| > δ for all t(x) ∈ M_ε(x)}.

For (x, y) ∈ A^δ_up(f_{P,λ}) ∩ {(x, y) ∈ X × ℝ : |f_{P,λ}(x) − t*_λ(x)| ≤ δ} we further have

  ε − δ ≤ |f_{P,λ}(x) − y| ≤ |f_{P,λ}(x) − t*_λ(x)| + |t*_λ(x) − y| ≤ δ + |t*_λ(x) − y|,

i.e., we have (x, y) ∈ M^{2δ}_up(f_{P,λ}). Again, the assertion now follows from Theorem 3.4.
Proof of Theorem 2.4: Analogously to the proofs of (24) and (26), we find

  lim_{δ↘0} P(M^δ_low(f_{P,λ})) = P(M_low(f_{P,λ}))   and   lim_{δ↘0} P(M^δ_up(f_{P,λ})) = P(M_up(f_{P,λ})).

Combining these equations with Theorem 2.1 and Lemma 3.5, we then obtain the assertion.
References
[1] A. Christmann and I. Steinwart. Consistency and robustness of kernel based regression. Bernoulli,
13:799–819, 2007.
[2] N. Cristianini and J. Shawe-Taylor. An Introduction to Support Vector Machines. Cambridge University
Press, Cambridge, 2000.
[3] E. De Vito, L. Rosasco, A. Caponnetto, M. Piana, and A. Verri. Some properties of regularized kernel
methods. J. Mach. Learn. Res., 5:1363–1390, 2004.
[4] L. Devroye, L. Györfi, and G. Lugosi. A Probabilistic Theory of Pattern Recognition. Springer, New
York, 1996.
[5] I. Steinwart. Sparseness of support vector machines. J. Mach. Learn. Res., 4:1071–1105, 2003.
[6] I. Steinwart. How to compare different loss functions. Constr. Approx., 26:225–287, 2007.
[7] I. Steinwart and A. Christmann. Support Vector Machines. Springer, New York, 2008.
[8] I. Steinwart and A. Christmann. How SVMs can estimate quantiles and the median. In J. C. Platt, D. Koller,
Y. Singer, and S. Roweis, editors, Advances in Neural Information Processing Systems 20, pages 305–312.
MIT Press, Cambridge, MA, 2008.
[9] I. Steinwart, D. Hush, and C. Scovel. Function classes that approximate the Bayes risk. In G. Lugosi
and H. U. Simon, editors, Proceedings of the 19th Annual Conference on Learning Theory, pages 79–93.
Springer, New York, 2006.
[10] V. Vapnik, S. Golowich, and A. Smola. Support vector method for function approximation, regression
estimation, and signal processing. In M. Mozer, M. Jordan, and T. Petsche, editors, Advances in Neural
Information Processing Systems 9, pages 281–287. MIT Press, Cambridge, MA, 1997.
[11] V. N. Vapnik. Statistical Learning Theory. John Wiley & Sons, New York, 1998.
[12] V. Yurinsky. Sums and Gaussian Vectors. Lecture Notes in Math. 1617. Springer, Berlin, 1995.
[13] T. Zhang. Convergence of large margin separable linear classification. In T. K. Leen, T. G. Dietterich, and
V. Tresp, editors, Advances in Neural Information Processing Systems 13, pages 357–363. MIT Press,
Cambridge, MA, 2001.
2,720 | 3,467 | Spike Feature Extraction Using Informative Samples
Zhi Yang, Qi Zhao and Wentai Liu
School of Engineering
University of California at Santa Cruz
1156 High Street, Santa Cruz, CA 95064
{yangzhi, zhaoqi, wentai}@soe.ucsc.edu
Abstract
This paper presents a spike feature extraction algorithm that targets real-time
spike sorting and facilitates miniaturized microchip implementation. The proposed algorithm has been evaluated on synthesized waveforms and experimentally recorded sequences. Compared with many spike sorting approaches,
our algorithm demonstrates improved speed and accuracy and allows unsupervised
execution. A preliminary hardware implementation has been realized using an
integrated microchip interfaced with a personal computer.
1 Introduction
Real-time extraction of information from composite neural recordings is a significant challenge in
neural interfacing. Developing integrated circuits (ICs) to enable portable and implantable systems
is important for studying complex behavior in neuroscience experiments, closed-loop deep
brain stimulation, and cortically controlled neuromuscular prostheses. For a spike feature
extraction algorithm to be functional as a small device with real-time, low-latency processing and
low-power operation, it must be efficient in both computation and IC implementation.
Implementing spike sorting before data telemetry offers many significant advantages. Spike feature
extraction provides the necessary information required to sort spikes from raw sampled data. With
this information each spike event can be represented by its unique features and firing time, resulting in significant data compression. A data transceiver designed with current semiconductor
technology can then simultaneously support a large number of recording channels for a microchip implementation that extracts spike features. System integration using wireless power telemetry or a
rechargeable battery as well as wireless data telemetry removes the need for tethering wires. As a
result, a fully wireless operation would relieve the subjects overall stress factor and allow them to
move freely in their natural environment.
Frequently used spike feature extraction algorithms include principal component analysis (PCA)
[1], bayesian algorithm [2], template matching [3], wavelets [4] and independent component analysis (ICA) [5], which demand significant computation. Efforts to improve the efficiency of these
algorithms have been reported; however, these efforts relied on either oversimplified functionality
or bulky hardware systems that consume excessive power.
In part, complex algorithmic procedures are applied to mitigate the effects of noise and distortion in the
recording process. The associated noise includes ion channel noise, activities from distant neurons,
field potentials, thermal noise and circuit noise. Significant sampling distortion is also present since
it is unrealistic to synchronize the sampling clock with individual recorded spikes.
This paper reports a new spike feature extraction algorithm which is suitable for real-time spike
sorting and enables integrated microchip implementation.
2 Related Work
2.1 PCA Based Spike Feature Extraction
PCA is a feature extraction algorithm widely employed for spike sorting. It uses correlation between
samples and computes the vectors capturing the maximal variance. The PCA algorithm performs well
given strong correlation between samples, reporting the relevant features. However, recorded spikes
are usually corrupted by large low frequency noise and distortion, which blur sample correlation and
compromise the quality of the estimated covariance matrix and its eigenvectors. As a result, PCA
may fail to resolve spike clusters in noisy recordings.
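For reference, a minimal NumPy sketch of this PCA baseline on synthetic, peak-aligned snippets; the snippet generator, window length, and number of components are our illustrative choices:

```python
import numpy as np

# Baseline PCA spike features: project aligned spike snippets onto the top
# two eigenvectors of the sample covariance.
rng = np.random.default_rng(0)
t = np.arange(64)
template = np.exp(-((t - 20) ** 2) / 18.0) - 0.5 * np.exp(-((t - 32) ** 2) / 60.0)
spikes = template + 0.2 * rng.standard_normal((500, 64))   # one unit + noise

X = spikes - spikes.mean(axis=0)            # center samples
cov = X.T @ X / (len(X) - 1)                # sample covariance (64 x 64)
w, V = np.linalg.eigh(cov)                  # ascending eigenvalues
features = X @ V[:, -2:]                    # scores on top-2 components
print("explained variance (top 2):", w[-2:].sum() / w.sum())
```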
2.2 Variable Selection Techniques
As a complementary approach to dimensionality reduction algorithms, Jolliffe discussed a general
feature extraction algorithm based on a subset of samples in the classic work [6]. This concept
requires only a subset of samples containing the necessary information to cluster the data, as opposed
to using all of the samples. These informative samples are especially useful in the presence of a single
prominent sample set.
There are two challenges facing a sample selection algorithm. The first is the computational burden of selecting informative samples: if the training procedure is as complicated as suggested in [6], it would prohibit microchip implementation for implant purposes. Power and area are also the primary problems with microchip implementations of other spike feature extraction algorithms. The second challenge is the availability of localized features; improved performance over PCA is unlikely if localized features are not prominent.
2.3 Our Approach
We have developed a spike feature extraction algorithm based on informative samples. The theoretical framework includes neuronal geometry signatures, noise shaping, and informative sample selection. By evaluating neuronal geometry signatures with the compartment model, we find that the high-frequency signal spectrum may contain useful information for differentiating neurons. Studying the noise properties reveals that a frequency shaping filter can be used to boost the SNR. A sample selection technique using estimated entropy then identifies informative samples for sorting spikes. In addition, a preliminary IC implementation of the algorithm has been reported [7, 8] and further integrated onto a multi-channel neural recording IC with wireless telemetry [9].
3 Geometry Signatures, Noise and Sampling Distortion
3.1 Neuronal Geometry Signature
This section describes how neuronal geometry signatures contribute to the difference among similar waveforms. Assuming that both the intra- and extra-cellular fluids are neutral, the induced voltage waveform is
\[ V(\vec{r}_0) = \int \frac{j_m(\vec{r},t)}{4\pi\sigma_e\,|\vec{r}-\vec{r}_0|}\,d\vec{r}, \tag{1} \]
where j_m is the transmembrane current and σ_e is the conductivity of the tissue environment; r⃗_0 and r⃗ represent the locations of the point electrode and the active membrane segments, respectively.
Since action potentials propagate slowly along the axonal branches of cortical neurons (on average 0.5 m/s to 2 m/s [10]), active membranes do not fire simultaneously. As a result, the detailed geometry of the underlying neuron influences the shape of its spikes. Assuming that ionic channels are uniformly dotted on the active membranes within the recording radius of the electrode, the spike waveform is modeled as the convolution of the transmembrane current profile and an implicit geometry kernel function:
\[ V(t) = \int j_m(\tau)\, W(t-\tau)\, d\tau, \tag{2} \]
where W(t) is the geometry kernel function.
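To make Eq. 2 concrete, a spike can be simulated as a discrete convolution of an assumed transmembrane current profile with a neuron-specific geometry kernel; the profiles and constants below are toy choices for illustration, not measured quantities:

```python
import numpy as np

fs = 30000.0                                  # assumed sampling rate (Hz)
t = np.arange(0, 3e-3, 1.0 / fs)              # 3 ms analysis window
jm = t * np.exp(-t / 3e-4)                    # toy transmembrane current profile
W = np.exp(-((t - 5e-4) ** 2) / (2 * (2e-4) ** 2))  # toy geometry kernel W(t)
V = np.convolve(jm, W)[: len(t)] / fs         # discrete version of Eq. 2
```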
The recorded waveforms from neurons with similar ion channel populations can be very similar. A general spike sorting algorithm frequently fails to resolve such ambiguity and may report a single, large spike cluster. Differentiating the associated kernel functions can be used to sort such similar spikes. Taking W_1(t) and W_2(t) as the geometry kernel functions of two neurons with the same ion channel population, the difference between the two spikes is
\[ \Delta V(t) = \int j_m(\tau)\,\big[W_1(t-\tau) - W_2(t-\tau)\big]\, d\tau. \tag{3} \]
Small waveform differences appear if ∫(W_1(t) − W_2(t)) dt ≈ 0. Intuitively, this condition means the waveforms are identical, ignoring the skew of the activation of the membranes.
To differentiate the waveforms, we rewrite Eq. 3 in the frequency domain as
\[ \mathcal{F}(\Delta V) = \mathcal{F}(j_m)\,\mathcal{F}(W_1 - W_2), \tag{4} \]
where F(·) denotes the Fourier transform. The condition ∫[W_1(t) − W_2(t)] dt ≈ 0 is equivalent to F(W_1 − W_2) ≈ 0 at f = 0 Hz, which implies that the waveform difference caused by the geometry kernel functions contributes little at the low end of the frequency spectrum. A more quantitative explanation can be given by studying the derivative of F(ΔV) with respect to frequency, using Eq. 4:
\[ \frac{\partial \mathcal{F}(\Delta V)}{\partial f} = \frac{\partial \mathcal{F}(j_m)}{\partial f}\,\mathcal{F}(W_1 - W_2) + \mathcal{F}(j_m)\,\frac{\partial \mathcal{F}(W_1 - W_2)}{\partial f}, \tag{5} \]
where f is frequency.
Note that F(j_m) is a narrowly band-limited signal, while F(W_1 − W_2) serves as a notch frequency mask with a relatively wider spectrum. The first term in Eq. 5 is attenuated by F(W_1 − W_2) within the dominant spectrum of F(j_m); otherwise, an appreciable waveform difference is expected according to Eq. 4. The second term in Eq. 5, on the other hand, exhibits a strong frequency dependence within the dominant spectrum of F(j_m). It can be expanded as
\[ \mathcal{F}(j_m)\,\frac{\partial \mathcal{F}(W_1 - W_2)}{\partial f} \approx -2\pi\,\mathcal{F}(j_m) \int \big(W_1(t) - W_2(t)\big)\, t\, \sin(2\pi f t)\, dt, \tag{6} \]
when the kernel functions W_i are symmetrical.
In summary, the waveform difference between similar neurons caused by the geometry functions satisfies the following conditions:
\[ \mathcal{F}(\Delta V) \approx 0\,\big|_{f=0\,\mathrm{Hz}}, \qquad \frac{\partial \mathcal{F}(\Delta V)}{\partial f} \approx 4\pi^2 f\, \mathcal{F}(j_m) \int \big(W_1(t) - W_2(t)\big)\, t\, \frac{\sin(2\pi f t)}{2\pi f t}\, dt \;\propto\; f. \tag{7} \]
In Eq. 7, ∂F(ΔV)/∂f is linear in the frequency f in the low-frequency region, since sin(2πft)/(2πft) ≈ 1 there. This strong emphasis on frequency shows that F(ΔV) has a higher-frequency spectrum. As a result, a frequency-shaping filter that emphasizes the high-frequency spectrum may help to differentiate the kernel functions.
3.2 Noise and Sample Distortion
An estimated power spectrum of the noise associated with recorded neural signals, where the dominance of low-frequency noise is clear, is plotted in Figure 1. The noise profile is approximately fitted as
\[ N(f) = N_{neu} + N_{e.e} + N_{1/f} + N_{therm} \approx N_{f_{c1}} \left(\frac{f_{c1}}{f}\right)^{\alpha} + N_{therm}, \tag{8} \]
where N_neu is the neuronal noise, N_e.e is the electrode-electrolyte interface noise, N_1/f is the flicker noise, and N_therm is the thermal noise. The low-frequency noise is assumed to follow an f^(−α) profile.
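Eq. 8 is straightforward to evaluate; in the sketch below, the constants are illustrative placeholders rather than values fitted to the recordings:

```python
import numpy as np

def noise_psd(f, N_fc1=10.0, N_therm=0.5, fc1=500.0, alpha=2.0):
    """Noise model of Eq. 8: a (fc1/f)^alpha low-frequency component
    riding on a flat thermal floor."""
    f = np.asarray(f, dtype=float)
    return N_fc1 * (fc1 / f) ** alpha + N_therm

f = np.linspace(500.0, 15000.0, 100)   # the 500 Hz - 15 kHz band of Figure 1
psd = noise_psd(f)
```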
Sampling distortion is unavoidable, since a neuron's firing is random and not synchronized with the sampling clock of the analog-to-digital converter (ADC). It can be reduced by either increasing the sampling frequency of the ADC or performing interpolation and alignment in the digital domain. Both approaches require additional power, computation and storage space, which are not favorable for microchip implementation. The sampling distortion is related to the slope of the spikes: if a fast transition edge is sampled only 4 times, the sampling distortion can exceed 10% of the spike peak-to-peak magnitude. Considerable distortion is therefore expected, since neural spikes are, by definition, fast-changing waveforms.
Figure 1: Noise properties of recordings from a cat cerebral cortex (500 Hz to 15 kHz); (a) noise power spectrum of the raw data; (b) noise power spectrum of its derivative.
4 Sample Information
In order to use informative samples to sort spikes, it is necessary to quantify the information carried
by individual spike samples. Intuitively, a sample is considered to be informative if the superimposed
spikes can be classified into multiple clusters by evaluating that sample alone. The method used to
quantify the sample information is outlined below.
Sample Information Estimation
Input: M peak-aligned spike segments {v_i, i = 1, ..., M} with N samples per segment.
Output: information info_j carried by the spike samples {v_i(j), i = 1, ..., M}.
- Set j = 1 and construct the one-dimensional data set X = {v_i(j), i = 1, ..., M}.
- Obtain a nested cluster configuration based on X.
- Estimate the probability p_q that a spike is partitioned into the q-th cluster. Use the entropy to estimate the information info_j = −Σ_{p_q > p_0} p_q ln(p_q), where p_0 is a threshold on the cluster size.
- Repeat the procedure for the next sample, j = j + 1.
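A simplified sketch of this estimator is given below. It is our illustration rather than the chip implementation: the nested clustering step is replaced by a fixed histogram partition of each one-dimensional slice, which suffices for the rough estimate the procedure calls for.

```python
import numpy as np

def sample_information(spikes, p0=0.05, n_bins=16):
    """Estimate the entropy-based information carried by each sample
    position of M peak-aligned spikes (M x N array). Clusters along each
    one-dimensional slice are approximated by histogram bins."""
    M, N = spikes.shape
    info = np.zeros(N)
    for j in range(N):
        counts, _ = np.histogram(spikes[:, j], bins=n_bins)
        p = counts / M                     # probability of each 1-D "cluster"
        p = p[p > p0]                      # ignore clusters below threshold p0
        info[j] = -np.sum(p * np.log(p))   # entropy of the retained partition
    return info
```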
The computation required to accurately quantify the entropy of an underlying data set is typically high. However, only a rough estimate is needed to select informative samples. Therefore, the number of spikes used to compute the information can be reduced to a relatively small value, which makes a hardware implementation feasible in terms of storage space and computational complexity. In the synthesized spike data we used, each sequence contains 3 neuronal sources with similar firing rates, so the possible information scores are 0, (1/3)ln(3) + (2/3)ln(1.5), or ln(3). When we increase the number of training events to M = 300, the information scores approximately settle to these expected values, as shown in Figure 2.
Quantitative comparisons investigating the existence of informative samples in noisy spikes have been carried out. Results using synthesized spikes built from recordings in neocortex and basal ganglia [4] are shown in Figure 2. Two observations are clear. First, the amount of information carried by each sample varies, indicating a non-uniform signal-to-noise-plus-distortion ratio. Second, if due to severe noise, distortion, and similarity of the spike clusters few of the samples are informative, it becomes necessary to create informative samples. As a constraint on creating such samples, the computation and storage requirements have to remain feasible for microchip implementation.
5 Create Informative Samples Using a Frequency Shaping Filter
As analyzed in Section 3, a frequency shaping filter can be used to manifest different geometry kernel functions, reduce noise, and redistribute distortion among spike samples.
Figure 2: (a)-(h) Information carried by samples from spikes and their derivatives. The horizontal axis is the sample number and the vertical axis is the estimated entropy. The black solid line and the red dotted line represent the sample information from spikes and from their derivatives, respectively.
Such a filter is designed to boost high-frequency spike features, which should be localized and less correlated when examined in the time domain. In this section, we use the derivative operation as an example to illustrate the usefulness of the frequency shaping filter, and further demonstrate that the filter creates additional informative samples.
In a discrete-time spike sequence, the frequency response of taking the derivative (first difference) is
\[ H(f) = 2j\, e^{-j\pi f/f_s}\, \sin(\pi f / f_s), \tag{9} \]
where f_s is the sampling frequency of the ADC.
As shown in Section 3.1, the difference between the neuron geometry kernel functions W(t) of similar spikes is contained in the higher-frequency components, which are emphasized by the derivative operation.
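A minimal sketch of the first-difference filter and its magnitude response follows (the sampling rate is an assumed value):

```python
import numpy as np

def derivative_filter(spike):
    """First difference y[n] = x[n] - x[n-1], the discrete derivative
    whose frequency response is given in Eq. 9."""
    return np.diff(spike)

fs = 30000.0                              # assumed ADC sampling rate (Hz)
f = np.linspace(0.0, fs / 2, 256)
H_mag = 2.0 * np.sin(np.pi * f / fs)      # |H(f)| rises with frequency,
                                          # attenuating low-frequency noise
```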
The noise power spectrum is also modified by taking the derivative: intuitively, low-frequency noise is reduced and high-frequency thermal noise is amplified, as shown in Figure 1(b). The quantitative impact of the frequency shaping filter on the noise depends on the recording system and biological environment; the typical values of α we observe vary around 2 within the signal band, as shown in Figure 1. Using α = 2 for illustration, the filter's influence on the noise can be quantified via Eq. 9 as
\[ \eta \approx \frac{f_{c1}\, f_{c2}}{2\, f_{spike}^2}, \tag{10} \]
where f_c1 and f_c2 are the lower and higher corner frequencies of the digital filter, respectively. When η is less than 1, the SNR further increases, which favors spike sorting from the noise perspective.
The distribution of sampling distortion among the samples is also altered after taking the derivative. In the original waveforms, samples close to the peaks suffer less distortion than those on the transitions. After taking the derivative, samples that initially suffered large distortion become less distorted, because V''(t) of the waveform in Eq. 2 has at least one zero crossing during the transition. Quantitative experiments demonstrating the creation of informative samples have been carried out.
A subset of the results is shown in Figure 2(a)-(h). In these plots, the black solid lines represent the information carried by the samples of the spikes, and the red dotted lines that of their derivatives. The spike data are 8 challenging sequences from [4], compiled from recordings in the neocortex and basal ganglia with superimposed noise. All 8 sequences contain 3 neuronal sources. During the estimation of sample entropy, a mean-shift classifier with a hierarchical merging procedure is used to quantify the partition; small clusters containing fewer than 5% of the events are ignored. The corresponding feature extraction results using the most informative samples from the spikes as well as from their derivatives are shown in Figure 3(a)-(h), which clearly present a 3-cluster configuration.
Figure 3: Feature extraction results using the proposed algorithm and competing algorithms. (a)-(h) Extracted features using the most informative samples of spikes and their derivatives (proposed). (i)-(p) Extracted features using a pre-specified subset of samples, consisting of the peaks of the spike derivative and the spike height (implemented on chip, proposed). (q)-(x) PCA-based feature extraction. (y)-(af) Wavelet-based feature extraction. (ag)-(an) Spike-peak-based feature extraction. (All algorithms are tested without interpolation. The nonlinear energy operator (NEO) [11] is used for spike detection. Overlapping spikes within 600 µs are ignored. The Haar wavelet is used for wavelet-based feature extraction, and features are taken from the variance peaks after the wavelet transform. Two-dimensional features are projected from a higher-dimensional space.)
Table 1: Accuracy comparison of different spike feature extraction algorithms

Sequence Number       1      2      3      4      5      6      7      8
Informative Samples   97.8%  97.8%  97.8%  97.0%  98.0%  99.2%  96.6%  92.0%
Hardware              97.6%  97.6%  97.4%  95.4%  98.2%  98.4%  93.2%  91.0%
PCA                   97.8%  89.0%  60.4%  55.2%  97.6%  77.8%  80.2%  68.8%
Wavelets              92.4%  91.0%  81.8%  57.4%  97.4%  68.2%  51.0%  49.4%
Spike Peaks           34.2%  33.8%  35.4%  34.0%  36.2%  37.8%  35.6%  36.0%

Note: Informative samples are harvested from both spikes and their derivatives. Hardware uses the peaks of spikes and their derivatives. Each sequence contains 3000 spikes, taken from [4].
6 Experiments
The synthesized spike sequences used in Figure 2 are applied to compare the sorting accuracies of the different approaches. Feature extraction using the pre-specified subset, consisting of the peaks of the spike derivative and the height of the original spike, is shown in Figure 3(i)-(p). Comparative feature extraction results for the competing algorithms (PCA, wavelets, and spike peaks and width) are also shown in Figure 3. The extracted spike features are clustered on a PC [12]. About 5% of the spikes, which overlap, are ignored in order to cleanly quantify the performance of the different feature extraction algorithms. The proposed feature extraction algorithm using the most informative samples (corresponding to Figure 3(a)-(h)) achieves the highest average accuracy (97.0%).
Figure 4: (a) Recorded spikes from cat cerebral cortex, superimposed. (b) Extracted spike features using a subset of samples, plotted and grouped with a clustering algorithm implemented on a PC. (c) The classified spike clusters, superimposed. (d)-(k) The individual spike clusters superimposed in (c). Clusters in (d)-(g) are plotted on a smaller vertical scale (−0.3, 0.15) compared with (h)-(j) in (−0.5, 0.3) and (k) in (−0.5, 0.5).
The hardware [9, 8] using the pre-specified subset gives similar accuracy (96.1%). The counterpart algorithms (PCA, wavelets, and spike peaks and width) give 78.4%, 73.6% and 35.4%, respectively. The per-sequence accuracy comparisons are listed in Table 1.
Animal sequences were collected to test the performance of the proposed algorithm. An example with overlapping spike clusters, recorded from the cat cerebral cortex, is selected for demonstration. The sorting results are displayed in Figure 4. In Figure 4(a), the 1210 detected spikes are superimposed. Extracted spike features using the pre-specified subset of samples implemented on chip are shown in Figure 4(b). The discrete points in feature space are grouped into 8 clusters, shown in different colors, using off-line clustering. Fewer than 10% of the spikes (noisy and overlapping ones) are discarded; the rest are classified and plotted in Figure 4(c). To further assess the validity of the classified spike clusters, the superimposed clusters of Figure 4(c) are individually plotted in Figure 4(d)-(k).
A second example, containing more than 4000 spikes recorded from a monkey, is shown in Figure 5. In Figure 5(a), the detected spikes are superimposed. Extracted features using the pre-specified subset of informative samples are shown in Figure 5(b); Figure 5(c) zooms in on (b) to display the isolation quality of the clusters in feature space. The corresponding PCA-based feature extraction is shown in Figure 5(d) for comparison. The classified spike clusters obtained using the pre-specified subset of informative samples are plotted in Figure 6(a)-(e). The spike clusters in Figure 6(b), (c) and (d) resemble each other in shape and magnitude. To demonstrate that the informative-samples-based sorting does not over-partition the data set, the derivatives of the spike clusters in Figure 6(a)-(e) are also plotted in Figure 6(f)-(j), with the same color coding. Clearly, Figure 6(g), (h) and (i) present three well-differentiated waveform patterns in either peak-to-peak magnitude or shape.
7 Conclusion
A sample-selection-based spike feature extraction algorithm is reported in this paper. The theoretical framework includes neuronal geometry signatures, a frequency shaping filter, and informative sample selection. Unlike PCA, which uses correlated features, the sample selection algorithm focuses on localized and uncorrelated features, which are strengthened by the frequency shaping filter. On simulated spike waveforms from a public database, the algorithm demonstrates improved sorting accuracy compared with many competing algorithms. The algorithm is designed for integrated microchip implementation and real-time spike sorting. A preliminary hardware implementation has been realized using an integrated circuit chip interfaced with a personal computer.
Figure 5: (a) Detected spikes from a monkey; (b) extracted spike features using a subset of samples; (c) zoom-in of (b) for better visualization; (d) extracted features using PCA.
Figure 6: (a)-(e) The 5 classified clusters of the monkey sequence shown in Figure 5; (f)-(j) the derivatives of the 5 classified clusters. Cluster identity is indicated by color.
References
[1] Zumsteg ZS, Kemere C, O'Driscoll S, Santhanam G, Ahmed RE, Shenoy KV, et al. Power feasibility of implantable digital spike sorting circuits for neural prosthetic systems. IEEE Trans Neural Syst Rehabil Eng. 2005 Sep;13(3):272-279.
[2] Lewicki MS. Bayesian modeling and classification of neural signals. Advances in NIPS. 1994; p. 590-597.
[3] Vargas-Irwin C, Donoghue JP. Automated spike sorting using density grid contour clustering and subtractive waveform decomposition. J Neurosci Methods. 2007;164(1).
[4] Quian Quiroga R, Nadasdy Z, Ben-Shaul Y. Unsupervised spike detection and sorting with wavelets and superparamagnetic clustering. Neural Comput. 2004 Aug;16(8):1661-1687.
[5] Takahashi S, Sakurai Y. Coding of spatial information by soma and dendrite of pyramidal cells in the hippocampal CA1 of behaving rats. Eur J Neurosci Methods. 2007 Oct;26(7):2033-2045.
[6] Jolliffe IT. Principal Component Analysis. New York: Springer-Verlag; 2002.
[7] Yang Z, Chen T, Liu W. A neuron signature based spike feature extraction algorithm for on-chip implementation. To appear in Proc 30th Ann Int Conf IEEE EMBS. 2008 Aug; p. 4237-4240.
[8] Chen T, Yang Z, Liu W, Chen L. NEUSORT2.0: a multiple-channel neural signal processor with systolic array buffer and channel-interleaving processing schedule. To appear in Proc 30th Ann Int Conf IEEE EMBS. 2008 Aug; p. 6652-6656.
[9] Chae M, Liu W, Yang Z, Chen T, Kim J, Sivaprakasam M, et al. A 128 channel 6mW wireless neural recording IC with on-the-fly spike sorting and UWB transmitter. IEEE ISSCC Dig Tech Papers. 2008 Feb;7(6):241-261.
[10] Buzsaki G, Penttonen M, Nadasdy Z, Bragin A. Pattern and inhibition-dependent invasion of pyramidal cell dendrites by fast spikes in the hippocampus in vivo. Proc Natl Acad Sci USA. 1996 Sep;93(18):9921-9925.
[11] Kaiser JF. On a simple algorithm to calculate the energy of a signal. Proc IEEE Int Conf Acoustics, Speech and Signal Processing. 1990; p. 381-384.
[12] Yang Z, Zhao Q, Liu W. Neural signal classification using a simplified feature set with nonparametric clustering. To appear in Neurocomputing.
Characterizing response behavior in multi-sensory perception with conflicting cues
Rama Natarajan¹, Iain Murray¹, Ladan Shams², Richard S. Zemel¹
¹ Department of Computer Science, University of Toronto, Canada
{rama,murray,zemel}@cs.toronto.edu
² Department of Psychology, University of California Los Angeles, USA
[email protected]
Abstract
We explore a recently proposed mixture model approach to understanding interactions between conflicting sensory cues. Alternative model formulations, differing in their sensory noise models and inference methods, are compared based on their fit to experimental data. Heavy-tailed sensory likelihoods yield a better description of the subjects' response behavior than standard Gaussian noise models. We study the underlying cause for this result, and then present several testable predictions of these models.
1 Introduction
A natural scene contains several multi-modal sensory cues to the true underlying values of its physical properties. There is substantial evidence that the brain deals with the sensory information from multiple modalities simultaneously, to form a coherent and unified percept of the world and to guide action. A major focus of multi-sensory perceptual studies has been exploring the synergistic as well as modulatory interactions between individual sensory cues. The perceptual consequences of these interactions can be effectively explored in cases where the cues are in conflict with each other, resulting in potentially illusory percepts such as the "ventriloquism effect" [1].

A well-tested hypothesis regarding multi-sensory cue interaction is that the individual sensory estimates are combined in a linear fashion, weighted by their relative reliabilities. Most studies that expound this linear approach assume that the sensory noise in the different modalities is independent, and that the sensory likelihoods can be well approximated by Gaussian distributions. Under these assumptions, the maximum-likelihood estimator of the underlying physical variable is an affine combination of the sensory estimates, weighted in proportion to their precisions. This linear model predicts that the variance of the posterior distribution is always lower than that of the individual cues. However, data from several psychophysical studies contradict this prediction, necessitating non-linear computational strategies for dealing with the inputs.

Recent studies [2; 3; 4; 5] have proposed a particular form of mixture model to address response behavior in situations with a large conflict between sensory stimuli. Conflicts arise when corresponding cues suggest very different estimates of an underlying variable. The basic intuition behind these models is that large stimulus disparities might be a consequence of the stimuli having resulted from multiple underlying causal factors. We evaluate the different formulations in their ability to model experimental data [6] that exhibit very interesting non-linear response behavior under conflicting stimulus conditions. The formulations differ in how perceptual estimates are derived from sensory data. We demonstrate some inadequacies of the current models and propose an alternative formulation that employs heavy-tailed sensory likelihoods. The proposed model not only achieves better fits to the non-linear response behavior in the experimental data but also makes several quantitatively testable predictions.
2 A Mixture Model for Evaluating Cue Interactions
In this section, we present an overview of a recently proposed mixture model approach [3] to dealing with conflicting sensory inputs. We describe two approaches to inference under this model, causal averaging and causal selection, and analyze the model predictions on our simulation of an auditory localization task [6].
The environmental variables of interest are the spatial locations of an auditory and a visual stimulus, denoted by s_a and s_v respectively. Information about the stimuli is provided by noisy sensory cues x_a and x_v. The model evaluates the sensory cues under two discrete hypotheses (C ∈ {1, 2}) regarding the causal structure underlying the generation of the stimuli: the two stimuli could arise from the same (C = 1) or from different (C = 2) causal events. This mixture model instantiates a simple idea: if there is a common cause, the cues are combined; otherwise they are segregated. The model is characterized by (i) the sensory likelihoods P(x_v|s_v) and P(x_a|s_a), (ii) the prior distribution P(s_v, s_a) over true stimulus positions, and (iii) the prior over hypotheses P(C).
2.1 Generating sensory data
The standard model assumes Gaussian sensory likelihoods and prior distributions. The true auditory and visual stimulus positions are assumed to be the same for C = 1, i.e., s_a = s_v = s, drawn from a zero-mean Gaussian prior distribution: s ∼ N(0, σ_p²), where σ_p is the standard deviation of the distribution. The noisy sensory evidence x_a is a sample from a Gaussian distribution with mean s_a = s and standard deviation σ_a: x_a ∼ N(x_a; s_a = s, σ_a²); similarly for the visual evidence: x_v ∼ N(x_v; s_v = s, σ_v²).

When there are C = 2 underlying causes, they are drawn independently from the zero-mean Gaussian prior distribution: s_v ∼ N(0, σ_p²), s_a ∼ N(0, σ_p²). Then x_v ∼ N(x_v; s_v, σ_v²) and x_a ∼ N(x_a; s_a, σ_a²).
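A minimal sketch of one draw from this generative process (NumPy; the parameter values are placeholders) is:

```python
import numpy as np

def sample_trial(p_common=0.2, sigma_p=100.0, sigma_a=5.0, sigma_v=2.5, rng=None):
    """Draw one trial (C, s_a, s_v, x_a, x_v) from the generative model."""
    rng = rng or np.random.default_rng()
    C = 1 if rng.random() < p_common else 2
    if C == 1:
        s_a = s_v = rng.normal(0.0, sigma_p)         # one shared cause
    else:
        s_a, s_v = rng.normal(0.0, sigma_p, size=2)  # two independent causes
    x_a = rng.normal(s_a, sigma_a)                   # noisy auditory cue
    x_v = rng.normal(s_v, sigma_v)                   # noisy visual cue
    return C, s_a, s_v, x_a, x_v
```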
The belief in each hypothesis given the cues x_a and x_v is defined by the posterior distribution:
\[ P(C \mid x_v, x_a) = \frac{P(x_v, x_a \mid C)\, P(C)}{P(x_v, x_a)}. \tag{1} \]
Since the hypotheses are discrete, C ∈ {1, 2}, the normalization constant is P(x_v, x_a) = P(x_v, x_a|C=1) P(C=1) + P(x_v, x_a|C=2) (1 − P(C=1)).
Given this particular causal generative model, the conditional likelihoods in Equation 1 are defined as
\[ P(x_v, x_a \mid C=1) = \int P(x_v \mid s_v = s)\, P(x_a \mid s_a = s)\, P(s)\, ds \]
and
\[ P(x_v, x_a \mid C=2) = \int P(x_v \mid s_v)\, P(s_v)\, ds_v \int P(x_a \mid s_a)\, P(s_a)\, ds_a. \]
The conditional sensory likelihoods factorize as P(x_v, x_a|s_v, s_a, C) = P(x_v|s_v) P(x_a|s_a).
2.2 Inference methods
2.2.1 Causal averaging
The conditional posterior over the stimulus variables is calculated for each hypothesis as P(s_v, s_a|x_v, x_a, C=1) and P(s_v, s_a|x_v, x_a, C=2). The standard approach to computing the full posterior distribution of interest, P(s_a, s_v|x_a, x_v), is to integrate the evidence over both hypotheses, weighted by the posterior distribution over C (Equation 1). This model averaging approach to causal inference is specified by the following identity:
\[ P_{avg}(s_v, s_a \mid x_v, x_a) = \sum_C P(s_v, s_a \mid x_v, x_a, C)\, P(C \mid x_v, x_a) \tag{2} \]
\[ = \sum_C \frac{P(x_v, x_a \mid s_v, s_a, C)\, P(s_v, s_a \mid C)\, P(C \mid x_v, x_a)}{P(x_v, x_a \mid C)}. \tag{3} \]
Here, P(C=1|x_v, x_a) = π_c is the posterior mixing proportion, and (1 − π_c) = P(C=2|x_v, x_a).
2.2.2 Causal selection
An alternative approach is to calculate an approximate posterior distribution by first selecting the hypothesis C* that maximizes the posterior distribution P(C|x_v, x_a). Under this model selection approach, subsequent inference is based on the selected hypothesis alone:
\[ C^* = \operatorname*{argmax}_{C \in \{1,2\}} P(C \mid x_v, x_a). \tag{4} \]
The posterior distribution over stimulus location is then approximated as
\[ P_{sel}(s_v, s_a \mid x_v, x_a) \approx P(s_v, s_a \mid x_v, x_a, C = C^*) \tag{5} \]
\[ = \frac{P(x_v, x_a \mid s_v, s_a, C = C^*)\, P(s_v, s_a \mid C = C^*)}{P(x_v, x_a \mid C = C^*)}. \tag{6} \]
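For the Gaussian model, the marginals in Eq. 1 have closed forms (standard Gaussian integrals with the zero-mean prior), so both inference rules can be sketched compactly. For brevity, the sketch below uses the posterior mean within each hypothesis, whereas the simulations in Section 2.3 take the peak of the full posterior; all function and parameter names are ours.

```python
import numpy as np

def posterior_common(x_a, x_v, p_common, sigma_p, sigma_a, sigma_v):
    """P(C=1 | x_a, x_v) of Eq. 1, using the closed-form Gaussian marginals."""
    va, vv, vp = sigma_a**2, sigma_v**2, sigma_p**2
    z1 = va * vv + va * vp + vv * vp
    like1 = np.exp(-0.5 * ((x_v - x_a)**2 * vp + x_v**2 * va + x_a**2 * vv) / z1) \
            / (2 * np.pi * np.sqrt(z1))
    like2 = np.exp(-0.5 * (x_v**2 / (vv + vp) + x_a**2 / (va + vp))) \
            / (2 * np.pi * np.sqrt((vv + vp) * (va + vp)))
    return like1 * p_common / (like1 * p_common + like2 * (1 - p_common))

def estimate_s_a(x_a, x_v, p_common, sigma_p, sigma_a, sigma_v, select=True):
    """Auditory estimate under causal selection (Eqs. 4-6) or causal
    averaging (Eqs. 2-3), with posterior-mean estimates per hypothesis."""
    pi_c = posterior_common(x_a, x_v, p_common, sigma_p, sigma_a, sigma_v)
    wa, wv, wp = sigma_a**-2, sigma_v**-2, sigma_p**-2
    s_c1 = (wa * x_a + wv * x_v) / (wa + wv + wp)  # fused estimate (C = 1)
    s_c2 = wa * x_a / (wa + wp)                    # segregated estimate (C = 2)
    if select:
        return s_c1 if pi_c > 0.5 else s_c2        # pick the MAP hypothesis
    return pi_c * s_c1 + (1 - pi_c) * s_c2         # average over hypotheses
```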
2.3 Evaluating the models on experimental data
Here, we evaluate the causal averaging and selection models on an auditory localization task [6] in which visual and auditory stimuli were presented at varying spatial and temporal disparities. In addition to reporting the location of the auditory target, subjects were asked to report whether they perceived the two stimuli to be perceptually unified. The variables examined were the bias and the variance of the subjects' estimates for each stimulus condition. The data exhibit very interesting non-linear response behavior (solid lines in Figures 1A and 1D).
In our simulation of the task, the auditory target was presented at locations {0°, 5°, 10°} left or right of fixation. Although the real experiment varied the fixation location from trial to trial, this was found to have no effect on the subsequent analyses, and the data were collapsed across all fixation locations; hence, we assume the fixation point to be at the center of space (0°). The visual stimuli were assumed to be temporally coincident with the auditory stimuli and were presented at varying spatial disparities {0°, 5°, 10°, 15°, 20°, 25°} left or right of the sound. The sensory evidence x_a and x_v were corrupted by Gaussian noise as described earlier.

Each stimulus combination {s_a, s_v} was presented with equal probability 2000 times. The spatial axis ranged from −25° to 25° and was divided into 1°-wide bins. On each trial, the model computes a posterior probability distribution over stimulus locations conditioned on the noisy cues x_a and x_v, according to Equation 3 or 6. It then estimates the visual and auditory locations ŝ_a and ŝ_v as the peak of the posterior distribution (maximum a posteriori estimate): ŝ_a = argmax_{s_a} P(s_a, s_v|x_a, x_v).
We have also simulated estimators using other criteria, such as minimizing the squared error of the estimates (i.e., taking the expected value of the posterior distribution); the results were very similar across the different estimators. Percent bias is given by (ŝ_a − s_a)/(s_v − s_a) × 100. Goodness of fit was computed using a squared error loss, to quantify the amount by which the model estimates differ from the behavioral data. For analysis, the trials were dichotomized into unity and non-unity trials based on the perception of spatial unity: a trial was classified as unity if the posterior probability P(C=1|x_v, x_a) was greater than some threshold θ, and non-unity otherwise.
The simulation results (i.e., the estimates ŝ_a and ŝ_v) were averaged across trials in each category. The parameters of the model are: 1) the stimulus location variance σ_p², 2-3) the observation variances σ_a² and σ_v², 4) the prior mixture proportion γ = P(C=1), and 5) the unity perception threshold θ. The parameter values were estimated to fit the experimental data and are provided in the figure captions.
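The trial dichotomization and the summary statistics described above can be sketched as follows (the helper names are ours, not from [6]):

```python
import numpy as np

def percent_bias(s_a_hat, s_a, s_v):
    """Percent bias of the auditory estimate toward the visual stimulus."""
    return 100.0 * (s_a_hat - s_a) / (s_v - s_a)

def split_by_unity(pi_c, responses, theta=0.5):
    """Split trials into unity / non-unity by thresholding P(C=1 | x_v, x_a)
    at theta, then return (mean, standard deviation) for each group."""
    pi_c = np.asarray(pi_c)
    responses = np.asarray(responses)
    unity = pi_c > theta
    summary = {}
    for name, mask in (("unity", unity), ("non-unity", ~unity)):
        r = responses[mask]
        summary[name] = (r.mean(), r.std(ddof=1)) if r.size > 1 else (np.nan, np.nan)
    return summary
```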
2.4 Simulation results for the Gaussian model
Figure 1 presents the predictions made by both theoretical models. The behavioral data [6] (solid lines in all plots) range over spatial disparities from −15° to 15°; error bars represent standard errors across 5 subjects. The model predictions (dashed lines) extend over a wider range, −25° to 25°. Some of the predicted trends are similar to the behavioral data. Regardless of the stimulus disparity, whenever the visual and auditory stimuli were perceived as unified, the predicted response bias was very high (dashed gray; Figure 1A), meaning that the auditory location was perceived to be very near the visual stimulus. When the stimuli appeared not to be unified, the auditory location was biased away from the visual stimulus, increasingly so as the disparity decreased (dashed black; Figure 1A).
Figure 1: Simulation results, Gaussian sensory likelihoods. In this and all subsequent figures, solid lines plot the behavioral data reported in [6] and dashed lines are the model predictions. (A) Localization biases in the data, plotted alongside predictions from both models. (B) Causal averaging model, response variability: σ_a = 8, σ_v = 0.05, γ = 0.15. (C) Causal selection model: σ_a = 6, σ_v = 2.5, γ = 0.2. For both models: σ_p = 100, θ = 0.5. (D) Distribution of localization errors in the data, for s_v − s_a = 0; reprinted with permission from [6]. (E,F) Localization errors predicted by the causal averaging and causal selection models, respectively.
However, both models exhibit one or more significant differences from the experimental observations. The predicted curves for unity trials (dashed gray; Figures 1B,C) are all concave, whereas the observed curves are convex (solid gray lines). On non-unity trials, too, the predicted response variabilities (dashed black lines) fit the real data (solid black lines) poorly.

An additional test of the appropriateness of the models is the predictions they make regarding the distribution of localization errors. An analysis of the behavioral data derived from the spatially coincident stimulus conditions (s_v − s_a = 0) revealed a distinct pattern (Figure 1D). On unity trials, the localization error was 0°, implying that the responses were clustered around the auditory target. On non-unity trials, the errors were bi-modally distributed and failed a test for normality [6]. Causal selection predicts a qualitatively similar distribution of errors (Figure 1F), suggesting that it may be the more appropriate inference strategy under the given task and model assumptions.
3 An Alternative Model for Sensory Likelihoods
3.1 Heavy-tailed likelihood formulation
In this section, we re-formulate the sensory likelihoods P(x_a|s_a) and P(x_v|s_v) as mixtures of a Gaussian and a uniform distribution. This mixture creates a likelihood function with heavy tails:
\[ x_v \sim \pi\,\mathcal{N}(x_v;\, s_v, \sigma_v^2) + \frac{1-\pi}{r_l};\qquad x_a \sim \pi\,\mathcal{N}(x_a;\, s_a, \sigma_a^2) + \frac{1-\pi}{r_l}, \tag{7} \]
where π is the mixture weight and r_l is the range of the uniform component.
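With the heavy-tailed likelihood, the marginals in Eq. 1 no longer have convenient closed forms; they can instead be evaluated numerically on a spatial grid, as in this sketch (the grid and the parameter values are our choices):

```python
import numpy as np

def heavy_tailed_like(x, s, sigma, pi=0.2, r_l=180.0):
    """Eq. 7: a Gaussian mixed with a uniform floor over a range r_l,
    which produces the heavy tails."""
    gauss = np.exp(-0.5 * (x - s)**2 / sigma**2) / np.sqrt(2 * np.pi * sigma**2)
    return pi * gauss + (1 - pi) / r_l

grid = np.linspace(-90.0, 90.0, 721)      # candidate source locations (deg)
ds = grid[1] - grid[0]
prior = np.exp(-0.5 * grid**2 / 100.0**2)
prior /= prior.sum() * ds                 # discretized N(0, sigma_p^2) prior

x_a, x_v = 4.0, 12.0                      # example noisy cues
la = heavy_tailed_like(x_a, grid, sigma=5.0)
lv = heavy_tailed_like(x_v, grid, sigma=2.5)
like_c1 = np.sum(la * lv * prior) * ds                       # common cause
like_c2 = np.sum(la * prior) * ds * np.sum(lv * prior) * ds  # separate causes
```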
3.2 Simulation results with heavy-tailed sensory likelihoods
Figure 2 presents the predictions made by the theoretical models based on heavy-tailed likelihoods.
Figure 2: Simulation results, heavy-tailed likelihoods. (A) Localization biases in the data, plotted alongside model predictions. (B) Causal averaging model, response variability: σ_a = 3.5, σ_v = 2. (C) Causal selection model: σ_a = 5, σ_v = 2.5. In both models, σ_p = 100, γ = 0.2, θ = 0.5, r_l = 180°. (D) Distribution of localization errors in the data, for s_v − s_a = 0. (E,F) Localization errors predicted by the heavy-tailed causal averaging and causal selection models.
Both models now provide a much better fit to the bias and variance than their Gaussian counterparts. The heavy-tailed causal averaging model (Figure 2B) makes reasonable predictions with regards to variability; however, both the magnitude and the trend of its predicted biases for non-unity trials (dotted line; Figure 2A) do not match the observations.

Here too, the best-fitting model is causal selection (dashed lines; Figures 2A,C). Its localization error distribution (Figure 2F) very closely matches the true observations (Figure 2D): the unity responses are uni-modally distributed about the target location s_a, and the non-unity responses are bi-modally distributed on either side of the target. Visually, this is a better prediction of the true distribution of errors than that made by the Gaussian causal selection model (Figure 1F); we are unable to make a quantitative comparison for want of access to the raw data.
Compared with the results in Figure 1, our models make very different bias and variance predictions for the spatial disparities that were not tested; this is discussed in detail in Section 4. The heavy-tailed likelihood model has two more free parameters (r_l and the mixing proportion π; Equation 7) than the Gaussian model, which is essentially the special case of the heavy-tailed mixture with π = 1. Although the Gaussian model may be preferred for its computational simplicity, it is a demonstrably poor fit to the data, and the heavy-tailed model is a worthwhile improvement.
3.3 Analyzing the likelihood models
The existence of heavy tails in the likelihood function seems to be a critical feature supporting the non-linear behavior in the data. We substantiate this suggestion using Figure 3, and attempt to give some intuition for the qualitative differences in variability and bias between Figures 1 and 2. The discussion below focuses on 3 disparity conditions: the congruent case |s_v − s_a| = 0 is chosen as a reference, while |s_v − s_a| = 10 and |s_v − s_a| = 25 are chosen because the Gaussian and heavy-tailed models tend to differ most in their predictions at these disparities.
Let us first consider the unity case. In general, most of the samples on unity trials come from the region of space where the auditory and visual likelihoods overlap. When the true disparity |s_v − s_a| = 0, the two likelihoods overlap maximally (Figures 3Aii and 3Cii). Hence, regardless of the form of the likelihood, the variability on unity trials at |s_v − s_a| = 0 should lie roughly between σ_v and σ_a. This can be verified in Figures 1C and 2C.
Figure 3: Analyzing the likelihood models: results from the causal selection models. In all plots, light-gray histograms show samples x_v from the visual likelihood distribution; dark-gray histograms show x_a. Black histograms are built only from samples x_a on which either a unity (A,C) or a non-unity (B,D) judgment was made. Each panel corresponds to one of the three chosen disparities; the histograms in each panel pool samples from all stimulus conditions corresponding to that particular disparity.
Now, one of the biggest differences between the likelihood models is what happens to this variability as |s_v − s_a| increases. In the Gaussian case, the amount of overlap between the two likelihoods decreases (Figures 3Ai, 3Aiii). Consequently, the samples come from a somewhat smaller region of space and hence the variability also decreases. This corresponds to the concave curves predicted by the Gaussian model (Figure 1C; dashed gray). For the heavy-tailed likelihood, in contrast, the overlapping regions roughly increase with increasing disparity, due to the long tails (Figures 3Ci, 3Ciii). This is reflected in the gradually increasing variability on unity trials, corresponding to the better-matching convex curves predicted by the heavy-tailed model (Figure 2C).

On the non-unity trials, most of the samples come from non-overlapping regions of space. Here, the biggest difference between the likelihood models is that in the Gaussian case, beyond a certain spatial limit, the variability tends to increase with increasing |s_v − s_a|; we also see this trend in the simulation results presented in [2; 4]. This is because as the disparity increases, the degree of overlap between the two likelihoods decreases and the variability approaches σ_a (Figures 3Bi, 3Biii). However, the behavior in the real data suggests that the variability remains constant. With heavy-tailed likelihoods, the tails of the two likelihoods continue to overlap even as the disparity increases; hence the variability is roughly constant (Figures 3Di, 3Diii).
4 Model Predictions
Quantitative predictions (variance and bias): Our heavy-tailed causal selection model makes two predictions with regards to variability and bias for stimulus conditions not yet tested. The first is that on non-unity trials, as the spatial disparity s_v − s_a increases, the localization variability remains roughly constant, at a value comparable to the standard deviation of the auditory likelihood (Figure 2C; black dashed plot). However, the percent response bias approaches zero (Figure 2A; black dashed plot), indicating that when the spatial disparity is very high and the stimuli are perceived as being independent, the auditory localization response is consistent with auditory dominance.

The second prediction is that the percent bias gradually decreases with increasing disparity on unity trials as well. This suggests that even when highly disparate stimuli are perceived as being unified, perception may be dominated by the auditory cues. Our results also predict that the variability in this case continues to increase very gradually with increasing disparity, up to some spatial limit (|s_v − s_a| = 20° in our simulations), after which it begins to decrease. This accords with intuition, since for very large disparities the number of trials on which the stimuli are perceived as being unified will be very small.
Qualitative prediction (distribution of localization errors): Our model also makes a qualitative prediction concerning the distribution of localization errors for incongruent (s_v − s_a ≠ 0) stimulus conditions. In both Figures 4A and 4B, the localization error on unity trials equals the stimulus disparity s_v − s_a = 10°, indicating that even at this high disparity the responses are clustered close to the visual stimulus. On non-unity trials the error is about 5° here; the responses are more broadly distributed and the bias is greatly reduced. The Gaussian and heavy-tailed predictions differ in how quickly the error distributions go to zero.
Figure 4: Model predictions. (A,B) Localization error distributions predicted by the Gaussian and heavy-tailed causal selection models; the plots correspond to the stimulus condition s_v = 20, s_a = 10. (C,D) Response variability and bias predicted by the heavy-tailed causal averaging and selection models on a simulation of an audio-visual localization task [3].
Specificity to experimental task: In the experimental task we have examined here [6], subjects were asked to first indicate the perceived location of the sound on each trial and then to report their judgment of unity. The requirement to explicitly make a unity judgment may incur an experimental bias towards the causal selection model.

To explore the potential influence of task instructions on subjects' inference strategy, we tested our models on a simulation of a different audio-visual spatial localization task [3]. There, subjects were asked to report both the visual and the auditory stimulus locations and were not explicitly instructed to make unity judgments. The authors employed model averaging to explain the results [3], and the data were found to have a very high likelihood under their model. However, they do not analyze the variability in the subjects' responses, and this aspect of behavior as a function of spatial disparity is not readily apparent in their published data.
We evaluated both our heavy-tailed causal averaging model and our causal selection model on a simulation of this experiment. The two models make very different predictions: causal averaging predicts that the response variability will increase monotonically with increasing disparity, while selection predicts a less straightforward trend (Figure 4C). Both models predict a similar amount of response bias, which decreases with increasing disparity (Figure 4D). This particular prediction is confirmed by the response bias in the behavioral data plot made available in [3]. Considering the paradigmatic differences between the two studies ([6] and [3]) and the wide range in bias, applying both inference methods and both likelihood models to this data could be very informative.
Adaptation of the prior: One interesting aspect of inference under this generative model is that, as the value of π = P(C = 1) increases, the variability also increases for both unity and non-unity trials across all disparities. However, the response bias remains unchanged. Given this correlation between response variability and the prior over hypotheses, our approach may be used to understand whether and how subjects' priors change during the course of an experimental session. Considering that the best value across all trials for this prior is quite small (π ≈ 0.2), we hypothesize that this value will be quite high at the start of an experiment and will gradually decrease. This hypothesis leads to a prediction that variability decreases during an experimental session.
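To make the two inference rules concrete, the following sketch (ours, not code from the paper) computes the posterior over a common cause and the resulting causal-averaging and causal-selection estimates, assuming Gaussian likelihoods and the standard generative model of [3]; the heavy-tailed variant would replace the likelihood integrals, which then generally lack a closed form. All parameter names and values are illustrative.

```python
import numpy as np

def posterior_common_cause(xv, xa, sigma_v, sigma_a, sigma_p, pi_c):
    """P(C = 1 | xv, xa) for the two-hypothesis generative model of [3],
    assuming Gaussian likelihoods and a zero-mean Gaussian spatial prior."""
    # Likelihood under a common cause: integrate over the shared source location.
    var1 = sigma_v**2 * sigma_a**2 + sigma_v**2 * sigma_p**2 + sigma_a**2 * sigma_p**2
    p_x_c1 = np.exp(-((xv - xa)**2 * sigma_p**2 + xv**2 * sigma_a**2 + xa**2 * sigma_v**2)
                    / (2 * var1)) / (2 * np.pi * np.sqrt(var1))
    # Likelihood under independent causes: two separate marginal integrals.
    p_x_c2 = (np.exp(-xv**2 / (2 * (sigma_v**2 + sigma_p**2)))
              / np.sqrt(2 * np.pi * (sigma_v**2 + sigma_p**2))
              * np.exp(-xa**2 / (2 * (sigma_a**2 + sigma_p**2)))
              / np.sqrt(2 * np.pi * (sigma_a**2 + sigma_p**2)))
    return pi_c * p_x_c1 / (pi_c * p_x_c1 + (1 - pi_c) * p_x_c2)

def estimate_sa(xv, xa, sigma_v, sigma_a, sigma_p, pi_c, rule="averaging"):
    """Auditory location estimate under causal averaging or causal selection."""
    pc1 = posterior_common_cause(xv, xa, sigma_v, sigma_a, sigma_p, pi_c)
    # Optimal estimates conditioned on each causal structure.
    s_c1 = (xv / sigma_v**2 + xa / sigma_a**2) / (1 / sigma_v**2 + 1 / sigma_a**2 + 1 / sigma_p**2)
    s_c2 = (xa / sigma_a**2) / (1 / sigma_a**2 + 1 / sigma_p**2)
    if rule == "averaging":
        return pc1 * s_c1 + (1 - pc1) * s_c2   # posterior-weighted mixture
    return np.where(pc1 > 0.5, s_c1, s_c2)     # commit to the more probable structure
```

Sweeping the spatial disparity while drawing noisy (xv, xa) samples reproduces the qualitative variability and bias trends discussed above; only the selection rule produces the non-monotonic variability profile.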
5 Discussion
In this paper, we ventured to understand the computational mechanisms underlying sensory
cue interactions that give rise to a particular pattern of non-linear response behavior [6],
using a mixture of two different models that could have generated the sensory data. We
proposed that the form of the sensory likelihood is a critical feature that drives non-linear
behavior, especially at large stimulus disparities. In particular, a heavy-tailed likelihood
function more accurately fits subjects? bias and variance in a cue combination task.
Heavy-tailed distributions have been used previously in modeling cue interactions [7, 8]. In this paper, we went further by comparing the ability of heavy-tailed and Gaussian likelihood models to describe behavior. Qualitative fits of summarised statistics such as bias and variance are insufficient to make any strong claims about human perceptual processes; nevertheless, this work provides some insight into the potential functional role of sensory noise.
Another significant contribution in this paper is the critical evaluation of model selection
versus averaging approaches to inference. These two inference methods may predict different variances in their estimates, as a function of stimulus conflict. As suggested in Section
4, having these different models at hand allows one to examine how task instructions affect
subject behavior.
We noted in Section 3.2 that the heavy-tailed model is more complex than the Gaussian model. Although we have not included any complexity penalty, this formulation was supported by two aspects: (i) it was relatively insensitive to parameter settings, providing a better fit to the data than the Gaussian model for a wide range of parameter values; (ii) optimizing the fit of the Gaussian model required implausible values for the parameters σ_a, σ_v (Fig 1B), whereas the parameters for the heavy-tailed model accorded well with published data.
One shortcoming of our results is that, even though the model bias for unity trials captures the slightly increasing trend as disparity decreases, it is not as large as in the behavioral data (close to 100%) or as that predicted by the Gaussian models. This does not seem to be a consequence of the parameter values chosen. One interpretation provided by [6] of the large bias in the data is that a perceptual decision (unity or non-unity) determines a sensorimotor action (localization response). One response strategy might then be to ignore the posterior probability P(s_a | x_v, x_a) once unity is judged and set ŝ_a = ŝ_v; although this results in a prediction of higher bias, the strategy is not Bayes-optimal. Yet another potential limitation of our approach is that the only form of noise we consider is sensory; we do not yet take into account any motor component that may drive target localization.
Currently, we have access to only an estimate of the average variance in subjects' auditory target location estimates. On the computational side, one interesting avenue for future work would be to evaluate the model averaging and selection hypotheses based on a likelihood model derived directly from the raw data. On the experimental side, one of the major inadequacies of most experimental paradigms is that the only (approximate) measure of a subject's perceptual uncertainty involves measuring the response variability across a large number of trials. An alternative paradigm that allows measurement of the perceptual uncertainty on a single trial could provide important constraints on computational models of the perceptual phenomena. At the neural level, a key step entails exploring biologically plausible neural implementations of the mixture model approach.
Acknowledgments
The authors would like to thank the Natural Sciences and Engineering Research Council of Canada and the Canadian Institute for Advanced Research (RN and RZ), the government of Canada (IM), and the UCLA Faculty Grants Program and UCLA Faculty Career Development award (LS).
References
[1] I. P. Howard and W. B. Templeton. Human spatial orientation. Wiley, New York, 1966.
[2] Konrad P. Körding and Joshua B. Tenenbaum. Causal inference in sensorimotor integration. In NIPS, pages 737–744. MIT Press, 2006.
[3] Konrad P. Körding, Ulrik Beierholm, Wei Ji Ma, Steven Quartz, Joshua B. Tenenbaum, and Ladan Shams. Causal inference in multisensory perception. PLoS ONE, 2(9), 2007.
[4] Y. Sato, T. Toyoizumi, and K. Aihara. Bayesian inference explains perception of unity and ventriloquism aftereffect. Neural Comp., 19:3335–55, 2007.
[5] Alan Stocker and Eero Simoncelli. A Bayesian model of conditioned perception. In NIPS 20, pages 1409–1416. MIT Press, Cambridge, MA, 2008.
[6] M. T. Wallace, G. E. Roberson, W. E. Hairston, B. E. Stein, J. W. Vaughan, and J. A. Schirillo. Unifying multisensory signals across time and space. Exp Brain Res., 158(2):252–8, 2004.
[7] David C. Knill. Robust cue integration: A Bayesian model and evidence from cue-conflict studies with stereoscopic and figure cues to slant. Journal of Vision, 7(7):1–24, 2007.
[8] Alan A. Stocker and Eero P. Simoncelli. Noise characteristics and prior expectations in human visual speed perception. Nat. Neurosci., 9:578–585, 2006.
Overlaying classifiers:
a practical approach for optimal ranking
Stéphan Clémençon
Telecom Paristech (TSI) - LTCI UMR Institut Telecom/CNRS 5141
[email protected]
Nicolas Vayatis
ENS Cachan & UniverSud - CMLA UMR CNRS 8536
[email protected]
Abstract
ROC curves are one of the most widely used displays to evaluate performance
of scoring functions. In the paper, we propose a statistical method for directly
optimizing the ROC curve. The target is known to be the regression function up
to an increasing transformation and this boils down to recovering the level sets of
the latter. We propose to use classifiers obtained by empirical risk minimization of
a weighted classification error and then to construct a scoring rule by overlaying
these classifiers. We show the consistency and rate of convergence to the optimal
ROC curve of this procedure in terms of supremum norm and also, as a byproduct
of the analysis, we derive an empirical estimate of the optimal ROC curve.
1 Introduction
In applications such as medical diagnosis, credit risk screening or information retrieval, one aims at
ordering instances under binary label information. The problem of ranking binary classification data
is known in the machine learning literature as the bipartite ranking problem ([FISS03], [AGH+ 05],
[CLV08]). A natural approach is to find a real-valued scoring function which mimics the order
induced by the regression function. A classical performance measure for scoring functions is the
Receiver Operating Characteristic (ROC) curve which plots the rate of true positive against false
positive ([vT68], [Ega75]). The ROC curve offers a graphical display which permits to judge rapidly
how a scoring rule discriminates the two populations (positive against negative). A scoring rule
whose ROC curve is close to the diagonal line does not discriminate at all, while the one lying above
all others is the best possible choice. From a statistical learning perspective, risk minimization (or
performance maximization) strategies for bipartite ranking have been based mostly on a popular
summary of the ROC curve known as the Area Under a ROC Curve (AUC - see [CLV08], [FISS03],
[AGH+ 05]) which corresponds to the L1 -metric on the space of ROC curves. In the present paper,
we propose a statistical methodology to estimate the optimal ROC curve in a stronger sense than
the AUC sense, namely in the sense of the supremum norm. In the same time, we will explain how
to build a nearly optimal scoring function. Our approach is based on a simple observation: optimal
scoring functions can be represented from the collection of level sets of the regression function.
Hence, the bipartite ranking problem may be viewed as a ?continuum? of classification problems
with asymmetric costs where the targets are the level sets. In a nonparametric setup, regression
or density level sets can be estimated with plug-in methods ([Cav97], [RV06], [AA07], [WN07],
...). Here, we take a different approach based on a weighted empirical risk minimization principle.
We provide rates of convergence with which an optimal point of the ROC curve can be recovered
according to this principle. We also develop a practical ranking method based on a discretization of
the original problem. From the resulting classifiers and their related empirical errors, we show how
to build a piecewise-linear estimate of the optimal ROC curve and a quasi-optimal piecewise constant
scoring function. Rate bounds in terms of the supremum norm on ROC curves for these procedures
are also established.
The rest of the paper is organized as follows: in Section 2, we present the problem and give some
properties of ROC curves, in Section 3, we provide a statistical result for the weighted empirical risk
minimization, and in Section 4, we develop the main results of the paper which describe the statistical performance of a scoring rule based on overlaying classifiers as well as the rate of convergence
of the empirical estimate of the optimal ROC curve.
2 Bipartite ranking, scoring rules and ROC curves
Setup. We study the ranking problem for classification data with binary labels. The data are assumed to be generated as i.i.d. copies of a random pair (X, Y) ∈ X × {−1, +1}, where X is a random descriptor living in the measurable space X and Y represents its binary label (relevant vs. irrelevant, healthy vs. sick, ...). We denote by P = (μ, η) the distribution of (X, Y), where μ is the marginal distribution of X and η is the regression function (up to an affine transformation): η(x) = P{Y = 1 | X = x}, x ∈ X. We will also denote by p = P{Y = 1} the proportion of positive labels. In the sequel, we assume that the distribution μ is absolutely continuous with respect to Lebesgue measure.
Optimal scoring rules. We consider the approach where the ordering can be derived by means of a scoring function s : X → ℝ: one expects that the higher the value s(X) is, the more likely the event "Y = +1" should be observed. The following definition sets the goal of learning methods in the setup of bipartite ranking.

Definition 1 (Optimal scoring functions) The class of optimal scoring functions is given by the set

S* = { s* = T ∘ η | T : [0, 1] → ℝ strictly increasing }.
Interestingly, it is possible to make the connection between an arbitrary (bounded) optimal scoring function s* ∈ S* and the distribution P (through the regression function η) completely explicit.

Proposition 1 (Optimal scoring functions representation, [CV08]) A bounded scoring function s* is optimal if and only if there exist a nonnegative integrable function w and a continuous random variable V in (0, 1) such that:

∀x ∈ X,   s*(x) = inf_X s* + E( w(V) · I{η(x) > V} ).
A crucial consequence of the last proposition is that solving the bipartite ranking problem amounts to recovering the collection {x ∈ X | η(x) > u}, u ∈ (0, 1), of level sets of the regression function η. Hence, the bipartite ranking problem can be seen as a collection of overlaid classification problems. This view was first introduced in [CV07], and the present paper is devoted to the description of a statistical method implementing this idea.
ROC curves. We now recall the concept of ROC curve and explain why it is a natural choice of performance measure for the ranking problem with classification data. We consider here only true ROC curves, which correspond to the situation where the underlying distribution is known. First, we need to introduce some notations. For a given scoring rule s, the conditional cdfs of the random variable s(X) are denoted by G_s and H_s. We also set, for all z ∈ ℝ:

Ḡ_s(z) = 1 − G_s(z) = P{s(X) > z | Y = +1},
H̄_s(z) = 1 − H_s(z) = P{s(X) > z | Y = −1},

the residual conditional cdfs of the random variable s(X). When s = η, we shall denote the previous functions by G*, H*, Ḡ*, H̄* respectively.
We introduce the notation Q(Z, α) to denote the quantile of order 1 − α for the distribution of a random variable Z conditioned on the event Y = −1. In particular, the following quantile will be of interest:

Q*(α) = Q(η(X), α) = H̄*⁻¹(α),

where we have used the notion of generalized inverse F⁻¹ of a càdlàg function F: F⁻¹(z) = inf{t ∈ ℝ | F(t) ≥ z}. We now turn to the definition of the ROC curve.
Definition 2 (True ROC curve) The ROC curve of a scoring function s is the parametric curve

z ↦ (H̄_s(z), Ḡ_s(z))

for thresholds z ∈ ℝ. It can also be defined as the plot of the function

ROC(s, ·) : α ∈ [0, 1] ↦ Ḡ_s ∘ H̄_s⁻¹(α) = Ḡ_s(Q(s(X), α)).

By convention, points of the curve corresponding to possible jumps (due to possible degenerate points of H_s or G_s) are connected by line segments, so that the ROC curve is always continuous. For s = η, we take the notation ROC*(α) = ROC(η, α).
The residual cdf Ḡ_s is also called the true positive rate while H̄_s is the false positive rate, so that the ROC curve is the plot of the true positive rate against the false positive rate.
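As a concrete complement (ours, not part of the original paper), the following sketch computes the empirical ROC curve of an arbitrary scoring function from a labeled sample, i.e., the empirical counterparts of H̄_s and Ḡ_s swept over all observed thresholds; variable names are ours.

```python
import numpy as np

def empirical_roc(scores, labels):
    """Empirical ROC curve of a scoring function.

    scores: array of s(X_i); labels: array of Y_i in {-1, +1}.
    Returns (false positive rates, true positive rates): the empirical
    H̄_s and Ḡ_s evaluated at every observed threshold, high to low.
    """
    order = np.argsort(-scores)          # sweep thresholds from high to low
    y = labels[order]
    tp = np.cumsum(y == +1) / max((labels == +1).sum(), 1)  # empirical Ḡ_s
    fp = np.cumsum(y == -1) / max((labels == -1).sum(), 1)  # empirical H̄_s
    return np.concatenate(([0.0], fp)), np.concatenate(([0.0], tp))

# Example: a scoring rule equal to η dominates a noisy one at every α.
rng = np.random.default_rng(0)
x = rng.uniform(size=2000)
y = np.where(rng.uniform(size=2000) < x, 1, -1)   # here η(x) = x
fp, tp = empirical_roc(x, y)
```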
Note that, as a functional criterion, the ROC curve induces a partial order over the space of all scoring functions. Some scoring function might provide a better ranking on some part of the observation space and a worse one on some other. A natural step to take is to consider local properties of the ROC curve in order to focus on the best instances, but this is not straightforward, as explained in [CV07]. We expect optimal scoring functions to be those for which the ROC curve dominates all the others for all α ∈ (0, 1). The next proposition highlights the fact that the ROC curve is relevant when evaluating performance in the bipartite ranking problem.

Proposition 2 The class S* of optimal scoring functions provides the best possible ranking with respect to the ROC curve. Indeed, for any scoring function s, we have:

∀α ∈ (0, 1),  ROC*(α) ≥ ROC(s, α),

and ∀s* ∈ S*, ∀α ∈ (0, 1),  ROC(s*, α) = ROC*(α).
The following result will be needed later.

Proposition 3 We assume that the optimal ROC curve is differentiable. Then we have, for any α such that Q*(α) < 1:

(d/dα) ROC*(α) = ((1 − p)/p) · Q*(α) / (1 − Q*(α)).

For proofs of the previous propositions and more details on true ROC curves, we refer to [CV08].
3 Recovering a point on the optimal ROC curve
We consider here the problem of recovering a single point of the optimal ROC curve from a sample of i.i.d. copies {(X_i, Y_i)}_{i=1,...,n} of (X, Y). This amounts to recovering a single level set of the regression function η, but we aim at controlling the error in terms of rates of false positives and true positives. For any measurable set C ⊂ X, we set the following notations:

α(C) = P(X ∈ C | Y = −1)  and  β(C) = P(X ∈ C | Y = +1).

We also define the weighted classification error:

L_ω(C) = 2p(1 − ω)(1 − β(C)) + 2(1 − p)ω α(C),

with ω ∈ (0, 1) being the asymmetry factor.

Proposition 4 The optimal set for this error measure is C*_ω = {x : η(x) > ω}. We have indeed, for all C ⊂ X:

L_ω(C*_ω) ≤ L_ω(C).

Also, the optimal error is given by:

L_ω(C*_ω) = 2 E min{ω(1 − η(X)), (1 − ω)η(X)}.

The excess risk for an arbitrary set C can be written:

L_ω(C) − L_ω(C*_ω) = 2 E( |η(X) − ω| · I{X ∈ C Δ C*_ω} ),

where Δ stands for the symmetric difference between sets.
The empirical counterpart of the weighted classification error can be defined as:

L̂_ω(C) = (2ω/n) Σ_{i=1}^n I{Y_i = −1, X_i ∈ C} + (2(1 − ω)/n) Σ_{i=1}^n I{Y_i = +1, X_i ∉ C}.

This leads to considering the weighted empirical risk minimizer over a class C of candidate sets:

Ĉ_ω = arg min_{C ∈ C} L̂_ω(C).
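As an illustration of the weighted empirical risk minimization step (ours; the paper works with a general VC class C), the sketch below minimizes L̂_ω over the toy class of half-lines C = {[t, ∞) : t ∈ ℝ} in one dimension.

```python
import numpy as np

def weighted_empirical_risk(threshold, x, y, omega):
    """L̂_ω for the candidate set C = [threshold, +inf)."""
    in_c = x >= threshold
    n = len(x)
    false_pos = np.sum((y == -1) & in_c) / n     # (1/n) Σ I{Y=-1, X∈C}
    false_neg = np.sum((y == +1) & ~in_c) / n    # (1/n) Σ I{Y=+1, X∉C}
    return 2 * omega * false_pos + 2 * (1 - omega) * false_neg

def weighted_erm(x, y, omega):
    """Ĉ_ω: scan all n+1 candidate thresholds and keep the minimizer."""
    candidates = np.concatenate(([-np.inf], np.sort(x)))
    risks = [weighted_empirical_risk(t, x, y, omega) for t in candidates]
    return candidates[int(np.argmin(risks))]
```

For this class, the minimizing threshold estimates the boundary of the level set {η > ω}, which is exactly the target C*_ω of Proposition 4.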
The next result provides rates of convergence of the weighted empirical risk minimizer Ĉ_ω to the best set in the class, in terms of the two types of error α and β.

Theorem 1 Let ω ∈ (0, 1). Assume that C is of finite VC dimension V and contains C*_ω. Suppose also that both G* and H* are twice continuously differentiable with strictly positive first derivatives, and that ROC* has a bounded second derivative. Then, for all δ > 0, there exists a constant c(V) independent of ω such that, with probability at least 1 − δ:

|α(Ĉ_ω) − α(C*_ω)| ≤ ( c(V) / √(p(1 − ω)) ) · ( log(1/δ)/n )^{1/3}.

The same result holds for the excess risk of Ĉ_ω in terms of the rate β of true positives, with a factor of √((1 − p)ω) in the denominator instead.
It is noteworthy that, while convergence in terms of classification error is expected to be of the order of n^{−1/2}, its two components corresponding to the rates of false positives and true positives exhibit slower rates.
4 Nearly optimal scoring rule based on overlaying classifiers
Main result. We now propose to collect the classifiers studied in the previous section in order to build a scoring function for the bipartite ranking problem. From Proposition 1, we can focus on optimal scoring rules of the form:

s*(x) = ∫ I{x ∈ C*_ω} ν(dω),   (1)

where the integral is taken w.r.t. any positive measure ν with the same support as the distribution of η(X).
Consider a fixed partition ω_0 = 0 < ω_1 < · · · < ω_K < ω_{K+1} = 1 of the interval (0, 1). We can then construct an estimator of s* by overlaying a finite collection of (estimated) level sets Ĉ_{ω_1}, ..., Ĉ_{ω_K}:

ŝ(x) = Σ_{i=1}^K I{x ∈ Ĉ_{ω_i}},

which may be seen as an empirical version of a discretized version of the target s*.
In order to assess the performance of such an estimator, we need to compare the ROC curve of ŝ to the optimal ROC curve. However, if the sequence {Ĉ_{ω_i}}_{i=1,...,K} is not decreasing, the computation of the ROC curve as a function of the errors of the overlaying classifiers becomes complicated. The main result of the paper is the next theorem, which is proved for a modified sequence yielding a different estimator. We introduce {C̃_{ω_i}}_{1≤i≤K} defined by:

C̃_{ω_1} = Ĉ_{ω_1}  and  C̃_{ω_{i+1}} = C̃_{ω_i} ∩ Ĉ_{ω_{i+1}}  for all i ∈ {1, . . . , K − 1}.

The corresponding scoring function is then given by:

s̃_K(x) = Σ_{i=1}^K I{x ∈ C̃_{ω_i}}.   (2)

Hence, the ROC curve of s̃_K is simply the broken line that connects the knots (α(C̃_{ω_i}), β(C̃_{ω_i})), 0 ≤ i ≤ K + 1.
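The construction of s̃_K can be written in a few lines; the sketch below (ours) abstracts the estimated level sets Ĉ_{ω_i} as indicator callables and enforces the nesting by intersection, as in equation (2).

```python
import numpy as np

def overlay_score(x, level_set_indicators):
    """s̃_K(x) = Σ_i I{x ∈ C̃_{ω_i}}, with C̃_{ω_{i+1}} = C̃_{ω_i} ∩ Ĉ_{ω_{i+1}}.

    level_set_indicators: list of callables x -> boolean array, one per ω_i,
    playing the role of the estimated sets Ĉ_{ω_i} (given in increasing ω).
    """
    score = np.zeros(len(x))
    current = np.ones(len(x), dtype=bool)    # C̃ starts as the whole space
    for indicator in level_set_indicators:
        current &= indicator(x)              # intersect to force nesting
        score += current                     # one point per level passed
    return score
```

With nested sets, s̃_K ranks every point by the deepest level set containing it, i.e., by a discretized estimate of η, which is why its ROC curve is the broken line through the knots above.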
The next result offers a rate bound in the ROC space equipped with a sup-norm. To our knowledge, this is the first result on the generalization ability of decision rules in such a functional space.

Theorem 2 Under the same assumptions as in Theorem 1 and with the previous notations, we set K = K_n ∼ n^{1/8}. Fix ε > 0. Then, there exists a constant c such that, with probability at least 1 − δ, we have:

sup_{α ∈ [ε, 1−ε]} |ROC*(α) − ROC(s̃_K, α)| ≤ c log(1/δ) / n^{1/4}.
Remark 1 (PERFORMANCE OF CLASSIFIERS AND ROC CURVES.) In the present paper, we have adopted a scoring approach to ROC analysis which is somehow related to the evaluation of the performance of classifiers in ROC space. Using combinations of such classifiers to improve performance in terms of ROC curves has also been pointed out in [BDH06] and [BCT07].

Remark 2 (PLUG-IN ESTIMATOR OF THE REGRESSION FUNCTION.) Note that taking ν = λ, the Lebesgue measure over [0, 1], in the expression of s* leads to the regression function η(x) = ∫ I{x ∈ C*_ω} dω. An estimator for the regression function could then be: η̃_K(x) = Σ_{i=1}^{K+1} (ω_i − ω_{i−1}) I{x ∈ C̃_{ω_i}}.

Remark 3 (ADAPTIVITY OF THE PARTITION.) A natural extension of the approach would be to consider a flexible partition (ω_i)_i, possibly chosen adaptively depending on the local regularity of the ROC curve. For now, it is not clear how to extend the method of the paper to take adaptive partitions into account; we have investigated such partitions, corresponding to different approximation schemes of the optimal ROC curve, elsewhere ([CV08]), but the rates of convergence obtained in the present paper are faster.
Optimal ROC curve approximation and estimation. We now provide some insight into the previous result. The key to the proof of Theorem 2 is the idea of a piecewise linear approximation of the optimal ROC curve.
We introduce some notations. Let α_0 = 0 < α_1 < · · · < α_K < α_{K+1} = 1 be a given partition of [0, 1] such that max_{i ∈ {0,...,K}} {α_{i+1} − α_i} ≤ δ. Set, for all i ∈ {0, . . . , K + 1}: α*_i = α(C*_{ω_i}) and β*_i = β(C*_{ω_i}).
The broken line that connects the knots {(α*_i, β*_i); 0 ≤ i ≤ K + 1} provides a piecewise linear (concave) approximation/interpolation of the optimal ROC curve ROC*. In the spirit of the finite element method (FEM, see [dB01] for instance), we introduce the "hat functions" defined by:

∀i ∈ {1, . . . , K − 1},   φ_i(·) = φ(· ; (α*_{i−1}, α*_i)) − φ(· ; (α*_i, α*_{i+1})),

with the notation φ(α, (α_1, α_2)) = ((α − α_1)/(α_2 − α_1)) · I{α ∈ [α_1, α_2]} for all α_1 < α_2. We also set φ_K(·) = φ(· ; (α*_K, 1)) for notational convenience. The piecewise linear approximation of ROC* may then be written as:

ROC̃*(α) = Σ_{i=1}^K β*_i φ_i(α).

In order to obtain an empirical estimator of ROC̃*(α), we propose: (i) to find an estimate Ĉ_{ω_i} of the true level set C*_{ω_i} based on the training sample {(X_i, Y_i)}_{i=1,...,n}, as in Section 3; (ii) to compute the corresponding errors α̂_i and β̂_i using a test sample {(X′_i, Y′_i)}_{i=1,...,n}. Hence we define:

α̂(C) = (1/n_−) Σ_{i=1}^n I{X′_i ∈ C, Y′_i = −1}   and   β̂(C) = (1/n_+) Σ_{i=1}^n I{X′_i ∈ C, Y′_i = +1},

with n_+ = Σ_{i=1}^n I{Y′_i = +1} = n − n_−. We set α̂_i = α̂(Ĉ_{ω_i}) and β̂_i = β̂(Ĉ_{ω_i}). We propose the following estimator of ROC̃*(α):

ROĈ*(α) = Σ_{i=1}^K β̂_i φ̂_i(α),

where φ̂_K(·) = φ(· ; (α̂_K, 1)) and φ̂_i(·) = φ(· ; (α̂_{i−1}, α̂_i)) − φ(· ; (α̂_i, α̂_{i+1})) for 1 ≤ i < K. Hence, ROĈ* is the broken line connecting the empirical knots {(α̂_i, β̂_i); 0 ≤ i ≤ K + 1}.
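For illustration (ours, not the authors' code), the next sketch evaluates the broken-line estimator ROĈ* on a grid, given the empirical knots (α̂_i, β̂_i) computed on a test sample.

```python
import numpy as np

def piecewise_linear_roc(alpha_knots, beta_knots, alpha_grid):
    """Broken line through the empirical knots {(α̂_i, β̂_i)},
    i.e. the estimator ROĈ*(α) evaluated on a grid of α values.
    Assumes the knots come from nested level-set estimates, so that
    sorting keeps matched (α̂_i, β̂_i) pairs together."""
    a = np.concatenate(([0.0], np.sort(alpha_knots), [1.0]))  # add the endpoints
    b = np.concatenate(([0.0], np.sort(beta_knots), [1.0]))   # (0,0) and (1,1)
    return np.interp(alpha_grid, a, b)  # linear interpolation between knots
```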
The next result takes the form of a deviation bound for the estimation of the optimal ROC curve. It quantifies the order of magnitude of a confidence band, in supremum norm, around an empirical estimate based on the previous approximation scheme with empirical counterparts.

Theorem 3 Under the same assumptions as in Theorem 1 and with the previous notations, set K = K_n ∼ n^{1/6}. Fix ε > 0. Then, there exists a constant c such that, with probability at least 1 − δ,

sup_{α ∈ [ε, 1−ε]} |ROĈ*(α) − ROC*(α)| ≤ c ε^{−1} ( log(n/δ)/n )^{1/3}.
5 Conclusion
We have provided a strategy based on overlaid classifiers to build a nearly-optimal scoring function.
Statistical guarantees are provided in terms of rates of convergence for a functional criterion which
is the ROC space equipped with a supremum norm. This is the first theoretical result of this nature.
To conclude, we point out that ROC analysis raises important and novel issues for statistical learning
and we hope that the present contribution gives a flavor of possible research directions.
Appendix - Proof section
Proof of Theorem 1. The idea of the proof is to relate the excess risk in terms of α-error to the excess risk in terms of weighted classification error. First we re-parameterize the weighted classification error. Set C(α) = {x ∈ X | η(x) > Q*(α)} and:

ℓ_ω(α) = L_ω(C(α)) = 2(1 − p)ω α + 2p(1 − ω)(1 − ROC*(α)).

Since ROC* is assumed to be differentiable, and using Proposition 3, it is easy to check that the value α* = α(C*_ω) minimizes ℓ_ω(α). Denote ℓ*_ω = ℓ_ω(α*). It follows from a Taylor expansion of ℓ_ω(α) around α* at the second order that there exists α_0 ∈ [0, 1] such that:

ℓ_ω(α) = ℓ*_ω − p(1 − ω) (d²/dα²) ROC*(α_0) (α − α*)².

Using also the fact that ROC* dominates any other curve of the ROC space, we have, for any measurable C ⊂ X: β(C) ≤ ROC*(α(C)). Also, by assumption, there exists m such that (d²/dα²) ROC*(α) ≤ −m for all α ∈ [0, 1]. Hence, since ℓ_ω(α(Ĉ_ω)) ≤ L_ω(Ĉ_ω) while ℓ_ω(α(C*_ω)) = L_ω(C*_ω), we have:

( α(Ĉ_ω) − α(C*_ω) )² ≤ ( 1 / (m p(1 − ω)) ) ( L_ω(Ĉ_ω) − L_ω(C*_ω) ).

We have obtained the desired inequality. It remains to get the rate of convergence for the weighted empirical risk.
Now set F* = pG* + (1 − p)H*. We observe that, for all t > 0, P(|η(X) − ω| ≤ t) = F*(ω + t) − F*(ω − t) ≤ 2t sup_u (F*)′(u). We have thus shown that the distribution satisfies a modified Tsybakov margin condition [Tsy04], for all ω ∈ [0, 1], of the form:

P(|η(X) − ω| ≤ t) ≤ D t^{κ/(1−κ)},

with κ = 1/2 and D = 2 sup_u (F*)′(u). Adapting slightly the argument used in [Tsy04], [BBL05], we have that, under the modified margin condition, there exists a constant c such that, with probability 1 − δ:

L_ω(Ĉ_ω) − L_ω(C*_ω) ≤ c ( log(1/δ)/n )^{1/(2−κ)}.
Proof of Theorem 2. We write α̃_i = α(C̃_{ω_i}), β̃_i = β(C̃_{ω_i}), and φ̃_i(·) = φ(· ; (α̃_{i−1}, α̃_i)) − φ(· ; (α̃_i, α̃_{i+1})). We then have ROC(s̃_K, α) = Σ_{i=1}^K β̃_i φ̃_i(α), and we can use the following decomposition, for any α ∈ [0, 1]:

ROC*(α) − ROC(s̃_K, α) = ( ROC*(α) − Σ_{i=1}^K ROC*(α̃_i) φ̃_i(α) ) + Σ_{i=1}^K ( ROC*(α̃_i) − β̃_i ) φ̃_i(α).

It is well-known folklore in linear approximation theory ([dB01]) that, if s̄_K is a piecewise constant scoring function whose ROC curve interpolates the points {(α̃_i, ROC*(α̃_i))}_{i=0,...,K} of the optimal ROC curve, then the first term (which is positive) can be bounded, for all α ∈ [0, 1], by

−(1/8) inf_{α ∈ [0,1]} (d²/dα²) ROC*(α) · max_{0≤i≤K} (α̃_{i+1} − α̃_i)².

Now, to control the second term, we upper bound the following quantity:

|ROC*(α̃_i) − β̃_i| ≤ sup_{α ∈ [0,1]} (d/dα) ROC*(α) · |α̃_i − α*_i| + |β*_i − β̃_i|.

We further bound |α̃_i − α*_i| ≤ |α̃_i − α_i| + |α_i − α*_i|, where α_i = α(Ĉ_{ω_i}). In order to deal with the first term, the next lemma will be needed:

Lemma 1 We have, for all k ∈ {1, . . . , K}:

α(C̃_{ω_k}) = α(Ĉ_{ω_k}) + (k − 1) O_P(n^{−1/4}),

where the notation O_P(1) is used for a r.v. which is bounded in probability.

From the lemma, it follows that max_{1≤i≤K} |α̃_i − α_i| = O_P(K n^{−1/4}). We can then use Theorem 1 with δ replaced by δ/K to get that max_{1≤i≤K} |α_i − α*_i| = O_P((n^{−1} log K)^{1/3}). The same inequalities hold for the β's. It remains to control the quantity α̃_{i+1} − α̃_i. We have:

|α̃_{i+1} − α̃_i| ≤ max_{1≤k≤K} |α(Ĉ_{ω_k}) − α(Ĉ_{ω_{k−1}})| + K O_P(n^{−1/4}),

and

max_{1≤k≤K} |α(Ĉ_{ω_k}) − α(Ĉ_{ω_{k−1}})| ≤ 2 max_{1≤k≤K} |α(Ĉ_{ω_k}) − α(C*_{ω_k})| + max_{1≤k≤K} |α(C*_{ω_k}) − α(C*_{ω_{k−1}})|.

As before, the first term is of the order (log K/n)^{1/3} and, since the second derivative of the optimal ROC curve is bounded, the second term is of the order K^{−1}. Eventually, we choose K in order to optimize the quantity K^{−2} + (log K/n)^{2/3} + K² n^{−1/2} + K n^{−1/4} + (log K/n)^{1/3}. As only the first and the third terms matter, this leads to the choice K = K_n ∼ n^{1/8}.
Proof of Lemma 1. We have α(Ĉ_{ω_2}) = α(C̃_{ω_2}) + α(Ĉ_{ω_2} \ Ĉ_{ω_1}). Therefore, since C*_{ω_1} ⊃ C*_{ω_2}, and observing that

Ĉ_{ω_2} \ Ĉ_{ω_1} ⊂ (Ĉ_{ω_1} Δ C*_{ω_1}) ∪ (Ĉ_{ω_2} Δ C*_{ω_2}),

it suffices to use the additivity of the probability measure α(·) to get α(C̃_{ω_2}) = α(Ĉ_{ω_2}) + O_P(n^{−1/4}). Eventually, the errors are stacked over the K levels and we obtain the result.
Proof of Theorem 3. We use the following decomposition, for any fixed α ∈ (0, 1):

ROĈ*(α) − ROC*(α) = ( ROĈ*(α) − Σ_{i=1}^K ROC*(α̂_i) φ̂_i(α) ) + ( Σ_{i=1}^K ROC*(α̂_i) φ̂_i(α) − ROC*(α) ).

Therefore, we have by a triangular inequality, for all α ∈ [0, 1]:

| ROĈ*(α) − Σ_{i=1}^K ROC*(α̂_i) φ̂_i(α) | ≤ max_{1≤i≤K} ( |β̂_i − β_i| + |β_i − β*_i| + |ROC*(α*_i) − ROC*(α̂_i)| ),

where α_i = α(Ĉ_{ω_i}) and β_i = β(Ĉ_{ω_i}). And, by the finite increments theorem, we have:

|ROC*(α*_i) − ROC*(α̂_i)| ≤ ( sup_{α ∈ [0,1]} (d/dα) ROC*(α) ) ( |α*_i − α_i| + |α_i − α̂_i| ).

For the other term, we use the same approximation result as in the proof of Theorem 2:

| Σ_{i=1}^K ROC*(α̂_i) φ̂_i(α) − ROC*(α) | ≤ −(1/8) inf_{α ∈ [0,1]} (d²/dα²) ROC*(α) · max_{0≤i≤K} (α̂_{i+1} − α̂_i)²,

together with

max_{0≤i≤K} (α̂_{i+1} − α̂_i) ≤ max_{0≤i≤K} (α*_{i+1} − α*_i) + 2 max_{1≤i≤K} |α*_i − α_i| + 2 max_{1≤i≤K} |α̂_i − α_i|.

We recall that max_{1≤i≤K} |α̂_i − α_i| = O_P(K n^{−1/2}). Moreover, max_{0≤i≤K} {α*_{i+1} − α*_i} is of the order of K^{−1}. And with probability at least 1 − δ, max_{1≤i≤K} |α_i − α*_i| is bounded as in Theorem 1, except that δ is replaced by δ/K in the bound. Eventually, we get the generalization bound K^{−2} + (log K/n)^{1/3}, which is optimal for a number of knots K ∼ n^{1/6}.
References
[AA07] J.-Y. Audibert and A. Tsybakov. Fast learning rates for plug-in classifiers. Annals of Statistics, 35(2):608–633, 2007.
[AGH+05] S. Agarwal, T. Graepel, R. Herbrich, S. Har-Peled, and D. Roth. Generalization bounds for the area under the ROC curve. J. Mach. Learn. Res., 6:393–425, 2005.
[BBL05] S. Boucheron, O. Bousquet, and G. Lugosi. Theory of classification: a survey of some recent advances. ESAIM: Probability and Statistics, 9:323–375, 2005.
[BCT07] M. Barreno, A. A. Cardenas, and J. D. Tygar. Optimal ROC curve for a combination of classifiers. In NIPS'07, 2007.
[BDH06] F. R. Bach, D. Heckerman, and E. Horvitz. Considering cost asymmetry in learning classifiers. Journal of Machine Learning Research, 7:1713–1741, 2006.
[Cav97] L. Cavalier. Nonparametric estimation of regression level sets. Statistics, 29:131–160, 1997.
[CLV08] S. Clémençon, G. Lugosi, and N. Vayatis. Ranking and empirical risk minimization of U-statistics. The Annals of Statistics, 36(2):844–874, 2008.
[CV07] S. Clémençon and N. Vayatis. Ranking the best instances. Journal of Machine Learning Research, 8:2671–2699, 2007.
[CV08] S. Clémençon and N. Vayatis. Tree-structured ranking rules and approximation of the optimal ROC curve. Technical Report hal-00268068, HAL, 2008.
[dB01] C. de Boor. A Practical Guide to Splines. Springer, 2001.
[Ega75] J. P. Egan. Signal Detection Theory and ROC Analysis. Academic Press, 1975.
[FISS03] Y. Freund, R. D. Iyer, R. E. Schapire, and Y. Singer. An efficient boosting algorithm for combining preferences. Journal of Machine Learning Research, 4:933–969, 2003.
[RV06] P. Rigollet and R. Vert. Fast rates for plug-in estimators of density level sets. Technical Report arXiv:math/0611473v2, 2006.
[Tsy04] A. Tsybakov. Optimal aggregation of classifiers in statistical learning. Annals of Statistics, 32(1):135–166, 2004.
[vT68] H. L. van Trees. Detection, Estimation, and Modulation Theory, Part I. Wiley, 1968.
[WN07] R. Willett and R. Nowak. Minimax optimal level set estimation. IEEE Transactions on Image Processing, 16(12):2965–2979, 2007.
Signal Processing by Multiplexing and
Demultiplexing in Neurons
David C. Tam
Division of Neuroscience
Baylor College of Medicine
Houston, TX 77030
[email protected]
Abstract
Signal processing capabilities of biological neurons are investigated. Temporally coded signals in neurons can be multiplexed to increase the transmission capacity. Multiplexing of signals is suggested in bi-threshold neurons with a "high-threshold" and a "low-threshold" for switching firing modes. To extract the signal embedded in the interspike-intervals of firing, the encoded signals are demultiplexed and multiplexed by a network of neurons with delay-line circuitry for signal processing. The temporally coded input signal is transformed spatially by mapping the firing intervals topographically to the output of the network, thus decoding the specific firing interspike-intervals. The network also provides a band-pass filtering capability where the variability of the timing of the original signal can be decoded.
1 INTRODUCTION
Signals of biological neurons are encoded in the firing patterns of spike trains or
the time series of action potentials generated by neurons. The signal content of
the codes encoded by a presynaptic neuron will be decoded by some other neurons
postsynpatically. Neurons are often thought to be encoding a single type of
282
Signal Processing by Multiplexing and Demultiplexing in Neurons
codes. But there is evidence suggesting that neurons may encode more than one
type of signals. One of the mechanisms for embedding multiple types of signals
processed by a neuron is multiplexing. When the signals are multiplexed, they
also need to be demultiplexed to extract the useful information transmitted by
the neurons. Theoretical and experimental evidence of such multiplexing and
demultiplexing scheme for signal processing by neurons will be given below.
2 MULTIPLEXING IN NEURONS
Most neurons fire action potentials when the membrane potential is depolarized to a threshold above the resting potential. For some neurons, there is more than a single threshold that can trigger the generation of action potentials. The thresholds occur not only at depolarized membrane potentials (above the resting potential) but also at hyperpolarized potentials (below the resting potential). This bi-threshold phenomenon has been reported in a number of biological neurons, including the giant squid axon (Hodgkin & Huxley, 1952), thalamic (Jahnsen & Llinas, 1984), inferior olivary (Yarom & Llinas, 1987), and hippocampal neurons (Stasheff & Wilson, 1990). The firing of action potentials at a membrane potential below the resting level following prolonged hyperpolarization has been observed under different conditions in different neurons, such as during the anodal break after voltage-clamping at a hyperpolarized potential (Hodgkin & Huxley, 1952); these spikes are called "low-threshold spikes" (Yarom & Llinas, 1987) and "baseline spikes" (Stasheff & Wilson, 1990), and are elicited naturally during the after-hyperpolarization (a.h.p.) period. The generation of low-threshold spikes is a voltage- and time-dependent process occurring during a prolonged hyperpolarization for de-inactivation of ionic conductances.
Given this bi-threshold for firing of action potentials, a neuron can function in two modes of operation: one at depolarized potentials and the other at hyperpolarized potentials. Thus, when the neuron is depolarized from the resting potential, the neuron will process signals based on the "high-threshold", and when the neuron is hyperpolarized for a prolonged duration, the neuron will process signals based on the "low-threshold". Formally, this is described as follows:

y(t) = 1,  if V(t) ≥ θ_hi, or if V(t − iΔt) < θ_lo and V(t) ≥ θ_lo for 1 < i < j;
y(t) = 0,  otherwise,   (1)

where y(t) denotes the occurrence of the firing of an action potential at time t, V(t) denotes the membrane potential of the neuron at time t, θ_hi denotes the "high-threshold", θ_lo denotes the "low-threshold", and jΔt represents the duration of hyperpolarization, such that the neuron will fire when depolarized at the hyperpolarized potential. This bi-threshold firing phenomenon was suggested to be involved in the two different rhythms generated by a neuron as a periodic bi-stable oscillator (Rose & Hindmarsh, 1985; Goldbeter & Moran, 1988), which can switch between two different firing frequencies, thus multiplexing the signal depending on the mode of operation or polarization level (Tam, 1990c).
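A minimal simulation sketch (ours) of the bi-threshold rule in equation (1), for a membrane-potential trace sampled every Δt; the threshold values and the hyperpolarization window j are illustrative, not fitted to any of the cited preparations.

```python
import numpy as np

def bithreshold_spikes(v, theta_hi=-50.0, theta_lo=-75.0, j=20):
    """Apply the bi-threshold firing rule of eq. (1) to a sampled
    membrane-potential trace v (one sample per time step Δt).

    A spike is emitted at step t if V(t) reaches the high threshold,
    or if V(t) reaches the low threshold after the potential stayed
    below it at some earlier step within the window 1 < i < j
    (prolonged hyperpolarization followed by depolarization).
    """
    y = np.zeros(len(v), dtype=int)
    for t in range(len(v)):
        if v[t] >= theta_hi:
            y[t] = 1
        elif v[t] >= theta_lo and any(
            v[t - i] < theta_lo for i in range(2, j) if t - i >= 0
        ):
            y[t] = 1
    return y
```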
3 DEMULTIPLEXING IN NEURONS
The multiplexed signal encoded in a neuron can be demultiplexed in a number of ways. One systematic way of extracting the firing frequency of the encoded signal can be described by a network of neurons. Given a temporally modulated input spike train, the firing intervals of the encoded signal can be extracted by a network of neurons such that the firing of the output neurons decodes the interspike-intervals of the input signal. In this network, the temporal codes of the input spike train will be converted into a spatially-distributed topographical code where each output neuron represents a particular firing interval with a specific band-width. Thus, the original signal is demultiplexed by mapping the input firing intervals into the firing of specific neurons based on the spatial location of the neuron in the output layer. The circuitry of this network of neurons utilizes delay-lines for signal processing (Reiss, 1964; Tam, 1990a, b). Examples of delay-line architectures used for signal processing can be found in the cerebellar cortex (Eccles et al., 1967), inferior colliculus (Yin et al., 1987, 1986, 1985; Chan et al., 1987) and cochlear nucleus (Carr & Konishi, 1990).
The time-delayed network can be described as follows. Let x(t) be a time series of spikes (or delta functions, δ(t)) with a total of n + 1 spikes:

x(t) = Σ_{j=0}^{n} δ(t − T_j).   (2)

Let the input to the network be a spike train x(t) given by (2). There are k neurons in the first input layer of the network. The input is split into multiple branches, each of which is connected to all k neurons in the first layer. In addition to the direct connection between the input and the first-layer neurons, each input branch to a first-layer neuron is also split into multiple branches with successive incremental time delays. Specifically, the k-th neuron in the first layer has k + 1 input lines, each successively delayed by a time delay Δt relative to the previous one. That is, the i-th input to this k-th neuron in the first layer at time t is given by x(t − iΔt). Thus, the sum of the inputs to this k-th neuron is given by:

X_k(t) = Σ_{i=0}^{k} x(t − iΔt).   (3)
3.1 BAND-PASS FILTERING
Band-pass filtering can be accomplished by the processing at the first layer of neurons. If the threshold for the generation of an output spike by the k-th neuron is set at one, then this neuron will fire only when the interspike-interval, I_j, of the input spike train is within the time-delay window kΔt. That is, the output of this k-th neuron is given by:

y_k(t) = 1 if X_k(t) > 1, and 0 otherwise.   (4)

The interspike-interval, I_j, is defined as the time interval between any two adjacent spikes:

I_j = T_j − T_{j−1}.   (5)

Therefore, the k-th neuron can be considered as encoding a band-pass filtered input interspike-interval, 0 < I_j ≤ kΔt. Thus, the k-th neuron in the first layer essentially captures input interspike-intervals of less than kΔt, the band-passed interspike-intervals. To ensure that the neuron fires a spike of duration Δt, we introduce a refractory period of (k − 1)Δt after the firing of a spike by the k-th neuron, to suppress continual activation of the neuron due to the phase differences of the incoming delayed signals.
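The first-layer computation of equations (3) and (4) can be sketched as follows (our illustration, assuming a binary spike train sampled at Δt); the (k − 1)Δt refractory period described above is omitted for brevity.

```python
import numpy as np

def first_layer_output(spike_train, k):
    """Delay-line neuron k: X_k(t) = sum_{i=0..k} x(t - iΔt) (eq. 3),
    firing when X_k(t) > 1 (eq. 4), i.e. when at least two input
    spikes fall within the time-delay window kΔt.
    """
    x = np.asarray(spike_train, dtype=int)   # 0/1 samples, one per Δt
    n = len(x)
    xk = np.zeros(n, dtype=int)
    for i in range(k + 1):                   # k+1 delayed input lines
        xk[i:] += x[:n - i]
    return (xk > 1).astype(int)              # band-passed output y_k(t)
```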
3.2 HIGHER-ORDER INTERSPIKE-INTERVAL PROCESSING
Higher-order interspike-intervals can be eliminated by the second-layer neurons. The order of an interspike-interval is defined by the number of intervening spikes between the two spikes under consideration. That is, a first-order interspike-interval contains no intervening spike between the two adjacent spikes; a second-order interspike-interval is the time interval spanning two consecutive first-order interspike-intervals, i.e., the interval containing one intervening spike.
If the second-layer neurons receive excitatory input from the corresponding first-layer neuron with a threshold θ > 1 and inhibitory input from the corresponding neuron with a threshold θ > 2, then the higher-order intervals are eliminated, with the output of the second-layer (double-primed) neuron given by:

y″_k(t) = y_k(t) − y′_k(t) = 1 if 2 ≥ X_k(t) > 1, and 0 otherwise,   (6)

where

y′_k(t) = 1 if X_k(t) > 2, and 0 otherwise.   (7)
This requires that an additional input layer of neurons be added to the network, which we call the first-parallel layer, whose input/output relationship is given by (7). In other words, there are k first-layer neurons and k first-parallel-layer neurons serving as the input layers of the network. The k-th neuron in the first layer and the k-th neuron in the first-parallel layer are similar in their inputs, but their thresholds for producing an output spike are different. The difference between the outputs of the first set of neurons (first layer) and the primed set of neurons (first-parallel layer) is computed by the second layer, by making an excitatory connection from the first-layer neuron and an inhibitory connection from the first-parallel-layer neuron for each corresponding k-th neuron, as described by (6). This ensures accurate estimation of only the first-order interspike-interval, 0 < I_j ≤ kΔt, within the time-delay window kΔt.
3.3 BAND-WIDTH PROCESSING
The third-layer neurons filter the input signal by distributing the frequency (or interval) of firing of neurons within a specific band-width. Since the k-th neuron in the second layer detects the band-passed first-order interspike-intervals (0 < I_j ≤ kΔt) and the h-th neuron detects another band of interspike-intervals (0 < I_j ≤ hΔt), the difference between these two neurons will detect first-order interspike-intervals with a band-width of (k − h)Δt. In other words, it will detect first-order interspike-intervals between hΔt and kΔt, i.e., hΔt < I_j ≤ kΔt.
This requires that the third-layer neurons derive their inputs from two sources in the second layer: one excitatory and the other inhibitory. The output of the k-th neuron in the third layer, y‴_k(t), is obtained from the difference between the outputs of the k-th and h-th neurons in the second layer:

y‴_k(t) = y″_k(t) − y″_h(t) = 1 if 2 ≥ Σ_{i=h}^{k} x(t − iΔt) > 1, and 0 otherwise.   (8)
A two-dimensional topographical map of the band-passed interspike-intervals of the input spike train can be represented by arranging the third-layer neurons in a two-dimensional array, with one axis (the horizontal axis) representing the k index (the band-passed interspike-interval) of equation (8) and the other axis (the vertical axis) representing the (k − h) index (the band-width interspike-interval). Thus the firing of the third-layer neurons represents the band-pass filtered version of the original input spike train, extracting the firing interspike-intervals of the input signal. The "coordinates" of a neuron in the third layer represent the band-passed interspike-interval (0 < I_j ≤ kΔt) and the band-width interspike-interval (hΔt < I_j ≤ kΔt) of the original input spike train signal. The band-width can be used to detect variations (or jitter) in the timing of the firing of spikes in the input spike train, since the timing of firing of spikes in biological neurons can be very variable. Thus, the network can be used to detect the variability of spike timing by the firing location of the third-layer neuron.
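The following sketch (ours) assembles the two-dimensional topographic output: for each pair (k, k − h) it evaluates the third-layer condition of equation (8) over the whole trace and records whether the corresponding neuron fires anywhere.

```python
import numpy as np

def topographic_map(spike_train, k_max):
    """Third-layer output of eq. (8) arranged as a 2-D array:
    one axis is the band-passed interval index k, the other the
    band-width index k - h; an entry is 1 if the (k, h) detector
    fires at any time in the trace."""
    x = np.asarray(spike_train, dtype=int)
    n = len(x)
    # Cumulative delay-line sums X_k(t) = sum_{i=0..k} x(t - iΔt), eq. (3).
    xk = np.zeros((k_max + 1, n), dtype=int)
    acc = np.zeros(n, dtype=int)
    for i in range(k_max + 1):
        acc[i:] += x[:n - i]
        xk[i] = acc                           # snapshot of X_i(t)
    grid = np.zeros((k_max + 1, k_max + 1), dtype=int)
    for k in range(1, k_max + 1):
        for h in range(1, k):
            window = xk[k] - xk[h - 1]        # sum_{i=h..k} x(t - iΔt)
            fires = (window > 1) & (window <= 2)   # condition of eq. (8)
            grid[k, k - h] = int(fires.any())
    return grid
```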
3.4 EXTRACTION OF EMBEDDED SIGNAL BY BI-THRESHOLD FIRING
If the neurons in the second and third layers are bi-threshold neurons, where one threshold is at the "depolarization" level (i.e., a positive value) and the other threshold is at the "hyperpolarization" level (i.e., a negative value), then additional information may be extracted based on the level of the firing threshold. Since the neurons in the second and third layers receive inhibitory inputs from the preceding layer, there are instances where a neuron is "hyperpolarized", i.e., the sum of its inputs is negative. Such a condition occurs when the order of the interspike-interval is higher than one. In other words, the higher-order interspike-interval signal is embedded in the "hyperpolarization", which is normally suppressed from generating a spike when there is only one threshold for firing at the "depolarized" level (θ_hi). But for bi-threshold neurons with another threshold at the hyperpolarized level (θ_lo), such an embedded signal, encoded as hyperpolarization, can be extracted by sending an external depolarizing signal to the neuron, causing it to fire at the low threshold. Thus the hyperpolarization signal can be "read out" by an external input to the bi-threshold neuron. In summary, a time-delay network can be used to process temporally modulated pulse-coded spike train signals and extract the firing interspike-intervals by mapping the band-passed intervals topographically onto a two-dimensional output array, from which the order of the interspike-interval can be extracted using different firing thresholds.
Acknowledgements
This work is supported by ONR contract N00014-90-J-1353.
References
Carr, C. E. & Konishi, M. (1990) A circuit for detection of interaural time
differences in the brain stem of the barn owl. ]. Neurosci. 10: 3227-3246.
Chan, J. C., Yin, T. C. & Musicant, A. D. (1987) Effects of interaural time delays
of noise stimuli on low-frequency cells in the cat1s inferior colliculus. II.
Responses to band-pass filtered noises. ]. Neurophysiol. 58: 543-561.
287
288
Tam
Goldbeter, A. & Moran, F. (1988) Dynamics of a biochemical system with
multiple oscillatory domains as a clue for multiple modes of neuronal
oscillations. Eur. Biophys. J. 15:277-287.
Hodgkin, A. L. & Huxley, A. F. (1952) A quantitative description of membrane
current and its application to conduction and excitation in nerve. J.
Physiol. (London) 117: 500-544.
Eccles, J.C., Ito, M. and Szentagothai, J. (1967) The Cerebellum as a Neuronal
Machine, Springer-Verlag, New York, Heidelberg.
Jahnsen, H. & Llinas, R. (1984) Electrophysiological properties of guinea-pig
thalamic neurones: An in vitro study. J. Physiol. (London) 349:205-226.
Reiss, R.F. (1964) A theory of resonant networks. In (Ed. R.F. Reiss) Neural
Theory and Modeling: Proceedings of the 1962 Ojai Symposium.
Stanford University Press, Stanford, CA.
Rose, R. M. & Hindmarsh, J. L. (1985) A model of a thalamic neuron. Proc. R.
Soc. Lond. 225:161-193.
Stasheff, S. F. & Wilson, W. A. (1990) Increased ectopic action potential
generation accompanies epileptogenesis in vitro. Neurosci. Lett. 111:
144-150.
Tam, D. C. (1990a) Temporal-spatial coding transformation: Conversion of
frequency-code to place-code via a time-delayed neural network.
Proceedings of the International Joint Conference on Neural Networks
(H. Caudill, eds.), Jan., 1990. Vol. 1, pp. I-130-133.
Tam, D. C. (1990b) Decoding of firing intervals in a temporal-coded spike train
using a topographically mapped neural network. Proc. of International
Joint Conference on Neural Networks. Vol. 3, pp. III-627-632.
Tam, D. C. (1990c) Functional significance of bi-threshold firing of neurons.
Society for Neuroscience Abstract. Vol. 16, p. 1091.
Yarom, Y. & Llinas, R. (1987) Long-term modifiability of anomalous and
delayed rectification in guinea pig inferior olivary neurons. J. Neurosci.
7:1166-1177.
Yin, T. C., Chan, J. C. & Carney, L. H. (1987) Effects of interaural time delays
of noise stimuli on low-frequency cells in the cat's inferior colliculus.
III. Evidence for cross-correlation. J. Neurophysiol. 58: 562-583.
Yin, T. C., Chan, J. C. & Irvine, D. R. (1986) Effects of interaural time delays of
noise stimuli on low-frequency cells in the cat's inferior colliculus. I.
Responses to wideband noise. J. Neurophysiol. 55: 280-300.
Yin, T. C., Hirsch, J. A. & Chan, J. C. (1985) Responses of neurons in the cat's
superior colliculus to acoustic stimuli. II. A model of interaural
intensity sensitivity. J. Neurophysiol. 53: 746-758.
Estimating vector fields using sparse basis field expansions
Stefan Haufe (1,2,*), Vadim V. Nikulin (3,4), Andreas Ziehe (1,2), Klaus-Robert Müller (1,2,4), Guido Nolte (2)
(1) TU Berlin, Dept. of Computer Science, Machine Learning Laboratory, Berlin, Germany
(2) Fraunhofer Institute FIRST (IDA), Berlin, Germany
(3) Charité University Medicine, Dept. of Neurology, Campus Benjamin Franklin, Berlin, Germany
(4) Bernstein Center for Computational Neuroscience, Berlin, Germany
(*) [email protected]
Abstract
We introduce a novel framework for estimating vector fields using sparse basis
field expansions (S-FLEX). The notion of basis fields, which are an extension
of scalar basis functions, arises naturally in our framework from a rotational invariance requirement. We consider a regression setting as well as inverse problems. All variants discussed lead to second-order cone programming formulations. While our framework is generally applicable to any type of vector field, we
focus in this paper on applying it to solving the EEG/MEG inverse problem. It
is shown that significantly more precise and neurophysiologically more plausible
location and shape estimates of cerebral current sources from EEG/MEG measurements become possible with our method when compared to the state-of-the-art.
1
Introduction
Current machine learning is frequently concerned with the estimation of functions with multivariate
output. While in many cases the outputs can be treated as mere collections of scalars (e.g. different
color channels in image processing), in some contexts there might be a deeper interpretation of them
as spatial vectors with a direction and a magnitude. Such "truly" vectorial functions are called vector
fields and become manifest for example in optical flow fields, electromagnetic fields and wind fields
in meteorology. Vector field estimators have to take into account that the numerical representation of
a vector depends on the coordinate system it is measured in. That is, the estimate should be invariant
with respect to a rotation of the coordinate system.
Let $v : \mathbb{R}^P \mapsto \mathbb{R}^Q$ be a vector field. Mathematically speaking, we are seeking to approximate $v$ by a field $\hat{v}$ using empirical measurements. Here we consider two types of measurements. The first type are direct samples $(x_n, y_n)$, $x_n \in \mathbb{R}^P$, $y_n \in \mathbb{R}^Q$, $n = 1, \dots, N$ of $v$, leading to a regression problem. The second case occurs if only indirect measurements $z_m \in \mathbb{R}$, $m = 1, \dots, M$ are available, which we assume to be generated by a known linear1 transformation of the vector field outputs $y_n$ belonging to nodes $x_n$, $n = 1, \dots, N$. This kind of estimation problem is known as an inverse problem. Let $z = (z_1, \dots, z_M)^T$ denote the vector of indirect measurements, $Y = (y_1^T, \dots, y_N^T)^T$ the $N \times Q$ matrix of vector field outputs, and $\mathrm{vec}(Y)$ a column vector containing the stacked transposed rows of $Y$. The linear relationship between $Y$ and $z$ can be written as $z = F\,\mathrm{vec}(Y)$ using the forward model $F \in \mathbb{R}^{M \times NQ}$.
1
If the true relation is nonlinear, it is here assumed to be linearized.
As an example of an inverse problem consider the way humans localize acoustic sources. Here z
comprises the signal arriving at the ears, v is the spatial distribution of the sound sources and F
is given by physical equations of sound propagation. Using information from two ears, humans
already do very well in estimating the direction of incoming sounds. By further incorporating prior
knowledge, e.g. on the loudness of the sources, v can usually be well approximated. The use of prior
knowledge (a.k.a. regularization) is indeed the most effective strategy for solving inverse problems
[13], which are inherently ambiguous. Hence, the same mechanisms used to avoid overfitting in,
e.g., regression may be applied to cope with the ambiguity of inverse problems.
For the estimation of scalar functions, methods that utilize sparse linear combinations of basis functions have gained considerable attention recently (e.g. the "lasso" [14]). Apart from the computational tractability that comes with the sparsity of the learned model, the possibility of interpreting the
estimates in terms of their basis functions is a particularly appealing feature of these methods. While
sparse expansions are also desirable in vector field estimation, lasso and similar methods cannot be
used for that purpose, as they break rotational invariance in the output space $\mathbb{R}^Q$. This is easily seen
as sparse methods tend to select different basis functions in each of the Q dimensions.
Only few attempts have been made on rotation-invariant sparse vector field expansions so far. In [8]
a dense expansion is discussed, which could be modified to a sparse version maintaining rotational
invariance. Unfortunately, this method is restricted to approximating curl-free fields. In contrast,
we here propose a method that can be used to decompose any vector field. We will derive the
general framework in section 2. In section 3 we will apply the (appropriately customized) method
for solving the EEG/MEG inverse problem. Finally, we will draw a brief conclusion in section 4.
2 Method
Our model is based on the assumption that v can be well approximated by a linear combination
of some basis fields. A basis field is defined here (unlike in [8]) as a vector field, in which all
output vectors point in the same direction, while the magnitudes are proportional to a scalar (basis)
function $b : \mathbb{R}^P \mapsto \mathbb{R}$. As demonstrated in Fig. 1, this model has an expressive power which
is comparable to a basis function expansion of scalar functions. Given a set (dictionary) of basis
functions bl (x), l = 1, . . . , L, the basis field expansion is written as
v(x) =
L
X
cl bl (x) ,
(1)
l=1
with coefficients cl ? RQ , l = 1, . . . , L to be estimated. Note that by including one coefficient for
each output dimension, both orientations and proportionality factors are learned in this model (the
term "basis field" thus refers to a basis function with learned coefficients). In order to select a small
set of fields, most of the coefficient vectors $c_l$ have to vanish. This can be accomplished by solving
a least-squares problem with an additional lasso-like $\ell_1$-norm penalty on the coefficients. However,
care has to be taken in order to maintain rotational invariance of the solution. We here propose to use
a regularizer that imposes sparsity and is invariant with respect to rotations, namely the $\ell_1$-norm of
the magnitudes of the coefficient vectors. Let $C = (c_1, \dots, c_L)^T \in \mathbb{R}^{L \times Q}$ contain the coefficients and
$$B = \begin{pmatrix} b_1(x_1) & \dots & b_L(x_1) \\ \vdots & & \vdots \\ b_1(x_N) & \dots & b_L(x_N) \end{pmatrix} \in \mathbb{R}^{N \times L} \qquad (2)$$
the basis functions evaluated at the $x_n$. The parameters are estimated using
$$\hat{C} = \arg\min_C\; L(C) + \lambda R(C), \qquad (3)$$
where $R(C) = \|C\|_{1,2} = \sum_{l=1}^{L} \|c_l\|_2$ is the regularizer (the so-called $\ell_{1,2}$-norm of the matrix $C$), $L(C)$ is the quadratic loss function, which is defined by $L(C) = \|\mathrm{vec}(Y - BC)\|_2^2$ in the regression case and $L(C) = \|z - F\,\mathrm{vec}(BC)\|_2^2$ in the inverse reconstruction case, and $\lambda$ is a positive constant.
In the statistics literature, $\ell_{1,2}$-norm regularization is already known as a general mechanism for achieving sparsity of grouped predictors [18]. Besides vector field estimation, this concept has natural applications in, e.g., multiple kernel learning [1] and channel selection for brain-computer interfacing [15]. It has also recently been considered in the general multiple output setting [17].
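As a concrete illustration, here is a minimal NumPy sketch of the regression-case objective in Eq. (3) (our own code, not the authors'; shapes follow the notation above: B is N x L, C is L x Q, Y is N x Q):

import numpy as np

def sflex_objective(C, B, Y, lam):
    loss = np.sum((Y - B @ C) ** 2)              # ||vec(Y - BC)||_2^2
    penalty = np.sum(np.linalg.norm(C, axis=1))  # ||C||_{1,2} = sum_l ||c_l||_2
    return loss + lam * penalty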
Figure 1: Complicated vector field (SUM) as a sum of three basis fields (1-3).
2.1 Rotational Invariance
Rotational invariance, in the sense that the estimates after rotation of the coordinate axes are equal
to the rotated estimates, is a desirable property of an estimator. One has to distinguish invariance
in input space from invariance in output space. The former requirement may arise in many estimation
settings and can be fulfilled by the choice of appropriate basis functions $b_l(x)$. The latter one is
specific to vector field estimation and has to be assured by formulating a rotationally invariant cost
function. Our proposed estimator Eq. 3 is rotationally invariant. This is due to the use of the $\ell_2$-norm in output space $\mathbb{R}^Q$, which does not change under rotation. I.e., for an orthogonal matrix $R \in \mathbb{R}^{Q \times Q}$, $R^T R = I$,
$$\sum_{l=1}^{L} \|R c_l\|_2 = \sum_{l=1}^{L} \sqrt{\mathrm{tr}(c_l^T R^T R c_l)} = \sum_{l=1}^{L} \|c_l\|_2. \qquad (4)$$
For the same argument, additional regularizers $R_\sim(C) = \|\mathrm{vec}(D_\sim C)\|_2^2$ (the well-known Tikhonov regularizer) or $R_+(C) = \|D_+ C\|_{1,2}$ (promoting sparsity of the linearly transformed vectors) may be introduced without breaking the rotational invariance in $\mathbb{R}^Q$.
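A quick numerical check of this invariance (a self-contained sketch under the stated setting, with an arbitrary random seed):

import numpy as np

rng = np.random.default_rng(0)
C = rng.standard_normal((5, 3))                   # L = 5 coefficient vectors in R^3
R, _ = np.linalg.qr(rng.standard_normal((3, 3)))  # random orthogonal matrix
l12 = lambda M: np.sum(np.linalg.norm(M, axis=1))
# rotating every c_l by R corresponds to C @ R.T; the penalty is unchanged
assert np.isclose(l12(C), l12(C @ R.T))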
2.2 Optimization
Eq. 3 is a convex problem, composed of the quadratic term L(C) and the convex nondifferentiable
term R(C). It is equivalent to the following program
$$\hat{C} = \arg\min_{C,\,u}\; \sum_{l=1}^{L} u_l \quad \text{s.t.} \quad \|c_l\|_2 \le u_l,\; l = 1, \dots, L; \qquad L(C) \le \epsilon, \qquad (5)$$
in which a linear function of the variables is minimized subject to quadratic and second-order cone
constraints [6]. The latter constraints are obtained by introducing auxiliary variables $u_l \in \mathbb{R}$, $l =$
1, . . . , L encoding upper bounds of the magnitudes of the coefficient vectors. Problem Eq. 5 is
an instance of second-order cone programming (SOCP), a standard class of convex programs, for
which efficient interior-point based solvers are available. The problem stays inside the SOCP class
even if the original formulation is modified in any of the following ways:
• Additional regularizers $R_+(C)$ or $R_\sim(C)$ are used.
• The quadratic loss function is replaced by a more robust $\ell_1$-norm based loss (e.g. hinge loss). In the regression case, this loss should be defined based on the magnitude of the residual vector, which leads to a formulation involving the $\ell_{1,2}$-norm (and thus additional SOCP constraints).
• Complex basis functions (e.g. Fourier bases or Morlet wavelets) are used. This approach also requires complex coefficients, by which it is then possible not only to optimally scale the basis functions, but also to optimally shift their phase. Similarly, it is possible to reconstruct complex vector fields from complex measurements using real-valued basis functions.
3 Application to the EEG/MEG inverse problem
Vector fields occur, for example, in form of electrical currents in the brain, which are produced by
postsynaptic neuronal processes. Knowledge of the electrical fields during a certain experimental
condition allows one to draw conclusions about the locations in which the cognitive processing
takes place and is thus of high value for research and medical diagnosis. Invasive measurements
allow very local assessment of neuronal activations, but such a procedure in humans is only possible
when electrodes are implanted for treatment/diagnosis of neurological diseases, e.g., epilepsy. In
the majority of cases recordings of cortical activity are performed with non-invasive measures such
as electro- and magnetoencephalography, EEG and MEG respectively. The reconstruction of the
current density from such measurements is an inverse problem.
3.1 Method specification
In the following the task is to infer the generating cerebral current density given an EEG measurement $z \in \mathbb{R}^M$. The current density is a vector field $v : \mathbb{R}^3 \mapsto \mathbb{R}^3$ assigning a vectorial current source to each location in the brain. We obtained a realistic head model from high-resolution MRI (magnetic resonance imaging) slices of a human head [4]. Inside the brain, we arranged 2142 nodes in a regular grid of 1 cm distance. The forward mapping $F \in \mathbb{R}^{M \times 2142 \cdot 3}$ from these nodes to the electrodes was constructed according to [9], taking into account the realistic geometry and conductive properties of brain, skull and skin.
Dictionary
In most applications the "true" sources are expected to be small in number and spatial extent. However,
Another group of methods delivers source estimates that are spatially sparse, but usually not rotationally invariant (e.g. [7]). Here often too many sources, which are scattered around the true
sources, are estimated. Both the very smooth and the very sparse estimates are unrealistic from a
physiological point of view. Only very recently, approaches capable of achieving a compromise between these two extremes have been outlined [16, 3]. For achieving a similar effect we here propose
a sparse basis field expansion using radial basis functions. More specifically we consider spherical
Gaussians
$$b_{n,s}(x) = (2\pi\sigma_s^2)^{-\frac{3}{2}}\, \exp\left(-\tfrac{1}{2}\,\|x - x_n\|_2^2\, \sigma_s^{-2}\right), \qquad (6)$$
$s = 1, \dots, 4$, having spatial standard deviations $\sigma_1 = 0.5$ cm, $\sigma_2 = 1$ cm, $\sigma_3 = 1.5$ cm, $\sigma_4 = 2$ cm and being centered at nodes $x_n$, $n = 1, \dots, N$ (see Fig. 2 for examples). Using this redundant
dictionary our expectation is that sources of different spatial extent can be reconstructed by selecting
the appropriate basis functions. Unlike the approaches taken in [16, 3] this approach does not require
an additional hyperparameter for controlling the tradeoff between sparsity and smoothness.
Figure 2: Gaussian basis functions with fixed center and standard deviations 0.5 cm to 2 cm.
Normalization
Our $\ell_{1,2}$-norm based regularization is a heuristic for selecting the smallest possible number of basis fields necessary to explain the measurement. Using this approach, however, not only the number of nonzero coefficient vectors, but also their magnitudes enter the cost function. It is therefore important to normalize the basis functions in order not to a priori prefer some of them. Let $B_s$ be the $N \times N$ matrix containing the basis functions with standard deviation $\sigma_s$. The large matrix $B = (B_1/\|\mathrm{vec}(B_1)\|_1, \dots, B_4/\|\mathrm{vec}(B_4)\|_1) \in \mathbb{R}^{N \times 4N}$ is then constructed using the normalized $B_s$. By this means, no length scale is artificially preferred.
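A sketch of this dictionary construction (our own illustration; node coordinates in cm, and the prefactor follows Eq. (6) as reconstructed above):

import numpy as np

def gaussian_dictionary(nodes, sigmas=(0.5, 1.0, 1.5, 2.0)):
    # nodes: (N, 3) grid locations; returns the N x 4N matrix B
    d2 = np.sum((nodes[:, None, :] - nodes[None, :, :]) ** 2, axis=-1)
    blocks = []
    for s in sigmas:
        Bs = (2 * np.pi * s ** 2) ** (-1.5) * np.exp(-0.5 * d2 / s ** 2)
        blocks.append(Bs / np.abs(Bs).sum())  # divide by ||vec(Bs)||_1
    return np.hstack(blocks)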
An estimation bias is also introduced by the location of the sources. Due to volume conduction, the signal captured at the sensors is much stronger for superficial sources compared to deep sources. In [10] the variance estimate $\hat{S} = \tilde{F}^T (\tilde{F}\tilde{F}^T)^{-1} \tilde{F} \in \mathbb{R}^{3N \times 3N}$ is derived for the (least-squares) estimated sources, where $\tilde{F} = HF$ and $H = I - \mathbf{1}\mathbf{1}^T/\mathbf{1}^T\mathbf{1} \in \mathbb{R}^{M \times M}$. We found that $\hat{S}$ can be used for removing the location bias. This can be done by either penalizing activity at locations with high variance or by penalizing basis functions with high variance in the center. We here employ the former approach, as the latter may be problematic for basis functions with large extent. Using this approach, evaluation of $\hat{v}(x)$ requires knowledge of the forward model for $x$. Therefore, we restrict ourselves here to nodes $x_n$, $n = 1, \dots, N$. Let $W_n \in \mathbb{R}^{3 \times 3}$ denote the inverse matrix square root of the part of $\hat{S}$ belonging to node $x_n$. Defining
$$W = \begin{pmatrix} W_1 & \dots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \dots & W_N \end{pmatrix} \in \mathbb{R}^{3N \times 3N}, \qquad (7)$$
the coefficients are estimated using $\hat{C} = \arg\min_C \|C\|_{1,2}$ s.t. $\|z - FW\,\mathrm{vec}(BC)\|_2^2 < \epsilon$. The estimated current density at node $x_n$ is $\hat{v}(x_n) = W_n \sum_{l=1}^{L} \hat{c}_l\, b_l(x_n)$.
3.2 Experiments
Validation of methods for inverse reconstruction is generally difficult due to the lack of a "ground
truth". The measurements z cannot be used in this respect, as the main goal is not to predict the
EEG/MEG measurements, but the vector field v(x) as accurately as possible. Therefore, the only
way to evaluate inverse methods is to assess their ability to reconstruct known functions. We do
this by reconstructing a) simulated current sources and b) sources of real EEG data that are already
well-localized by other studies. For each EEG measurement, simulated or not, we conduct a $5 \times 5$ cross-validation, i.e. we perform 25 inverse reconstructions based on different training sets containing 80% of the electrodes. In each cross-validation run, we evaluate two criteria. Most important is the reconstruction error, defined as $C_y = \|\mathrm{vec}(Y)/\|\mathrm{vec}(Y)\|_2 - \mathrm{vec}(\hat{Y}^{tr})/\|\mathrm{vec}(\hat{Y}^{tr})\|_2\|_2$, where $\hat{Y}^{tr}$ are the vector field outputs at nodes $x_n$, $n = 1, \dots, N$ estimated using only the training set. This criterion can only be evaluated for the simulated data. For real and simulated data we also evaluate the generalization error, i.e. the error in the prediction of the remaining 20% (the test set) of the EEG measurements. This is defined as $C_z = \|z^{te} - F^{te}\,\mathrm{vec}(\hat{Y}^{tr})\|_2^2$, where $z^{te}$ and $F^{te}$ are the parts of $z$ and $F$ belonging to the test set.
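Both criteria are straightforward to compute; a minimal sketch (our code, assuming a row-major vectorization consistent with the definition of vec above):

import numpy as np

def reconstruction_error(Y_true, Y_est):
    a = Y_true.ravel() / np.linalg.norm(Y_true)  # vec(Y) / ||vec(Y)||_2
    b = Y_est.ravel() / np.linalg.norm(Y_est)
    return np.linalg.norm(a - b)                 # C_y

def generalization_error(z_test, F_test, Y_est):
    return np.linalg.norm(z_test - F_test @ Y_est.ravel()) ** 2  # C_z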
We compared the sparse basis field expansion (S-FLEX) approach using Gaussian basis functions
(see section 3.1) to the commonly used approaches of LORETA [11] and Minimum Current Estimate
(MCE) [7], and the recently proposed Focal Vectorfield Reconstruction (FVR) technique [3]. All
three competitors correspond to using unit impulses as basis functions while employing different
regularizers. The LORETA solution, e.g., is a Tikhonov regularized least-squares estimate while
MCE is equivalent to applying lasso to each dimension separately, yielding current vectors that are
biased towards being axes-parallel. We here used a variant of MCE, in which the original depth
compensation approach was replaced by the approach outlined in section 3.1. Interestingly, FVR
can be interpreted as a special case of S-FLEX employing the rotation-invariant regularizer R+ (C)
to enforce both sparsity and smoothness. The tradeoff parameter ? of this method was chosen as
suggested in [3]. All methods were formulated such that the fitness of the solution was ensured by
the constraint kz ? F vec(Y? tr )k22 < ?. The optimization was carried out using freely available
packages for convex programming [12, 2].
Simulated data
We simulated current densities in the following way. First, we sampled outputs yn , n = 1, . . . , N
from a multivariate standard normal distribution. The function (xn , yn ) was then spatially smoothed
using a Gaussian lowpass filter with standard deviation 2.5 cm. Finally, each yn was shortened by
the 90th percentile of the magnitudes of all yn ? leaving only 10% of the current vectors active.
Current densities obtained by this procedure usually feature 2-3 active patches (sources) with small
to medium extent and smoothly varying magnitude and orientation (see Fig. 3 for an example). This
behaviour was considered consistent with the general belief about the sources. We simulated five
densities and computed respective pseudo-measurements for 118 channels using the forward model
F. As no noise was injected into the system, $\epsilon$ was set to zero in the following reconstruction.
Real data
We recorded 113-channel EEG of one healthy subject (male, 26 years) during electrical median
nerve stimulation. The EEG electrodes were positioned according to the international 10-20 system. The exact positions were obtained using a 3D digitizer and mapped onto the surface of the
head model. EEG data were recorded with a sampling frequency of 2500 Hz and digitally bandpass-filtered between 15 Hz and 450 Hz. Left and right median nerves were stimulated in separate blocks
by applying constant square 0.2 ms current pulses to the respective thenars. Current pulses had
intensities above motor threshold (approx. 9 mA), inducing unintended twitches of the thumbs.
The interstimulus interval varied randomly between 500 ms and 700 ms. About 1100 trials were
recorded for each hand. Artifactual trials as well as artifactual electrodes were excluded from the
analysis. For the remaining data, baseline correction was done based on the mean amplitude in the
prestimulus interval (-100 ms to -10 ms). Finally, a single measurement vector was constructed by
averaging the EEG amplitudes at 21 ms across 1946 trials (50% left hand, 50% right hand). By this
means the EEG response to somatosensory input at the hands was captured with high signal-to-noise
ratio (SNR). Based on that, the brain areas representing the left and right hand were to be reconstructed
with $\epsilon$ set according to the estimated SNR.
3.3 Results
Fig. 3 shows a simulated current density along with reconstructions according to LORETA, MCE,
FVR and S-FLEX. From the figure it becomes apparent, that LORETA and MCE do not approximate
the true current density very well. While the LORETA solution is rather blurry, merging the two true
sources, the MCE solution exhibits many spikes, which could easily be misinterpreted as different
sources. Note that the strong orientation bias of MCE cannot be seen in Fig. 3 as only dipole
amplitudes are plotted. The estimates of FVR and S-FLEX approximately recover the shape of the
sources. S-FLEX comes closest to the true shape, as its estimates are less focal than the ones of
FVR. However, S-FLEX still slightly underestimate the extent of the sources.
The localization results of left and right N20 generators are shown in Fig. 4. The solutions of FVR
and S-FLEX are almost indistinguishable. Both show activity concentrated in two major patches,
one in each contralateral somatosensory cortex. This is in good agreement with the localization of
the hand areas reported in the literature (e.g. [5]). LORETA estimates only one large active region
over the whole central area, with the maximum lying exactly in between the hand areas. The MCE
solution consists of eight spikes scattered across the whole somatosensory area.
Tab. 1 shows that S-FLEX generalizes better than its competitors, although insignificantly. More
importantly S-FLEX outperforms its peers in terms of reconstruction accuracy. The distance to
the runner-up FVR is, however, larger than expected from Fig. 3. This is due to the fact that the
parameter of FVR controlling the tradeoff between sparsity and smoothness was fixed here to a
value promoting "maximally sparse sources which are still smooth". While this might be a good
assumption in practice, it was not rewarded in our validation setting. We here explicitly required
reconstruction rather than shrinkage of the sources.
           Cy SIM          Cz SIM          Cz REAL
LORETA     1.00 ± 0.01     2.87 ± 0.78     8.18 ± 1.38
FVR        0.955 ± 0.02    1.21 ± 1.00     8.01 ± 1.79
S-FLEX     0.71 ± 0.04     0.952 ± 0.28    7.95 ± 1.84
MCE        1.21 ± 0.01     1.86 ± 0.57     8.13 ± 1.60
Table 1: Ability of LORETA, FVR, S-FLEX and MCE to reconstruct simulated currents (Cy SIM)
and generalization performance with respect to the EEG measurements (Cz SIM/REAL). Winning
entries (reaching significance) are shown in bold face.
Figure 3: Simulated current density (SIM) and reconstruction according to LORETA, FVR, S-FLEX
and MCE. Color encodes current magnitude.
Figure 4: Localization of somatosensory evoked N20 generators according to LORETA, FVR,
S-FLEX and MCE. Color encodes current magnitude.
4 Conclusion and Outlook
This paper contributes a novel and general methodology for obtaining sparse decompositions of
vector fields. An important ingredient of our framework is the insight that the vector field estimate
should be invariant with respect to a rotation of the coordinate system. Interestingly, the latter
constraint together with sparsity leads to a second-order cone programming formulation.
We have focussed here on solving the EEG/MEG inverse problem, where our proposed S-FLEX
approach outperformed the state-of-the-art in approximating the true shape of the current sources.
However, other fields might as well benefit from the use of S-FLEX: in meteorology for example, an
improved decomposition of wind fields into their driving components might provide novel insights
that could be useful for better weather forecasting.
Acknowledgments
This work was supported in part by the German BMBF grants BCCNB-A4 (FKZ 01GQ0415),
BFNTB-A1 (FKZ 01GQ0850) and FaSor (FKZ 16SV2234). We thank Friederike Hohlefeld and
Monika Weber for help in preparing the experiment, and Ryota Tomioka for fruitful discussions.
References
[1] F.R. Bach, G.R.G. Lanckriet, and M.I. Jordan. Multiple kernel learning, conic duality and the SMO
algorithm. In Proceedings of the Twenty-first International Conference on Machine Learning, 2004.
[2] M. Grant, S. Boyd, and Y. Ye. CVX: Matlab Software for Disciplined Convex Programming, October 2006. http://www.stanford.edu/~boyd/cvx/, Version 1.0RC.
[3] S. Haufe, V.V. Nikulin, A. Ziehe, K.-R. Müller, and G. Nolte. Combining sparsity and rotational invariance in EEG/MEG source reconstruction. NeuroImage, 42(2):726-738, 2008.
[4] C.J. Holmes, R. Hoge, L. Collins, R. Woods, A.W. Toga, and A.C. Evans. Enhancement of MR images using registration for signal averaging. J. Comput. Assist. Tomogr., 22(2):324-333, 1998.
[5] J. Huttunen, S. Komssi, and L. Lauronen. Spatial dynamics of population activities at S1 after median and ulnar nerve stimulation revisited: An MEG study. NeuroImage, 32:1024-1031, 2006.
[6] M.S. Lobo, L. Vandenberghe, S. Boyd, and H. Lebret. Applications of second-order cone programming. Lin. Alg. Appl., 284:193-228, 1998.
[7] K. Matsuura and Y. Okabe. Selective minimum-norm solution of the biomagnetic inverse problem. IEEE Trans. Biomed. Eng., 42:608-615, 1995.
[8] F.A. Mussa-Ivaldi. From basis functions to basis fields: vector field approximation from sparse data. Biol. Cybern., 67:479-489, 1992.
[9] G. Nolte and G. Dassios. Analytic expansion of the EEG lead field for realistic volume conductors. Phys. Med. Biol., 50:3807-3823, 2005.
[10] R.D. Pascual-Marqui. Standardized low-resolution brain electromagnetic tomography (sLORETA): technical details. Meth. Find. Exp. Clin. Pharmacol., 24(1):5-12, 2002.
[11] R.D. Pascual-Marqui, C.M. Michel, and D. Lehmann. Low resolution electromagnetic tomography: a new method for localizing electrical activity in the brain. Int. J. Psychophysiol., 18:49-65, 1994.
[12] J.F. Sturm. Using SeDuMi 1.02, a MATLAB toolbox for optimization over symmetric cones. Optim. Method. Softw., 11-12:625-653, 1999.
[13] A. Tarantola. Inverse Problem Theory and Model Parameter Estimation. SIAM, Philadelphia, 2005.
[14] R. Tibshirani. Regression shrinkage and selection via the lasso. J. Roy. Stat. Soc. B Meth., 58(1):267-288, 1996.
[15] R. Tomioka and S. Haufe. Combined classification and channel/basis selection with L1-L2 regularization with application to P300 speller system. In Proceedings of the 4th International Brain-Computer Interface Workshop and Training Course 2008. Verlag der Technischen Universität Graz, 2008.
[16] M. Vega-Hernández, E. Martínez-Montes, J.M. Sánchez-Bornot, A. Lage-Castellanos, and P.A. Valdés-Sosa. Penalized least squares methods for solving the EEG inverse problem. Stat. Sinica, 2008. In press.
[17] D.P. Wipf and B.D. Rao. An empirical Bayesian strategy for solving the simultaneous sparse approximation problem. IEEE Trans. Signal Proces., 55(7):3704-3716, 2007.
[18] M. Yuan and Y. Lin. Model selection and estimation in regression with grouped variables. J. Roy. Stat. Soc. B Meth., 68(1):49-67, 2006.
Convergence and Rate of Convergence of a Manifold-Based Dimension Reduction Algorithm
Andrew K. Smith, Xiaoming Huo
School of Industrial and Systems Engineering
Georgia Institute of Technology
Atlanta, GA 30332
[email protected], [email protected]
Hongyuan Zha
College of Computing
Georgia Institute of Technology
Atlanta, GA 30332
[email protected]
Abstract
We study the convergence and the rate of convergence of a local manifold learning
algorithm: LTSA [13]. The main technical tool is the perturbation analysis on the
linear invariant subspace that corresponds to the solution of LTSA. We derive a
worst-case upper bound of errors for LTSA which naturally leads to a convergence
result. We then derive the rate of convergence for LTSA in a special case.
1 Introduction
Manifold learning (ML) methods have attracted substantial attention due to their demonstrated potential. Many algorithms have been proposed and some work has appeared to analyze the performance of these methods. The main contribution of this paper is to establish some asymptotic
properties of a local manifold learning algorithm: LTSA [13], as well as a demonstration of some of
its limitations. The key idea in the analysis is to treat the solutions computed by LTSA as invariant
subspaces of certain matrices, and then carry out a matrix perturbation analysis.
Many efficient ML algorithms have been developed including locally linear embedding (LLE) [6],
ISOMAP [9], charting [2], local tangent space alignment (LTSA) [13], Laplacian eigenmaps [1],
and Hessian eigenmaps [3]. A common feature of many of these manifold learning algorithms is
that their solutions correspond to invariant subspaces, typically the eigenspace associated with the
smallest eigenvalues of a kernel or alignment matrix. The exact form of this matrix, of course,
depends on the details of the particular algorithm.
We start with LTSA for several reasons. First of all, in numerical simulations (e.g., using the tools
offered by [10]), we find empirically that LTSA performs among the best of the available algorithms.
Second, the solution to each step of the LTSA algorithm is an invariant subspace, which makes
analysis of its performance more tractable. Third, the similarity between LTSA and several other
ML algorithms (e.g., LLE, Laplacian eigenmaps and Hessian eigenmaps) suggests that our results
may generalize. Our hope is that this performance analysis will provide a theoretical foundation for
the application of ML algorithms.
The rest of the paper is organized as follows. The problem formulation and background information
are presented in Section 2. Perturbation analysis is carried out, and the main theorem is proved
(Theorem 3.7) in Section 3. Rate of convergence under a special case is derived in Section 4.
Some discussions related to existing work in this area are included in Section 5. Finally, we present
concluding remarks in Section 6.
2 Manifold Learning and LTSA
We formulate the manifold learning problem as follows. For a positive integer $n$, let $y_i \in \mathbb{R}^D$, $i = 1, 2, \dots, n$, denote $n$ observations. We assume that there is a mapping $f : \mathbb{R}^d \to \mathbb{R}^D$ which satisfies a set of regularity conditions (detailed in the next subsection). In addition, we require another set of (possibly multivariate) values $x_i \in \mathbb{R}^d$, $d < D$, $i = 1, 2, \dots, n$, such that
$$y_i = f(x_i) + \epsilon_i, \qquad i = 1, 2, \dots, n, \qquad (1)$$
where $\epsilon_i \in \mathbb{R}^D$ denotes a random error. For example, we may assume $\epsilon_i \sim N(0, \sigma^2 I_D)$; i.e., a multivariate normal distribution with mean zero and variance-covariance proportional to the identity matrix. The central questions of manifold learning are: 1) Can we find a set of low-dimensional vectors such that equation (1) holds? 2) What kind of regularity conditions should be imposed on $f$? 3) Is the model well defined? These questions are the main focus of this paper.
2.1 A Pedagogical Example
[Figure 1 panels: (a) Embedded Spiral; (b) Noisy Observations; (c) Learned vs. Truth]
Figure 1: An illustrative example of LTSA in nonparametric dimension reduction. The straight line
pattern in (c) indicates that the underlying parametrization has been approximately recovered.
An illustrative example of dimension reduction that makes our formulation more concrete is given
in Figure 1. Subfigure (a) shows the true underlying structure of a toy example, a 1-D spiral. The
noiseless observations are equally spaced points on this spiral. In subfigure (b), 1024 noisy observations are generated with multivariate noise satisfying $\epsilon_i \sim N(0, \frac{1}{100} I_3)$. We then apply LTSA to the noisy observations, using $k = 10$ nearest neighbors. In subfigure (c), the result from LTSA is
compared with the true parametrization. When the underlying parameter is faithfully recovered, one
should see a straight line, which is observed to hold approximately in subfigure (c).
2.2 Regularity and Uniqueness of the Mapping f
If the conditions on the mapping f are too general, the model in equation (1) is not well defined.
For example, if the mapping $f(\cdot)$ and point set $\{x_i\}$ satisfy (1), so do $f(A^{-1}(\cdot - b))$ and $\{Ax_i + b\}$, where $A$ is an invertible $d \times d$ matrix and $b$ is a $d$-dimensional vector. As is common in the
manifold-learning literature, we adopt the following condition on f .
Condition 2.1 (Local Isometry) The mapping $f$ is locally isometric: For any $\epsilon > 0$ and $x$ in the domain of $f$, let $N_\epsilon(x) = \{z : \|z - x\|_2 < \epsilon\}$ denote an $\epsilon$-neighborhood of $x$ using Euclidean distance. We have
$$\|f(x) - f(x_0)\|_2 = \|x - x_0\|_2 + o(\|x - x_0\|_2).$$
The above condition indicates that, in a local sense, $f$ preserves Euclidean distance. Let $J(f; x_0)$ denote the Jacobian of $f$ at $x_0$. We have $J(f; x_0) \in \mathbb{R}^{D \times d}$, where each column (resp., row) of $J(f; x_0)$ corresponds to a coordinate in the feature (resp., data) space. The above in fact implies the following lemma [13].
Lemma 2.2 The matrix $J(f; x_0)$ is orthonormal for any $x_0$, i.e., $J^T(f; x_0)\,J(f; x_0) = I_d$.
Given the previous condition, model (1) is still not uniquely defined. For example, for any d by d
orthogonal matrix $O$ and any $d$-dimensional vector $b$, if $f(\cdot)$ and $\{x_i\}$ satisfy (1) and Condition 2.1, so do $f(O^T(\cdot - b))$ and $\{Ox_i + b\}$. We can force $b$ to be 0 by imposing the condition that $\sum_i x_i = 0$. In dimension reduction, we can consider the sets $\{x_i\}$ and $\{Ox_i\}$ "invariant," because one is just a rotation of the other. In fact, the invariance coincides with the concept of "invariant subspace" to be
discussed.
Condition 2.3 (Local Linear Independence Condition) Let $Y_i \in \mathbb{R}^{D \times k}$, $1 \le i \le n$, denote a matrix whose columns are made of the $i$th observation $y_i$ and its $k-1$ nearest neighbors. We choose $k-1$ neighbors so that the matrix $Y_i$ has $k$ columns. It is generally assumed that $d < k$. For any $1 \le i \le n$, the rank of $Y_i P_k$ is at least $d$; in other words, the $d$th largest singular value of the matrix $Y_i P_k$ is greater than 0.
In the above, we use the projection matrix $P_k = I_k - \frac{1}{k}\mathbf{1}_k\mathbf{1}_k^T$, where $I_k$ is the $k \times k$ identity matrix and $\mathbf{1}_k$ is a $k$-dimensional column vector of ones. The regularity of the manifold can be determined by the Hessians of the mapping. Rewrite $f(x)$ for $x \in \mathbb{R}^d$ as $f(x) = (f_1(x), f_2(x), \dots, f_D(x))^T$. Furthermore, let $x = (x_1, \dots, x_d)^T$. The Hessian of each $f_i$ is a $d \times d$ matrix,
$$[H_i(f; x)]_{jk} = \frac{\partial^2 f_i(x)}{\partial x_j\, \partial x_k}, \qquad 1 \le i \le D,\; 1 \le j, k \le d.$$
The following condition ensures that $f$ is locally smooth. We impose a bound on all the components of the Hessians.
Condition 2.4 (Regularity of the Manifold) $|[H_i(f; x)]_{jk}| \le C_1$ for all $i$, $j$, and $k$, where $C_1 > 0$ is a prescribed constant.
2.3 Solutions as Invariant Subspaces and a Related Metric
We now give a more detailed discussion of invariant subspaces. Let $R(X)$ denote the subspace spanned by the columns of $X$. Recall that $x_i$, $i = 1, 2, \dots, n$, are the true low-dimensional representations of the observations. We treat the $x_i$'s as column vectors. Let $X = (x_1, x_2, \dots, x_n)^T$; i.e., the $i$th row of $X$ corresponds to $x_i$, $1 \le i \le n$. If the set $\{Ox_i\}$, where $O$ is a $d \times d$ orthogonal square matrix, forms another solution to the dimension reduction problem, we have
$$(Ox_1, Ox_2, \dots, Ox_n)^T = XO^T.$$
It is evident that $R(XO^T) = R(X)$. This justifies the invariance that was mentioned earlier.
The goal of our performance analysis is to answer the following question: Letting $\|\tan(\cdot, \cdot)\|_2$ denote the Euclidean norm of the vector of canonical angles between two invariant subspaces ([8, Section I.5]), and letting $X$ and $\tilde{X}$ denote the true and estimated parameters, respectively, how do we evaluate $\|\tan(R(X), R(\tilde{X}))\|_2$?
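In practice this metric is easy to evaluate; for instance (a sketch using SciPy, whose subspace_angles routine returns the canonical angles between the column spans):

import numpy as np
from scipy.linalg import subspace_angles

def tan_distance(X, X_tilde):
    theta = subspace_angles(X, X_tilde)   # canonical angles, in radians
    return np.linalg.norm(np.tan(theta))  # ||tan(R(X), R(X_tilde))||_2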
2.4 LTSA: Local Tangent Space Alignment
We now review LTSA. There are two main steps in the LTSA algorithm [13].
1. The first step is to compute the local representation on the manifold. Recall the projection matrix $P_k$. It is easy to verify that $P_k = P_k \cdot P_k$, which is a characteristic of projection matrices. We solve the minimization problem: $\min_{\Theta, V} \|Y_i P_k - \Theta V\|_F$, where $\Theta \in \mathbb{R}^{D \times d}$, $V \in \mathbb{R}^{d \times k}$, and $V V^T = I_d$. Let $V_i$ denote the optimal $V$. Then the row vectors of $V_i$ are the $d$ right singular vectors of $Y_i P_k$.
2. The solution to LTSA corresponds to the invariant subspace which is spanned and determined by the eigenvectors associated with the 2nd to the $(d+1)$st smallest eigenvalues of the matrix
$$(S_1, \dots, S_n)\,\mathrm{diag}(P_k - V_1^T V_1, \dots, P_k - V_n^T V_n)\,(S_1, \dots, S_n)^T, \qquad (2)$$
where $S_i \in \mathbb{R}^{n \times k}$ is a selection matrix such that $Y^T S_i = Y_i$, where $Y = (y_1, y_2, \dots, y_n)^T$.
As mentioned earlier, the subspace spanned by the eigenvectors associated with the 2nd to the $(d+1)$st smallest eigenvalues of the matrix in (2) is an invariant subspace, which will be analyzed using
matrix perturbation techniques. We slightly reformulated the original algorithm as presented in [13]
for later analysis.
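For concreteness, here is a compact NumPy/SciPy sketch of these two steps (our own reading of the algorithm above, with k-nearest-neighbor selection in Euclidean distance as an assumption; not the authors' reference code):

import numpy as np
from scipy.spatial import cKDTree
from scipy.linalg import eigh

def ltsa(Y, d, k):
    # Y: n x D observations; returns the n x d embedding
    n = Y.shape[0]
    _, idx = cKDTree(Y).query(Y, k=k)  # each row: indices of the k nearest points
    Pk = np.eye(k) - np.ones((k, k)) / k
    Phi = np.zeros((n, n))             # alignment matrix of Eq. (2)
    for nbrs in idx:
        Yi = Y[nbrs].T                 # D x k local data matrix
        _, _, Vt = np.linalg.svd(Yi @ Pk, full_matrices=False)
        Vi = Vt[:d]                    # d leading right singular vectors
        Phi[np.ix_(nbrs, nbrs)] += Pk - Vi.T @ Vi
    w, U = eigh(Phi)                   # eigenvalues in ascending order
    return U[:, 1:d + 1]               # 2nd .. (d+1)st smallest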
3 Perturbation Analysis
We now carry out a perturbation analysis on the reformulated version of LTSA. There are two steps:
in the local step (Section 3.1), we characterize the deviation of the null spaces of the matrices $P_k - V_i^T V_i$, $i = 1, 2, \dots, n$. In the global step (Section 3.2), we derive the variation of the null
space under global alignment.
3.1 Local Coordinates
Let $X$ be the matrix of true parameters. We define $X_i = X^T S_i = (x_1, x_2, \dots, x_n) S_i$; i.e., the columns of $X_i$ are made of $x_i$ and those $x_j$'s that correspond to the $k-1$ nearest neighbors of $y_i$. We require a bound on the size of the local neighborhoods defined by the $X_i$'s.
Condition 3.1 (Universal Bound on the Sizes of Neighborhoods) For all $i$, $1 \le i \le n$, we have $\tau_i < \tau$, where $\tau$ is a prescribed constant and $\tau_i$ is an upper bound on the distance between two columns of $X_i$: $\tau_i = \max_{x_j, x_k} \|x_j - x_k\|$, where the maximum is taken over all columns of $X_i$.
In this paper, we are interested in the case when $\tau \to 0$.
We will need conditions on the local tangent spaces. Let $d_{\min,i}$ (respectively, $d_{\max,i}$) denote the minimum (respectively, maximum) singular value of $X_i P_k$. Let
$$d_{\min} = \min_{1 \le i \le n} d_{\min,i}, \qquad d_{\max} = \max_{1 \le i \le n} d_{\max,i}.$$
We can bound $d_{\max}$ as $d_{\min} \le d_{\max} \le \tau \sqrt{k}$ [5].
Condition 3.2 (Local Tangent Space) There exists a constant $C_2 > 0$, such that
$$C_2\,\tau \le d_{\min}. \qquad (3)$$
The above can roughly be thought of as requiring that the local dimension of the manifold remain constant (i.e., the manifold has no singularities).
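Conditions 3.1 and 3.2 can be inspected empirically for a given data set; a small sketch (our own, where idx is a hypothetical n x k array holding the neighbor indices of each point):

import numpy as np

def neighborhood_stats(X, idx, k):
    # X: n x d true parameters; idx: n x k neighbor indices
    Pk = np.eye(k) - np.ones((k, k)) / k
    taus, dmins = [], []
    for nbrs in idx:
        Xi = X[nbrs].T                                   # d x k
        diffs = Xi[:, :, None] - Xi[:, None, :]
        taus.append(np.sqrt((diffs ** 2).sum(0)).max())  # tau_i: neighborhood diameter
        dmins.append(np.linalg.svd(Xi @ Pk, compute_uv=False)[-1])  # d-th singular value
    return max(taus), min(dmins)                         # bound tau and d_min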
The following condition defines a global bound on the errors ($\epsilon_i$).
Condition 3.3 (Universal Error Bound) There exists $\epsilon > 0$, such that for all $i$, $1 \le i \le n$, we have $\|y_i - f(x_i)\|_\infty < \epsilon$. Moreover, we assume $\epsilon = o(\tau)$; i.e., we have $\frac{\epsilon}{\tau} \to 0$ as $\tau \to 0$.
It is reasonable to require that the error bound ($\epsilon$) be smaller than the size of the neighborhood ($\tau$),
which is reflected in the above condition.
Within each neighborhood, we give a perturbation bound between an invariant subspace spanned by the true parametrization and the invariant subspace spanned by the singular vectors of the matrix of noisy observations. Let $X_i P_k = A_i D_i B_i$ be the singular value decomposition of the matrix $X_i P_k$; here $A_i \in \mathbb{R}^{d \times d}$ is orthogonal ($A_i A_i^T = I_d$), $D_i \in \mathbb{R}^{d \times d}$ is diagonal, and the rows of $B_i \in \mathbb{R}^{d \times k}$ are the right singular vectors corresponding to the largest singular values ($B_i B_i^T = I_d$). It is not hard to verify that
$$B_i = B_i P_k. \qquad (4)$$
Let $Y_i P_k = \tilde{A}_i \tilde{D}_i \tilde{B}_i$ be the singular value decomposition of $Y_i P_k$, and assume that this is the "thin" decomposition of rank $d$. We may think of this as the perturbed version of $J(f; x_i^{(0)})\,X_i P_k$. The rows of $\tilde{B}_i$ are the eigenvectors of $(Y_i P_k)^T (Y_i P_k)$ corresponding to the $d$ largest eigenvalues. Let $R(B_i^T)$ (respectively, $R(\tilde{B}_i^T)$) denote the invariant subspace that is spanned by the columns of the matrix $B_i^T$ (respectively, $\tilde{B}_i^T$).
Theorem 3.4 Given invariant subspaces $R(B_i^T)$ and $R(\tilde{B}_i^T)$ as defined above, we have
$$\lim_{\tau \to 0}\, \|\sin(R(B_i^T), R(\tilde{B}_i^T))\|_2 \;\le\; C_3\,\frac{\epsilon}{\tau} + C_1\,\tau,$$
where $C_3$ is a constant that depends on $k$, $D$ and $C_2$.
The proof is presented in [5]. The above gives an upper bound on the deviation of the local invariant
subspace in step 1 of the modified LTSA. It will be used later to prove a global upper bound.
3.2 Global Alignment
Condition 3.5 (No Overuse of One Observation) There exists a constant $C_4$, such that
$$\left\|\sum_{i=1}^{n} S_i\right\|_\infty \le C_4.$$
Note that we must have $C_4 \ge k$. The next condition (Condition 3.6) will implicitly give an upper bound on $C_4$.
Recall that the quantity $\|\sum_{i=1}^{n} S_i\|_\infty$ is the maximum row sum of the absolute values of the entries in $\sum_{i=1}^{n} S_i$. The value of $\|\sum_{i=1}^{n} S_i\|_\infty$ is equal to the maximum number of nearest neighbor subsets to which a single observation belongs.
We will derive an upper bound on the angle between the invariant subspace spanned by the result of
LTSA and the space spanned by the true parameters.
Given (4), it can be shown that $X_i P_k (P_k - B_i^T B_i)(X_i P_k)^T = 0$. Recall $X = (x_1, x_2, \dots, x_n)^T \in \mathbb{R}^{n \times d}$. It is not hard to verify that the row vectors of $(\mathbf{1}_n, X)^T$ span the $(d+1)$-dimensional null space of the matrix:
$$(S_1, \dots, S_n)\,P_k\,\mathrm{diag}(I - B_1^T B_1, \dots, I - B_n^T B_n)\,P_k\,(S_1, \dots, S_n)^T. \qquad (5)$$
Assume that $\left(\frac{\mathbf{1}_n}{\sqrt{n}}, X, X^c\right)^T$ is orthogonal, where $X^c \in \mathbb{R}^{n \times (n-1-d)}$. Although in our original problem formulation, we made no assumption about the $x_i$'s, we can still assume that the columns of $X$ are orthonormal because we can transform any set of $x_i$'s into an orthonormal set by rescaling the columns and multiplying by an orthogonal matrix. Based on the previous paragraph, we have
$$\left(\frac{\mathbf{1}_n}{\sqrt{n}}, X, X^c\right)^T M_n \left(\frac{\mathbf{1}_n}{\sqrt{n}}, X, X^c\right) = \begin{pmatrix} 0_{(d+1)\times(d+1)} & 0_{(d+1)\times(n-d-1)} \\ 0_{(n-d-1)\times(d+1)} & L_2 \end{pmatrix}, \qquad (6)$$
where
$$M_n = (S_1, \dots, S_n)\,P_k\,\mathrm{diag}(I_k - B_1^T B_1, \dots, I_k - B_n^T B_n)\,P_k\,(S_1, \dots, S_n)^T$$
and
$$L_2 = (X^c)^T M_n X^c.$$
Let $\ell_{\min}$ denote the minimum singular value (i.e., eigenvalue) of $L_2$. We will need the following condition on $\ell_{\min}$.
Condition 3.6 (Appropriateness of Global Dimension) $\ell_{\min} > 0$ and $\ell_{\min}$ goes to 0 at a slower rate than $\frac{\epsilon}{\tau} + \frac{1}{2} C_1 \tau$; i.e., as $\tau \to 0$, we have
$$\frac{\left(\frac{\epsilon}{\tau} + \frac{1}{2} C_1 \tau\right) \cdot \left\|\sum_{i=1}^{n} S_i\right\|_\infty}{\ell_{\min}} \to 0.$$
As discussed in [12, 11], this condition is actually related to the amount of overlap between the
nearest neighbor sets.
Theorem 3.7 (Main Theorem)
$$\lim_{\tau \to 0}\, \|\tan(R(\tilde{X}), R(X))\|_2 \;\le\; \frac{C_3\left(\frac{\epsilon}{\tau} + C_1 \tau\right) \cdot \left\|\sum_{i=1}^{n} S_i\right\|_\infty}{\ell_{\min}}. \qquad (7)$$
As mentioned in the Introduction, the above theorem gives a worst-case bound on the performance
of LTSA. For proofs as well as a discussion of the requirement that $\tau \to 0$, see [7]. A discussion on
when Condition 3.6 is satisfied will be long and beyond the scope of this paper. We leave it to future
investigation. We refer to [5] for some simulation results related to the above analysis.
4 A Preliminary Result on the Rate of Convergence
We discuss the rate of convergence for LTSA (to the true underlying manifold structure) in the
aforementioned framework. We modify the LTSA (mainly on how to choose the size of the nearest
neighborhood) for a reason that will become evident later.
We assume the following result regarding the relationship between $k$, $\ell_{\min}$, and $\tau$ (this result can be proved for $x_i$ being sampled on a uniform grid, using the properties of biharmonic eigenvalues for partial differential equations) holds:
$$\ell_{\min} \approx C(k) \cdot \lambda^{+}_{\min}(\Delta^2) \cdot \tau^4, \qquad (8)$$
where $\lambda^{+}_{\min}(\Delta^2)$ is a constant, and $C(k) \propto k^5$. We will address such a result in the more general context in the future.
So far, we have assumed that $k$ is constant. However, allowing $k$ to be a function of the sample size $n$, say $k = n^{\beta}$ with $\beta \in [0, 1)$, allows us to control the asymptotic behavior of $\ell_{\min}$ along with the convergence of the estimated alignment matrix to the true alignment matrix.
Consider our original bound on the angle between the true coordinates and the estimated coordinates:
$$\lim_{\tau \to 0} \big\| \tan\big(\mathcal{R}(\widetilde{X}),\, \mathcal{R}(X)\big) \big\|_2 \;\le\; \frac{C_3\big(\frac{\epsilon}{\tau} + C_1\tau\big) \cdot \big\|\sum_{i=1}^{n} S_i\big\|_\infty}{\ell_{\min}}.$$
Now, set $k = n^{\beta}$, where $\beta \in [0, 1)$ is an exponent whose value will be decided later. We must be careful in disregarding constants, since they may involve $k$. We have that $C_3 = \frac{\sqrt{kD}}{C_2}$; $C_1$ and $C_2$ are fundamental constants not involving $k$. Further, it is easy to see that $\|\sum_{i=1}^{n} S_i\|_\infty$ is $O(k)$: since each point has $k$ neighbors, the maximum number of neighborhoods to which a point belongs is of the same order as $k$.
Now, we can use a simple heuristic to estimate the size of $\tau$, the neighborhood size. For example, suppose we fix $\tau$ and consider $\tau$-neighborhoods. For simplicity, assume that the parameter space is the unit hypercube $[0, 1]^d$, where $d$ is the intrinsic dimension. The law of large numbers tells us that $k \approx \tau^d \cdot n$. Thus we can approximate $\tau$ as $\tau \approx O(n^{\frac{\beta-1}{d}})$. Plugging all this into the original equation and dropping the constants, we get
$$\lim_{\tau \to 0} \big\| \tan\big(\mathcal{R}(\widetilde{X}),\, \mathcal{R}(X)\big) \big\|_2 \;\le\; \frac{n^{\frac{\beta-1}{d}} \cdot n^{\frac{3\beta}{2}}}{\ell_{\min}} \cdot \text{Constant}.$$
If we conjecture that the relationship in (8) holds in general (i.e., the generating coordinates can follow a more general distribution rather than only lying on a uniform grid), then we have
$$\lim_{\tau \to 0} \big\| \tan\big(\mathcal{R}(\widetilde{X}),\, \mathcal{R}(X)\big) \big\|_2 \;\le\; \frac{n^{\frac{\beta-1}{d}} \cdot n^{\frac{\beta}{2}} \cdot n^{\beta}}{n^{5\beta} \cdot n^{\frac{4(\beta-1)}{d}}} \cdot \text{Constant}.$$
Now the exponent is a function only of $\beta$ and the constant $d$. We can try to solve for $\beta$ such that the convergence is as fast as possible. Simplifying the exponents, we get
$$\lim_{\tau \to 0} \big\| \tan\big(\mathcal{R}(\widetilde{X}),\, \mathcal{R}(X)\big) \big\|_2 \;\le\; n^{-\frac{7\beta}{2} - 3\big(\frac{\beta-1}{d}\big)} \cdot \text{Constant}.$$
As a function of $\beta$ restricted to the interval $[0, 1)$, there is no minimum: the exponent decreases with $\beta$, and we should choose $\beta$ close to 1.
However, in the proof of the convergence of LTSA, it is assumed that the errors in the local step converge to 0. This error is given by
$$\big\| \sin\big(\mathcal{R}(B_i^T),\, \mathcal{R}(\widetilde{B}_i^T)\big) \big\|_2 \;\le\; \frac{\sqrt{kD} \cdot \big[\epsilon + \frac{1}{2}C_1\tau^2\big]}{C_2 \cdot \tau - \sqrt{kD} \cdot \big[\epsilon + \frac{1}{2}C_1\tau^2\big]}.$$
Thus, our choice of $\beta$ is restricted by the fact that the right-hand side of this equation must still converge to 0. Disregarding constants and writing this as a function of $n$, we get
$$\frac{n^{\frac{\beta}{2}} \cdot n^{\frac{2\beta-2}{d}}}{n^{\frac{\beta-1}{d}} - n^{\frac{\beta}{2}} \cdot n^{\frac{2\beta-2}{d}}}.$$
This quantity converges to 0 as $n \to \infty$ if and only if we have
$$\frac{\beta}{2} + \frac{2\beta - 2}{d} \;<\; \frac{\beta - 1}{d} \quad\Longleftrightarrow\quad \beta \;<\; \frac{2}{d+2}.$$
Note that this bound is strictly less than 1 for all positive integers $d$, so our possible choices of $\beta$ are restricted further.
By the reasoning above, we want the exponent to be as large as possible. Further, it is easy to see that for all $d$, choosing an exponent roughly equal to $\frac{2}{d+2}$ will always yield a bound converging to 0.
The following table gives the optimal exponents for selected values of $d$, along with the convergence rate of $\lim_{\tau \to 0} \|\tan(\mathcal{R}(\widetilde{X}), \mathcal{R}(X))\|_2$. In general, using the optimal value of $\beta$, the convergence rate will be roughly $n^{-\frac{4}{d+2}}$.
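As a quick check of this claim (our own arithmetic, not from the paper), substituting the boundary value $\beta = \frac{2}{d+2}$ into the simplified exponent recovers the stated rate:

% Our own verification: substitute beta = 2/(d+2) into -7*beta/2 - 3*(beta-1)/d.
\[
  \beta - 1 = \frac{2 - (d+2)}{d+2} = \frac{-d}{d+2}
  \quad\Longrightarrow\quad
  -\frac{7\beta}{2} - \frac{3(\beta - 1)}{d}
  = -\frac{7}{d+2} + \frac{3}{d+2}
  = -\frac{4}{d+2}.
\]
% For example, d = 2 gives beta = 0.5 and rate n^{-1}, matching Table 1.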
Table 1: Convergence rates for a few values of the underlying dimension d.

  d                   1           2         3          4           5
  Optimal beta        0.66        0.5       0.4        0.33        0.29
  Convergence rate    n^{-1.33}   n^{-1}    n^{-0.8}   n^{-0.66}   n^{-0.57}
Thesis [7] presents some numerical experiments to illustrate the above results. Associated with each
fixed value of k, there seems to be a threshold value of n, above which the performance degrades.
This value increases with k, though perhaps at the cost of worse performance for small n. However, we expect from the above analysis that, regardless of the value chosen, the performance will
eventually become unacceptable for any fixed k.
5 Discussion
To the best of our knowledge, the performance analysis that is based on invariant subspaces is new.
Consequently the worst-case upper bound is the first of its kind. There are still open questions to be
addressed (Section 5.1). In addition to a discussion on the relation of LTSA to existing dimension
reduction methodologies, we will also address relations to known results (Section 5.2).
5.1 Open Questions
The rate of convergence of $\ell_{\min}$ is determined by the topological structure of $f$. It is important to estimate this rate of convergence, but this issue has not been addressed here. We did not address the correctness of (8) at all; it turns out that the proof of (8) is quite nontrivial and tedious.
We assume that $\epsilon/\tau \to 0$. One can imagine that this holds when the error bound ($\epsilon$) goes to 0 and when the $x_i$'s are sampled with a sufficient density in the support of $f$. An open problem is how to derive the rate of convergence of $\epsilon/\tau \to 0$ as a function of the topology of $f$ and the sampling scheme. After doing so, we may be able to decide where our theorem is applicable.
5.2 Relation to Existing Work
The error analysis in the original paper about LTSA is the closest to our result. However, Zhang and
Zha [13] do not interpret their solutions as invariant subspaces, and hence their analysis does not
yield a worst case bound as we have derived here.
Reviewing the original papers on LLE [6], Laplacian eigenmaps [1], and Hessian eigenmaps [3]
reveals that their solutions are subspaces spanned by a specific set of eigenvectors. This naturally
suggests that results analogous to ours may be derivable as well for these algorithms. A recent book
chapter [4] stresses this point. After deriving corresponding upper bounds, we can establish different
proofs of consistency than those presented in these papers.
ISOMAP, another popular manifold learning algorithm, is an exception. Its solution cannot immediately be rendered as an invariant subspace. However, ISOMAP calls for MDS, which can be
associated with an invariant subspace; one may derive an analytical result through this route.
6 Conclusion
We derive an upper bound of the distance between two invariant subspaces that are associated with
the numerical output of LTSA and an assumed intrinsic parametrization. Such a bound describes
the performance of LTSA with errors in the observations, and thus creates a theoretical foundation
for its use in real-world applications in which we would naturally expect such errors to be present.
Our results can also be used to show other desirable properties, including consistency and rate of
convergence. Similar bounds may be derivable for other manifold-based learning algorithms.
References
[1] M. Belkin and P. Niyogi. Laplacian eigenmaps for dimensionality reduction and data representation. Neural Computation, 15(6):1373-1396, 2003.
[2] M. Brand. Charting a manifold. In Neural Information Processing Systems, volume 15. Mitsubishi Electric Research Labs, MIT Press, March 2003.
[3] D. L. Donoho and C. E. Grimes. Hessian eigenmaps: New locally linear embedding techniques for high-dimensional data. Proceedings of the National Academy of Arts and Sciences, 100:5591-5596, 2003.
[4] X. Huo, X. S. Ni, and A. K. Smith. Mining of Enterprise Data, chapter A survey of manifold-based learning methods. Springer, New York, 2005. Invited book chapter, accepted.
[5] X. Huo and A. K. Smith. Performance analysis of a manifold learning algorithm in dimension reduction. Technical report, Georgia Institute of Technology, March 2006. Downloadable at www2.isye.gatech.edu/statistics/papers/06-06.pdf, to appear in Linear Algebra and Its Applications.
[6] S. T. Roweis and L. K. Saul. Nonlinear dimensionality reduction by locally linear embedding. Science, 290:2323-2326, 2000.
[7] A. K. Smith. New results in dimension reduction and model selection. Ph.D. Thesis. Available at http://etd.gatech.edu, 2008.
[8] G. W. Stewart and J.-G. Sun. Matrix Perturbation Theory. Academic Press, Boston, MA, 1990.
[9] J. B. Tenenbaum, V. de Silva, and J. C. Langford. A global geometric framework for nonlinear dimensionality reduction. Science, 290:2319-2323, 2000.
[10] T. Wittman. MANIfold learning Matlab demo. URL: http://www.math.umn.edu/~wittman/mani/index.html, April 2005.
[11] H. Zha and H. Zhang. Spectral properties of the alignment matrices in manifold learning. SIAM Review, 2008.
[12] H. Zha and Z. Zhang. Spectral analysis of alignment in manifold learning. In Proceedings of IEEE International Conference on Acoustics, Speech, and Signal Processing, 2005.
[13] Z. Zhang and H. Zha. Principal manifolds and nonlinear dimension reduction via local tangent space alignment. SIAM Journal of Scientific Computing, 26(1):313-338, 2004.
| 3471 |@word version:2 norm:1 seems:1 nd:2 tedious:1 open:3 simulation:2 mitsubishi:1 bn:2 covariance:1 decomposition:3 simplifying:1 carry:2 reduction:12 ours:1 ati:1 existing:3 recovered:2 com:1 si:12 gmail:1 attracted:1 must:3 numerical:3 v:1 selected:1 xk:3 huo:4 parametrization:4 smith:4 ith:2 math:1 zhang:4 along:2 c2:6 unacceptable:1 become:2 differential:1 ik:4 enterprise:1 prove:1 paragraph:1 x0:11 behavior:1 roughly:3 v1t:1 underlying:5 moreover:1 eigenspace:1 null:3 what:1 kind:2 developed:1 xd:1 k2:13 control:1 unit:1 yn:1 appear:1 positive:2 engineering:1 local:17 treat:2 modify:1 id:5 approximately:2 suggests:2 bi:10 decided:1 b1t:2 area:1 universal:2 thought:1 projection:3 word:1 get:3 cannot:1 ga:2 selection:2 close:1 context:1 writing:1 www:1 imposed:1 demonstrated:1 go:2 attention:1 regardless:1 vit:1 survey:1 formulate:1 simplicity:1 immediately:1 orthonormal:3 spanned:9 deriving:1 embedding:3 coordinate:5 variation:1 analogous:1 resp:2 imagine:1 tan:8 suppose:1 exact:1 satisfying:1 jk:2 observed:1 worst:4 ensures:1 sun:1 decrease:1 substantial:1 mentioned:3 rewrite:1 reviewing:1 algebra:1 creates:1 f2:1 kxj:1 eit:1 chapter:3 fast:1 tell:1 neighborhood:9 choosing:1 vations:1 whose:1 heuristic:1 quite:1 solve:2 say:1 niyogi:1 statistic:1 think:1 transform:1 noisy:4 eigenvalue:6 analytical:1 roweis:1 academy:1 convergence:20 regularity:5 requirement:1 generating:1 leave:1 converges:1 tk:1 derive:7 andrew:1 illustrate:1 nearest:6 school:1 bnt:2 implies:1 appropriateness:1 require:3 f1:1 fix:1 etd:1 investigation:1 preliminary:1 singularity:1 strictly:1 hold:4 lying:1 normal:1 mapping:6 scope:1 adopt:1 smallest:3 xk2:1 uniqueness:1 applicable:1 largest:3 correctness:1 faithfully:1 tool:2 hope:1 minimization:1 mit:1 always:1 i3:1 modified:1 rather:1 pn:6 gatech:4 derived:2 focus:1 rank:2 indicates:2 mainly:1 industrial:1 sense:1 typically:1 irn:3 relation:3 interested:1 maxxj:1 among:1 aforementioned:1 issue:1 html:1 exponent:7 art:1 special:2 equal:2 sampling:1 thin:1 future:2 report:1 few:1 belkin:1 preserve:1 national:1 atlanta:2 fd:1 mining:1 alignment:10 umn:1 analyzed:1 grime:1 partial:1 orthogonal:5 euclidean:3 theoretical:2 subfigure:4 column:12 earlier:2 stewart:1 cost:1 deviation:2 entry:1 subset:1 uniform:2 eigenmaps:8 too:1 characterize:1 answer:1 perturbed:1 st:2 density:1 fundamental:1 siam:2 international:1 invertible:1 concrete:1 thesis:2 central:1 satisfied:1 choose:3 possibly:1 worse:1 book:2 rescaling:1 toy:1 potential:1 de:1 downloadable:1 vnt:1 satisfy:2 biharmonic:1 depends:2 vi:3 later:4 try:1 lab:1 analyze:1 doing:1 zha:6 start:1 contribution:1 square:1 ni:1 ir:1 variance:1 characteristic:1 correspond:2 spaced:1 yield:2 generalize:1 multiplying:1 cc:1 straight:2 naturally:3 associated:6 di:2 proof:5 sampled:2 proved:2 popular:1 recall:4 subsection:1 lim:7 knowledge:1 dimensionality:3 organized:1 actually:1 isometric:1 follow:1 reflected:1 methodology:1 april:1 formulation:3 though:1 furthermore:1 just:1 langford:1 ei:4 ox1:1 nonlinear:3 defines:1 perhaps:1 scientific:1 concept:1 true:11 isomap:3 verify:3 mani:1 hence:1 requiring:1 y2:1 sin:2 uniquely:1 illustrative:2 coincides:1 pdf:1 stress:1 evident:2 pedagogical:1 performs:1 silva:1 reasoning:1 fi:1 common:2 rotation:1 empirically:1 volume:1 discussed:2 interpret:1 refer:1 imposing:1 ai:3 grid:2 consistency:2 similarity:1 multivariate:3 isometry:1 closest:1 recent:1 belongs:2 route:1 certain:1 xot:2 yi:14 minimum:3 greater:1 impose:1 converge:2 signal:1 desirable:1 smooth:1 technical:2 academic:1 
long:1 equally:1 plugging:1 laplacian:4 converging:1 involving:1 n5:1 noiseless:1 metric:1 kernel:1 c1:10 background:1 addition:2 want:1 interval:1 addressed:2 singular:9 ot:1 rest:1 invited:1 ltsa:30 integer:2 call:1 www2:1 spiral:3 easy:3 independence:1 xj:2 topology:1 idea:1 regarding:1 url:1 ird:12 reformulated:2 speech:1 hessian:7 york:1 remark:1 matlab:1 generally:1 detailed:2 eigenvectors:4 involve:1 amount:1 nonparametric:1 locally:5 ph:1 tenenbaum:1 http:2 canonical:1 estimated:3 dropping:1 key:1 threshold:1 kyi:2 v1:1 sum:1 angle:3 reasonable:1 decide:1 vn:1 bit:4 bound:25 hi:2 topological:1 nontrivial:1 x2:3 prescribed:2 concluding:1 min:16 span:1 rendered:1 xiaoming:1 conjecture:1 march:2 kd:3 remain:1 slightly:1 smaller:1 describes:1 n4:1 s1:6 invariant:22 restricted:3 taken:1 equation:5 dmax:5 discus:1 eventually:1 turn:1 letting:2 tractable:1 oxi:3 available:2 apply:1 spectral:2 slower:1 original:6 denotes:1 k1:1 establish:2 hypercube:1 question:5 quantity:2 degrades:1 md:1 diagonal:1 subspace:24 distance:4 manifold:23 reason:2 charting:2 index:1 relationship:2 demonstration:1 allowing:1 upper:9 dmin:5 observation:10 y1:1 perturbation:8 c3:5 c4:4 acoustic:1 learned:1 address:3 dth:1 beyond:1 able:1 pattern:1 appeared:1 including:2 max:1 overlap:1 force:1 mn:3 scheme:1 technology:3 carried:1 sn:6 review:2 literature:1 l2:3 tangent:5 kf:2 geometric:1 asymptotic:2 law:1 embedded:1 expect:2 limitation:1 proportional:1 foundation:2 offered:1 sufficient:1 row:7 course:1 lle:3 institute:3 neighbor:7 saul:1 absolute:1 dimension:13 axi:1 xn:3 world:1 kz:1 made:3 far:1 approximate:1 derivable:2 implicitly:1 ml:4 global:7 reveals:1 hongyuan:1 b1:2 assumed:4 xi:27 demo:1 table:2 electric:1 domain:1 diag:3 did:1 main:6 rh:1 noise:1 n2:2 x1:4 georgia:3 isye:1 ib:1 third:1 jacobian:1 theorem:7 specific:1 disregarding:2 exists:3 intrinsic:2 justifies:1 kx:2 boston:1 springer:1 corresponds:4 truth:1 satisfies:1 ma:1 identity:2 goal:1 consequently:1 careful:1 donoho:1 hard:2 included:1 determined:3 lemma:2 principal:1 invariance:2 accepted:1 brand:1 exception:1 college:1 support:1 evaluate:1 |
2,726 | 3,472 | Cascaded Classification Models:
Combining Models for Holistic Scene Understanding
Geremy Heitz
Stephen Gould
Department of Electrical Engineering
Stanford University, Stanford, CA 94305
Ashutosh Saxena
Daphne Koller
Department of Computer Science
Stanford University, Stanford, CA 94305
{gaheitz,sgould}@stanford.edu
{asaxena,koller}@cs.stanford.edu
Abstract
One of the original goals of computer vision was to fully understand a natural
scene. This requires solving several sub-problems simultaneously, including object detection, region labeling, and geometric reasoning. The last few decades
have seen great progress in tackling each of these problems in isolation. Only recently have researchers returned to the difficult task of considering them jointly. In
this work, we consider learning a set of related models such that they both solve
their own problem and help each other. We develop a framework called Cascaded
Classification Models (CCM), where repeated instantiations of these classifiers
are coupled by their input/output variables in a cascade that improves performance
at each level. Our method requires only a limited "black box" interface with the
models, allowing us to use very sophisticated, state-of-the-art classifiers without
having to look under the hood. We demonstrate the effectiveness of our method
on a large set of natural images by combining the subtasks of scene categorization,
object detection, multiclass image segmentation, and 3d reconstruction.
1 Introduction
The problem of "holistic scene understanding" encompasses a number of notoriously difficult computer vision tasks. Presented with an image, scene understanding involves processing the image to
answer a number of questions, including: (i) What type of scene is it (e.g., urban, rural, indoor)? (ii)
What meaningful regions compose the image? (iii) What objects are in the image? (iv) What is the
3d structure of the scene? (See Figure 1). Many of these questions are coupled?e.g., a car present
in the image indicates that the scene is likely to be urban, which in turn makes it more likely to find
road or building regions. Indeed, this idea of communicating information between tasks is not new
and dates back to some of the earliest work in computer vision (e.g., [1]). In this paper, we present
a framework that exploits such dependencies to answer questions about novel images.
While our focus will be on image understanding, the goal of combining related classifiers is relevant
to many other machine learning domains where several related tasks operate on the same (or related)
raw data and provide correlated outputs. In the area of natural language processing, for instance,
we might want to process a single document and predict the part of speech of all words, correspond
the named entities, and label the semantic roles of verbs. In the area of audio signal processing, we
might want to simultaneously do speech recognition, source separation, and speaker recognition.
In the problem of scene understanding (as in many others), state-of-the-art models already exist for
many of the tasks of interest. However, these carefully engineered models are often tricky to modify,
or even simply to re-implement from available descriptions. As a result, it is sometimes desirable to
treat these models as ?black boxes,? where we have we have access only to a very simple input/output
interface. in short, we require only the ability to train on data and produce classifications for each
data instance; specifics are given in Section 3 below.
In this paper, we present the framework of Cascaded Classification Models (CCMs), where stateof-the-art ?black box? classifiers for a set of related tasks are combined to improve performance on
Figure 1: (a) Detected Objects; (b) Classified Regions; (c) 3D Structure; (d) CCM Framework. (a)-(c) Some properties of a scene required for holistic scene understanding that we seek to unify using a cascade of classifiers. (d) The CCM framework for jointly predicting each of these label types.
some or all tasks. Specifically, the CCM framework creates multiple instantiations of each classifier,
and organizes them into tiers where models in the first tier learn in isolation, processing the data to
produce the best classifications given only the raw instance features. Lower tiers accept as input both
the features from the data instance, as well as features computed from the output classifications of
the models at the previous tier. While only demonstrated in the computer vision domain, we expect
the CCM framework to have broad applicability to many applications in machine learning.
We apply our model to the scene understanding task by combining scene categorization, object
detection, multi-class segmentation, and 3d reconstruction. We show how "black-box" classifiers
can be easily integrated into our framework. Importantly, in extensive experiments on large image
databases, we show that our combined model yields superior results on all tasks considered.
2 Related Work
A number of works in various fields aim to combine classifiers to improve final output accuracy.
These works can be divided into two broad groups. The first is the combination of classifiers that
predict the same set of random variables. Here the aim is to improve classifications by combining
the outputs of the individual models. Boosting [6], in which many weak learners are combined into a
highly accurate classifier, is one of the most common and powerful such schemes. In computer vision,
this idea has been very successfully applied to the task of face detection using the so-called Cascade
of Boosted Ensembles (CoBE) [18, 2] framework. While similar to our work in constructing a
cascade of classifiers, their motivation was computational efficiency, rather than a consideration
of contextual benefits. Tu [17] learns context cues by cascading models for pixel-level labeling.
However, the context is, again, limited to interactions between labels of the same type.
The other broad group of works that combine classifiers is aimed at using the classifiers as components in large intelligent systems. Kumar and Hebert [9], for example, develop a large MRF-based
probabilistic model linking multiclass segmentation and object detection. Such approaches have also
been used in the natural language processing literature. For example, the work of Sutton and McCallum [15] combines a parsing model with a semantic role labeling model into a unified probabilistic
framework that solves both simultaneously. While technically-correct probabilistic representations
are appealing, it is often painful to fit existing methods into a large, complex, highly interdependent network. By leveraging the idea of cascades, our method provides a simplified approach that
requires minimal tuning of the components.
The goal of holistic scene understanding dates back to the early days of computer vision, and is
highlighted in the "intrinsic images" system proposed by Barrow and Tenenbaum [1], where maps
of various image properties (depth, reflectance, color) are computed using information present in
other maps. Over the last few decades, however, researchers have instead targeted isolated computer
vision tasks, with considerable success in improving the state-of-the-art. For example, in our work,
we build on the prior work in scene categorization of Li and Perona [10], object detection of Dalal
and Triggs [4], multi-class image segmentation of Gould et al. [7], and 3d reconstruction of Saxena
et al. [13]. Recently, however, researchers have returned to the question of how one can benefit from
exploiting the dependencies between different classifiers.
Torralba et al. [16] use context to significantly boost object detection performance, and Sudderth
et al. [14] use object recognition for 3d structure estimation. In independent contemporary work,
Hoiem et al. [8] propose an innovative system for integrating the tasks of object recognition, surface
orientation estimation, and occlusion boundary detection. Like ours, their system is modular and
leverages state-of-the-art components. However, their work has a strong leaning towards 3d scene
reconstruction rather than understanding, and their algorithms contain many steps that have been
specialized for this purpose. Their training also requires intimate knowledge of the implementation
of each module, while ours is more flexible, allowing integration of many related vision tasks regardless
types, and label regions with specific classes, rather than generic properties.
3 Cascaded Classification Models
Our goal is to classify various characteristics of our data using state-of-the-art methods in a way
that allows each model to benefit from the others' expertise. We are interested in using proven
"off-the-shelf" classifiers for each subtask. As such, these classifiers will be treated as "black boxes,"
each with its own (specialized) data structures, feature sets, and inference and training algorithms.
To fit into our framework, we only require that each classifier provides a mechanism for including
additional (auxiliary) features from other modules. Many state-of-the-art models lend themselves
to the easy addition of new features. In the case of "intrinsic images" [1], the output of each component is converted into an image-sized feature map (e.g., each "pixel" contains the probability that it belongs to a car). These maps can easily be fed into the other components as additional image channels. In cases where this cannot be done, it is trivial to convert the original classifier's output to a log-odds ratio and use it along with features from the other classifiers in a simple logistic model.
A standard setup has, say, two models that predict the variables YD and YS respectively for the
same input instance I. For example, I might be an image, and YD could be the locations of all cars
in the image, while YS could be a map indicating which pixels are road. Most algorithms begin
by processing I to produce a set of features, and then learn a function that maps these features into
a predicted label (and in some cases also a confidence estimate). Cascaded Classification Models
(CCMs) is a joint classification model that shares information between tasks by linking component
classifiers in order to leverage their relatedness. Formally:
Definition 3.1: An L-tier Cascaded Classification Model (L-CCM) is a cascade of classifiers of the target labels $Y = \{Y_1, \ldots, Y_K\}^L$ ($L$ "copies" of each label), consisting of independent classifiers $f_{k,0}(\phi_k(I); \theta_{k,0}) \mapsto \hat{Y}_k^0$ and a series of conditional classifiers $f_{k,\ell}(\phi_k(I, y_{-k}^{\ell-1}); \theta_{k,\ell}) \mapsto \hat{Y}_k^{\ell}$, indexed by $\ell$, indicating the "tier" of the model, where $y_{-k}$ indicates the assignment to all labels other than $y_k$. The labels at the final tier ($\ell = L - 1$) represent the final classification outputs.
A CCM uses $L$ copies of each component model, stacked into tiers, as depicted in Figure 1(d). One copy of each model lies in the first tier, and learns with only the image features, $\phi_k(I)$, as input. Subsequent tiers of models accept a feature vector, $\phi_k(I, y_{-k}^{\ell-1})$, containing the original image features and additional features computed from the outputs of models in the preceding tier. Given a novel test instance, classification is performed by predicting the most likely (MAP) assignment to each of the variables in the final tier.
We learn our CCM in a feed-forward manner. That is, we begin from the top level, training the independent ($f_{k,0}$) classifiers first, in order to maximize the classification performance on the training data. Because we assume a learning interface into each model, we simply supply the subset of data that has ground-truth labels for that model to its learning function. For learning each component $k$ in each subsequent level $\ell$ of the CCM, we first perform classification using the $(\ell-1)$-tier CCM that has already been trained. From these output assignments, each classifier can compute a new set of features and perform learning using the algorithm of choice for that classifier; a sketch of this procedure appears below.
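To make the training loop concrete, here is a minimal sketch in Python (our own illustration; the classifier objects, their fit interface, and the context_features helper that summarizes the previous tier's outputs are all assumptions, not the authors' implementation):

def train_ccm(n_tiers, components, data):
    """Feed-forward CCM training over possibly disjointly labeled data.

    components: dict mapping task name -> factory for a black-box
                classifier exposing fit(features, labels).
    data:       instances with base_features(task) and a labels dict
                that may be missing entries for some tasks.
    """
    ccm = []                                      # ccm[l][task] = classifier
    for l in range(n_tiers):
        tier = {}
        for task, make_classifier in components.items():
            feats, labels = [], []
            for x in data:
                if task not in x.labels:          # disjoint labelings allowed
                    continue
                f = list(x.base_features(task))
                if l > 0:                         # append tier l-1 outputs
                    f += context_features(ccm[l - 1], x, exclude=task)
                feats.append(f)
                labels.append(x.labels[task])
            clf = make_classifier()
            clf.fit(feats, labels)
            tier[task] = clf
        ccm.append(tier)
    return ccm

At test time, classification runs the tiers in the same order and reads off the final tier's MAP assignments.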
For learning a CCM, we assume that we have a dataset of fully or partially annotated instances. It
is not necessary for every instance to have groundtruth labels for every component, and our method
works even when the training sets are disjoint. This is appealing since the prevalence of large,
volunteer-annotated datasets (e.g., the LabelMe dataset [12] in vision or the Penn Treebank [11] in
language processing) is likely to provide large amounts of heterogeneously labeled data.
4 CCM for Holistic Scene Understanding
Our scene understanding model uses a CCM to combine various subsets of four computer vision
tasks: scene categorization, multi-class image segmentation, object detection, and 3d reconstruction.
We first introduce the notation for the target labels and then briefly describe the specifics of each
component. Consider an image I. Our scene categorization classifier produces a scene label C from
one of a small number of classes. Our multi-class segmentation model produces a class label Sj
Figure 2: (left, middle) Two example features used by the "context"-aware object detector. (right) Relative location maps showing the relative location of regions (columns) to objects (rows). Each map shows the prevalence of the region relative to the center of the object. For example, the top row shows that cars are likely to have road beneath and sky above, while the bottom rows show that cows and sheep are often surrounded by grass.
for each of a predefined set of regions j in the image. The base object detectors produce a set of
scored windows (Wc,i ) that potentially contain an object of type c. We attach a label Dc,i to each
window that indicates whether or not the window contains the object. Our last component module
is monocular 3d reconstruction, which produces a depth Zi for every pixel i in the image.
Scene Categorization Our scene categorization module is a simple multi-class logistic model that classifies the entire scene into one of a small number of classes. The base model uses a 13-dimensional feature vector $\phi(I)$ with elements based on the mean and variance of the RGB and YCrCb color channels over the entire image, plus a bias term. In the conditional model, we include features that indicate the relative proportions of each region label (a histogram of $S_j$ values) in the image, plus counts of the number of objects of each type detected, producing a final feature vector of length 26; a sketch of this construction follows.
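As an illustration of how the 26 dimensions can arise (13 base features, a 7-bin region-label histogram, and 6 detection counts; the exact composition is our reading of the text, and the helper names are our own):

import numpy as np

REGION_CLASSES = ["tree", "road", "grass", "water", "sky", "building", "foreground"]
OBJECT_CLASSES = ["car", "pedestrian", "motorcycle", "boat", "sheep", "cow"]

def base_features(rgb, ycrcb):
    """13 base features: mean and variance of each color channel, plus a bias."""
    stats = [f(img[..., c]) for img in (rgb, ycrcb)
             for c in range(3) for f in (np.mean, np.var)]
    return np.array(stats + [1.0])               # 12 channel stats + bias = 13

def conditional_features(rgb, ycrcb, region_labels, detected_objects):
    """13 base + 7 region-histogram + 6 detection-count features = 26."""
    hist = np.array([np.mean(region_labels == c)
                     for c in range(len(REGION_CLASSES))])
    counts = np.array([detected_objects.count(c) for c in OBJECT_CLASSES],
                      dtype=float)
    return np.concatenate([base_features(rgb, ycrcb), hist, counts])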
Multiclass Image Segmentation The segmentation module aims to assign a label to each pixel. We
base our model on the work of Gould et al. [7], who make use of relative location: the preference for
classes to be arranged in a consistent configuration with respect to one another (e.g., cars are often
found above roads). Each image is pre-partitioned into a set {S1 , . . . , SN } of regions (superpixels)
and the pixels are labeled by assigning a class to each region Sj . The method employs a pairwise
conditional Markov random field (CRF) constructed over the superpixels with node potentials based
on appearance features and edge potentials encoding a preference for smoothness.
In our work we wish to model the relative location between detected objects and region labels. This
has the advantage of being able to encode scale, which was not possible in [7]. The right side of
Figure 2 shows the relative location maps learned by our model. These maps model the spatial
location of all classes given the location and scale of detected objects. Because the detection model
provides probabilities for each detection, we actually use the relative location maps multiplied by
the probability that each detection is a true detection. Preliminary results showed an improvement
in using these soft detections over hard (thresholded) detections.
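A minimal sketch of how such soft relative-location features might be accumulated (the map conventions and the place_map helper, which would shift and rescale a learned map to a detection's center and scale, are our own hypothetical illustration):

import numpy as np

def soft_relative_location(detections, rel_maps, image_shape, n_classes):
    """Accumulate detection-weighted relative-location votes per region class.

    detections: list of (object_class, prob, center_xy, scale) tuples
    rel_maps:   rel_maps[obj] is an (n_classes, H, W) map centered on the object
    """
    votes = np.zeros((n_classes,) + image_shape)
    for obj, prob, center, scale in detections:
        # place_map: hypothetical helper that positions the learned map
        m = place_map(rel_maps[obj], center, scale, image_shape)
        votes += prob * m        # soft detections: weight votes by confidence
    return votes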
Object Detectors Our detection module builds on the HOG detector of Dalal and Triggs [4]. For
each class, the HOG detector is trained on a set of images disjoint from our datasets below. This
detector is then applied to all images in our dataset with a low threshold that produces an overdetection. For each image I, and each object class c, we typically find 10-100 candidate detection
windows Wc,i . Our independent detector model learns a logistic model over a small feature vector
$\phi_{c,i}$ that can be extracted directly from the candidate window.
Our conditional classifier seeks to improve the accuracy of the HOG detector by using the image segmentation (denoted by $S_j$ for each region $j$), the 3d reconstruction of the scene, with depths ($Z_j$) for each region, and the categorization of the scene as a whole ($C$). Thus, the outputs from the other modules and the image are combined into a feature vector $\phi_k(I, C, S, Z)$. A sampling of the features used is shown in Figure 2. This augmented feature vector is used in a logistic model as in the independent case. Both the independent and context-aware logistic models are regularized with a small ridge term to prevent overfitting.
Reconstruction Module Our reconstruction module is based on the work of Saxena et al. [13]. Our
Markov Random Field (MRF) approach models the 3d reconstruction (i.e., depths Z at each point
in the image) as a function of the image features and also models the relations between depths at
Table 1: Numerical evaluation of our various training regimes for the DS1 dataset. We show average precision (AP) for the six classes, as well as the mean. We also show segmentation and scene categorization accuracy.

                Car    Pedes.  Bike   Boat   Sheep  Cow    Mean   Segment  Category
  HOG           0.39   0.29    0.13   0.11   0.19   0.28   0.23   N/A      N/A
  Independent   0.55   0.53    0.57   0.31   0.39   0.49   0.47   72.1%    70.6%
  2-CCM         0.58   0.55    0.65   0.48   0.45   0.53   0.54   75.0%    77.3%
  5-CCM         0.59   0.56    0.63   0.47   0.40   0.54   0.53   75.8%    76.8%
  Ground        0.49   0.53    0.62   0.35   0.40   0.51   0.48   73.6%    69.9%
  Ideal Input   0.63   0.64    0.56   0.65   0.45   0.56   0.58   78.4%    86.7%
various points in the image. For example, unless there is occlusion, it is more likely that two nearby
regions in the image would have similar depths.
More formally, our variables are continuous, i.e., at a point i, the depth Zi ? R. Our baseline model
consists of two types of terms. The first terms model the depth at each point as a linear function
of the local image features, and the second type models relationships between neighboring points,
encouraging smoothness. Our conditional model includes an additional set of terms that models the
depth at each point as a function of the features computed from an image segmentation S in the
neighborhood of a point. By including this third term, our model benefits from the segmentation
outputs in various ways. For example, a classification of grass implies a horizontal surface, and a
classification of sky correlates with distant image points. While detection outputs might also help
reconstruction, we found that most of the signal was present in the segmentation maps, and therefore
dropped the detection features for simplicity.
5 Experiments
We perform experiments on two subsets of images. The first subset DS1 contains 422 fully-labeled
images of urban and rural outdoor scenes. Each image is assigned a category (urban, rural, water,
other). We hand label each pixel as belonging to one of: tree, road, grass, water, sky, building
and foreground. The foreground class captures detectable objects, and a void class (not used during
training or evaluation) allows for the small number of regions not fitting into one of these classes
(e.g., mountain) to be ignored. This is standard practice for the pixel-labeling task (e.g., see [3]). We
also annotate the location of six different object categories (car, pedestrian, motorcycle, boat, sheep,
and cow) by drawing a tight bounding box around each object. We use this dataset to demonstrate the
combining of three vision tasks: object detection, multi-class segmentation, and scene categorization
using the models described above.
Our much larger second dataset DS2 was assembled by combining 362 images from the DS1 dataset
(including either the segmentation or detection labels, but not both), 296 images from the Microsoft
Research Segmentation dataset [3] (labeled with segments), 557 images from the PASCAL VOC
2005 and 2006 challenges [5] (labeled with objects), and 534 images with ground truth depth information. This results in 1749 images with disjoint labelings (no image contains groundtruth labels for more than one task). Combining these datasets results in 534 reconstruction images with
groundtruth depths obtained by laser range-finder (split into 400 training and 134 test), 596 images
with groundtruth detections (same 6 classes as above, split into 297 train and 299 test), and 615 with
groundtruth segmentations (300 train and 315 test). This dataset demonstrates the typical situation
in learning related tasks whereby it is difficult to obtain large fully-labeled datasets. We use this
dataset to demonstrate the power of our method in leveraging the data from these three tasks to
improve performance.
5.1 DS1 Dataset
Experiments with the DS1 dataset were performed using 5-fold cross validation, and we report
the mean performance results across folds. We compare five training/testing regimes (see Table 1).
Independent learns parameters on a 0-Tier (independent) CCM, where no information is exchanged
between tasks. We compare two levels of complexity for our method, a 2-CCM and a 5-CCM
to test how the depth of the cascade affects performance. The last two training/testing regimes
involve using groundtruth information at every stage for training and for both training and testing,
respectively. Groundtruth trains a 5-CCM using groundtruth inputs for the feature construction
(i.e., as if each tier received perfect inputs from above), but is evaluated with real inputs. The Ideal
Figure 3: Results for the DS1 dataset. Panels (a) Cars, (b) Pedestrians, (c) Motorbikes, (e) Boats, (f) Sheep, and (g) Cows show precision-recall curves for the six object classes that we consider, while (d) Categorization shows our accuracy on the scene categorization task and (h) Segmentation shows our accuracy in labeling regions into one of seven classes.
Input experiment uses the Groundtruth model and also uses the groundtruth input to each tier at
testing time. We could do this since, for this dataset, we had access to fully labeled groundtruth.
Obviously this is not a legitimate operating mode, but does provide an interesting upper bound on
what we might hope to achieve.
To quantitatively evaluate our method, we consider metrics appropriate to the tasks in question.
For scene categorization, we report an overall accuracy for assigning the correct scene label to an
image. For segmentation, we compute a per-segment accuracy, where each segment is assigned the
groundtruth label that occurs for the majority of pixels in the region. For detection, we consider a
particular detection correct if the overlap score is larger than 0.2 (overlap score equals the area of
intersection divided by the area of union between the detected bounding box and the groundtruth).
We plot precision-recall (PR) curves for detections, and report the average precision of these curves.
AP is a more stable version of the area under the PR curve.
Our numerical results are shown in Table 1, and the corresponding graphs are given in Figure 3. The
PR curves compare the HOG detector results to our Independent results and to our 2-CCM results.
It is interesting to note that a large gain was achieved by adding the independent features to the
object detectors. While the HOG score looks at only the pixels inside the target window, the other
features take into account the size and location of the window, allowing our model to capture the
fact that foreground objects tend to occur in the middle of the image and at a relatively small range
of scales. On top of this, we were able to gain an additional benefit through the use of context in the
CCM framework. For the categorization task, we gained 7% using the CCM framework, and for
segmentation, CCM afforded a 3% improvement in accuracy. Furthermore, for this task, running an
additional three tiers, for a 5-CCM, produced an additional 1% improvement.
Interestingly, the Groundtruth method performs little better than Independent for these three tasks.
This shows that it is better to train the models using input features that are closer to the features it
will see at test time. In this way, the downstream tiers can learn to ignore signals that the upstream
tiers are bad at capturing, or even take advantage of consistent upstream bias. Also, the Ideal Input
results show that CCMs have made significant progress towards the best we can hope for from these
models.
5.2 DS2 Dataset
For this dataset we combine the three subtasks of reconstruction, segmentation, and object detection. Furthermore, as described above, the labels for our training data are disjoint. We trained an
Independent model and a 2-CCM on this data. Quantitatively, 2-CCM outperformed Independent
on segmentation by 2% (75% vs. 73% accuracy), on detection by 0.02 (0.33 vs. 0.31 mean average
precision), and on depth reconstruction by 1.3 meters (15.4 vs. 16.7 root mean squared error).
Figure 4: (top two rows) three cases where CCM improved results for all tasks. In the first, for instance, the
presence of grass allows the CCM to remove the boat detections. The next four rows show four examples
where detections are improved and four examples where segmentations are improved.
Figure 4 shows example outputs from each component. The first three (top two rows) show images
where all components improved over the independent model. In the top left our detectors removed
some false boat detections which were out of context and determined that the watery appearance
of the bottom of the car was actually foreground. Also by providing a sky segment, our method
allowed the 3d reconstruction model to infer that those pixels must be very distant (red). The next
two examples show similar improvement for detections of boats and water.
The remaining examples show how separate tasks improve by using information from the others. In
each example we show results from the independent model for the task in question, the independent
contextual task and the 2-CCM output. The first four examples show that our method was able
to make correct detections whereas the independent model could not. The last examples show
improvements in multi-class image segmentation.
6 Discussion
In this paper, we have presented the Cascaded Classification Models (CCM) method for combining
a collection of state-of-the-art classifiers toward improving the results of each. We demonstrated
our method on the task of holistic scene understanding by combining scene categorization, object
detection, multi-class segmentation and depth reconstruction, and improving on all. Our results are
consistent with other contemporary research, including the work of Hoiem et al. [8], which uses
different components and a smaller number of object classes.
Importantly, our framework is very general and can be applied to a number of machine learning
domains. This result provides hope that we can improve by combining our complex models in
a simple way. The simplicity of our method is one of its most appealing aspects. Cascades of
classifiers have been used extensively within a particular task, and our results suggest that this should
generalize to work between tasks. In addition, we showed that CCMs can benefit from the cascade
even with disjoint training data, e.g., no images containing labels for more than one subtask.
In our experiments, we passed relatively few features between the tasks. Due to the homogeneity of
our data, many of the features carried the same signal (e.g., a high probability of an ocean scene is a
surrogate for a large portion of the image containing water regions). For larger, more heterogeneous
datasets, including more features may improve performance. In addition, larger datasets will help
prevent the overfitting that we experienced when trying to include a large number of features.
It is an open question how deep a CCM is appropriate in a given scenario. Overfitting is anticipated
for very deep cascades. Furthermore, because of limits in the context signal, we cannot expect to
get unlimited improvements. Further exploration of cases where this combination is appropriate is
an important future direction. Another exciting avenue is the idea of feeding back information from
the later classifiers to the earlier ones. Intuitively, a later classifier might encourage earlier ones to
focus their effort on fixing certain error modes, or allow the earlier classifiers to ignore mistakes that
do not hurt ?downstream.? This also should allow components with little training data to optimize
their results to be most beneficial to other modules, while worrying less about their own task.
Acknowledgements This work was supported by the DARPA Transfer Learning program under contract number FA8750-05-2-0249 and the Multidisciplinary University Research Initiative (MURI),
contract number N000140710747, managed by the Office of Naval Research.
References
[1] H. G. Barrow and J.M. Tenenbaum. Recovering intrinsic scene characteristics from images. CVS, 1978.
[2] S.C. Brubaker, J. Wu, J. Sun, M.D. Mullin, and J.M. Rehg. On the design of cascades of boosted ensembles for face detection. In Tech report GIT-GVU-05-28, 2005.
[3] A. Criminisi. Microsoft research cambridge object recognition image database (version 1.0 and 2.0).,
2004. Available Online: http://research.microsoft.com/vision/cambridge/recognition.
[4] N. Dalal and B. Triggs. Histograms of oriented gradients for human detection. In CVPR, 2005.
[5] M. Everingham et al. The 2005 pascal visual object classes challenge. In MLCW, 2005.
[6] Y. Freund and R.E. Schapire. A decision-theoretic generalization of on-line learning and an application to boosting. In European Conference on Computational Learning Theory, pages 23-37, 1995.
[7] S. Gould, J. Rodgers, D. Cohen, G. Elidan, and D. Koller. Multi-class segmentation with relative location
prior. IJCV, 2008.
[8] D. Hoiem, A.A. Efros, and M. Hebert. Closing the loop on scene interpretation, 2008.
[9] S. Kumar and M. Hebert. A hier. field framework for unified context-based classification. In ICCV, 2005.
[10] F. Li and P. Perona. A bayesian hier. model for learning natural scene categories. In CVPR, 2005.
[11] M. P. Marcus, M.A. Marcinkiewicz, and B. Santorini. Building a large annotated corpus of english: the
penn treebank. Comput. Linguist., 19(2), 1993.
[12] B.C. Russell, A.B. Torralba, K.P. Murphy, and W.T. Freeman. Labelme: A database and web-based tool
for image annotation. IJCV, 2008.
[13] A. Saxena, M. Sun, and A.Y. Ng. Learning 3-d scene structure from a single still image. In PAMI, 2008.
[14] E.B. Sudderth, A. Torralba, W.T. Freeman, and A.S. Willsky. Depth from familiar objects: A hierarchical
model for 3d scenes. In CVPR, 2006.
[15] C. Sutton and A. McCallum. Joint parsing and semantic role labeling. In CoNLL, 2005.
[16] Antonio B. Torralba, Kevin P. Murphy, and William T. Freeman. Contextual models for object detection
using boosted random fields. In NIPS, 2004.
[17] Z. Tu. Auto-context and its application to high-level vision tasks. In CVPR, 2008.
[18] P. Viola and M.J. Jones. Robust real-time object detection. IJCV, 2001.
| 3472 |@word middle:2 version:2 briefly:1 dalal:3 proportion:1 heterogeneously:1 triggs:3 everingham:1 open:1 seek:2 rgb:1 git:1 configuration:1 contains:4 series:1 score:3 hoiem:3 document:1 ours:2 interestingly:1 fa8750:1 existing:1 contextual:3 com:1 tackling:1 assigning:2 must:1 parsing:2 subsequent:2 numerical:2 distant:2 remove:1 plot:1 ashutosh:1 grass:4 v:3 cue:1 mccallum:2 short:1 provides:4 boosting:2 node:1 location:12 preference:2 daphne:1 five:1 along:1 constructed:1 supply:1 initiative:1 consists:1 ijcv:3 compose:1 combine:5 fitting:1 inside:1 introduce:1 manner:1 pairwise:1 indeed:1 themselves:1 multi:9 freeman:3 voc:1 encouraging:1 little:2 window:7 considering:1 begin:2 classifies:1 notation:1 what:5 mountain:1 unified:2 sky:4 every:4 saxena:4 classifier:33 demonstrates:1 tricky:1 penn:2 producing:1 engineering:1 local:1 modify:1 treat:1 dropped:1 limit:1 mistake:1 sutton:2 encoding:1 yd:2 ap:2 black:5 might:6 plus:2 pami:1 limited:2 range:2 hood:1 testing:4 practice:1 union:1 implement:1 prevalence:2 area:5 cascade:11 significantly:1 ccm:31 word:1 road:5 integrating:1 confidence:1 pre:1 suggest:1 get:1 cannot:2 context:10 optimize:1 map:13 demonstrated:2 center:1 rural:3 regardless:1 painful:1 unify:1 simplicity:2 preceeding:1 communicating:1 legitimate:1 cascading:1 importantly:2 rehg:1 hurt:1 target:3 construction:1 us:6 element:1 recognition:6 muri:1 database:3 labeled:7 bottom:2 role:3 module:10 electrical:1 capture:2 region:19 sun:2 russell:1 contemporary:2 removed:1 ccms:4 yk:2 subtask:2 complexity:1 trained:3 solving:1 segment:5 tight:1 technically:1 creates:1 efficiency:1 learner:1 easily:2 joint:2 darpa:1 various:7 train:5 stacked:1 laser:1 describe:1 detected:5 labeling:6 kevin:1 neighborhood:1 modular:1 stanford:6 solve:1 larger:4 say:1 drawing:1 cvpr:4 ability:1 jointly:2 highlighted:1 final:5 online:1 obviously:1 advantage:2 reconstruction:16 propose:1 interaction:1 tu:2 relevant:1 combining:11 beneath:1 date:2 neighboring:1 motorcycle:1 holistic:6 loop:1 achieve:1 description:1 exploiting:1 produce:8 categorization:15 perfect:1 object:39 help:3 develop:2 fixing:1 received:1 progress:2 strong:1 solves:1 auxiliary:1 c:1 recovering:1 indicate:1 implies:1 predicted:1 involves:1 direction:1 correct:4 annotated:3 criminisi:1 exploration:1 human:1 engineered:1 require:2 feeding:1 assign:1 generalization:1 marcinkiewicz:1 preliminary:1 around:1 considered:1 ground:3 great:1 predict:3 efros:1 early:1 torralba:4 purpose:1 estimation:2 outperformed:1 label:25 successfully:1 tool:1 hope:3 aim:3 rather:3 shelf:1 boosted:3 broader:1 office:1 earliest:1 encode:1 focus:2 naval:1 improvement:6 indicates:3 superpixels:2 tech:1 baseline:1 inference:1 integrated:1 entire:2 accept:1 typically:1 perona:2 koller:3 relation:1 labelings:1 interested:1 pixel:11 overall:1 classification:19 orientation:1 flexible:1 stateof:1 denoted:1 pascal:2 art:8 integration:1 spatial:1 field:5 aware:2 equal:1 having:1 ng:1 sampling:1 ike:1 broad:3 look:2 jones:1 anticipated:1 foreground:4 future:1 others:3 report:4 intelligent:1 quantitatively:2 few:3 employ:1 oriented:1 simultaneously:3 homogeneity:1 individual:1 murphy:2 familiar:1 occlusion:2 consisting:1 microsoft:3 william:1 detection:37 interest:1 highly:2 evaluation:2 sheep:3 predefined:1 accurate:1 edge:1 closer:1 encourage:1 necessary:1 unless:1 indexed:1 iv:1 tree:1 exchanged:1 re:1 isolated:1 minimal:1 mullin:1 instance:9 classify:1 column:1 soft:1 earlier:3 ar:1 assignment:3 applicability:1 subset:4 dependency:2 answer:2 combined:4 
probabilistic:3 off:1 contract:2 again:1 squared:1 containing:3 li:2 account:1 converted:1 potential:2 includes:1 pedestrian:2 performed:2 root:1 later:2 red:1 portion:1 annotation:1 accuracy:9 variance:1 characteristic:2 who:1 ensemble:2 correspond:1 yield:1 generalize:1 weak:1 raw:2 bayesian:1 produced:1 notoriously:1 researcher:3 expertise:1 classified:1 detector:12 definition:1 gain:2 dataset:16 recall:2 color:2 car:8 improves:1 knowledge:1 segmentation:26 sophisticated:1 carefully:1 back:3 actually:2 feed:1 day:1 improved:5 arranged:1 done:1 box:7 evaluated:1 furthermore:4 stage:1 hand:1 horizontal:1 web:1 hier:2 logistic:4 mode:2 multidisciplinary:1 building:3 contain:2 true:1 managed:1 assigned:2 semantic:3 during:1 speaker:1 whereby:1 trying:1 crf:1 demonstrate:3 ridge:1 theoretic:1 performs:1 interface:3 reasoning:1 image:62 consideration:1 novel:2 recently:2 superior:1 common:1 specialized:2 cohen:1 linking:2 rodgers:1 interpretation:1 significant:1 cambridge:2 cv:1 smoothness:2 tuning:1 fk:3 closing:1 language:3 had:1 access:2 stable:1 surface:2 operating:1 base:3 own:3 showed:2 belongs:1 scenario:1 certain:1 success:1 geremy:1 seen:1 additional:7 gaheitz:1 maximize:1 elidan:1 signal:5 stephen:1 ii:1 multiple:1 desirable:1 infer:1 cross:1 divided:2 y:2 finder:1 mrf:2 heterogeneous:1 vision:13 metric:1 volunteer:1 histogram:2 sometimes:1 represent:1 annotate:1 achieved:1 addition:3 want:2 whereas:1 void:1 sudderth:2 source:1 operate:1 tend:1 leveraging:2 effectiveness:1 odds:1 leverage:2 ideal:3 presence:1 iii:1 easy:1 split:2 affect:1 isolation:2 fit:2 zi:2 cow:3 idea:4 avenue:1 multiclass:3 whether:1 six:3 passed:1 effort:1 returned:2 speech:2 linguist:1 deep:2 ignored:1 antonio:1 aimed:1 involve:1 amount:1 tenenbaum:2 extensively:1 category:4 http:1 schapire:1 exist:1 zj:1 disjoint:5 per:1 ds1:6 group:2 four:5 threshold:1 urban:4 prevent:2 thresholded:1 graph:1 worrying:1 downstream:2 convert:1 powerful:1 named:1 groundtruth:14 wu:1 separation:1 decision:1 conll:1 asaxena:1 capturing:1 bound:1 fold:2 occur:1 scene:40 afforded:1 unlimited:1 nearby:1 wc:2 aspect:1 innovative:1 kumar:2 relatively:2 gould:4 department:2 combination:2 belonging:1 across:1 smaller:1 beneficial:1 partitioned:1 appealing:3 s1:1 intuitively:1 iccv:1 pr:3 tier:19 monocular:1 turn:1 count:1 mechanism:1 detectable:1 fed:1 available:2 gvu:1 multiplied:1 apply:1 hierarchical:1 generic:1 appropriate:3 ocean:1 motorbike:1 original:3 top:6 running:1 include:2 remaining:1 exploit:1 reflectance:1 build:2 question:7 already:2 occurs:1 surrogate:1 ow:1 gradient:1 separate:1 entity:1 majority:1 seven:1 trivial:1 water:4 toward:1 marcus:1 willsky:1 length:1 relationship:1 ratio:1 providing:1 difficult:3 setup:1 potentially:1 hog:7 ds2:2 implementation:2 design:1 perform:3 allowing:3 upper:1 datasets:6 markov:2 barrow:2 logistics:1 situation:1 viola:1 santorini:1 y1:1 dc:1 brubaker:1 verb:1 subtasks:2 required:1 extensive:1 sgould:1 accepts:1 learned:1 boost:1 nip:1 assembled:1 able:3 below:2 indoor:1 regime:3 challenge:2 encompasses:1 program:1 including:7 lend:1 power:1 overlap:2 natural:5 treated:1 attach:1 predicting:2 cascaded:7 regularized:1 boat:5 improve:8 carried:1 coupled:2 auto:1 sn:1 prior:2 understanding:12 geometric:1 literature:1 interdependent:1 meter:1 acknowledgement:1 relative:9 freund:1 fully:5 expect:2 interesting:2 proven:1 validation:1 consistent:3 exciting:1 treebank:2 leaning:1 share:1 surrounded:1 row:6 supported:1 last:5 copy:3 hebert:3 english:1 bias:2 side:1 understand:1 allow:2 face:2 
benefit:6 boundary:1 heitz:1 depth:15 curve:5 forward:1 made:1 collection:1 simplified:1 correlate:1 sj:4 ignore:2 relatedness:1 overfitting:3 instantiation:2 corpus:1 continuous:1 decade:2 table:3 n000140710747:1 learn:4 channel:2 transfer:1 ca:2 robust:1 improving:3 complex:2 upstream:2 constructing:1 domain:3 european:1 motivation:1 whole:1 scored:1 oat:1 bounding:2 repeated:1 allowed:1 augmented:1 precision:5 sub:1 experienced:1 wish:1 comput:1 lie:1 candidate:2 intimate:1 outdoor:1 third:1 learns:4 bad:1 specific:3 showing:1 intrinsic:3 false:1 adding:1 gained:1 depicted:1 intersection:1 simply:2 likely:6 appearance:2 visual:1 partially:1 truth:1 extracted:1 conditional:5 goal:4 targeted:1 sized:1 towards:2 labelme:2 considerable:1 hard:1 specifically:1 typical:1 determined:1 called:2 meaningful:1 organizes:1 indicating:2 formally:2 evaluate:1 audio:1 correlated:1 |
2,727 | 3,473 | QUIC-SVD: Fast SVD Using Cosine Trees
Michael P. Holmes, Alexander G. Gray and Charles Lee Isbell, Jr.
College of Computing
Georgia Tech
Atlanta, GA 30327
{mph, agray, isbell}@cc.gatech.edu
Abstract
The Singular Value Decomposition is a key operation in many machine learning
methods. Its computational cost, however, makes it unscalable and impractical
for applications involving large datasets or real-time responsiveness, which are
becoming increasingly common. We present a new method, QUIC-SVD, for fast
approximation of the whole-matrix SVD based on a new sampling mechanism
called the cosine tree. Our empirical tests show speedups of several orders of
magnitude over exact SVD. Such scalability should enable QUIC-SVD to accelerate and enable a wide array of SVD-based methods and applications.
1 Introduction
The Singular Value Decomposition (SVD) is a fundamental linear algebraic operation whose abundant useful properties have placed it at the computational center of many methods in machine learning and related fields. Principal component analysis (PCA) and its kernel and nonlinear variants
are prominent examples, and countless other instances are found in manifold and metric learning,
clustering, natural language processing/search, collaborative filtering, bioinformatics and more.
Notwithstanding the utility of the SVD, it is critically bottlenecked by a computational complexity
that renders it impractical on massive datasets. Yet massive datasets are increasingly common in
applications, many of which require real-time responsiveness. Such applications could use SVDbased methods more liberally if the SVD were not so slow to compute. We present a new method,
QUIC-SVD, for fast, sample-based SVD approximation with automatic relative error control. This
algorithm is based on a new type of data partitioning tree, the cosine tree, that shows excellent ability
to home in on the subspaces needed for good SVD approximation. We demonstrate several-orderof-magnitude speedups on medium-sized datasets, and verify that approximation error is properly
controlled. Based on these results, QUIC-SVD seems able to help address the scale of modern
problems and datasets, with the potential to benefit a wide array of methods and applications.
2 Background
For $A \in \mathbb{R}^{m \times n}$, we write $A^{(i)}$ for the $i$th row of $A$ and $A_{(j)}$ for the $j$th column. We use $O^{m \times n}$ to represent the subset of $\mathbb{R}^{m \times n}$ whose columns are orthonormal. Since the columns of $V \in O^{m \times n}$ are an orthonormal basis, we sometimes use expressions such as "the subspace $V$" to refer to the subspace spanned by the columns of $V$. Throughout this paper we assume $m \ge n$, such that
sampling rows gives bigger speedup than sampling columns. This is no loss of generality, since
whenever m < n we can perform SVD on the transpose, then swap U and V to get the SVD of
the original matrix. Alternatively, row-sampling-based methods have analogous column-sampling
versions that can be used in place of transposition; we leave this implicit and develop only the
row-sampling version of our algorithm.
Algorithm 1 Optimal approximate SVD within a row subspace V̂.
EXTRACTSVD
Input: target matrix A ∈ ℝ^{m×n}, subspace basis V̂ ∈ O^{n×k}
Output: U, Σ, V, the SVD of the best approximation to A within the subspace spanned by V̂'s columns
1. Compute AV̂, then (AV̂)^T AV̂ and its SVD: U′Σ′V′^T = (AV̂)^T AV̂
2. Let V = V̂V′, Σ = (Σ′)^{1/2}, and U = (AV̂)V′Σ^{−1}
3. Return U, Σ, V
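To make the extraction step concrete, here is a minimal numpy sketch of Algorithm 1. The function name and the use of numpy are our own illustration rather than the authors' implementation; it assumes V̂ has orthonormal columns and that all recovered singular values are nonzero.

import numpy as np

def extract_svd(A, V_hat):
    # Sketch of EXTRACTSVD: best approximate SVD of A restricted to the row
    # subspace spanned by the orthonormal columns of V_hat (shape n x k).
    AV = A @ V_hat                           # project A's rows into the subspace (m x k)
    Up, Sp, Vpt = np.linalg.svd(AV.T @ AV)   # U' S' V'^T = (A V_hat)^T (A V_hat)
    S = np.sqrt(Sp)                          # Sigma = (Sigma')^(1/2)
    V = V_hat @ Vpt.T                        # V = V_hat V'
    U = (AV @ Vpt.T) / S                     # U = (A V_hat) V' Sigma^(-1); assumes S > 0
    return U, S, V

Property 2 of Lemma 1 below can be checked numerically: U @ np.diag(S) @ V.T matches A @ V_hat @ V_hat.T up to floating-point error.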
The singular value decomposition is defined as follows:
Definition 1. Let A be an m × n real matrix of rank ρ. Then there exists a factorization of the form
A = UΣV^T,   (1)
where U and V each have orthonormal columns and are of size m × ρ and n × ρ, respectively, and Σ is diagonal with entries σ_1 ≥ σ_2 ≥ … ≥ σ_ρ > 0.
Equivalently, we can write the SVD as a weighted sum of rank-one outer products: A = ∑_{i=1}^{ρ} σ_i u_i v_i^T, where u_i and v_i represent the ith columns of U and V. The columns u_i and v_i are referred to as the left and right singular vectors, while the weights σ_i are the singular values.
Though it is sometimes overkill, the SVD can be used to solve essentially any problem in numerical
linear algebra. Instances of such problems abound in machine learning.
Given m ≥ n, the exact SVD has O(mn²) runtime (O(n³) for square matrices). This is highly
unscalable, rendering exact SVD impractical for large datasets. However, it is often the case that
good approximations can be found using subsets of the rows or columns. Of significant interest are
low-rank approximations to a matrix. The optimal k-rank approximation, in the sense of minimizing the squared error ‖A − Â‖²_F, is the k-rank truncation of the SVD:
A_k = ∑_{i=1}^{k} σ_i u_i v_i^T = U_k Σ_k V_k^T.   (2)
A_k is the projection of A's rows onto the subspace spanned by the top k right singular vectors, i.e., A_k = A V_k V_k^T. The optimality of A_k implies that the columns of V_k span the subspace of dimension at most k in which the squared error of A's row-wise projection is minimized. This leads us to a formulation of SVD approximation in which we seek to find a subspace in which A's projection has
sufficiently low error, then perform the SVD of A in that subspace. If the subspace is substantially
lower in rank/dimension than A, the SVD of the projection can be computed significantly faster
than the SVD of the original A (quadratically so, as we will have decreased the n in O(mn²)). An important procedure we will require is the extraction of the best approximate SVD within a given subspace V̂. Algorithm 1 describes this process; portions of this idea appeared in [1] and [2], but
without enumeration of its properties. We state some of the key properties as a lemma.
Lemma 1. Given a target matrix A and a row subspace basis stored in the columns of V̂, EXTRACTSVD has the following properties:
1. Returns a full SVD, meaning U and V with orthonormal columns, and Σ diagonal.
2. UΣV^T = AV̂V̂^T, i.e., the extracted SVD reconstructs exactly to the projection of A's rows onto the subspace spanned by V̂.
3. UΣV^T minimizes squared-error reconstruction of A among all SVDs whose rows are restricted to the span of V̂.
We omit the fairly straightforward proof. The runtime of the procedure is O(kmn), where k is the
rank of Vb . As this SVD extraction will constitute the last and most expensive step of our algorithm,
we therefore require a subspace discovery method that finds a subspace of sufficient quality with as
low a rank k as possible. This motivates the essential idea of our approach, which is to leverage the
Table 1: Distinctions between whole-matrix SVD approximation and LRMA.
Whole-Matrix SVD Approximation              | Low-Rank Matrix Approximation
True SVD: U, Σ, and V                       | Â only, or unaligned V̂ & Σ̂
Addresses full-rank matrix                  | Fixed low-rank k
Full-rank relative error bound              | k-rank error bound, additive or relative
Table 2: Distinctions between subspace construction in QUIC-SVD and previous LRMA methods.
QUIC-SVD                                        | Previous LRMA Methods
Iterative buildup, fast empirical error control | One-off computation, loose error bound
Adaptive sample size minimization               | Fixed a priori sample size (loose)
Cosine tree sampling                            | Various sampling schemes
geometric structure of a matrix to efficiently derive compact (i.e., minimal-rank) subspaces in which
to carry out the approximate SVD.
Previous Work. A recent vein of work in the theory and algorithms community has focused on
using sampling to solve the problem of low-rank matrix approximation (LRMA). The user specifies
a desired low rank k, and the algorithms try to output something close to the optimal k-rank approximation. This problem is different from the whole-matrix SVD approximation we address, but a close
relationship allows us to draw on some of the LRMA ideas. Table 1 highlights the distinctions between whole-matrix SVD approximation and LRMA. Table 2 summarizes the differences between
our algorithmic approach and the more theoretically-oriented approaches taken in the LRMA work.
Each LRMA algorithm has a way of sampling to build up a subspace in which the matrix projection
has bounded error. Our SVD also samples to build a subspace, so the LRMA sampling methods
are directly comparable to our tree-based approach. Three main LRMA sampling techniques have
emerged,¹ and we will discuss each from the perspective of iteratively sampling a row, updating a
subspace so it spans the new row, and continuing until the subspace captures the input matrix to
within a desired error threshold. This is how our method works, and it is similar to the framework
used by Friedland et al. [1]. The key to efficiency (i.e., rank-compactness) is for each sampled row
to represent well the rows that are not yet well represented in the subspace.
Length-squared (LS) sampling. Rows are sampled with probability proportional to their squared
lengths: p_i = ‖A_(i)‖²_F / ‖A‖²_F. LS sampling was used in the seminal work of Frieze, Kannan, and
Vempala [3], and in much of the follow-on work [4, 5]. It is essentially an importance sampling
scheme for the squared error objective. However, it has two important weaknesses. First, a row
can have high norm while not being representative of other rows. Second, the distribution is nonadaptive, in that a point is equally likely to be drawn whether or not it is already well represented in
the subspace. Both of these lead to wasted samples and needless inflation of the subspace rank.
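As a concrete illustration (ours, not from the paper), the LS distribution takes one cheap pass over the matrix:

import numpy as np

def row_sample_ls(A, rng):
    # p_i = ||A_(i)||^2 / ||A||_F^2; returns the index of one sampled row
    p = np.einsum('ij,ij->i', A, A)      # squared length of each row
    return rng.choice(A.shape[0], p=p / p.sum())

# Example: i = row_sample_ls(A, np.random.default_rng(0))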
Residual length-squared (RLS) sampling. Introduced by Deshpande and Vempala [2], RLS modifies the LS probabilities after each subspace update by setting p_i = ‖A_(i) − π_V(A_(i))‖²_F / ‖A − π_V(A)‖²_F, where π_V represents projection onto the current subspace V. By adapting the LS distribution to be over residuals, this method avoids drawing samples that are already well represented in
the subspace. Unfortunately, there is still nothing to enforce that any sample will be representative
of other high-residual samples. Further, updating residuals requires an expensive s passes through
the matrix for every s samples that are added, which significantly limits practical utility.
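For comparison, a sketch of the RLS probabilities (again our own illustration); the projection in the first line is exactly the expensive per-update step noted above. V is assumed to have orthonormal columns.

import numpy as np

def residual_ls_probs(A, V):
    # p_i proportional to the squared residual of row i after projecting
    # onto the current subspace V (n x k, orthonormal columns)
    R = A - (A @ V) @ V.T
    p = np.einsum('ij,ij->i', R, R)
    return p / p.sum()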
Random projections (RP). Introduced by Sarlós [6], the idea is to sample linear combinations of
rows, with random combination coefficients drawn from a Gaussian. This method is strong where
LS and RLS are weak: because all rows influence every sample, each sample is likely to represent a
sizeable number of rows. Unfortunately the combination coefficients are not informed by importance
(squared length), and the sampling distribution is non-adaptive. Further, each linear combination
requires a full matrix pass, again limiting practicality.
Also deserving mention is the randomized sparsification used by Achlioptas et al. [7]. Each of the
LRMA sampling methods has strengths we can draw on and weaknesses we can improve upon. In
particular, our cosine tree sampling method can be viewed as combining the representativeness of
RP sampling with the adaptivity of RLS, which explains its empirically dominant rank efficiency.
¹ Note that our summary of related work is necessarily incomplete due to space constraints; our intent is to summarize the essential results from the LRMA literature inasmuch as they pertain to our approach.
Algorithm 2 Cosine tree construction.
CTNODE
Input: A ∈ ℝ^{m×n}
Output: cosine tree node containing the rows of A
1. N ← new cosine tree node
2. N.A ← A
3. N.splitPt ← ROWSAMPLELS(A) // split point sampled from length-squared distribution
4. return N

CTNODESPLIT
Input: cosine tree node N
Output: left and right children obtained by cosine-splitting of N
1. for each N.A_(i), compute c_i = |cos(N.A_(i), N.splitPt)|
2. if ∀i, c_i = 1, return nil
3. c_max = max{c_i | c_i < 1}; c_min = min{c_i}
4. A_l ← [ ]; A_r ← [ ]
5. for i = 1 to N.nRows
   (a) if c_max − c_i ≤ c_i − c_min, A_l ← [A_l; N.A_(i)]
   (b) else A_r ← [A_r; N.A_(i)]
6. return CTNODE(A_l), CTNODE(A_r)
3 Our Approach
Rather than a fixed low-rank matrix approximation, our objective is to approximate the whole-matrix
SVD with as high a rank as is required to obtain the following whole-matrix relative error bound:
‖A − Â‖²_F ≤ ε‖A‖²_F,   (3)
where Â = UΣV^T is the matrix reconstructed by our SVD approximation. In contrast to the error
bounds of previous methods, which are stated in terms of the unknown low-rank A_k, our error
bound is in terms of the known A. This enables us to use a fast, empirical Monte Carlo technique to
determine with high confidence when we have achieved the error target, and therefore to terminate
with as few samples and as compact a subspace as possible. Minimizing subspace rank is crucial for
speed, as the final SVD extraction is greatly slowed by excess rank when the input matrix is large.
We use an iterative subspace buildup as described in the previous section, with sampling governed
by a new spatial partitioning structure we call the cosine tree. Cosine trees are designed to leverage
the geometrical structure of a matrix and a partial subspace in order to quickly home in on good representative samples from the regions least well represented. Key to the efficiency of our algorithm is
an efficient error checking scheme, which we accomplish by Monte Carlo error estimation at judiciously chosen stages. Such a combination of spatial partitioning trees and Monte Carlo estimation
has been used before to good effect [8], and we find it to be a successful pairing here as well.
Cosine Trees for Efficient Subspace Discovery. The ideal subspace discovery algorithm would
oracularly choose as samples the singular vectors v_i. Each v_i is precisely the direction that, added to
the subspace spanned by the previous singular vectors, will maximally decrease residual error over
all rows of the matrix. This intuition is the guiding idea for cosine trees.
A cosine tree is constructed as follows. Starting with a root node, which contains all points (rows),
we take its centroid as a representative to include in our subspace span, and randomly sample a point
to serve as the pivot for splitting. We sample the pivot from the basic LS distribution, that being the
cheapest source of information as to sample importance. The remaining points are sorted by their
absolute cosines relative to the pivot point, then split according to whether they are closer to the high
or low end of the cosines. The two groups are assigned to two child nodes, which are placed in a
Algorithm 3 Monte Carlo estimation of the squared error of a matrix projection onto a subspace.
MCSQERROR
Input: A ∈ ℝ^{m×n}, V̂ ∈ O^{n×k}, s ∈ {1 . . . m}, δ ∈ [0, 1]
Output: sqErr ∈ ℝ s.t. with probability at least 1 − δ, ‖A − AV̂V̂^T‖²_F ≤ sqErr
1. S = rowSamplesLS(A, s) // sample s rows from the length-squared distribution
2. for i = 1 to s : // compute weighted sq. mag. of each sampled row's projection onto V̂
   (a) wgtMagSq[i] = (1 / p_{S_(i)}) ‖S_(i) V̂‖²_F // p_{S_(i)} is prob. of drawing S_(i) under LS sampling
3. μ̂ = avg(wgtMagSq); σ̂² = var(wgtMagSq); magSqLB = lowBound(μ̂, σ̂², s, δ)
4. return ‖A‖²_F − magSqLB
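The structure of this estimator is easy to see in a numpy sketch (ours); here lowBound is replaced by a simple Gaussian-style confidence bound, standing in for a tighter empirical bound such as that of [9].

import numpy as np

def mc_sq_error(A, V, s, delta, rng):
    # Monte Carlo upper bound on ||A - A V V^T||_F^2 (cf. Algorithm 3)
    p = np.einsum('ij,ij->i', A, A)
    p = p / p.sum()                                  # LS sampling distribution
    idx = rng.choice(A.shape[0], size=s, p=p)
    wgt = np.array([np.sum((A[i] @ V) ** 2) / p[i] for i in idx])
    mu = wgt.mean()
    var = wgt.var(ddof=1) if s > 1 else 0.0
    mag_lb = mu - np.sqrt(2.0 * var * np.log(1.0 / delta) / s)  # illustrative lowBound
    return np.sum(A * A) - mag_lb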
Algorithm 4 QUIC-SVD: fast whole-matrix approximate SVD with relative error control.
QUIC-SVD
Input: A ∈ ℝ^{m×n}, ε ∈ [0, 1], and δ ∈ [0, 1]
Output: an SVD U, Σ, V s.t. Â = UΣV^T satisfies ‖A − Â‖²_F ≤ ε‖A‖²_F with probability at least 1 − δ
1. V = [ ]; mcSqErr = ‖A‖²_F; N_root = CTNODE(A)
2. Q = EMPTYPRIORITYQUEUE(); Q.insert(N_root, 0)
3. do until mcSqErr ≤ ε‖A‖²_F :
   (a) N = Q.pop(); C = CTNODESPLIT(N) // C = {N_l, N_r}, the children of N
   (b) Remove N's contributed basis vector from V
   (c) for each N_c ∈ C :
       i. V = [V MGS(V, N_c.centroid)] // MGS = modified Gram-Schmidt orthonormalization
   (d) for each N_c ∈ C :
       i. errC = MCSQERROR(N_c.A, V, O(log[N_c.nRows]), δ)
       ii. Q.insert(N_c, errC)
   (e) mcSqErr = MCSQERROR(A, V, O(log m), δ)
4. return EXTRACTSVD(A, V)
queue prioritized by the residual error of each node. The process is then repeated according to the
priority order of the queue. Algorithm 2 defines the splitting process.
Why do cosine trees improve sampling efficiency? By prioritizing expansion by the residual error
of the frontier nodes, sampling is always focused on the areas with maximum potential for error
reduction. Since cosine-based splitting guides the nodes toward groupings with higher parallelism,
the residual magnitude of each node is increasingly likely to be well captured along the direction of
the node centroid. Expanding the subspace in the direction of the highest-priority node centroid is
therefore a good guess as to the direction that will maximally reduce residual error. Thus, cosine
tree sampling approximates the ideal of oracularly sampling the true singular vectors.
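A compact numpy sketch of the split may help (our own; the real implementation keeps rows inside tree nodes rather than copying arrays, and row_sample_ls is the LS sampler sketched earlier):

import numpy as np

def cosine_split(rows, rng):
    # Split rows by absolute cosine to an LS-sampled pivot (cf. CTNODESPLIT)
    pivot = rows[row_sample_ls(rows, rng)]
    denom = np.linalg.norm(rows, axis=1) * np.linalg.norm(pivot)
    c = np.abs(rows @ pivot) / np.where(denom > 0, denom, 1.0)
    if np.all(np.isclose(c, 1.0)):        # every row parallel to the pivot: leaf
        return None
    c_max, c_min = c[c < 1.0].max(), c.min()
    go_left = (c_max - c) <= (c - c_min)  # closer to the high-cosine end
    return rows[go_left], rows[~go_left]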
3.1 QUIC-SVD
Strong error control. Algorithm 4, QUIC-SVD (QUantized Iterative Cosine tree)², specifies a
way to leverage cosine trees in the construction of an approximate SVD while providing a strong
probabilistic error guarantee. The algorithm builds a subspace by expanding a cosine tree as described above, checking residual error after each expansion. Once the residual error is sufficiently
low, we return the SVD of the projection into the subspace. Note that exact error checking would
require an expensive O(k²mn) total cost, where k is the final subspace rank, so we instead use a
Monte Carlo error estimate as specified in Algorithm 3. We also employ Algorithm 3 for the error
estimates used in node prioritization. With Monte Carlo instead of exact error computations, the
total cost for error checking decreases to O(k²n log m), a significant practical reduction.
² Quantized alludes to each node being represented by a single point that is added to the subspace basis.
The other main contributions to runtime are: 1) k cosine tree node splits for a total of O(kmn), 2)
O(k) single-vector Gram-Schmidt orthonormalizations at O(km) each for a total of O(k²m), and
3) final SVD extraction at O(kmn). Total runtime is therefore O(kmn), with the final projection
onto the subspace being the costliest step since the O(kmn) from node splitting is a very loose
worst-case bound. We now state the QUIC-SVD error guarantee.
Theorem 1. Given a matrix A ∈ ℝ^{m×n} and ε, δ ∈ [0, 1], the algorithm QUIC-SVD returns an SVD U, Σ, V such that Â = UΣV^T satisfies ‖A − Â‖²_F ≤ ε‖A‖²_F with probability at least 1 − δ.
Proof sketch. The algorithm terminates after mcSqErr ≤ ε‖A‖²_F with a call to EXTRACTSVD. From Lemma 1 we know that EXTRACTSVD returns an SVD that reconstructs to A's projection onto V (i.e., Â = AVV^T). Thus, we have only to show that mcSqErr in the terminal iteration is an upper bound on the error ‖A − Â‖²_F with probability at least 1 − δ. Note that intermediate error checks do not affect the success probability, since they only ever tell us to continue expanding the subspace, which is never a failure. From the Pythagorean theorem, ‖A − AVV^T‖²_F = ‖A‖²_F − ‖AVV^T‖²_F, and, since rotations do not affect lengths, ‖AVV^T‖²_F = ‖AV‖²_F. The call to MCSQERROR (step 3(e)) performs a Monte Carlo estimate of ‖AV‖²_F in order to estimate ‖A‖²_F − ‖AV‖²_F. It is easily verified that the length-squared-weighted sample mean used by MCSQERROR produces an unbiased estimate of ‖AV‖²_F. By using a valid confidence interval to generate a 1 − δ lower bound on ‖AV‖²_F from the sample mean and variance (e.g., Theorem 1 of [9] or similar), MCSQERROR is guaranteed to return an upper bound on ‖A‖²_F − ‖AV‖²_F with probability at least 1 − δ, which establishes the theorem.
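Putting the pieces together, the sketch below (ours) shows the shape of the main loop using the helpers sketched earlier (row_sample_ls, cosine_split, mc_sq_error, extract_svd). Two simplifications are worth flagging: it keeps each parent's basis vector instead of removing it as step 3(b) prescribes (which only enlarges the subspace slightly), and it computes child priorities exactly rather than by Monte Carlo.

import heapq
import numpy as np

def quic_svd(A, eps, delta, rng):
    m, n = A.shape
    target = eps * np.sum(A * A)
    basis = []                               # orthonormal columns of the growing subspace
    heap = [(-np.sum(A * A), 0, A)]          # max-heap on node residual error
    tick = 1
    while heap:
        if basis:
            V = np.column_stack(basis)
            if mc_sq_error(A, V, int(np.log2(m)) + 1, delta, rng) <= target:
                break
        _, _, rows = heapq.heappop(heap)
        children = cosine_split(rows, rng)
        if children is None:                 # leaf: already represented by its centroid
            continue
        for child in children:
            v = child.mean(axis=0)
            for b in basis:                  # Gram-Schmidt against the current basis
                v = v - (b @ v) * b
            norm = np.linalg.norm(v)
            if norm > 1e-12:
                basis.append(v / norm)
            V = np.column_stack(basis)
            err = np.sum((child - (child @ V) @ V.T) ** 2)
            heapq.heappush(heap, (-err, tick, child))
            tick += 1
    return extract_svd(A, np.column_stack(basis))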
Relaxed error control. Though the QUIC-SVD procedure specified in Algorithm 4 provides a
strong error guarantee, in practice its error checking routine is overconservative and is invoked more
frequently than necessary. For practical usage, we therefore approximate the strict error checking of
Algorithm 4 by making three modifications:
1. Set mcSqErr to the mean, rather than the lower bound, of the MCSQERROR estimate.
2. At each error check, estimate mcSqErr with several repeated Monte Carlo evaluations (i.e., calls to MCSQERROR), terminating only if they all result in mcSqErr ≤ ε‖A‖²_F.
3. In each iteration, use a linear extrapolation from past decreases in error to estimate the
number of additional node splits required to achieve the error target. Perform this projected
number of splits before checking error again, thus eliminating needless intermediate error
checks.
Although these modifications forfeit the strict guarantee of Theorem 1, they are principled approximations that more aggressively accelerate the computation while still keeping error well under
control (this will be demonstrated empirically). Changes 1 and 2 are based on the fact that, because
mcSqErr is an unbiased estimate generated by a sample mean, it obeys the Central Limit Theorem
and thus approaches a normal distribution centered on the true squared error. Under such a symmetric distribution, the probability that a single evaluation of mcSqErr will exceed the true error is
0.5. The probability that, in a series of x evaluations, at least one of them will exceed the true error
is approximately 1 − 0.5^x (1 minus the probability that they all come in below the true error). The
probability that at least one of our mcSqErr evaluations results in an upper bound on the true error
(i.e., the probability that our error check is correct) thus goes quickly to 1. In our experiments, we
use x = 3, corresponding to a success probability of approximately 0.9 (i.e., δ ≈ 0.1).
Change 3 exploits the fact that the rate at which error decreases is typically monotonically nonincreasing. Thus, extrapolating the rate of error decrease from past error evaluations yields a conservative estimate of the number of splits required to achieve the error target. Naturally, we have to
impose limits to guard against outlier cases where the estimated number is unreasonably high. Our
experiments limit the size of the split jumps to be no more than 100.
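A sketch of the relaxed test (ours) makes changes 1 and 2 concrete: the lower bound is replaced by the plain sample mean, and termination requires x independent estimates to agree.

import numpy as np

def mc_sq_error_mean(A, V, s, rng):
    # Unbiased sample-mean estimate of ||A - A V V^T||_F^2 (no lower bound)
    p = np.einsum('ij,ij->i', A, A)
    p = p / p.sum()
    idx = rng.choice(A.shape[0], size=s, p=p)
    wgt = np.array([np.sum((A[i] @ V) ** 2) / p[i] for i in idx])
    return np.sum(A * A) - wgt.mean()

def relaxed_check(A, V, eps, rng, x=3):
    # Terminate only if x independent estimates all meet the target;
    # success probability is roughly 1 - 0.5**x (about 0.875 for x = 3)
    s = int(np.log2(A.shape[0])) + 1
    target = eps * np.sum(A * A)
    return all(mc_sq_error_mean(A, V, s, rng) <= target for _ in range(x))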
4 Performance
We report the results of two sets of experiments, one comparing the sample efficiency of cosine
trees to previous LRMA sampling methods, and the other evaluating the composite speed and error
performance of QUIC-SVD. Due to space considerations we give results for only two datasets, and
[Figure 1 plot area: panel (a) madelon kernel (2000 × 2000) and panel (b) declaration (4656 × 3923); each panel plots relative squared error (y-axis) against subspace rank (x-axis) for LS, RLS, RP, CT, and Opt.]
Figure 1: Relative squared error vs. subspace rank for various subspace discovery methods. LS is
length-squared, RLS is residual length-squared, RP is random projection, and CT is cosine tree.
due to the need to compute the exact SVD as a baseline we limit ourselves to medium-sized matrices.
Nonetheless, these results are illustrative of the more general performance of the algorithm.
Sample efficiency. Because the runtime of our algorithm is O(kmn), where k is the final dimension
of the projection subspace, it is critical that we use a sampling method that achieves the error target
with the minimum possible subspace rank k. We therefore compare our cosine tree sampling method
to the previous sampling methods proposed in the LRMA literature. Figure 1 shows results for the
various sampling methods on two matrices, one a 2000 × 2000 Gaussian kernel matrix produced
by the Madelon dataset from the NIPS 2003 Workshop on Feature Extraction (madelon kernel), and
the other a 4656 × 3923 scan of the US Declaration of Independence (declaration). Plotted is the
relative squared error of the input matrix's projection onto the subspaces generated by each method
at each subspace rank. Also shown is the optimal error produced by the exact SVD at each rank.
Both graphs show cosine trees dominating the other methods in terms of rank efficiency. This
dominance has been confirmed by many other empirical results we lack space to report here. It is
particularly interesting how closely the cosine tree error can track that of the exact SVD. This would
seem to give some justification to the principle of grouping points according to their degree of mutual
parallelism, and validates our use of cosine trees as the sampling mechanism for QUIC-SVD.
Speedup and error. In the second set of experiments we evaluate the runtime and error performance
of QUIC-SVD. Figure 2 shows results for the madelon kernel and declaration matrices. On the top
row we show how speedup over exact SVD varies with the target error ε. Speedups range from 831 at ε = 0.0025 to over 3,600 at ε = 0.023 for madelon kernel, and from 118 at ε = 0.01 to nearly 20,000 at ε = 0.03 for declaration. On the bottom row we show the actual error of the algorithm
in comparison to the target error. While the actual error is most often slightly above the target, it
nevertheless hugs the target line quite closely, never exceeding the target by more than 10%. Overall,
the several-order-of-magnitude speedups and controlled error shown by QUIC-SVD would seem
to make it an attractive option for any algorithm computing costly SVDs.
5 Conclusion
We have presented a fast approximate SVD algorithm, QUIC-SVD, and demonstrated several-order-of-magnitude speedups with controlled error on medium-sized datasets. This algorithm differs from previous related work in that it addresses the whole-matrix SVD, not low-rank matrix
approximation, it uses a new efficient sampling procedure based on cosine trees, and it uses empirical Monte Carlo error estimates to adaptively minimize needed sample sizes, rather than fixing
a loose sample size a priori. In addition to theoretical justifications, the empirical performance of
QUIC-SVD argues for its effectiveness and utility. We note that a refined version of QUIC-SVD is
forthcoming. The new version is greatly simplified, and features even greater speed with a deterministic error guarantee. More work is needed to explore the SVD-using methods to which QUIC-SVD
can be applied, particularly with an eye to how the introduction of controlled error in the SVD will
[Figure 2 plot area: panels (a) speedup - madelon kernel, (b) speedup - declaration, (c) relative error - madelon kernel, (d) relative error - declaration; speedup and relative squared error (y-axes) plotted against epsilon (x-axis), with actual-error and target-error curves in (c) and (d).]
Figure 2: Speedup and actual relative error vs. ε for QUIC-SVD on madelon kernel and declaration.
affect the quality of the methods using it. We expect there will be many opportunities to enable new
applications through the scalability of this approximation.
References
[1] S. Friedland, A. Niknejad, M. Kaveh, and H. Zare. Fast Monte-Carlo Low Rank Approximations for
Matrices. In Proceedings of Int. Conf. on System of Systems Engineering, 2006.
[2] A. Deshpande and S. Vempala. Adaptive Sampling and Fast Low-Rank Matrix Approximation. In 10th
International Workshop on Randomization and Computation (RANDOM06), 2006.
[3] A. M. Frieze, R. Kannan, and S. Vempala. Fast Monte-Carlo Algorithms for Finding Low-Rank Approximations. In IEEE Symposium on Foundations of Computer Science, pages 370-378, 1998.
[4] P. Drineas, R. Kannan, and M. W. Mahoney. Fast Monte Carlo Algorithms for Matrices II: Computing a
Low-Rank Approximation to a Matrix. SIAM Journal on Computing, 36(1):158-183, 2006.
[5] P. Drineas, E. Drinea, and P. S. Huggins. An Experimental Evaluation of a Monte-Carlo Algorithm for
Singular Value Decomposition. Lecture Notes in Computer Science, 2563:279-296, 2003.
[6] T. Sarlos. Improved Approximation Algorithms for Large Matrices via Random Projections. In 47th IEEE
Symposium on Foundations of Computer Science (FOCS), pages 143-152, 2006.
[7] D. Achlioptas, F. McSherry, and B. Schölkopf. Sampling Techniques for Kernel Methods. In Advances in
Neural Information Processing Systems (NIPS) 17, 2002.
[8] M. P. Holmes, A. G. Gray, and C. L. Isbell, Jr. Ultrafast Monte Carlo for Kernel Estimators and Generalized
Statistical Summations. In Advances in Neural Information Processing Systems (NIPS) 21, 2008.
[9] J. Audibert, R. Munos, and C. Szepesvari. Variance estimates and exploration function in multi-armed
bandits. Technical report, CERTIS, 2007.
2,728 | 3,474 | Temporal Dynamics of Cognitive Control
Michael C. Mozer
Department of Computer Science and
Institute of Cognitive Science
University of Colorado
Boulder, CO 80309
[email protected]
Jeremy R. Reynolds
Department of Psychology
University of Denver
Denver, CO 80208
[email protected]
Abstract
Cognitive control refers to the flexible deployment of memory and attention in response to task demands and current goals. Control is often studied experimentally
by presenting sequences of stimuli, some demanding a response, and others modulating the stimulus-response mapping. In these tasks, participants must maintain
information about the current stimulus-response mapping in working memory.
Prominent theories of cognitive control use recurrent neural nets to implement
working memory, and optimize memory utilization via reinforcement learning.
We present a novel perspective on cognitive control in which working memory
representations are intrinsically probabilistic, and control operations that maintain
and update working memory are dynamically determined via probabilistic inference. We show that our model provides a parsimonious account of behavioral and
neuroimaging data, and suggest that it offers an elegant conceptualization of control in which behavior can be cast as optimal, subject to limitations on learning
and the rate of information processing. Moreover, our model provides insight into
how task instructions can be directly translated into appropriate behavior and then
efficiently refined with subsequent task experience.
1 Introduction
Cognitive control can be characterized as the ability to guide behavior according to current goals
and plans. Control often involves overriding default or overlearned behaviors. Classic examples of
experimental tasks requiring this ability include Stroop, Wisconsin card sorting, and task switching (for a review, see [1]). Although these paradigms vary in superficial features, they share the
key underlying property that successful performance involves updating and maintaining a task set.
The task set holds the information required for successful performance, e.g., the stimulus-response
mapping, or the dimension along which stimuli are to be classified or reported. For example, in
Wisconsin card sorting, participants are asked to classify cards with varying numbers of instances of
a colored symbol. The classification might be based on color, symbol, or numerosity; instructions
require participants to identify the current dimension through trial and error, and perform the appropriate classification until the dimension switches after some unspecified number of trials. Thus,
it requires participants to maintain a task set (the classification dimension) in working memory
(WM). Likewise, in the Stroop task, stimuli are color names presented in various ink colors, and the
task set specifies whether the color is to be named or the word is to be read.
To understand cognitive control, we need to characterize the brain's policy for updating, maintaining,
and utilizing task set. Moreover, we need to develop theories of how task instructions are translated
into a policy, and how this policy is refined with subsequent experience performing a task.
1.1 Current Computational Theories of Control
From a purely computational perspective, control is not a great challenge. Every computer program
modulates its execution based on internal state variables. The earliest psychological theories of control had this flavor: Higher cognitive function was conceived of as a logical symbol system whose
variables could be arbitrarily bound [2], allowing for instructions to be used appropriately (and perfectly) to update representations that support task performance. For example, in the Wisconsin
card sorting task, the control instruction (the classification dimension) would be bound to a variable, and responses would be produced by rules of the form, "If the current dimension is D and the stimulus is X, respond Y". Behavioral data indicate that this naive computational perspective is
unlikely to be how control is implemented in the brain. Consider the following phenomena:
• When participants are asked to switch tasks, performance on the first trial following a
switch is inefficient, although performance on subsequent trials is efficient, suggesting that
loading a new task set depends on actually performing the new task [3]. This finding
is observed even for very simple tasks, and even when the switches are regular, highly
predictable, and well practiced.
• Switch costs are asymmetric, such that switching from an easy task to a difficult task is
easier than vice-versa [4].
• Some task sets are more difficult to implement than others. For example, in the Stroop task,
reading the word is quick and accurate, but naming the ink color is not [5].
• The difficulty of a particular task depends not only on the characteristics of the task itself,
but also on context in which participants might be called upon to perform [6].
To account for phenomena such as these, theories of control have in recent years focused on how
control can be implemented in cortical neural networks. In the prevailing neural-network-based
theory, task set is represented in an activity-based memory system, i.e., a population of neurons
whose recurrent activity maintains the representation over time. This active memory, posited to
reside in prefrontal cortex (PFC), serves to bias ongoing processing in posterior cortical regions to
achieve flexibility and arbitrary, task-dependent stimulus-response mappings (for review, see [1]).
For example, in the Stroop task, instructions to report the ink color might bias the neural population
representing colors (i.e., increase their baseline activity prior to stimulus onset) such that when
stimulus information arrives, it will reach threshold more rapidly, and will beat out the neural population that represents word orthography in triggering response systems [7]. In this framework, a
control policy must specify the updating and maintenance of task set, which involves when to gate new
representations into WM and the strength of the recurrent connection that maintains the memory.
Further, the policy must specify which WM populations bias which posterior representations, and
the degree to which biasing is required. Some modelers have simply specified the policy by hand
[8], whereas most pretrain the model to perform a task, in a manner meant to reflect long-term
learning prior to experimental testing [7, 9, 10].
These models provide an account for a range of neurophysiological and behavioral data. However,
they might be criticized on a number of grounds. First, like their symbolic predecessors, the neural
network models must often be crippled arbitrarily to explain data; for example, by limiting the
strength of recurrent memory connections, the models obtain task set decay and can explain error
data. Second, the models require a stage of training which is far more akin to how a monkey
learns to perform a task than to how people follow task instructions. The reinforcement-learning
based models require a long stage of trial-and-error learning before the appropriate control policy
emerges. Whereas monkeys are often trained for months prior to testing, a notable characteristic of
humans is that they can perform a task adequately on the first trial from task instructions [11].
2 Control as Inference
Our work aims to provide an alternative, principled conceptualization of cognitive control. Our goal
is to develop an elegant theoretical framework with few free parameters that can easily be applied
to a wide range of experimental tasks. With strong computational and algorithmic constraints, our
framework has few degrees of freedom, and consequently, makes strong, experimentally verifiable
predictions. Additionally, as a more abstract framework than the neural net theories, one aim is to
provide insight as to how task instructions can be used directly and immediately to control behavior.
A fundamental departure of our approach from previous approaches is to consider WM as inherently
probabilistic. That is, instead of proposing that task set is stored in an all-or-none fashion, we wish
to allow for task set (as well as all cortical representations) to be treated as random variables. This
notion is motivated by computational neuroscience models showing how population codes can be
used to compute under uncertainty [12].
Given inherently probabilistic representations, it is natural to treat the problems of task set updating,
maintenance and utilization as probabilistic inference. To provide an intuition about our approach,
consider this scenario. I will walk around my house and tell you what objects I see. Your job is
to guess what I'll report next. Suppose I report the following sequence: REFRIGERATOR, STOVE, SINK, TOILET, SHOWER, DRESSER. To guess what I'm likely to see next, you need to infer what
room I am in. Even though the room is a latent variable, it can be inferred from the sequence of
observations. At some points in the sequence, the room can be determined with great confidence
(e.g., after seeing TOILET and SHOWER). At other times, the room is ambiguous (e.g., following
SINK), and only weak inferences can be drawn.
By analogy, our approach to cognitive control treats task set as a latent variable that must be inferred
from observations. The observations consist of stimulus-response-feedback triples. Sometimes the
observations will strongly constrain the task set, as in the Stroop task when the word GREEN is
shown in color red, and the correct response is red, or when an explicit instruction is given to report
the ink color; but other times the observations provide little constraint, as when the word RED is
shown in color red, and the correct response is red. One inference problem is therefore to determine
task set from the stimulus-response sequence. A second, distinct inference problem is to determine
the correct response on the current trial from the current stimulus and the trial history. Thus, in our
approach, control and response selection are cast as inference under uncertainty.
In this paper, we flesh out a model based on this approach. We use the model to account for behavioral data from two experiments. Each experiment involves a complex task environment in which
experimental participants are required to switch among eight tasks that have different degrees of
overlap and inconsistency with one another. Having constrained the model by fitting behavioral
data, we then show that the model can explain neuroimaging data. Moreover, the model provides
a different interpretation to these data than has been suggested previously. Beyond accounting for
data, the model provides an elegant theoretical framework in which control and response selection
can be cast as optimal, subject to limitations on the processing architecture.
3 Methods
Our model addresses data from two experiments conducted by Koechlin, Ody, and Kouneiher [6]. In
each experiment, participants are shown blocks of 12 trials, preceded by a cue that indicates which
of the eight tasks is to be performed with the stimuli in that block. The task specifies a stimulus-response mapping. The stimuli in Experiments 1 and 2 are colored squares and colored letters,
respectively. Examples of the sequence of cues and stimuli for the two experiments are shown in
Figure 1A. In both experiments, there are two potential responses.
The stimulus-response mappings for Experiment 1 are shown in the eight numbered boxes of Figure 1C. (The layout of the boxes will be explained shortly.) Consider task 3 in the upper left corner
of the Figure. The notation indicates that task 3 requires a left response to the green square, a right
response to a red square, and no response (hereafter, no-go) to a white square. Task 4 is identical to
task 3, and the duplication is included because the tasks are described as distinct to participants and
each is associated with a unique task cue. The duplication makes the stimulus-response mapping
twice as likely, because the eight tasks have uniform priors. Task 1 (lower left corner of the figure)
requires a left response for a green square and no-go for a white square. There are no red stimuli in
the task 1 blocks, and the green→left mapping is depicted twice to indicate that the probability of a
green square appearing in the block is twice that of a white square.
We now explain the 3 × 2 arrangement of cells in Figure 1C. First the rows. The four tasks in
the lower row allow for only one possible response (not counting no-go as a response), whereas
the four tasks in the upper row demand that a choice be made between two possible responses.
[Figure 1 graphic: panels (A)-(E). Legends: X ∈ {A,E,I,O,a,e,i,o,C,G,K,P,c,g,k,p}; P1: vowel/consonant and P2: upper/lower case discrimination tasks; L: left response; R: right response.]
Figure 1: (A) Examples of stimulus sequences from Exp. 1 and 2 (top and bottom arrows, respectively) of [6]. (B) Eight tasks in Exp. 2, adapted from [6]. (C) Eight tasks in Exp. 1. (D) Response
times from participants in Exp. 1 and 2 (white and black points, respectively). The data points correspond to the filled grey cells of (B) and (C), and appear in homologous locations. X-axis of graph
corresponds to columns of the 3×2 array of cells in (B) and (C); squares and circles correspond to top and bottom row of each 3×2 array. (E) Simulation results from the model.
Thus, the two rows differ in terms of the demands placed on response selection. The three columns
differ in the importance of the task identity. In the leftmost column, task identity does not matter,
because each mapping (e.g., green→left) is consistent irrespective of the task identity. In contrast,
tasks utilizing yellow, blue, and cyan stimuli involve varied mappings. For example, yellow maps
to left in two tasks, to right in one task, and to no-go in one task. The tasks in the middle column
are somewhat less dependent on task identity, because the stimulus-response mappings called for
have the highest prior. Thus, the three columns represent a continuum along which the importance
of task identity varies, from being completely irrelevant (left column) to being critical for correct
performance (right column). Empty cells within the grid are conceptually possible, but were omitted
from the experiment.
Experiment 2 has the same structure as Experiment 1 (Figure 1B), with an extra level of complexity.
Rather than mapping a color to a response, the color determines which property of the stimulus is to
be used to select a response. For example, task 3 of Figure 1B demands that a green letter stimulus
(denoted as X here) be classified as a vowel or consonant (property P1), whereas a red letter stimulus
be classified as upper or lower case (property P2). Thus, Experiment 2 places additional demands
of stimulus classification and selection of the appropriate stimulus dimension.
Participants in each experiment received extensive practice on the eight tasks before being tested.
Testing involved presenting each task following each other task, for a total of 64 test blocks.
3.1 A Probabilistic Generative Model of Control Tasks
Following the style of many probabilistic models in cognitive science, we have designed a generative
model of the domain, and then invert the model to perform recognition via Bayesian inference. In
our case, the generative model is of the control task, i.e., the model produces sequences of stimulus-response pairs such that the actual trial sequence would be generated with high probability. Instead
of learning this model from data, though, we assume that task instructions are "programmed" into
the model.
Our generative model of control tasks is sketched in Figure 2A as a dynamical Bayes net. Vertical
slices of the model represent the trial sequence, with the subscript denoting the trial index. First
we explain the nodes and dependencies and then describe the conditional probability distributions
(CPDs).
The B node represents the task associated with the current block of trials. (We use the term "block"
as shorthand notation for this task.) The block on trial k has 8 possible values in the experiments we
[Figure 2 graphic: nodes B_{k−1}, B_k, B_{k+1}; C_{k−1}, C_k, C_{k+1}; R_{k−1}, R_k, R_{k+1}; S_{k−1}, S_k, S_{k+1}; and T within each trial slice.]
Figure 2: Dynamical Bayes net depiction of our generative model of control tasks, showing the
trial-to-trial structure of the model.
model, and its value depends on the block on trial k − 1. The block determines the category of the
stimulus, C, which in turn determines the stimulus identity, S. The categories relevant to the present
experiments are: color label, block cue (the cue that identifies the task in the next block), upper/lower
case for letters, and consonant/vowel for letters. The stimuli correspond to instantiations of these
categories, e.g., the letter Q which is an instance of an upper case consonant. Finally, the R node
denotes the response, which depends both on the current stimulus category and the current block.
This description of the model is approximate for two reasons. First, we decompose the category and
stimulus representations into shape and color dimensions, expanding C into C^color and C^shape, and S into S^color and S^shape. (When we refer to C or S without the superscript, it will denote both the
shape and color components.) Second, we wish to model the temporal dynamics of a single trial,
in order to explain response latencies. Although one could model the temporal dynamics as part of
the dynamical Bayes net architecture, we adopted a simpler and nearly equivalent approach, which
is to explicitly represent time, T , within a trial, and to assume that in the generative model, stimulus
information accumulates exponentially over time. With normalization of probabilities, this formulation is identical to a naive Bayes model with conditionally independent stimulus observations at
each time step. With these two modifications, the slices of the network (indicated by the dashed
rectangle in Figure 2A) are as depicted in Figure 2B.
To this point, we've designed a generic model of any experimental paradigm involving context-dependent stimulus-response mappings. The context is provided by the block B, which is essentially
a memory that can be sustained over trials. To characterize a specific experiment, we must specify
the CPDs in the architecture. These distributions can be entirely determined by the experiment description (embodied in Figure 1B,C). We toss in one twist to the model, which is to incorporate four
parameters into the CPDs that permit us to specify aspects of the human cognitive architecture, as
follows: α, the degree of task knowledge (0: no knowledge; 1: perfect knowledge); λ, the persistence of the block memory (0: memory decays completely from one trial to the next; 1: memory is perfect); and τ_shape and τ_color, the rates of transmission of shape and color information between
stimulus and category representations. Given these parameters and the experiment description, we
can define the CPDs in the model:
• P(B_k = b | B_{k−1} = b′) = λ δ_{b,b′} + (1 − λ)/N_B, where δ is the Kronecker delta and N_B is the
number of distinct block (task) identities. This distribution is a mixture of a uniform distribution
(no memory of block) and an identity mapping (perfect memory).
• P(C_k^z | B_k) = α P*(C_k^z | B_k) + (1 − α)/N_{C^z}, where z ∈ {color, shape} and N_{C^z} is the number of distinct category values along dimension z, and P*(·|·) is the probability distribution defined by the experiment and task (see Figure 1B,C). The mixture parameter, α, interpolates between
a uniform distribution (no knowledge of task) and a distribution that represents complete task
knowledge.
• P(R_k | B_k, C_k) = α P*(R_k | B_k, C_k) + (1 − α)/N_R, where N_R is the number of response alternatives (including no-go).
• P(S_k^z = s | C_k^z = c, T = t) ∝ (1 + τ_z M^z(s, c))^t, where z ∈ {color, shape} and M^z(s, c) is a membership function that has value 1 if s is an instance of category c along dimension z, or 0 otherwise. By this CPD, the normalized probability for stimulus s grows exponentially to asymptote as a function of time t if s belongs to category c, and drops exponentially toward zero if s does not belong to c.
[Figure 3 graphic: top row, human Δ MR signal in premotor cortex, posterior lateral PFC, and anterior lateral PFC; bottom row, model entropy at the R node, C^shape node, and B node; conditions contrast Single vs. Dual responses, Exp. 1 vs. Exp. 2, and the importance of task identity.]
Figure 3: (top row) human neuroimaging data from three brain regions [6], (bottom row) entropy read out from three nodes of the model. Full explanation in the text.
This formulation encodes the experiment description, as represented by the P*(·) probabilities, in the model's CPDs, with smoothing via α to represent less-than-perfect knowledge of the experiment description.
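To make the CPDs concrete, here is a small numpy sketch; the function names and array layouts are our own illustration, since the paper does not specify an implementation.

import numpy as np

def block_transition(n_blocks, lam):
    # P(B_k = b | B_{k-1} = b') = lam * [b == b'] + (1 - lam) / N_B
    return lam * np.eye(n_blocks) + (1.0 - lam) / n_blocks

def smoothed_cpd(P_star, alpha):
    # Mixes an experiment-defined table P* with a uniform distribution over
    # the last axis: alpha * P* + (1 - alpha) / N
    return alpha * P_star + (1.0 - alpha) / P_star.shape[-1]

def stimulus_cpd(M, tau, t):
    # P(S = s | C = c, T = t) proportional to (1 + tau * M[s, c])**t,
    # normalized over stimuli s (rows); M is the 0/1 membership matrix
    L = (1.0 + tau * M) ** float(t)
    return L / L.sum(axis=0, keepdims=True)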
We would like to read out from the model a response on some trial k, given the stimulus on trial k,
S_k, and a history of past stimulus-response pairs, H_k = {S_1 … S_{k−1}, R_1 … R_{k−1}}. (In the experiments, subjects are well practiced and make few errors. Therefore, we assume the R's are correct
or corrected responses.) The response we wish to read out consists of a choice and the number of
time steps required to make the choice. To simulate processing time within a trial, we search over
T . Larger T correspond to more time for evidence to propagate in the model, which leads to lower
entropy distributions over the hidden variables C_k and R_k. The model initiates a response when one value of R_k passes a threshold θ, i.e., when [max_r P(R_k = r | S_k, T, H_k)] > θ. This yields the response time (RT)
t* = min { t | max_r P(R_k = r | S_k, T = t, H_k) > θ }   (1)
and the response r* = argmax_r P(R_k = r | S_k, T = t*, H_k).
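Equation 1 amounts to a simple search over T. The sketch below (ours) assumes a callable that returns the model's posterior over responses, as a numpy array, for a given T:

def respond(posterior_over_r, theta, t_max=200):
    # Increase T until some response's posterior exceeds theta (Equation 1)
    for t in range(1, t_max + 1):
        p = posterior_over_r(t)           # array of P(R_k = r | S_k, T = t, H_k)
        if p.max() > theta:
            return t, int(p.argmax())     # (response time t*, response r*)
    return t_max, int(p.argmax())         # fall back if the threshold is never reached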
4 Simulation Results
We simulated the model on a trial sequence like that in the human study. We obtained mean RTs and
error rates from the model in the four experimental conditions of the two experiments (see the filled
cells of Figure 1B,C). The model's five parameters (α, λ, τ_shape, τ_color, and θ) were optimized to
obtain the maximum correlation between the mean RTs obtained from the simulation (Equation 1)
and the human data (Figure 1D). This optimization resulted in a correlation between human and
simulation RTs of 0.99 (compare Figure 1D and E), produced by parameter values α = 0.87, λ = 0.79, τ_shape = 0.34, τ_color = 0.88, and θ = 0.63.
To express simulation time in units of milliseconds (the measure of time collected in the human data), we allowed an affine transform, which includes two free parameters: an offset constant indicating the time required for early perceptual and late motor processes, which are not embodied
in the model, and a scale constant to convert units of simulation time to milliseconds. With these
two transformation parameters, the model had a total of seven parameters. The astute reader will
note that there are only eight data points to fit, and one should therefore not be impressed by a close
match between simulation and data. However, our goal is to constrain model parameters with this
fit, and then explore emergent properties of the resulting fully constrained model.
One indication of model robustness is how well the model generalizes to sequences of trials other
than the one on which it was optimized. Across 11 additional generalization runs, the correlation
between model and empirical data remained high with low variability (mean = 0.97, SD = 0.004).
Another indication of the robustness of the result is to determine how sensitive the model is to
the choice of parameters. If randomly selected parameters yield large correlations, then the model
architecture itself is responsible for the good fit, not the particular choice of parameters. To perform this test, we excluded parameter ranges in which the model failed to respond reliably (i.e.,
the model never attained the response criterion of Equation 1), or in which the model produced no
RT variation across conditions. These requirements led to parameter ranges of: 0.8 ≤ α ≤ 0.98; 0.1 ≤ τ_color, τ_shape ≤ 1.5; 0.6 ≤ λ ≤ 0.98; 0.65 ≤ θ ≤ 0.85. All randomly selected combinations of parameters in these ranges led to correlation values greater than 0.9, demonstrating that the
qualitative fit between model and behavioral results was insensitive to parameter selection, and that
the structure of the model is largely responsible for the fit obtained.
Koechlin, Ody, and Kouneiher [6] collected not only behavioral data, but also neuroimaging data
that identified brain regions involved in control, and how these brain regions modulated their activation across experimental manipulations. There were three manipulations in the experiments: (1) the
demand on response selection (varied along rows of Figure 1C), (2) the importance of task identity
(varied along the three columns of both Figure 1B and 1C), and (3) the demand of stimulus classification and selection of stimulus dimensions (varied along rows of Figure 1B). The top row of
Figure 3 shows effects of these experimental manipulations on the fMRI BOLD response of three
different brain regions.
The remarkable result obtained in our simulations is that we identified three components of the
model that produced signatures analogous to those of the fMRI BOLD response in three cortical
areas. We hypothesized that neural (fMRI) activity in the brain might be related to the entropy of
nodes in the model, on account of the fact that when entropy is high, many possibilities must be
simultaneously represented, which may lead to greater BOLD signal. Because fMRI techniques
introduce significant blurring in time, any measure in the model corresponding to the fMRI signal
would need to be integrated over the time of a trial. We therefore computed the mean entropy of
each model node over time T = 1 . . . t* within a trial. We then averaged the entropy measure across
trials within a condition, precisely as we did the RTs. To compare these entropy measures to the
imaging data, the value corresponding to the bottom left cell of each experiment array (see Figure
1B and 1C) was subtracted from all of the conditions of that particular experiment. This subtraction
was performed because the nature of the MRI signal is relative, and these two cells form the baseline
conditions within the empirical observations. After performing this normalization, the values for R
and Cshape were then collapsed across the columns in panels B and C of Figure 1, resulting in a
bar for each row within each panel. Additionally, the values for B were then collapsed across the
rows of each panel, resulting in a value for each column. The model entropy results are shown in
the bottom row of Figure 3, and comparison with the top row reveals an exact correspondence. We
emphasize that these results are obtained with the model which was fully constrained by fitting the
RT data. Thus, these results are emergent properties of the model.
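The entropy measure itself is straightforward to compute; the sketch below assumes access to each node's posterior distribution at every step of a trial (function and variable names are ours, not the authors').

    import numpy as np

    def entropy_bits(p):
        """Shannon entropy (in bits) of one posterior distribution."""
        p = np.asarray(p, dtype=float)
        p = p[p > 0]
        return float(-(p * np.log2(p)).sum())

    def mean_trial_entropy(posteriors):
        """Mean entropy of a model node over steps T = 1..t* of a trial,
        mimicking the temporal integration of the BOLD signal."""
        return float(np.mean([entropy_bits(p) for p in posteriors]))

    def subtract_baseline(condition_means, baseline):
        """Subtract the baseline condition, since the MRI signal is
        relative; `condition_means` maps condition -> mean entropy."""
        return {c: v - condition_means[baseline]
                for c, v in condition_means.items()}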
Based on functional neuroanatomy, the correspondence between model components and brain regions is quite natural. Starting with the left column of Figure 3, uncertainty in the model's response
corresponds to activity in premotor cortex. This activity is greater when the block calls for two distinct responses than when it calls for one. In the middle column of Figure 3, the uncertainty of shape
categorization corresponds to activity in posterior lateral prefrontal cortex. This region is thought
to be involved in the selection of task-relevant information, which is consistent with the nature of
the current conditions that produce increases. In the right column of Figure 3, the uncertainty of the
task identity (block) in the model corresponds to activity in anterior lateral PFC, a brain region near
areas known to be involved in WM maintenance. Interestingly, the lower the entropy the higher the
neural activity, in contrast to the other two regions. There is a natural explanation for this inversion, though: entropy is high in the block node when the block representation matters the least, i.e.,
when the stimulus-response mapping does not depend on knowing the task identity. Thus, higher
entropy of the block node actually connotes less information to be maintained due to the functional
equivalence among classes.
5
Discussion
We proposed a theoretical framework for understanding cognitive control which provides a parsimonious account of behavioral and neuroimaging data from two large experiments. These experiments
are sufficiently broad that they subsume several other experimental paradigms (e.g., Stroop, task
switching). Koechlin et al. [6] explain their findings in terms of a descriptive model that involves a
complex hierarchy of control processes within prefrontal cortex. The explanation for the neuroimaging data that emerges from our model is arguably simpler and more intuitive.
[Figure 4 plot: p(B^k) (vertical axis, 0 to 1) against trial number (horizontal axis, 0 to 100), with one curve for each of the eight task types.]
Figure 4: Task (block) representation over a sequence of trials that involves all eight task types.
The key insight that underlies our model is the notion that cortical representations are intrinsically
probabilistic. This notion is not too surprising to theorists in computational neuroscience, but it leads
to a perspective that is novel within the field of control: that the all-or-none updating of WM can be
replaced with a probabilistic notion of updating, and the view that WM holds competing hypotheses
in parallel. Framing WM in probabilistic terms also offers a principled explanation for why WM
should decay. The parameter λ controls a tradeoff between the ability to hold information over time
and the ability to update when new relevant information arrives. In contrast, many neural network
models have two distinct parameters that control these aspects of memory.
Another novelty of our approach is the notion that control results from dynamical inference processes, instead of being conceived of as resulting from long-term policy learning. Inference plays
a critical role in the WM (task identity) representation: WM is maintained not solely from internal
processes (e.g., the recurrent connections in a neural net), but is continually influenced by the ongoing stream of stimuli via inference. The stimulus stream sometimes supports the WM representation
and sometimes disrupts it. Figure 4 shows the trial-to-trial dynamics of the WM in our model. Note
that depending on the task, the memory looks quite different. When the stimulus-response pairs
are ambiguous as to the task, the representation becomes less certain. Fortunately for the model's
performance, this is exactly the circumstance in which remembering the task identity is least critical.
Figure 4 also points to a promising future direction for the model. The stream of trials clearly
shows strong sequential effects. We are currently pursuing opportunities to examine the model's
predictions regarding performance on the first trial in a block versus subsequent trials. The model
shows an effect observed in the task switching literature: initial trial performance is poor, but control
rapidly tunes to the task and subsequent trials are more efficient and roughly comparable.
Our model seems to have surprisingly strong predictive power. This power comes about from the
fact that the model expresses a form of bounded rationality: the model encodes the structure of the
task, subject to limitations on memory, learning, and the rate of perceptual processing. Exploiting
this bounded rationality leads to strong constraints, few free parameters, and the ability to extend the
model to new tasks without introducing additional free parameters.
References
[1] E. K. Miller and J. D. Cohen. An integrative theory of prefrontal cortex function. Annual Review of Neuroscience, 24:167–202, 2001.
[2] A. Newell and H. A. Simon. Human Problem Solving. Prentice-Hall, Englewood Cliffs, NJ, 1972.
[3] R. D. Rogers and S. Monsell. Costs of a predictable switch between simple cognitive tasks. Journal of Experimental Psychology: General, 124:207–231, 1995.
[4] N. Yeung and S. Monsell. Switching between tasks of unequal familiarity: the role of stimulus-attribute and response-set selection. Journal of Experimental Psychology: Human Perception and Performance, 29(2):455–469, 2003.
[5] C. M. MacLeod. Half a century of research on the Stroop effect: An integrative review. Psychological Bulletin, 109:163–203, 1991.
[6] E. Koechlin, C. Ody, and F. Kouneiher. The architecture of cognitive control in the human prefrontal cortex. Science, 302:1181–1185, 2003.
[7] J. D. Cohen, K. Dunbar, and J. L. McClelland. On the control of automatic processes: A parallel distributed processing model of the Stroop effect. Psychological Review, 97(3):332–361, 1990.
[8] S. J. Gilbert and T. Shallice. Task switching: A PDP model. Cognitive Psychology, 44:297–337, 2002.
[9] N. P. Rougier, D. Noelle, T. S. Braver, J. D. Cohen, and R. C. O'Reilly. Prefrontal cortex and the flexibility of cognitive control: Rules without symbols. Proceedings of the National Academy of Sciences, 102(20):7338–7343, 2005.
[10] M. J. Frank and R. C. O'Reilly. A mechanistic account of striatal dopamine function in human cognition: Psychopharmacological studies with cabergoline and haloperidol. Behavioral Neuroscience, 120:497–517, 2006.
[11] S. Monsell. Control of mental processes. In V. Bruce, editor, Unsolved Mysteries of the Mind: Tutorial Essays in Cognition, pages 93–148. Psychology Press, Hove, UK, 1996.
[12] R. S. Zemel, P. Dayan, and A. Pouget. Probabilistic interpretation of population codes. Neural Computation, 10(2):403–430, 1998.
2,729 | 3,475 | Online Prediction on Large Diameter Graphs
Mark Herbster, Guy Lever, Massimiliano Pontil
Department of Computer Science
University College London
Gower Street, London WC1E 6BT, England, UK
{m.herbster, g.lever, m.pontil}@cs.ucl.ac.uk
Abstract
We continue our study of online prediction of the labelling of a graph. We show a
fundamental limitation of Laplacian-based algorithms: if the graph has a large diameter then the number of mistakes made by such algorithms may be proportional
to the square root of the number of vertices, even when tackling simple problems.
We overcome this drawback by means of an efficient algorithm which achieves
a logarithmic mistake bound. It is based on the notion of a spine, a path graph
which provides a linear embedding of the original graph. In practice, graphs may
exhibit cluster structure; thus in the last part, we present a modified algorithm
which achieves the "best of both worlds": it performs well locally in the presence
of cluster structure, and globally on large diameter graphs.
1 Introduction
We study the problem of predicting the labelling of a graph in the online learning framework. Consider the following game for predicting the labelling of a graph: Nature presents a graph; nature
queries a vertex vi1; the learner predicts ŷ1 ∈ {−1, 1}, the label of the vertex; nature presents a
label y1; nature queries a vertex vi2; the learner predicts ŷ2; and so forth. The learner's goal is to
minimise the total number of mistakes M = |{t : ŷt ≠ yt}|. If nature is adversarial, the learner
will always mispredict, but if nature is regular or simple, there is hope that a learner may make only
a few mispredictions. Thus, a central goal of online learning is to design algorithms whose total
mispredictions can be bounded relative to the complexity of nature's labelling. In [9, 8, 7], the cut
size (the number of edges between disagreeing labels) was used as a measure of the complexity of a
graph's labelling, and mistake bounds relative to this and the graph diameter were derived.
The strength of the methods in [8, 7] is in the case when the graph exhibits "cluster structure". The
apparent deficiency of these methods is that they have poor bounds when the graph diameter is large
relative to the number of vertices. We observe that this weakness is not due to insufficiently tight
bounds, but is a problem in their performance. In particular, we discuss an example of an n-vertex
labelled graph with a single edge between disagreeing label sets. On this graph, sequential prediction
using the common method based upon minimising the Laplacian semi-norm of a labelling, subject to
constraints, incurs Ω(√n) mistakes (see Theorem 3). The expectation is that the number of mistakes
incurred by an optimal online algorithm is bounded by O(ln n).
We solve this problem by observing that there exists an approximate structure-preserving embedding
of any graph into a path graph. In particular the cut-size of any labelling is increased by no more than
a factor of two. We call this embedding a spine of the graph. The spine is the foundation on which we
build two algorithms. Firstly we predict directly on the spine with the 1-nearest-neighbor algorithm.
We demonstrate that this equivalent to the Bayes-optimal classifier for a particular Markov random
field. A logarithmic mistake bound for learning on a path graph follows by the Halving algorithm
analysis. Secondly, we use the spine of the graph as a foundation to add a binary support tree to the
original graph. This enables us to prove a bound which is the ?best of both worlds? ? if the predicted
set of vertices has cluster-structure we will obtain a bound appropriate for that case, but if instead,
the predicted set exhibits a large diameter we will obtain a polylogarithmic bound.
Previous work. The seminal approach to semi-supervised learning over graphs in [3] is to predict
with a labelling which is consistent with a minimum label-separating cut. More recently, the graph
Laplacian has emerged as a key object in semi-supervised learning, for example the semi-norm
induced by the Laplacian is commonly either directly minimised subject to constraints, or used as
a regulariser [14, 2]. In [8, 7] the online graph labelling problem was studied. An aim of those
papers was to provide a natural interpretation of the bound on the cumulative mistakes of the kernel
perceptron when the kernel is the pseudoinverse of the graph Laplacian ? bounds in this case being
relative to the cut and (resistance) diameter of the graph. In this paper we necessarily build directly
on the very recent results in [7] as those results depend on the resistance diameter of the predicted
vertex set as opposed to the whole graph [8]. The online graph labelling problem is also studied in
[13], and here the graph structure is not given initially. A slightly weaker logarithmic bound for the
online graph labelling problem has also been independently derived via a connection to an online
routing problem in the very recent [5].
2 Preliminaries
We study the process of predicting a labelling defined on the vertices of a graph. Following the
classical online learning framework, a sequence of labelled vertices {(vi1, y1), (vi2, y2), . . .}, the
trial sequence, is presented to a learning algorithm such that, on sight of each vertex vit, the learner
makes a prediction ŷt for the label value, after which the correct label is revealed. This feedback
information is then used by the learning algorithm to improve its performance on further examples.
We analyse the performance of a learning algorithm in the mistake bound framework [12]: the aim
is to minimise the maximum possible cumulative number of mistakes made on the training sequence.
A graph G = (V, E) is a collection of vertices V = {v1, . . . , vn} joined by connecting (possibly
weighted) edges. Denote i ∼ j whenever vi and vj are connected, so that E = {(i, j) : i ∼ j} is the
set of unordered pairs of connected vertex indices. Associated with each edge (i, j) ∈ E is a weight
Aij, so that A is the n × n symmetric adjacency matrix. We say that G is unweighted if Aij = 1
for every (i, j) ∈ E and 0 otherwise. In this paper, we consider only connected graphs, that is,
graphs such that there exists a path between any two vertices. The Laplacian G of a graph G is the
n × n matrix G = D − A, where D is the diagonal degree matrix such that Dii = Σj Aij. The
quadratic form associated with the Laplacian relates to the cut size of graph labellings.
Definition 1. Given a labelling u ∈ IR^n of G = (V, E) we define the cut size of u by

    ΦG(u) = (1/4) u^T G u = (1/4) Σ_{(i,j)∈E} Aij (ui − uj)².    (1)

In particular, if u ∈ {−1, 1}^n we say that a cut occurs on edge (i, j) if ui ≠ uj and ΦG(u) measures
the number of cuts.
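For concreteness, a short sketch of Definition 1 in Python (NumPy assumed): the factor 1/4 makes each disagreeing ±1-labelled edge count exactly once in an unweighted graph.

    import numpy as np

    def cut_size(A, u):
        """Cut size PhiG(u) = (1/4) u^T G u, with G = D - A."""
        A = np.asarray(A, dtype=float)
        u = np.asarray(u, dtype=float)
        G = np.diag(A.sum(axis=1)) - A   # graph Laplacian
        return 0.25 * u @ G @ u

    # For u in {-1, +1}^n on an unweighted graph, cut_size(A, u)
    # equals the number of edges whose endpoints disagree.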
We evaluate the performance of prediction algorithms in terms of the cut size and the resistance
diameter of the graph. There is an established natural connection between graphs and resistive
networks where each edge (i, j) ∈ E is viewed as a resistor with resistance 1/Aij [4]. Thus the
effective resistance rG(vi, vj) between vertex vi and vertex vj is the potential difference needed to induce a
unit current flow between vi and vj. The effective resistance may be computed by the formula [11]

    rG(vi, vj) = (ei − ej)^T G^+ (ei − ej),    (2)

where ^+ denotes the pseudoinverse and e1, . . . , en are the canonical basis vectors of IR^n. The
resistance diameter of a graph RG := max_{vi,vj∈V} rG(vi, vj) is the maximum effective resistance
between any pair of vertices on the graph.
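Equation (2) translates directly into code; a sketch using the Moore-Penrose pseudoinverse:

    import numpy as np

    def effective_resistance(A, i, j):
        """rG(vi, vj) = (ei - ej)^T G^+ (ei - ej), Equation (2)."""
        A = np.asarray(A, dtype=float)
        G = np.diag(A.sum(axis=1)) - A
        Gplus = np.linalg.pinv(G)        # pseudoinverse G^+
        e = np.zeros(len(A))
        e[i], e[j] = 1.0, -1.0           # the vector e_i - e_j
        return float(e @ Gplus @ e)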
3 Limitations of online minimum semi-norm interpolation
As we will show, it is possible to develop online algorithms for predicting the labelling of a graph
which have a mistake bound that is a logarithmic function of the number of vertices. Conversely, we
first highlight a deficiency in a standard Laplacian based method for predicting a graph labelling.
Given a partially labelled graph G = (V, E) with |V| = n (that is, such that for some ℓ ≤ n,
yℓ ∈ {−1, 1}^ℓ is a labelling defined on the ℓ vertices Vℓ = {vi1, vi2, . . . , viℓ}), the minimum
semi-norm interpolant is defined by

    ȳ = argmin{u^T G u : u ∈ IR^n, uik = yk, k = 1, . . . , ℓ}.

We then predict using ŷi = sgn(ȳi), for i = 1, . . . , n.
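The interpolant can be computed by solving a linear system over the unlabelled block of the Laplacian (the harmonic solution used in the proof below); a minimal sketch, assuming a connected graph and at least one labelled vertex:

    import numpy as np

    def min_seminorm_interpolant(A, labelled, y):
        """Return ybar = argmin{u^T G u : u agrees with y on `labelled`}.

        `labelled` lists the labelled vertex indices, `y` their +1/-1
        labels; predictions are sign(ybar)."""
        A = np.asarray(A, dtype=float)
        G = np.diag(A.sum(axis=1)) - A
        L = np.asarray(labelled)
        U = np.setdiff1d(np.arange(len(A)), L)   # unlabelled vertices
        ybar = np.zeros(len(A))
        ybar[L] = y
        # harmonic condition on the unlabelled block: G_UU u_U = -G_UL y_L
        ybar[U] = np.linalg.solve(G[np.ix_(U, U)],
                                  -G[np.ix_(U, L)] @ np.asarray(y, float))
        return ybar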
The common justification behind the above learning paradigm [14, 2] is that minimizing the cut (1)
encourages neighbouring vertices to be similarly labelled. However, we now demonstrate that in the
online setting such a regime will perform poorly on certain graph constructions: there exists a trial
sequence on which the method will make at least Ω(√n) mistakes.
Definition 2. An octopus graph of size d is defined to be d path graphs (the tentacles) of length d
(that is, with d + 1 vertices) all adjoined at a common end vertex, to which a further single head
vertex is attached, so that n = |V| = d² + 2. This corresponds to the graph O1,d,d discussed in [8].
Theorem 3. Let G = (V, E) be an octopus graph of size d and y = (y1, . . . , y|V|) the labelling
such that yi = 1 if vi is the head vertex and yi = −1 otherwise. There exists a trial sequence for
which online minimum semi-norm interpolation makes Ω(√|V|) mistakes.
Proof. Let the first query vertex be the head vertex, and let the end vertex of a tentacle be queried at
each subsequent trial. We show that this strategy forces at least d mistakes. The solution to the minimum semi-norm interpolation with boundary values problem is precisely the harmonic solution [4]
ȳ (that is, for every unlabeled vertex vj, Σ_{i=1}^{n} Aij(ȳi − ȳj) = 0). If the graph is connected ȳ is
unique and the graph labelling problem is identical to that of identifying the potential at each vertex
of a resistive network defined on the graph where each edge corresponds to a resistor of 1 unit; the
harmonic principle corresponds to Kirchoff's current law in this case. Using this analogy, suppose
that the end points of k < d tentacles are labelled and that the end vertex vq of an unlabelled tentacle
is queried. Suppose a current of kι flows from the head to the body of the graph. By Kirchoff's
law, a current of ι flows along each labelled tentacle (in order to obey the harmonic principle at
every vertex it is clear that no current flows along the unlabelled tentacles). By Ohm's law ι = 2/(d + k).
Minimum semi-norm interpolation therefore results in the solution

    ȳq = 1 − 2k/(d + k) ≤ 0 iff k ≥ d.

Hence the minimum semi-norm solution predicts incorrectly whenever k < d and the algorithm
makes at least d mistakes.
The above demonstrates a limitation in the method of online Laplacian minimum semi-norm interpolation for predicting a graph labelling: the mistake bound can be proportional to the square root
of the number of data points. We solve these problems in the following section.
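The closed form ȳq = 1 − 2k/(d + k) can be checked numerically with the interpolant sketched in Section 3; the octopus construction and vertex indexing below are ours:

    import numpy as np

    def octopus_adjacency(d):
        """Octopus of size d: d tentacles of d vertices each, joined at a
        body vertex, with a head vertex attached to the body."""
        n = d * d + 2
        body, head = d * d, d * d + 1
        A = np.zeros((n, n))
        A[body, head] = A[head, body] = 1.0
        for t in range(d):
            chain = list(range(t * d, (t + 1) * d))  # chain[0] is the free end
            for a, b in zip(chain, chain[1:]):
                A[a, b] = A[b, a] = 1.0
            A[chain[-1], body] = A[body, chain[-1]] = 1.0
        return A, head

    d, k = 5, 3
    A, head = octopus_adjacency(d)
    ends = [t * d for t in range(d)]
    ybar = min_seminorm_interpolant(A, [head] + ends[:k], [1.0] + [-1.0] * k)
    assert abs(ybar[ends[k]] - (1 - 2 * k / (d + k))) < 1e-8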
4 A linear graph embedding
We demonstrate a method of embedding data represented as a connected graph G into a path graph,
which we call a spine of G, and which partially preserves the structure of G. Let Pn be the set of path graphs
with n vertices. We would like to find a path graph with the same vertex set as G which solves

    min_{P∈Pn} max_{u∈{−1,1}^n} ΦP(u) / ΦG(u).

If a Hamiltonian path H of G (a path on G which visits each vertex precisely once) exists, then
the approximation ratio is ΦH(u)/ΦG(u) ≤ 1. The problem of finding a Hamiltonian path is NP-complete,
however, and such a path is not guaranteed to exist. As we shall see, a spine S of G may be found
efficiently and satisfies ΦS(u)/ΦG(u) ≤ 2.
We now detail the construction of a spine of a graph G = (V, E), with |V| = n. Starting from
any node, G is traversed in the manner of a depth-first search (that is, each vertex is fully explored
before backtracking to the last unexplored vertex), and an ordered list VL = {vl1, vl2, . . . , vl2m+1}
of the vertices (m ≤ |E|) in the order that they are visited is formed, allowing repetitions when
a vertex is visited more than once. Note that each edge in EG is traversed no more than twice
when forming VL. Define an edge multiset EL = {(l1, l2), (l2, l3), . . . , (l2m, l2m+1)}, the set
of pairs of consecutive vertices in VL. Let u be an arbitrary labelling of G and denote, as usual,
ΦG(u) = (1/4) Σ_{(i,j)∈EG} (ui − uj)² and ΦL(u) = (1/4) Σ_{(i,j)∈EL} (ui − uj)². Since the multiset EL
contains every element of EG no more than twice, ΦL(u) ≤ 2ΦG(u).
We then take any subsequence VL′ of VL containing every vertex in V exactly once. A spine
S = (V, ES) is a graph formed by connecting each vertex in V to its immediate neighbours in
the subsequence VL′ with an edge. Since a cut occurs between connected vertices vi and vj in S
only if a cut occurs on some edge in EL located between the corresponding vertices in the list VL,
we have

    ΦS(u) ≤ ΦL(u) ≤ 2ΦG(u).    (3)
Thus we have reduced the problem of learning the cut on a generic graph to that of learning the
cut on a path graph. In the following we see that the 1-nearest neighbour (1-NN) algorithm is a Bayes
optimal algorithm for this problem. Note that the 1-NN algorithm does not perform well on general
graphs; on the octopus graph discussed above, for example, it can make at least Ω(√n) mistakes,
and even Ω(n) mistakes on a related graph construction [8].
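A sketch of the spine construction of Section 4: list each vertex at its first visit during a depth-first traversal (one valid choice of the subsequence VL′), and connect consecutive entries.

    def spine_order(adj, start=0):
        """Vertices of a connected graph in depth-first first-visit order;
        consecutive pairs of the returned list are the spine edges.

        `adj` maps each vertex to an iterable of its neighbours."""
        order, seen, stack = [], set(), [start]
        while stack:
            v = stack.pop()
            if v in seen:
                continue
            seen.add(v)
            order.append(v)
            stack.extend(u for u in adj[v] if u not in seen)
        return order

    # spine edges: list(zip(order, order[1:])), computed in O(|E|) time.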
5 Predicting with a spine
We consider implementing the 1-NN algorithm on a path graph and demonstrate that it achieves a
mistake bound which is logarithmic in the length of the line. Let G = (V, E) be a path graph, where
V = {v1, v2, . . . , vn} is the set of vertices and E = {(1, 2), (2, 3), . . . , (n − 1, n)}. The nearest
neighbour algorithm, in the standard online learning framework described above, attempts to predict
a graph labelling by producing, for each query vertex vit, the prediction ŷt which is consistent with
the label of the closest labelled vertex (and predicts randomly in the case of a tie).
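A sketch of this online predictor, keeping the labelled positions in a sorted list so that each prediction is a binary search. (Insertion into a Python list is O(n); the self-balancing tree discussed at the end of this section would make updates O(ln n) as well.)

    import bisect
    import random

    class PathNearestNeighbour:
        """Online 1-NN prediction on a path graph with integer vertices."""

        def __init__(self):
            self.positions = []   # sorted labelled vertex indices
            self.labels = {}      # vertex index -> +1/-1 label

        def predict(self, i):
            if not self.positions:
                return random.choice((-1, 1))
            k = bisect.bisect_left(self.positions, i)
            left = self.positions[k - 1] if k > 0 else None
            right = self.positions[k] if k < len(self.positions) else None
            if left is None:
                return self.labels[right]
            if right is None or i - left < right - i:
                return self.labels[left]
            if right - i < i - left:
                return self.labels[right]
            return random.choice((self.labels[left], self.labels[right]))

        def update(self, i, y):
            if i not in self.labels:
                bisect.insort(self.positions, i)
            self.labels[i] = y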
Theorem 4. Given the task of predicting the labelling of any unweighted, n-vertex path graph P in
the online framework, the number of mistakes, M, incurred by the 1-NN algorithm satisfies

    M ≤ ΦP(u) log2((n − 1)/ΦP(u)) + ΦP(u)/ln 2 + 1,    (4)

where u ∈ {−1, 1}^n is any labelling consistent with the trial sequence.
Proof. We shall prove the result by noting that the Halving algorithm [1] (under certain conditions
on the probabilities assigned to each hypothesis) implements the nearest neighbour algorithm on a
path graph. Given any input space X and finite binary concept class C ⊆ {−1, 1}^|X|, the Halving
algorithm learns any target concept c* ∈ C as follows. Each hypothesis c ∈ C is given an associated
probability p(c). A sequence of labelled examples {(x1, y1), . . . , (x_{t−1}, y_{t−1})} ⊂ X × {−1, 1} is
revealed in accordance with the usual online framework. Let Ft be the set of feasible hypotheses at
trial t; Ft = {c : c(xs) = ys ∀s < t}. Given an unlabelled example xt ∈ X at trial t, the predicted
label ŷt is that which agrees with the majority vote, that is, such that

    Σ_{c∈Ft : c(xt)=ŷt} p(c) / Σ_{c∈Ft} p(c) > 1/2

(and it predicts randomly if this is equal to 1/2). It is well known [1] that the Halving algorithm makes at
most MH mistakes with

    MH ≤ log2(1/p(c*)).    (5)
We now define a probability distribution over the space of all labellings u ∈ {−1, 1}^n of P such that
the Halving algorithm with these probabilities implements the nearest neighbour algorithm. Let a cut
occur on any given edge with probability α, independently of all other cuts; Prob(ui+1 ≠ ui) = α
for all i < n. The position of all cuts fixes the labelling up to flipping every label, and each of these
two resulting possible arrangements is equally likely. This recipe associates with each possible
labelling u ∈ {−1, 1}^n a probability p(u) which is a function of the labelling's cut size,

    p(u) = (1/2) α^{ΦP(u)} (1 − α)^{n−1−ΦP(u)}.    (6)

This induces a full joint probability distribution on the space of vertex labels. In fact (6) is a Gibbs
measure and as such defines a Markov random field over the space of vertex labels [10]. The mass
function p therefore satisfies the Markov property

    p(ui = σ | uj = σj ∀j ≠ i) = p(ui = σ | uj = σj ∀j ∈ Ni),    (7)
where here Ni is the set of vertices neighbouring vi, those connected to vi by an edge. We will
give an equivalent Markov property which allows a more general conditioning to reduce to that over
boundary vertices.
Definition 5. Given a path graph P = (V, E), a set of vertices V′ ⊆ V and a vertex vi ∈ V, we
define the boundary vertices vℓ, vr (either of which may be vacuous) to be the two vertices in V′ that
are closest to vi in each direction along the path; its nearest neighbours in each direction.
The distribution induced by (6) satisfies the following Markov property; given a partial labelling of
P defined on a subset V′ ⊆ V, the label of any vertex vi is independent of all labels on V′ except
those on the vertices vℓ, vr (either of which could be vacuous):

    p(ui = σ | uj = σj, ∀j : vj ∈ V′) = p(ui = σ | uℓ = σℓ, ur = σr).    (8)
Given the construction of the probability distribution formed by independent cuts on graph edges,
we can evaluate conditional probabilities. For example, p(uj = σ | uk = σ) is the probability of an
even number of cuts between vertex vj and vertex vk. Since cuts occur with probability α and there
are (|k−j| choose s) possible arrangements of s cuts, we have

    p(uj = σ | uk = σ) = Σ_{s even} (|k−j| choose s) α^s (1 − α)^{|k−j|−s} = (1/2)(1 + (1 − 2α)^{|k−j|}).    (9)

Likewise we have that

    p(uj ≠ σ | uk = σ) = Σ_{s odd} (|k−j| choose s) α^s (1 − α)^{|k−j|−s} = (1/2)(1 − (1 − 2α)^{|k−j|}).    (10)

Note also that for any single vertex we have p(ui = σ) = 1/2 for σ ∈ {−1, 1}.
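The parity identities (9) and (10) are easy to verify numerically against the binomial sums; a quick check:

    from math import comb

    def parity_prob(alpha, m, even=True):
        """Probability of an even (or odd) number of cuts across m
        independent edges, each cut with probability alpha."""
        return sum(comb(m, s) * alpha**s * (1 - alpha)**(m - s)
                   for s in range(m + 1) if (s % 2 == 0) == even)

    alpha, m = 0.3, 7
    assert abs(parity_prob(alpha, m, True) - 0.5 * (1 + (1 - 2*alpha)**m)) < 1e-12
    assert abs(parity_prob(alpha, m, False) - 0.5 * (1 - (1 - 2*alpha)**m)) < 1e-12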
Lemma 6. Given the task of predicting the labelling of an n-vertex path graph online, the Halving
algorithm, with a probability distribution over the labellings defined as in (6) and such that 0 <
α < 1/2, implements the nearest neighbour algorithm.
Proof. Suppose that t − 1 trials have been performed so that we have a partial labelling of a subset
V′ ⊆ V, {(vi1, y1), (vi2, y2), . . . , (vi_{t−1}, y_{t−1})}. Suppose the label of vertex vit is queried, so that
the Halving algorithm makes the following prediction ŷt for vertex vit: ŷt = y if p(uit = y | uij =
yj ∀ 1 ≤ j < t) > 1/2, ŷt = −y if p(uit = y | uij = yj ∀ 1 ≤ j < t) < 1/2 (and it predicts randomly
if this probability is equal to 1/2). We first consider the case where the conditional labelling includes
vertices on both sides of vit. We have, by (8), that

    p(uit = y | uij = yj ∀ 1 ≤ j < t) = p(uit = y | uℓ = yτ(ℓ), ur = yτ(r))
        = p(uℓ = yτ(ℓ) | ur = yτ(r), uit = y) p(ur = yτ(r), uit = y) / p(uℓ = yτ(ℓ), ur = yτ(r))
        = p(uℓ = yτ(ℓ) | uit = y) p(ur = yτ(r) | uit = y) / p(uℓ = yτ(ℓ) | ur = yτ(r)),    (11)

where vℓ and vr are the boundary vertices and τ(ℓ) and τ(r) are the trials at which vertices vℓ and vr
were queried, respectively. We can evaluate the right hand side of this expression using (9, 10). To
show equivalence with the nearest neighbour method whenever α < 1/2, we have from (9, 10, 11)

    p(uit = y | uℓ = y, ur ≠ y) = (1 + (1 − 2α)^{|ℓ−it|})(1 − (1 − 2α)^{|r−it|}) / (2(1 − (1 − 2α)^{|ℓ−r|})),

which is greater than 1/2 if |ℓ − it| < |r − it| and less than 1/2 if |ℓ − it| > |r − it|. Hence, this
produces predictions exactly in accordance with the nearest neighbour scheme. We also have more
simply that for all it, ℓ and r and α < 1/2,

    p(uit = y | uℓ = y, ur = y) > 1/2, and p(uit = y | uℓ = y) > 1/2.

This proves the lemma for all cases.
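The two-boundary posterior of Equation (11) can be evaluated directly from (9) and (10); the sketch below makes the nearest-neighbour behaviour visible (names are ours):

    def pair_prob(alpha, same, m):
        """p of agreement (Eq. 9) or disagreement (Eq. 10) at distance m."""
        s = (1 - 2 * alpha) ** m
        return 0.5 * (1 + s) if same else 0.5 * (1 - s)

    def boundary_posterior(alpha, i, l, r, yl, yr, y=1):
        """p(ui = y | ul = yl, ur = yr) on a path, via Equation (11)."""
        num = (pair_prob(alpha, yl == y, abs(i - l)) *
               pair_prob(alpha, yr == y, abs(i - r)))
        return num / pair_prob(alpha, yl == yr, abs(l - r))

    # With yl = +1 and yr = -1 the posterior for y = +1 exceeds 1/2
    # exactly when i is closer to l than to r, i.e. the 1-NN rule.
    assert boundary_posterior(0.2, i=3, l=1, r=9, yl=1, yr=-1) > 0.5
    assert boundary_posterior(0.2, i=8, l=1, r=9, yl=1, yr=-1) < 0.5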
A direct application of the Halving algorithm mistake bound (5) now gives

    M ≤ log2(1/p(u)) = log2( 2 / (α^{ΦP(u)} (1 − α)^{n−1−ΦP(u)}) ),

where u is any labelling consistent with the trial sequence. We choose α = min(ΦP(u)/(n − 1), 1/2)
(note that the bound is vacuous when ΦP(u)/(n − 1) > 1/2 since M is necessarily upper bounded by n), giving

    M ≤ ΦP(u) log2((n − 1)/ΦP(u)) + (n − 1 − ΦP(u)) log2(1 + ΦP(u)/(n − 1 − ΦP(u))) + 1
      ≤ ΦP(u) log2((n − 1)/ΦP(u)) + ΦP(u)/ln 2 + 1.

This proves the theorem.
The nearest neighbour algorithm can predict the labelling of any graph G = (V, E) by first transferring the data representation to that of a spine S of G, as presented in Section 4. We now apply the
above argument to this method and immediately deduce our first main result.
Theorem 7. Given the task of predicting the labelling of any unweighted, connected, n-vertex graph
G = (V, E) in the online framework, the number of mistakes, M, incurred by the nearest neighbour
algorithm operating on a spine S of G satisfies

    M ≤ 2ΦG(u) max(0, log2((n − 1)/(2ΦG(u)))) + 2ΦG(u)/ln 2 + 1,    (12)

where u ∈ {−1, 1}^n is any labelling consistent with the trial sequence.
Proof. Theorem 4 gives bound (4) for predicting on any path, hence M ≤ ΦS(u) log2((n − 1)/ΦS(u)) +
ΦS(u)/ln 2 + 1. Since this is an increasing function of ΦS(u) for ΦS(u) ≤ n − 1 and is vacuous at
ΦS(u) ≥ n − 1 (M is necessarily upper bounded by n), we upper bound by substituting ΦS(u) ≤
2ΦG(u) (equation (3)).
We observe that predicting with the spine is a minimax improvement over Laplacian minimal semi-norm interpolation. Recall Theorem 3: there we showed that there exists a trial sequence such that
Laplacian minimal semi-norm interpolation incurs Ω(√n) mistakes. In fact this trivially generalizes
to Ω(√(ΦG(u) n)) mistakes by creating a colony of ΦG(u) octopi and then identifying each previously
separate head vertex as a single central vertex. The upper bound (12) is smaller than the prior lower
bound.
The computational complexity for this algorithm is O(|E| + |V| ln |V|) time. We compute the spine
in O(|E|) time by simply listing vertices in the order in which they are first visited during a depth-first search traversal of G. Using online 1-NN requires O(|V| ln |V|) time to predict an arbitrary
vertex sequence using a self-balancing binary search tree (e.g., a red-black tree), as the insertion of
each vertex into the tree and determination of the nearest left and right neighbour is O(ln |V|).
6 Prediction with a binary support tree
The Pounce online label prediction algorithm [7] is designed to exploit cluster structure of a graph
G = (V, E) and achieves the following mistake bound:

    M ≤ N(X, ρ, rG) + 4ΦG(u)ρ + 1,    (13)

for any ρ > 0. Here, u ∈ IR^n is any labelling consistent with the trial sequence, X =
{vi1, vi2, . . .} ⊆ V is the set of inputs and N(X, ρ, rG) is a covering number: the minimum
number of balls of resistance diameter ρ (see Section 2) required to cover X. The mistake bound
(13) can be preferable to (12) whenever the inputs are sufficiently clustered and so has a cover of
small diameter sets. For example, consider two (m + 1)-cliques, one labeled "+1", one "−1", with
cm arbitrary interconnecting edges (c ≥ 1); here the bound (12) is vacuous while (13) is M ≤ 8c + 3
(with ρ = 2/m, N(X, ρ, rG) = 2, and ΦG(u) = cm). An input space V may have both local cluster structure yet have a large diameter. Imagine a "universe" such that points are distributed into
many dense clusters such that some sets of clusters are tightly packed but overall the distribution is
quite diffuse. A given "problem" X ⊆ V may then be centered on a few clusters or alternatively
encompass the entire space. Thus, for practical purposes, we would like a prediction algorithm
which achieves the "best of both worlds", that is a mistake bound which is no greater, in order of
magnitude, than the maximum of (12) and (13). The rest of this paper is directed toward this goal.
We now introduce the notion of a binary support tree, formalise the Pounce method in the support tree
setting and then prove the desired result.
Definition 8. Given a graph G = (V, E), with |V | = n, and spine S, we define a binary support tree
of G to be any binary tree T = (VT , ET ) of least possible depth, D, whose leaves are the vertices
of S, in order. Note that D < log2(n) + 1.
We show that there is a weighting of the support tree which ensures that the resistance diameter of
the support tree is small, but also such that any labelling of the leaf vertices can be extended to the
support tree such that its cut size remains small. This enables effective learning via the support tree.
A related construction has been used to build preconditioners for solving linear systems [6].
Lemma 9. Given any spine graph S = (V, E) with |V| = n and labelling u ∈ {−1, 1}^n, with
support tree T = (VT, ET), there exists a weighting A of T and a labelling ũ ∈ [−1, 1]^|VT|
of T such that ũ and u are identical on V, ΦT(ũ) < ΦS(u) and RT ≤ (log2 n + 1)(log2 n +
4)(log2(log2 n + 2))².
Proof. Let vr be the root vertex of T. Suppose each edge (i, j) ∈ ET has a weight Aij, which
is a function of the edge's depth d = max{dT(vi, vr), dT(vj, vr)}, Aij = W(d), where dT(v, v′)
is the number of edges in the shortest path from v to v′. Consider the unique labelling ũ such
that, for 1 ≤ i ≤ n, we have ũi = ui, and such that for every other vertex vp ∈ VT with child
vertices vc1, vc2 we have ũp = (ũc1 + ũc2)/2, or ũp = ũc in the case where vp has only one child, vc.
Suppose the edges (p, c1), (p, c2) ∈ ET are at some depth d in T, and let V′ ⊆ V correspond to
the leaf vertices of T descended from vp. Define ΦS(uV′) to be the cut of u restricted to vertices
in V′. If ũc1 = ũc2 then (ũp − ũc1)² + (ũp − ũc2)² = 0 ≤ 2ΦS(uV′), and if ũc1 ≠ ũc2 then
(ũp − ũc1)² + (ũp − ũc2)² ≤ 2 ≤ 2ΦS(uV′). Hence

    W(d)((ũp − ũc1)² + (ũp − ũc2)²) ≤ 2W(d)ΦS(uV′)    (14)

(a similar inequality is trivial in the case that vp has only one child). Since the sets of leaf descendants
of all vertices at depth d form a partition of V, summing (14) first over all parent nodes at a given
depth and then over all integers d ∈ [1, D] gives

    4ΦT(ũ) ≤ 2 Σ_{d=1}^{D} W(d) ΦS(u).    (15)
We then choose

    W(d) = 1 / ((d + 1)(log2(d + 1))²)    (16)

and note that

    Σ_{d=1}^{∞} 1/((d + 1)(log2(d + 1))²) ≤ 1/2 + (ln 2)² ∫_2^∞ dx/(x (ln x)²) = 1/2 + ln 2 < 2.

Further, RT = 2 Σ_{d=1}^{D} (d + 1)(log2(d + 1))² ≤ D(D + 3)(log2(D + 1))², and so D ≤ log2 n + 1
gives the resistance bound.
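A numeric sketch of the weighting (16) and of the two facts used in the proof (the weight sum is below 2, and the tree resistance grows only polylogarithmically):

    from math import log2

    def W(d):
        """Support-tree edge weight at depth d, Equation (16)."""
        return 1.0 / ((d + 1) * log2(d + 1) ** 2)

    def tree_resistance(D):
        """RT = 2 * sum_{d=1}^{D} (d + 1) * log2(d + 1)^2, the resistance
        of a leaf-to-leaf path through the root."""
        return 2 * sum((d + 1) * log2(d + 1) ** 2 for d in range(1, D + 1))

    assert sum(W(d) for d in range(1, 10**5)) < 2   # weight sum < 2
    D = 21                                          # e.g. a tree of depth 21
    assert tree_resistance(D) <= D * (D + 3) * log2(D + 1) ** 2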
Definition 10. Given the task of predicting the labelling of an unweighted graph G = (V, E), the
augmented Pounce algorithm proceeds as follows: an augmented graph G̃ = (Ṽ, Ẽ) is formed
by attaching a binary support tree of G, with weights defined as in (16), to G; formally, let T =
(VT, ET) be such a binary support tree of G, then G̃ = (VT, E ∪ ET). The Pounce algorithm is
then used to predict the (partial) labelling defined on G̃.
Theorem 11. Given the task of predicting the labelling of any unweighted, connected, n-vertex
graph G = (V, E) in the online framework, the number of mistakes, M, incurred by the augmented
Pounce algorithm satisfies

    M ≤ min_{ρ>0} {N(X, ρ, rG) + 12ΦG(u)ρ} + 1,    (17)

where N(X, ρ, rG) is the covering number of the input set X = {vi1, vi2, . . .} ⊆ V relative to
the resistance distance rG of G and u ∈ IR^n is any labelling consistent with the trial sequence.
Furthermore,

    M ≤ 12ΦG(u)(log2 n + 1)(log2 n + 4)(log2(log2 n + 2))² + 2.    (18)
Proof. Let u be some labelling consistent with the trial sequence. By (3) we have that ΦS(u) ≤
2ΦG(u) for any spine S of G. Moreover, by the arguments in Lemma 9 there exists some labelling
ũ of the weighted support tree T of G, consistent with u on V, such that ΦT(ũ) < ΦS(u). We then
have

    ΦG̃(ũ) = ΦT(ũ) + ΦG(u) < 3ΦG(u).    (19)

By Rayleigh's monotonicity law the addition of the support tree does not increase the resistance
between any vertices on G, hence

    N(X, ρ, rG̃) ≤ N(X, ρ, rG).    (20)

Combining inequalities (19) and (20) with the Pounce bound (13) for predicting ũ on G̃ yields

    M ≤ N(X, ρ, rG̃) + 4ΦG̃(ũ)ρ + 1 ≤ N(X, ρ, rG) + 12ΦG(u)ρ + 1,

which proves (17). We prove (18) by covering G̃ with a single ball, so that M ≤ 4ΦG̃(ũ)RG̃ + 2 ≤
12ΦG(u)RT + 2, and the result follows from the bound on RT in Lemma 9.
7 Conclusion
We have explored a deficiency with existing online techniques for predicting the labelling of a graph.
As a solution, we have presented an approximate cut-preserving embedding of any graph G =
(V, E) into a simple path graph, which we call a spine, such that an implementation of the 1-nearest-neighbour algorithm is an efficient realisation of a Bayes optimal classifier. This therefore
achieves a mistake bound which is logarithmic in the size of the vertex set for any graph, and the
complexity of our algorithm is O(|E| + |V| ln |V|). We further applied the insights gained to
a second algorithm, an augmentation of the Pounce algorithm, which achieves a polylogarithmic
performance guarantee, but can further take advantage of clustered data, in which case its bound is
relative to any cover of the graph.
References
[1] J. M. Barzdin and R. V. Frievald. On the prediction of general recursive functions. Soviet Math. Doklady, 13:1224–1228, 1972.
[2] M. Belkin and P. Niyogi. Semi-supervised learning on Riemannian manifolds. Machine Learning, 56:209–239, 2004.
[3] A. Blum and S. Chawla. Learning from labeled and unlabeled data using graph mincuts. In Proc. 18th International Conf. on Machine Learning, pages 19–26. Morgan Kaufmann, San Francisco, CA, 2001.
[4] P. Doyle and J. Snell. Random Walks and Electric Networks. Mathematical Association of America, 1984.
[5] J. Fakcharoenphol and B. Kijsirikul. Low congestion online routing and an improved mistake bound for online prediction of graph labeling. CoRR, abs/0809.2075, 2008.
[6] K. Gremban, G. Miller, and M. Zagha. Performance evaluation of a new parallel preconditioner. Parallel Processing Symposium, International, 0:65, 1995.
[7] M. Herbster. Exploiting cluster-structure to predict the labeling of a graph. In The 19th International Conference on Algorithmic Learning Theory, pages 54–69, 2008.
[8] M. Herbster and M. Pontil. Prediction on a graph with a perceptron. In B. Schölkopf, J. Platt, and T. Hoffman, editors, Advances in Neural Information Processing Systems 19, pages 577–584. MIT Press, Cambridge, MA, 2007.
[9] M. Herbster, M. Pontil, and L. Wainer. Online learning over graphs. In ICML '05: Proceedings of the 22nd International Conference on Machine Learning, pages 305–312, New York, NY, USA, 2005. ACM.
[10] R. Kinderman and J. L. Snell. Markov Random Fields and Their Applications. Amer. Math. Soc., Providence, RI, 1980.
[11] D. Klein and M. Randić. Resistance distance. Journal of Mathematical Chemistry, 12(1):81–95, 1993.
[12] N. Littlestone. Learning when irrelevant attributes abound: A new linear-threshold algorithm. Machine Learning, 2:285–318, 1988.
[13] K. Pelckmans and J. A. K. Suykens. An online algorithm for learning a labeling of a graph. In Proceedings of the 6th International Workshop on Mining and Learning with Graphs, 2008.
[14] X. Zhu, Z. Ghahramani, and J. Lafferty. Semi-supervised learning using Gaussian fields and harmonic functions. In 20th International Conference on Machine Learning (ICML 2003), pages 912–919, 2003.
2,730 | 3,476 | Effects of Stimulus Type and of Error-Correcting
Code Design on BCI Speller Performance
Jeremy Hill1
Jason Farquhar2
Felix Bießmann1,3
Suzanne Martens1
Bernhard Schölkopf1
1
Max Planck Institute for Biological Cybernetics
{firstname.lastname}@tuebingen.mpg.de
2
NICI, Radboud University, Nijmegen, The Netherlands
[email protected]
3
Dept of Computer Science, TU Berlin, Germany
Abstract
From an information-theoretic perspective, a noisy transmission system such as a
visual Brain-Computer Interface (BCI) speller could benefit from the use of error-correcting codes. However, optimizing the code solely according to the maximal minimum-Hamming-distance criterion tends to lead to an overall increase
in the frequency of target stimuli, and hence a significantly reduced average
target-to-target interval (TTI), leading to difficulties in classifying the individual
event-related potentials (ERPs) due to overlap and refractory effects. Clearly any
change to the stimulus setup must also respect the possible psychophysiological consequences. Here we report new EEG data from experiments in which we
explore stimulus types and codebooks in a within-subject design, finding an interaction between the two factors. Our data demonstrate that the traditional, row-column code has particular spatial properties that lead to better performance than
one would expect from its TTIs and Hamming-distances alone, but nonetheless
error-correcting codes can improve performance provided the right stimulus type
is used.
1 Introduction
The Farwell-Donchin speller [4], also known as the 'P300 speller,' is a Brain-Computer Interface
which enables users to spell words provided that they can see sufficiently well. This BCI determines
the intent of the user by recording and classifying his electroencephalogram (EEG) in response to
controlled stimulus presentations. Figure 1 shows a general P300 speller scheme. The stimuli are
intensifications of a number of letters which are organized in a grid and displayed on a screen. In a
standard setup, the rows and columns of the grid flash in a random order. The intensification of the
row or column containing the letter that the user wants to communicate is a target in a stimulus sequence and induces a different brain response than the intensification of the other rows and columns
(the non-targets). In particular, targets and non-targets are expected to elicit certain event-related
potential (ERP) components, such as the so-called P300, to different extents. By classifying the
epochs (i.e. the EEG segments following each stimulus event) into targets and non-targets, the target
row and column can be predicted, resulting in the identification of the letter of interest.
The classification process in the speller can be considered a noisy communication channel where
the sequence of EEG epochs is a modulated version of a bit string denoting the user's desired letter.
Figure 1: Schematic of the visual speller system, illustrating the relationship between the spatial
pattern of flashes and one possible codebook for letter transmission (flash rows then columns).
These bit strings or codewords form the rows of a binary codebook C, a matrix in which a 1 at
position (i, j) means the letter corresponding to row i flashed at time-step j, and a 0 indicates that it
did not. The standard row-column code, in which exactly one row or exactly one column flashes at
any one time, will be denoted RC. It is illustrated in figure 1.
A classifier decodes the transmitted information into an output bit string. In practice, the poor
signal-to-noise ratio of the ERPs hampers accurate classification of the epochs, so the output bit
string may differ from the transmitted bit string (decoding error). Also, the transmitted string may
differ from the corresponding row in the codebook due to modulation error, for example if the user
lost his attention and missed a stimulus event. Coding theory tells us that we can detect and correct
transmission and decoding errors by adding redundancy to the transmitted bit string. The Hamming
distance d is the number of bit positions that differ between two rows in a codebook. The minimum
Hamming distance dmin of all pairs of codewords is related to the error correcting abilities of the
code by e = (dmin ? 1)/2, where e is the maximum number of errors that a code can guarantee to
correct [9]. In general, we find the mean Hamming distance within a given codebook to be a rough
predictor of that codebook's performance.
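For illustration (this sketch is ours, not from the paper), the following Python snippet builds the 12-bit row-column codebook of a 6 x 6 grid and computes its minimum Hamming distance and the guaranteed number of correctable errors:

import itertools
import numpy as np

def row_column_codebook(rows=6, cols=6):
    # Bit i flashes row i; bit rows+j flashes column j.
    C = np.zeros((rows * cols, rows + cols), dtype=int)
    for r in range(rows):
        for c in range(cols):
            C[r * cols + c, r] = 1          # letter (r, c) is lit when its row flashes
            C[r * cols + c, rows + c] = 1   # ... and when its column flashes
    return C

def min_hamming_distance(C):
    return min(int(np.sum(a != b)) for a, b in itertools.combinations(C, 2))

C = row_column_codebook()
dmin = min_hamming_distance(C)
print('dmin =', dmin)                        # 2: letters sharing a row or column differ in 2 bits
print('guaranteed corrections e =', (dmin - 1) // 2)

Repeating the code R times multiplies all pairwise distances by R, reproducing the dmin = 2R figure quoted below.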
In the standard approach, redundancy is added by repeating the flashing of all rows and columns R
times. This leads to d = 4R between two letters not in the same row or column and dmin = 2R
between two letters in the same row or column. The RC code is a poor code in terms of minimum
Hamming distance: to encode 36 different letters in 12 bits, dmin = 4 is possible, and the achievable
dmin increases supra-linearly with the total code length L (for example, dmin = 10 is possible in
L = 24 bits, the time taken for R = 2 repeats of the RC code).
However, the codes with a larger dmin are characterized by an increased weight compared to the RC
code, i.e. the number of 1?s per bitstring is larger. As target stimulus events occur more frequently
overall, the expected target-to-target interval (TTI) decreases. One cannot approach codebook optimization, therefore, without asking what effect this might have on the signals we are trying to
measure and classify, namely the ERPs in response to the stimulus events.
The speller was originally derived from an 'oddball' paradigm, in which subjects are presented with
a repetitive sequence of events, some of which are targets requiring a different response from the
(more frequent) non-targets. The targets are expected to evoke a larger P300 than the non-targets.
It was generally accepted that the amplitude of the target P300 decreases when the percentage of
targets increases [3, 11]. However, more recently, it was suggested that the observed tendency of
the P300 amplitude (as measured by averaging over many targets) to decrease with increased target
probability may in fact be attributed to greater prevalence of shorter target-to-target intervals (TTI)
[6] rather than an overall effect of target frequency per se. In a different type of paradigm using only
targets, it was shown that at TTIs smaller than about 1 second, the P300 amplitude is significantly
decreased due to refractory effects [15]. Typical stimulus onset asynchronies (SOAs) in the oddball
paradigm are in the order of seconds since the P300 component shows up somewhere between 200
and 800 msec [12]. In spellers, small SOAs of about 100 msec are often used [8, 13] in order to
achieve high information transfer rates. Consequently, one can expect a significant ERP overlap
into the epoch following a target epoch, and since row flashes are often randomly mixed in with
column flashes, different targets may experience very different TTIs. For a 6 × 6 grid, the TTI
ranges from 1×SOA to 20×SOA, so targets may suffer to varying degrees from any refractory and
overlap effects.
In order to quantify the detrimental effects of short TTI we examined data from the two subjects in
dataset IIa+b from the BCI Competition III[2]. Following the classification procedures described in
section 3.3, we estimated classification performance on the individual epochs of both data sets by 10-fold cross-validation within each subject's data set. Binary (target versus non-target) classification
results were separated according to the time since the previous target (TPT); for the targets this
distance measure is equivalent to the TTI. The left panel of figure 4 shows the average classification
error as a function of TPT (averaged across both subjects; both subjects show the same qualitative
effect). Evidently, the target epochs with a TPT< 0.5 sec display a classification accuracy that
approximates chance performance. Consequently, the target epochs with TPT< 0.5 sec, constituting
about 20% of all target epochs in a RC code, do not appear to be useful for transmission [10].
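A minimal sketch of this kind of binning (our reconstruction; the array names and the SOA value are assumptions, not the competition scripts):

import numpy as np

def accuracy_by_tpt(onsets, is_target, correct, soa=0.175, max_bin=6):
    # Group target epochs by time since the previous target, in units of SOA.
    onsets, is_target, correct = map(np.asarray, (onsets, is_target, correct))
    tgt_times = onsets[is_target]
    tgt_correct = correct[is_target]
    bins = {}
    for t, ok in zip(tgt_times[1:], tgt_correct[1:]):
        prev = tgt_times[tgt_times < t].max()      # assumes strictly increasing onsets
        k = min(int(round((t - prev) / soa)), max_bin)
        bins.setdefault(k, []).append(ok)
    return {k: float(np.mean(v)) for k, v in sorted(bins.items())}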
Clearly, there is a potential conflict between information-theoretic factors, which favour increasing
the minimum Hamming distance and hence the overall proportion of target stimuli, and the detrimental psychophysiological effects of doing so.
In [7] we explored this trade-off to see whether an optimal compromise could be found. We initially
built a generative model of the BCI system, using the competition data illustrated in figure 4, and
then used this model to guide the generation and selection of speller code books. The results were
not unequivocally successful: though we were able to show effects of both TTIs and of the Hamming
distances in our codebooks, our optimized codebook performed no better than the row-column code
for the standard flash stimulus. However, our series of experiments involved another kind of stimulus, and the effect of our codebook manipulation was found to interact with the kind of stimulus
used.
The purpose of the current paper is two-fold:
1. to present new data which illustrate the stimulus/codebook interaction more clearly, and
demonstrate the advantage to be gained by the correct choice of stimulus together with an
error-correcting code.
2. to present evidence for another effect, which we had not previously considered in modelling
our subjects' responses, which may explain why row-column codes perform better than
expected: specifically, the spatial contiguity of rows and columns.
2 Decoding Framework
2.1 Probabilistic Approach to Classification and Decoding
We assume an N-letter alphabet Σ and an N-letter by L-bit codebook C. The basic demodulation and decoding procedure consists of finding the letter T̂ among the possible letters t ∈ Σ showing the largest probability Pr(t|X) of being the target letter T, given C and the measured brain signals X = [x_1, . . . , x_L], i.e.,
\[
\hat{T} \;=\; \operatorname*{argmax}_{t \in \Sigma} \Pr(t \mid X) \;=\; \operatorname*{argmax}_{t \in \Sigma} \frac{\Pr(X \mid t)\,\Pr(t)}{\Pr(X)} ,
\tag{1}
\]
where the second equality follows from Bayes' rule. A simple approach to decoding is to treat the
individual binary epochs, with binary labels c = (C_{t1}, . . . , C_{tL}), as independent. This allows us to factor Pr(X|t) into per-epoch probabilities Pr(x_j | c) for epoch indices j = 1, . . . , L, to give
\[
\Pr(t \mid X) \;=\; \frac{\Pr(t)}{\Pr(X)} \prod_{j=1}^{L} \Pr(x_j \mid c) \;=\; \frac{\Pr(t)}{\Pr(X)} \prod_{j=1}^{L} \frac{\Pr(C_{tj} \mid x_j)\,\Pr(x_j)}{\Pr(C_{tj})} \;=\; f_t(X) ,
\tag{2}
\]
where the second equality again follows from Bayes' rule.
This form of Bayesian decoding [5] forms the basis for our decoding scheme. We train a probabilistic
discriminative classifier, in particular a linear logistic regression (LR) classifier [1, pp. 82-85], to
estimate Pr(C_{tj} | x_j) = p_j in (2). As a result, we can obtain estimates of the probability Pr(t|X)
that a particular letter t corresponds to the user-selected codeword. Note that for decoding purposes
the terms Pr(X) and Pr(x_j) can be ignored as they are independent of t. Furthermore, the product ∏_j Pr(C_{tj}) depends only on the positive-class prior of the binary classifier, Pr(+). In fact, it is
easy to show that during decoding this term cancels out the effect of the binary prior, which may
therefore be set arbitrarily without affecting the decisions made by our decoder. The simplest thing
to do is to train classifiers with Pr (+) = 0.5, in which case the denominator term is constant for all
t.
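With a flat letter prior and Pr(+) = 0.5, the decoder in (1)-(2) therefore reduces to summing per-epoch log-odds over the positions where a codeword contains a 1. A possible sketch (ours, not the authors' implementation):

import numpy as np

def decode_letter(C, p):
    # C: N x L binary codebook; p: length-L array of Pr(target | epoch j) from the classifier.
    # Pr(t|X) is proportional to prod_j p_j^{C_tj} (1-p_j)^{1-C_tj}; taking logs, the
    # term sum_j log(1-p_j) is constant in t, so only the log-odds on the 1-bits matter.
    eps = 1e-12
    log_odds = np.log(p + eps) - np.log(1.0 - p + eps)
    return int(np.argmax(C @ log_odds))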
2.1.1 Codebook Optimization
We used a simple model of subjects' responses in each epoch in order to estimate the probability
of making a prediction error with the above decoding method. We used it to compute the codebook
loss, which is the sum of error probabilities, weighted by the probability of transmission of each
letter. This loss function was then minimized in order to obtain an optimized codebook.
Note that this approach is not a direct attempt to tackle the tendency for the performance of the
binary target-vs-nontarget classifier to deteriorate when TTI is short (although this would surely be
a promising alternative strategy). Instead, we take a 'normal' classifier, as susceptible to short-TTI
effects as classifiers in any other study, but try to estimate the negative impact of such effects, and
then find the best trade-off between avoiding short TTIs on the one hand, and having large Hamming
distances on the other hand.
Since our optimization did not result in a decisive gain in performance, we do not wish to emphasize
the details of the optimization methods here. However, for further details see the supplementary
material, or our tech report [7]. For the purposes of the current paper it is the properties of the
resulting codebooks that are important, rather than the precise criterion according to which they are
considered theoretically optimal. The codebooks themselves are described in section 3.1 and given
in full in the supplementary material.
3 EEG Experiments
We implemented a Farwell/Donchin-style speller, using a 6 × 6 grid of alphanumeric characters,
presented via an LCD monitor on a desk in a quiet office. Subjects each performed a single 3-hour
session during which their EEG signals were measured using a QuickAmp system (BrainProducts
GmbH) in combination with an Electro-Cap. The equipment was set up to measure 58 channels of
EEG, one horizontal EOG at the left eye, one bipolar vertical EOG signal, and a synchronization
signal from a light sensor attached to the display, all sampled at 250 Hz. We present results from 6
healthy subjects in their 20s and 30s (5 male, 1 female).
Two factors were compared in a fully within-subject design: codebook and stimulus. These are
described in the next two subsections.
3.1 Codebook Comparison
In total, we explored 5 different stimulus codes:
1. RCmix : the 12-bit row-column code, with the 12 bits randomly permuted in time (row events
mixed up randomly between column events) as in the competition data [2].
2. RCsep : the 12-bit row-column code, where the 6 rows are intensified in random order, and
then the 6 columns in random order.
3. RC*: this code was generated by taking code RCsep and randomizing the assignment between codewords and letters. Thus, the TTI and Hamming-distance content of the codebook remained identical to RCsep, but the spatial contiguity of the stimulus events was
broken: that is to say, it was no longer a coherent row or column that flashed during any
one epoch, but rather a collection of 6 apparently randomly scattered letters. However, if a
subject were to have 'tunnel vision' and be unable to see any letters other than the target,
this would be exactly equivalent to RCsep . As we shall see, for the purposes of the speller,
our subjects do not have tunnel vision.
code       |  L | dmin | E(d) | E(TTI) | E(#11) | Pr(1) |  L
RCmix ×2   | 24 |   4  |  6.9 |  5.4   |  0.4   | 0.17  | 0.60
RCsep ×2   | 24 |   4  |  6.9 |  6.0   |  0.1   | 0.17  | 0.56
RC* ×2     | 24 |   4  |  6.9 |  6.0   |  0.1   | 0.17  | 0.56
D10        | 24 |  10  | 11.5 |  2.5   |  3.1   | 0.38  | 0.54
D8opt      | 24 |   8  | 10.7 |  3.1   |  0.0   | 0.32  | 0.44

Table 1: Summary statistics for the 24-bit versions of the 5 codebooks used. E(#11) means the average number of consecutive target letters per codeword, and Pr(1) the proportion of targets. The final column L is our estimated probability of an error, according to the model (see supplementary material or [7]).
4. D10: a 24-bit code with the largest minimum Hamming distance we could achieve
(dmin = 10). To make it, our heuristic for codeword selection was to pick the codeword
with the largest minimum distance between it and all previously selected codewords. A
large number of candidate codebooks were generated this way, and the criteria for scoring
a completed codebook were (first) dmin and (second, to select among a large number of
dmin = 10 candidates) the lowest number of consecutive targets. A sketch of this greedy selection is given below the list.
5. D8opt : a 24-bit code optimized according to our model. The heuristic for greedy codeword
selection was the mean pairwise codebook loss w.r.t. previously selected codebook entries,
and the final scoring criterion was our overall codebook loss function.
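A simplified sketch of that greedy max-min-distance selection (the random candidate pool and its size are our choices; the paper's version also scores consecutive targets and reaches dmin = 10 with a larger search):

import numpy as np

def greedy_codebook(n_letters=36, length=24, n_candidates=5000, seed=0):
    rng = np.random.default_rng(seed)
    pool = rng.integers(0, 2, size=(n_candidates, length))
    book = pool[:1]
    for _ in range(n_letters - 1):
        # Hamming distance of every candidate to its nearest codeword already chosen
        d = (pool[:, None, :] != book[None, :, :]).sum(axis=2).min(axis=1)
        book = np.vstack([book, pool[np.argmax(d)]])
    return book

C = greedy_codebook()
dmin = min(int(np.sum(a != b)) for i, a in enumerate(C) for b in C[i + 1:])
print('achieved dmin:', dmin)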
3.2 Stimulus Comparison
Two stimulus conditions were compared. In both conditions, stimulus events were repeated with
a stimulus onset asynchrony (SOA) of 167 msec, which as close as our hardware could come to
recreating the 175-msec SOA of competition III dataset II.
Flashes: grey letters presented on a black background were flashed in a conventional manner, being
intensified to white for 33 msec (two video frames). An example is illustrated in the inset of the left
panel of figure 2.
Flips: each letter was superimposed on a small grey rectangle whose initial orientation was either
horizontal or vertical (randomly determined for each letter). Instead of the letter flashing, the rectangle flipped its orientation instantaneously by 90? . An example is illustrated in the inset of the
right panel of figure 2. Our previous experiments had led us to conclude that many subjects perform
significantly better with this stimulus, and find it more pleasant, than the flash. As we shall see, our
results from this stimulus condition support this finding, and indicate a potentially useful interaction
between stimulus type and codebook design.
3.3 Experimental Procedure
The experiment was divided into blocks, each block containing 20 trials with short (2?4 second)
rest pauses between trials. Each trial began with a red box which indicated to the subject which
letter (randomly chosen on each trial) they should attend to?this cue came on for a second, and was
removed 1 second before the start of the stimulus sequence. Subjects were instructed to count the
stimulus events at the target location, and not to blink, move or swallow during the sequence. The
sequence consisted of L = 72 stimulus events, their spatio-temporal arrangement being determined
by one of the five code conditions. The 12-bit RC codes were repeated six times in order to make the
length up to L = 72 (re-randomizing the row and column order on each repetition) and the 24-bit
optimized codes were repeated three times (reassigning the codewords between repetitions to ensure
maximal gap between targets at the end of one repetition and the beginning of the next) likewise to
ensure a total code length of 72 bits.
Each of the 5 code conditions occurred 4 times per block, the order of their occurrence being randomized. For a given block, the stimulus condition was held constant, but the stimulus type was
alternated between blocks. In total, each subject performed 16 blocks. Thus, in each of the 10
stimulus × code conditions, there were a total of 32 letter presentations or 2304 stimulus events.
3.3.1 Online Verification
Subjects did not receive feedback at the end of each trial. However, at the end of the experiment,
we gave the subject the opportunity to perform free-spelling in order to validate the system?s performance: we asked each subject whether they would prefer to spell with flips or flashes, and loaded
a classifier trained on all data from their preferred stimulus type into the system. Using the 72-bit
codebooks, all subjects were able to spell 5-15 letters with online performance ranging from 90 to
100%. Our data analysis below is restricted to leave-one-letter-out offline performance, excluding
the free-spelled letters.
3.4 Data Analysis
The 60-channel data, sampled at 250 Hz, were band-pass filtered between 0.1 and 8 Hz using a
FIR filter. The data were then cut into 600-msec (150-sample) epochs time-locked to the stimulus
events, and these were downsampled to 25 Hz. The data were then whitened in 60-dimensional
sensor space (by applying a symmetric spatial filtering matrix equal to the matrix-square-root of the
data covariance matrix, computed across all training trials and time-samples). Finally a linear LR
classifier was applied [1, pp. 82-85]. The classifier's regularization hyperparameter C was found by
10-fold cross-validation within the training set.
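A schematic reconstruction of this preprocessing chain (filter order, variable names and array shapes are assumptions; this is not the authors' code):

import numpy as np
from scipy.signal import firwin, filtfilt

def preprocess(eeg, onsets, fs=250, band=(0.1, 8.0), epoch_len=150, downsample=10):
    # eeg: channels x samples; onsets: stimulus sample indices (with room for a full epoch).
    b = firwin(numtaps=251, cutoff=band, pass_zero=False, fs=fs)   # FIR band-pass
    filtered = filtfilt(b, [1.0], eeg, axis=1)
    epochs = np.stack([filtered[:, t:t + epoch_len:downsample] for t in onsets])
    # Whiten in sensor space with a symmetric (inverse square root) spatial filter.
    n_ch = eeg.shape[0]
    flat = epochs.transpose(1, 0, 2).reshape(n_ch, -1)
    evals, evecs = np.linalg.eigh(np.cov(flat))
    W = evecs @ np.diag(1.0 / np.sqrt(np.maximum(evals, 1e-10))) @ evecs.T
    return np.einsum('ij,njt->nit', W, epochs)   # (events, channels, samples)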
Offline letter classification performance was assessed by a leave-one-letter-out procedure: for a
given code condition, each of the 32 letters was considered in turn, and a probabilistic prediction
was made of its binary epoch labels using the above procedure trained only on epochs from the other
31 letters. These probabilities were combined using the decoding scheme described in section 2.1
and a prediction was made of the transmitted letter. We varied the number of consecutive epochs of
the test letter that the decoder was allowed to use, from the minimum (12 or 24) up to the maximum
72. For each epoch of the left-out letter, we also recorded whether the binary classifier correctly
classified the epoch as a target or non-target.
4 Results and Discussion
Estimates of 36-class letter prediction performance are shown in figures 2 (averaged across subjects,
as a function of codeword length) and 3 (for each individual subject, presenting only the results
for 24-bit codewords). The performance of the binary classifier on individual epochs is shown in
figure 4.
[Figure 2 plot area: two panels, 'flashes' and 'flips'; y-axis '% letters correct'; x-axis 'length of code (epochs)', 12 to 72; one curve per codebook: RC*, RCmix, RCsep, D10, D8opt.]
Figure 2: Offline (leave-one-letter-out) 36-class prediction performance as a function of codeword
length (i.e. the number of consecutive epochs of the left-out letter that were used to make a prediction). Performance values (and standard-error bar heights) are averaged across the 6 subjects.
Our results indicated the following effects:
1. Using the Donchin flash stimulus, the deleterious effects of short TTIs were clear to see:
D10 performed far worse than the other codes despite its larger Hamming distances. In
both stimulus conditions, the averaged plots of figure 2 indicate that RCmix may also be
[Figure 3 plot area: six panels, 'subject 1' through 'subject 6'; y-axis '% letters correct'; x-axis 'codebook' (RC*, RCmix, RCsep, D10, D8opt); separate results for flashes and flips.]
Figure 3: Offline (leave-one-letter-out) 36-class prediction performance when decoding codewords
of length 24, for each of the subjects in each of the code conditions.
[Figure 4 plot area: three panels, 'competition III subjs IIa and IIb', 'our 6 subjects, flashes', 'our 6 subjects, flips'; y-axis '% epochs classified correctly (binary problem)'; x-axis 'epochs since previous target' (1, 2, 3, 4, 5, 6+, avg); curves for targets and non-targets.]
Figure 4: Illustration of effect of TPT on epoch classification performance, (left) in the data from
competition III dataset II; (middle) in our experiments, averaged across all subjects and code conditions for blocks in which the flash stimulus was used; (right) in our experiments, averaged across the
same subjects and code conditions, but for blocks in which the flip stimulus was used. The rightmost
column of each plot shows average classification accuracy across all epochs (remember that short
TTIs are relatively uncommon overall, and therefore downweighted in the average).
performing slightly less well than RCsep , which has longer TTIs. However, the latter effect
is not as large or as consistent across subjects as it was in our preliminary study [7].
2. Using the Donchin flash stimulus, our optimized code D8opt performs about as well as
traditional RC codes, but does not outperform them.
3. Generally, performance using the flip stimulus is better than with the flash stimulus.
4. Using the flip stimulus, both D8opt and D10 perform better than the RC codes, and they
perform roughly equally as well as each other. We interpret this interaction between stimulus type and code type as an indication that the flip stimulus may generate rather different
psychophysiological responses from the flash (perhaps stronger primary visual evoked potentials, in addition to the P300) of a kind which is less susceptible to short TTI (the
curves in the right panel of figure 4 being flatter than those in the middle panel). A comparative analysis of the spatial locations of discriminative sources in the two stimulus conditions is beyond the scope of the current short report.
5. Despite having identical TTIs and Hamming distances, RC* performs consistently worse
than RCsep , in both stimulus conditions.
In summary, we have obtained empirical support for the idea that TTI (finding #1), Hamming distance (finding #4) and stimulus type (finding #3) can all be manipulated to improve performance.
However, our initial attempt to find an optimal solution by balancing these effects was not successful
(finding #2). In the flash stimulus condition, the row-column codes performed better than expected,
matching the performance of our optimized code. In the flip stimulus condition, TTI effects were
greatly reduced, making either D8opt or D10 suitable despite the short TTIs of the latter.
It seems very likely that the unexpectedly high performance of RCsep and RCmix can be at least partly
explained by the idea that they have particular spatial properties that enhance their performance
beyond what Hamming distances and TTIs alone would predict. This hypothesis is corroborated by
finding #5. Models of such spatial effects should clearly be taken into account in future optimization
approaches.
Overall, best performance was obtained with the flip stimulus, using either of the two error-correcting codes, D8opt or D10: this consistently outperforms the traditional row-column flash design
and shows that error-correcting code design has an important role to play in BCI speller development.
As a final note, one should remember that a language model can be used to improve performance in
speller systems. In this case, the codebook optimization problem becomes more complicated than
the simplified setting we examined, because the prior Pr (t) in (2) is no longer flat. The nature of
the best codes, according to our optimization criterion, might change considerably: for example, a
small subset of codewords, representing the most probable letters, might be chosen to be particularly
sparse and/or to have a particularly large Hamming distance between them and between the rest of
the codebook, while within the rest of the codebook these two criteria might be considered relatively
unimportant. Ideally, the language model would be adaptive (for example, supplying a predictive
prior for each letter based on the previous three) which might mean that the codewords should be
reassigned optimally after each letter. However, such considerations must remain beyond the scope
of our study until we can either overcome the TTI-independent performance differences between
codes (perhaps, as our results suggest, by careful stimulus design), or until we can model the source
of these differences well enough to account for them in our optimization criterion.
References
[1] Bishop CM (1995) Neural Networks for Pattern Recognition. Clarendon Press, Oxford.
[2] Blankertz B, et al. (2006) IEEE Trans. Neural Systems & Rehab. Eng. 14(2): 153-159
[3] Donchin E, Coles MGH (1988) Behavioural and Brain Sciences 11: 357-374
[4] Farwell LA, Donchin E (1988) Electroencephalography and Clinical Neurophysiology 70: 510-523
[5] Gestel T, et al. (2002) Neural Processing Letters, 15: 45-48
[6] Gonsalvez CL, Polich J (2002) Psychophysiology 39(3): 388-396
[7] Hill NJ, et al. (2008) Technical Report #166, Max Planck Institute for Biological Cybernetics.
[8] Krusienski DJ, et al. (2006) Journal of Neural Engineering 3(4): 299-305
[9] MacKay D (2005) Information Theory, Inference, and Learning Algorithms. Cambridge Univ. Press
[10] Martens SMM, Hill NJ, Farquhar J, Schölkopf B. (2007) Impact of Target-to-Target Interval on Classification Performance in the P300 Speller. Applied Neuroscience Conference, Nijmegen, The Netherlands.
[11] Pritchard WS (1981) Psychological Bulletin 89: 506-540
[12] Rugg MD, Coles MGH (2002) Electrophysiology of mind. Oxford Psychology Series 25
[13] Serby H, Yom-Tov E, Inbar GF (2005) IEEE Trans. Neural Systems & Rehab. Eng. 13(1): 89-98
[14] Wolpaw JR, et al. (2002) Clinical Neurophysiology 113: 767-791
[15] Woods DL, Hillyard SA, Courchesne E, Galambos R. (1980) Science, New Series 207(4431): 655-657.
2,731 | 3,477 | Linear Classification and Selective Sampling
Under Low Noise Conditions
Giovanni Cavallanti
DSI, Universit`a degli Studi di Milano, Italy
[email protected]
Nicol`o Cesa-Bianchi
DSI, Universit`a degli Studi di Milano, Italy
[email protected]
Claudio Gentile
DICOM, Universit`a dell?Insubria, Italy
[email protected]
Abstract
We provide a new analysis of an efficient margin-based algorithm for selective
sampling in classification problems. Using the so-called Tsybakov low noise condition to parametrize the instance distribution, we show bounds on the convergence rate to the Bayes risk of both the fully supervised and the selective sampling
versions of the basic algorithm. Our analysis reveals that, excluding logarithmic
factors, the average risk of the selective sampler converges to the Bayes risk at
rate N^{-(1+α)(2+α)/(2(3+α))} where N denotes the number of queried labels, and
α > 0 is the exponent in the low noise condition. For all α > √3 − 1 ≈ 0.73 this
convergence rate is asymptotically faster than the rate N^{-(1+α)/(2+α)} achieved
by the fully supervised version of the same classifier, which queries all labels, and
for α → ∞ the two rates exhibit an exponential gap. Experiments on textual data
reveal that simple variants of the proposed selective sampler perform much better
than popular and similarly efficient competitors.
1 Introduction
In the standard online learning protocol for binary classification the learner receives a sequence of
instances generated by an unknown source. Each time a new instance is received the learner predicts
its binary label, and is then given the true label of the current instance before the next instance is
observed. This protocol is natural in many applications, for instance weather forecasting or stock
market prediction, because Nature (or the market) is spontaneously disclosing the true label after
each learner's guess. On the other hand, in many other applications obtaining labels may be an
expensive process. In order to address this problem, a variant of online learning that has been
proposed is selective sampling. In this modified protocol the true label of the current instance is
never revealed unless the learner decides to issue an explicit query. The learner's performance is then
measured with respect to both the number of mistakes (made on the entire sequence of instances)
and the number of queries. A natural sampling strategy is one that tries to identify labels which are
likely to be useful to the algorithm, and then queries those ones only. This strategy somehow needs
to combine a measure of utility of examples with a measure of confidence. In the case of learning
with linear functions, a statistic that has often been used to quantify both utility and confidence is
the margin. In [10] this approach was employed to define a selective sampling rule that queries a
new label whenever the margin of the current instance, with respect to the current linear hypothesis,
is smaller (in magnitude) than an adaptively adjusted threshold. Margins were computed using
a linear learning algorithm based on an incremental version of Regularized linear Least-Squares
(RLS) for classification. Although this selective sampling algorithm is efficient, and has simple
variants working quite well in practice, the rate of convergence to the Bayes risk was never assessed
in terms of natural distributional parameters, thus preventing a full understanding of the properties
of this algorithm.
We improve on those results in several ways making three main contributions: (i) By coupling the
Tsybakov low noise condition, used to parametrize the instance distribution, with the linear model
of [10], defining the conditional distribution of labels, we prove that the fully supervised RLS (all
e n?(1+?)/(2+?) where ? ? 0 is the noise
labels are queried) converges to the Bayes risk at rate O
exponent in the low noise condition. (ii) Under the same low noise condition, we prove that the
e n?(1+?)/(3+?) ,
RLS-based selective sampling rule of [10] converges to the Bayes risk at rate O
e n??/(2+?) . Moreover, we show that similar results can be
with labels being queried at rate O
established for a mistake-driven (i.e., space and time efficient) variant. (iii) We perform experiments
on a real-world medium-size dataset showing that variants of our mistake-driven sampler compare
favorably with other selective samplers proposed in the literature, like the ones in [11, 16, 20].
Related work. Selective sampling, originally introduced by Cohn, Atlas and Ladner in [13, 14],
differs from the active learning framework as in the latter the learner has more freedom in selecting
which instances to query. For example, in Angluin's adversarial learning with queries (see [1] for a
survey), the goal is to identify an unknown boolean function f from a given class, and the learner
can query the labels (i.e., values of f ) of arbitrary boolean instances. Castro and Nowak [9] study a
framework in which the learner also queries arbitrary domain points. However, in their case labels
are stochastically related to instances (which are real vectors). They prove risk bounds in terms
of nonparametric characterizations of both the regularity of the Bayes decision boundary and the
behavior of the noise rate in its proximity. In fact, a large statistical literature on adaptive sampling
and sequential hypothesis testing exists (see for instance the detailed description in [9]) which is
concerned with problems that share similarities with active learning. The idea of querying small
margin instances when learning linear classifiers has been explored several times in different active
learning contexts. Campbell, Cristianini and Smola [8], and also Tong and Koller [23], study a poolbased model of active learning, where the algorithm is allowed to interactively choose which labels
to obtain from an i.i.d. pool of unlabeled instances. A landmark result in the selective sampling
protocol is the query-by-committee algorithm of Freund, Seung, Shamir and Tishby [17]. In the
realizable (noise-free) case, and under strong distributional assumptions, this algorithm is shown to
require exponentially fewer labels than instances when learning linear classifiers (see also [18] for
a more practical implementation). An exponential advantage in the realizable case is also obtained
with a simple variant of the Perceptron algorithm by Dasgupta, Kalai and Monteleoni [16], under
the sole assumption that instances are drawn from the uniform distribution over the unit ball in R^d.
In the general statistical learning case, under no assumptions on the joint distribution of label and
instances, selective sampling bears no such exponential advantage. For instance, Kääriäinen shows that, in order to approach the risk of the best linear classifier f* within error ε, at least Ω((η/ε)²) labels are needed, where η is the risk of f*. A much more general nonparametric lower bound for
active learning is obtained by Castro and Nowak [9]. General selective sampling strategies for the
nonrealizable case have been proposed in [3, 4, 15]. However, none of these learning algorithms
seems to be computationally efficient when learning linear classifiers in the general agnostic case.
2 Learning protocol and data model
We consider the following online selective sampling protocol. At each step t = 1, 2, . . . the sampling algorithm (or selective sampler) receives an instance x_t ∈ R^d and outputs a binary prediction
for the associated label y_t ∈ {−1, +1}. After each prediction, the algorithm has the option of 'sampling' (issuing a query) in order to receive the label y_t. We call the pair (x_t, y_t) an example. After
seeing the label yt , the algorithm can choose whether or not to update its internal state using the new
information encoded by (xt , yt ).
We assume instances x_t are realizations of i.i.d. random variables X_t drawn from an unknown distribution on the surface of the unit Euclidean sphere in R^d, so that ‖X_t‖ = 1 for all t ≥ 1. Following [10], we assume that labels y_t are generated according to the following simple linear
noise model: there exists a fixed and unknown vector u ∈ R^d, with Euclidean norm ‖u‖ = 1, such that E[Y_t | X_t = x_t] = u^⊤ x_t for all t ≥ 1. Hence X_t = x_t has label 1 with probability (1 + u^⊤ x_t)/2 ∈ [0, 1]. Note that SGN(f*), for f*(x) = u^⊤ x, is the Bayes optimal classifier for this noise model. In the following, all probabilities P and expectations E are understood with
respect to the joint distribution of the i.i.d. data process {(X_1, Y_1), (X_2, Y_2), . . . }. We use P_t to denote conditioning on (X_1, Y_1), . . . , (X_t, Y_t). Let f : R^d → R be an arbitrary measurable function. The instantaneous regret R(f) is the excess risk of SGN(f) w.r.t. the Bayes risk, i.e., R(f) = P(Y_1 f(X_1) < 0) − P(Y_1 f*(X_1) < 0). Let f_1, f_2, . . . be a sequence of real functions where each f_t is measurable w.r.t. the σ-algebra generated by (X_1, Y_1), . . . , (X_{t−1}, Y_{t−1}), X_t. When (X_1, Y_1), . . . , (X_{t−1}, Y_{t−1}) is understood from the context, we write f_t as a function of X_t only. Let R_{t−1}(f_t) be the instantaneous conditional regret R_{t−1}(f_t) = P_{t−1}(Y_t f_t(X_t) < 0) − P_{t−1}(Y_t f*(X_t) < 0). Our goal is to bound the expected cumulative regret E[R_0(f_1) + R_1(f_2) + · · · + R_{n−1}(f_n)], as a function of n, and other relevant quantities. Observe that, although the learner's predictions can only depend on the queried examples, the regret is computed over all
time steps, including the ones when the selective sampler did not issue a query. In order to model
the distribution of the instances around the hyperplane u^⊤x = 0, we use the Mammen-Tsybakov low noise condition [24]:
There exist c > 0 and α ≥ 0 such that P(|f*(X_1)| < ε) ≤ c ε^α for all ε > 0.  (1)
When the noise exponent α is 0 the low noise condition becomes vacuous. In order to study the case α → ∞, one can use the following equivalent formulation of (1) (see, e.g., [5]): P(f*(X_1) f(X_1) < 0) ≤ c R(f)^{α/(1+α)} for all measurable f : R^d → R. With this formulation, one can show that α → ∞ implies the hard margin condition |f*(X_1)| ≥ 1/(2c) w.p. 1.
3 Algorithms and theoretical analysis
We consider linear classifiers predicting the value of Y_t through SGN(w_t^⊤ X_t), where w_t ∈ R^d is
t X t ), where w t ? R is
a dynamically updated weight vector which might be intended as the current estimate for u. Our
wt is an RLS estimator defined over the set of previously queried examples. More precisely,
let Nt
be the number of queried examples during the first t time steps, St?1 = x?1 , . . . , x?Nt?1 be the
?
?
matrix of the queried instances up to time t ? 1, and y t?1 = y1? , . . . , yN
be the vector of the
t?1
corresponding labels. Then the RLS estimator is defined by
?1
?
wt = I + St?1 St?1
+ xt x?
St?1 y t?1 ,
(2)
t
where I is the d ? d identity matrix. Note that wt depends on the current instance xt . The RLS
estimator in this particular form has been first considered by Vovk [25] and by Azoury and Warmuth [2]. Compared to standard RLS, here x_t acts by further reducing the variance of w_t. We
use Δ̂_t to denote the margin w_t^⊤ X_t whenever w_t is understood from the context. Thus Δ̂_t is the current approximation to Δ_t. Note that Δ̂_t is measurable w.r.t. the σ-algebra generated by (X_1, Y_1), . . . , (X_{t−1}, Y_{t−1}), X_t. We also use Δ_t to denote the Bayes margin f*(X_t) = u^⊤ X_t.
The RLS estimator (2) can be stored in space Θ(d²), which we need for the inverse of I + S_{t−1} S_{t−1}^⊤ + x_t x_t^⊤. Moreover, using a standard formula for small-rank adjustments of inverse matrices, we can compute updates and predictions in time Θ(d²). The algorithm in (2) can also be expressed in dual variable form. This is needed, for instance, when we want to use the feature expansion facility provided by kernel functions. In this case, at time t the RLS estimator (2) can be represented in O(N_{t−1}²) space. The update time is also quadratic in N_{t−1}.
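As a concrete illustration (ours, not the authors' code), a Python sketch of the primal form of (2): the inverse of I + S_{t-1} S_{t-1}^T is kept up to date with rank-one Sherman-Morrison adjustments, so each margin computation and each storage step costs O(d^2):

import numpy as np

class RLSMargin:
    def __init__(self, d):
        self.Ainv = np.eye(d)    # inverse of I + S_{t-1} S_{t-1}^T
        self.b = np.zeros(d)     # S_{t-1} y_{t-1}

    def margin(self, x):
        # Temporarily add x x^T to the matrix (Sherman-Morrison), then return w_t^T x.
        Ax = self.Ainv @ x
        Ainv_tmp = self.Ainv - np.outer(Ax, Ax) / (1.0 + x @ Ax)
        return float(x @ (Ainv_tmp @ self.b))

    def store(self, x, y):
        # Permanently incorporate the queried example (x, y).
        Ax = self.Ainv @ x
        self.Ainv -= np.outer(Ax, Ax) / (1.0 + x @ Ax)
        self.b += y * x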
Our first result establishes a regret bound for the fully supervised algorithm, i.e., the algorithm that
predicts using RLS as in (2), queries the label of every instance, and stores all examples. This result
is the baseline against which we measure the performance of our selective sampling algorithm. The
regret bound is expressed in terms of the whole spectrum of the process covariance matrix E[X_1 X_1^⊤].
Theorem 1 Assume the low noise condition (1) holds with exponent α ≥ 0 and constant c > 0. Then the expected cumulative regret after n steps of the fully supervised algorithm based on (2) is bounded by
\[
\mathbb{E}\Bigl[\, 4c\,\bigl(1 + \ln|I + S_n S_n^\top|\bigr)^{\frac{1+\alpha}{2+\alpha}} \Bigr]\; n^{\frac{1}{2+\alpha}} .
\]
This, in turn, is bounded from above by
\[
4c\,\Bigl(1 + \sum_{i=1}^d \ln(1 + n\lambda_i)\Bigr)^{\frac{1+\alpha}{2+\alpha}} n^{\frac{1}{2+\alpha}}
\;=\; O\Bigl( \bigl(d \ln n\bigr)^{\frac{1+\alpha}{2+\alpha}}\, n^{\frac{1}{2+\alpha}} \Bigr).
\]
Here |·| denotes the determinant of a matrix, S_n = [X_1, X_2, . . . , X_n], and λ_i is the i-th eigenvalue of E[X_1 X_1^⊤].
When α = 0 (corresponding to a vacuous noise condition) the bound of Theorem 1 reduces to O(√(d n ln n)). When α → ∞ (corresponding to a hard margin condition) the bound gives the logarithmic behavior O(d ln n). Notice that Σ_{i=1}^d ln(1 + nλ_i) is substantially smaller than d ln n whenever the spectrum of E[X_1 X_1^⊤] is rapidly decreasing. In fact, the second bound is clearly meaningful even when d = ∞, while the third one only applies to the finite dimensional case.
Parameters: λ > 0, λ_t > 0 for each t ≥ 1.
Initialization: weight vector w = (0, . . . , 0)^⊤; storage counter N = 0.
At each time t = 1, 2, . . . do the following:
1. Observe instance x_t ∈ R^d: ‖x_t‖ = 1;
2. Predict the label y_t ∈ {−1, 1} with SGN(w_t^⊤ x_t), where w_t is as in (2).
3. If N ≤ λ_t then query label y_t and store (x_t, y_t);
4. Else if Δ̂_t² ≤ (128 ln t)/(λN) then schedule the query of y_{t+1};
5. If (x_{t+1}, y_{t+1}) is scheduled to be stored, then increment N and update w_t using (x_{t+1}, y_{t+1}).
Figure 1: The selective sampling algorithm.
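A compact sketch of the loop of Figure 1, reusing the RLSMargin class above (the stream interface and the merging of Steps 3-5 into a single query-and-store branch are our simplifications):

import numpy as np

def selective_sampler(stream, d, lam, n_steps):
    # stream yields (x_t, y_t) with ||x_t|| = 1; lam is the smallest nonzero
    # eigenvalue of E[X X^T], assumed known as in Figure 1.
    rls = RLSMargin(d)
    N, query_next, n_queries = 0, False, 0
    for t in range(1, n_steps + 1):
        x, y = next(stream)
        m = rls.margin(x)                                # \hat{Delta}_t
        yhat = 1 if m >= 0 else -1                       # Step 2 prediction
        lam_t = (16.0 / lam ** 2) * max(d, np.log(t))
        if N <= lam_t or query_next:                     # Steps 3 and 5
            rls.store(x, y)
            N += 1
            n_queries += 1
            query_next = False
        elif m ** 2 <= 128.0 * np.log(t) / (lam * N):    # Step 4
            query_next = True                            # schedule the next label
    return rls, n_queries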
Fast rates of convergence have typically been proven for batch-style algorithms, such as empirical
risk minimizers and SVM (see, e.g., [24, 22]), rather than for online algorithms. A reference closer
to our paper is Ying and Zhou [26], where the authors prove bounds for online linear classification
using the low noise condition (1), though under different distributional assumptions.
Our second result establishes a new regret bound, under low noise conditions, for the selective
sampler introduced in [10]. This variant, described in Figure 1, queries all labels (and stores all
examples) during an initial stage of length at least (16d)/λ², where λ denotes the smallest nonzero eigenvalue of the process covariance matrix E[X_1 X_1^⊤]. When this transient regime is over, the
sampler issues a query at time t based on both the query counter N_{t−1} and the margin Δ̂_t. Specifically, if evidence is collected that the number N_{t−1} of stored examples is smaller than our current estimate of 1/Δ_t², that is if Δ̂_t² ≤ (128 ln t)/(λ N_{t−1}), then we query (and store) the label of the next instance x_{t+1}. Note that the margin threshold explicitly depends, through λ, on additional information about the data-generating process. This additional information is needed because, unlike the
fully supervised classifier of Theorem 1, the selective sampler queries labels at random steps. This
prevents us from bounding the sum of conditional variances of the involved RLS estimator through ln|I + S_n S_n^⊤|, as we can do when proving Theorem 1 (see below). Instead, we have to individually bound each conditional variance term via the smallest empirical eigenvalue of the correlation
matrix. The transient regime in Figure 1 is exactly needed to ensure that this smallest empirical
eigenvalue gets close enough to ?. Compared to the analysis contained in [10], we are able to better
capture the two main aspects of the selective sampling protocol: First, we control the probability
of making a mistake when we do not query labels; second, the algorithm is able to adaptively optimize the sampling rate by exploiting the additional information provided by the examples having
small margin. The appropriate sampling rate clearly depends on the (unknown) amount of noise α
which the algorithm implicitly learns on the fly. In this respect, our algorithm is more properly an
adaptive sampler, rather than a selective sampler. Finally, we stress that it is fairly straightforward
to add to the algorithm in Figure 1 a mistake-driven rule for storing examples. Such a rule provides
that, when a small margin is detected, a query be issued (and the next example be stored) only if SGN(Δ̂_t) ≠ y_t (i.e., only if the current prediction is mistaken). This turns out to be highly advantageous from a computational standpoint, because of the sparsity of the computed solution. It is easy
to adapt our analysis to obtain even for this algorithm the same regret bound as the one established
in Theorem 2. However, in this case we can only give guarantees on the expected number of stored
examples (which can indeed be much smaller than the actual number of queried labels).
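One step of such a mistake-driven rule could look as follows (a sketch reusing RLSMargin; on a small margin the label is queried, but the example is stored only when mispredicted):

import numpy as np

def mistake_driven_step(rls, x, y, t, N, lam):
    m = rls.margin(x)
    yhat = 1 if m >= 0 else -1
    queried = m ** 2 <= 128.0 * np.log(t) / (lam * max(N, 1))
    if queried and yhat != y:       # store only on a mistake
        rls.store(x, y)
        N += 1
    return yhat, queried, N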
Theorem 2 Assume the low noise condition (1) holds with unknown exponent α ≥ 0 and assume the selective sampler of Figure 1 is run with λ_t = (16/λ²) max{d, ln t}. Then, after n steps, the expected cumulative regret is bounded by
\[
O\Bigl( \frac{d + \ln n}{\lambda^2} + \Bigl(\frac{\ln n}{\lambda}\Bigr)^{\frac{1+\alpha}{3+\alpha}} n^{\frac{2}{3+\alpha}} \Bigr)
\]
whereas the expected number of queried labels (including the stored ones) is bounded by
\[
O\Bigl( \frac{d + \ln n}{\lambda^2} + \Bigl(\frac{\ln n}{\lambda}\Bigr)^{\frac{\alpha}{2+\alpha}} n^{\frac{2}{2+\alpha}} \Bigr).
\]
The proof, sketched below, hinges on showing that Δ̂_t is an almost unbiased estimate of the true margin Δ_t, and relies on known concentration properties of i.i.d. processes. In particular, we show
that our selective sampler is able to adaptively estimate the number of queries needed to ensure a
1/t increase of the regret when a query is not issued at time t.
As expected, when we compare our semi-supervised selective sampler (Theorem 2) to the fully
supervised 'yardstick' (Theorem 1), we see that the per-step regret of the former vanishes at a significantly slower rate than the latter, i.e., n^{−(1+α)/(3+α)} vs. n^{−(1+α)/(2+α)}. Note, however, that the per-step regret
of the semi-supervised algorithm vanishes faster than its fully-supervised counterpart when both regrets are expressed in terms of the number N of issued queries. To see this consider first the case
α → ∞ (the hard margin case, essentially analyzed in [10]). Then both algorithms have a per-step regret of order (ln n)/n. However, since the semi-supervised algorithm makes only N = O(ln n) queries, we have that, as a function of N, the per-step regret of the semi-supervised algorithm is of order N/e^N whereas the fully supervised has only (ln N)/N. We have thus recovered the exponential advantage observed in previous works [16, 17]. When α = 0 (vacuous noise conditions), the per-step regret rates in terms of N become (excluding logarithmic factors) of order N^{−1/3} in the semi-supervised case and of order N^{−1/2} in the fully supervised case. Hence, there is a critical value of α where the semi-supervised bound becomes better. In order to find this critical value we write the rates of the per-step regret for 0 ≤ α < ∞, obtaining N^{−(1+α)(2+α)/(2(3+α))} (semi-supervised algorithm) and N^{−(1+α)/(2+α)} (fully supervised algorithm). By comparing the two exponents we find that, asymptotically, the semi-supervised rate is better than the fully supervised one for all values of α > √3 − 1. This
indicates that selective sampling is advantageous when the noise level (as modeled by the Mammen-Tsybakov condition) is not too high. Finally, observe that the way it is stated now, the bound of Theorem 2 only applies to the finite-dimensional (d < ∞) case. It turns out this is a fixable artifact
of our analysis, rather than an intrinsic limitation of the selective sampling scheme in Figure 1. See
Remark 3 below.
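The critical value can be checked symbolically (our verification, not part of the paper):

import sympy as sp

a = sp.symbols('a', positive=True)
semi = (1 + a) * (2 + a) / (2 * (3 + a))   # per-step regret exponent, selective sampler
full = (1 + a) / (2 + a)                   # per-step regret exponent, fully supervised
print(sp.solve(sp.Eq(semi, full), a))      # [sqrt(3) - 1] ~ 0.732: above this, the sampler wins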
Proof of Theorem 1. The proof proceeds by relating the classification regret to the square loss regret via a comparison theorem. The square loss regret is then controlled by applying a known pointwise bound. For all measurable f : R^d → R, let R_φ(f) = E[(1 − Y_1 f(X_1))² − (1 − Y_1 f*(X_1))²] be the square loss regret, and R_{t−1,φ} its conditional version. We apply the comparison theorem from [5] with the ψ-transform function ψ(z) = z² associated with the square loss. Under the low noise condition (1) this yields R(f) ≤ 4c R_φ(f)^{(1+α)/(2+α)} for all measurable f. We thus have
\[
\mathbb{E}\Bigl[\sum_{t=1}^n R_{t-1}(f_t)\Bigr]
\;\le\; \mathbb{E}\Bigl[\sum_{t=1}^n 4c\, R_{\phi,t-1}(f_t)^{\frac{1+\alpha}{2+\alpha}}\Bigr]
\;\le\; \mathbb{E}\Bigl[\, n\, 4c \Bigl(\tfrac{1}{n}\sum_{t=1}^n R_{\phi,t-1}(f_t)\Bigr)^{\frac{1+\alpha}{2+\alpha}} \Bigr],
\]
the last term following from Jensen's inequality. Further, we observe that in our probabilistic model f*(x) = u^⊤x is Bayes optimal for the square loss. In fact, for any unit norm x ∈ R^d, we have
\[
f^*(x) \;=\; \operatorname*{arginf}_{z \in \mathbb{R}} \Bigl[ (1-z)^2\, \tfrac{1+u^\top x}{2} + (1+z)^2\, \tfrac{1-u^\top x}{2} \Bigr] \;=\; u^\top x .
\]
Hence Σ_{t=1}^n R_{φ,t−1}(f_t) = Σ_{t=1}^n (Y_t − w_t^⊤X_t)² − (Y_t − u^⊤X_t)², which, in turn, can be bounded pointwise (see, e.g., [12, Theorem 11.8]) by 1 + ln|I + S_n S_n^⊤|. Putting together gives the first bound. Next, we take the bound just obtained and apply Jensen's inequality twice, first to the concave function (·)^{(1+α)/(2+α)} of a real argument, and then to the concave function ln|·| of a (positive definite) matrix argument. Observing that E[S_n S_n^⊤] = E[Σ_{t=1}^n X_t X_t^⊤] = n E[X_1 X_1^⊤] yields the second bound. The third bound derives from the second one just by using λ_i ≤ 1.
Proof sketch of Theorem 2.
We aim at bounding from above the cumulative regret Σ_{t=1}^n [P(Y_t Δ̂_t < 0) − P(Y_t Δ_t < 0)] which, according to our probabilistic model, can be shown to be at most c n ε^{1+α} + Σ_{t=1}^n P(Δ_t Δ̂_t ≤ 0, |Δ_t| ≥ ε). The last sum is upper bounded by
\[
\underbrace{\sum_{t=1}^n \mathbb{P}\bigl(N_{t-1} \le \lambda_t\bigr)}_{\text{(I)}}
+ \underbrace{\sum_{t=1}^n \mathbb{P}\Bigl(\hat\Delta_t^2 \le \tfrac{128 \ln t}{\lambda N_{t-1}},\; N_{t-1} > \lambda_t,\; |\Delta_t| \ge \varepsilon\Bigr)}_{\text{(II)}}
+ \underbrace{\sum_{t=1}^n \mathbb{P}\Bigl(\Delta_t \hat\Delta_t \le 0,\; \hat\Delta_t^2 > \tfrac{128 \ln t}{\lambda N_{t-1}},\; N_{t-1} > \lambda_t\Bigr)}_{\text{(III)}}
\]
where: (I) are the initial time steps; (II) are the time steps on which we trigger the query of the next label (because Δ̂_t² is smaller than the threshold at time t); (III) are the steps that do not trigger any queries at all.
Note that (III) bounds the regret over non-sampled examples. In what follows, we sketch the way we bound each of the three terms separately. A bound on (I) is easily obtained as (I) ≤ λ_n = O((d + ln n)/ε²), just because λ_n ≥ λ_t for all t ≤ n. To bound (II) and (III) we need to exploit the fact that the subsequence of stored instances and labels is a sequence of i.i.d. random variables distributed as (X_1, Y_1), see [10]. This allows us to carry out a (somewhat involved) bias-variance analysis showing that, for any fixed number N_{t−1} = s of stored examples, Δ̂_t is an almost unbiased estimator of Δ_t, whose bias and variance tend to vanish as 1/s when s is sufficiently large. In particular, if |Δ_t| ≥ ε then Δ̂_t ≈ Δ_t as long as N_{t−1} is of the order of (ln n)/(λε²). The variance of Δ̂_t is controlled by known results (the one we used is [21, Theorem 4.2]) on the concentration of the eigenvalues of an empirical correlation matrix $\frac{1}{s}\sum_i X_i X_i^\top$ to the eigenvalues of the process covariance matrix $\mathbb{E}[X_1 X_1^\top]$. For such a result to apply, we have to impose that N_{t−1} ≥ λ_t. By suitably combining these concentration results we can bound term (II) by O((d + ln n)/ε² + (ln n)/(λε²)) and term (III) by O(ln n). Putting together and choosing ε of the order of ((ln n)/n)^{1/(3+α)} gives the desired regret bound. The bound on the number of queried labels is obtained in a similar way.
Remark 3 The linear dependence on d in Theorem 2 derives from a direct application of the concentration results in [21]. In fact, it is possible to take into account in a fairly precise manner
the way the process spectrum decreases (e.g., [6, 7]), thereby extending the above analysis to the
infinite-dimensional case. In this paper, however, we decided to stick to the simpler analysis leading
to Theorem 2, since the resulting bounds would be harder to read, and would somehow obscure
understanding of regret and sampling rate behavior as a function of n.
4 Experimental analysis
In evaluating the empirical performance of our selective sampling algorithm, we consider two additional variants obtained by slightly modifying Step 4 in Figure 1. The first variant (which we just
call SS, Selective Sampler) queries the current label instead of the next one. The rationale here is that
we want to leverage the more informative content of small margin instances. The second variant is
a mistake-driven version (referred to as SSMD, Selective Sampling Mistake Driven) that queries the
current label (and stores the corresponding example) only if the label gets mispredicted. For clarity,
the algorithm in Figure 1 will then be called SSNL (Selective Sampling Next Label) since it queries
the next label whenever a small margin is observed. For all three algorithms we dropped the initial transient regime (Step 3 in Figure 1).
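For concreteness, a minimal sketch of such a selective sampler is given below (our own schematic reconstruction on synthetic data, not the authors' code; it follows the SS variant by querying the current label, uses a regularized least-squares estimate for Δ̂_t, and treats λ in the threshold of Step 4 as a tunable constant):

```python
import numpy as np

rng = np.random.default_rng(1)
d, n, lam = 10, 2000, 1.0                        # lam: tunable stand-in for the eigenvalue parameter
u = rng.normal(size=d); u /= np.linalg.norm(u)   # unknown target hyperplane

A = np.eye(d)          # I + sum of stored x x^T (ridge-style correlation matrix)
b = np.zeros(d)        # sum of stored y x
queried = disagreements = 0
for t in range(1, n + 1):
    x = rng.normal(size=d); x /= np.linalg.norm(x)
    y = 1 if rng.random() < (1 + u @ x) / 2 else -1      # label from the linear noise model
    delta_hat = np.linalg.solve(A, b) @ x                # margin estimate on the current instance
    disagreements += np.sign(delta_hat) != np.sign(u @ x)
    if queried == 0 or delta_hat ** 2 <= 128 * np.log(t) / (lam * queried):
        A += np.outer(x, x); b += y * x                  # small margin: query and store the label
        queried += 1
print(f"queried {queried}/{n} labels, {disagreements} disagreements with the Bayes classifier")
```

With these illustrative settings the sampler queries only a fraction of the stream while keeping the disagreement count low; the mistake-driven SSMD variant would additionally condition the query on the label being mispredicted.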
We run our experiments on the first, in chronological order, 40,000 newswire stories from the
Reuters Corpus Volume 1 dataset (RCV1). Every example in this dataset is encoded as a vector
of real attributes computed through a standard TF - IDF bag-of-words processing of the original news
stories, and is tagged with zero or more labels from a set of 102 classes. The online categorization
of excerpts from a newswire feed is a realistic learning problem for selective sampling algorithms
since a newswire feed consists of a large amount of uncategorized data with a high labeling cost. The
classification performance is measured using a macroaveraged F-measure 2RP/(R + P), where P
is the precision (fraction of correctly classified documents among all documents that were classified
positive for the given topic) and R is the recall (fraction of correctly classified documents among all
documents that are labelled with the given topic). All algorithms presented here are evaluated using
dual variable implementations and linear kernels.
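As a reference for how the reported numbers are computed, here is a small illustrative implementation of the macroaveraged F-measure (our own helper; the topic counts in the usage line are hypothetical):

```python
def f_measure(tp, fp, fn):
    """2RP/(R + P) for one topic, with P = tp/(tp+fp) and R = tp/(tp+fn)."""
    if tp == 0:
        return 0.0
    p, r = tp / (tp + fp), tp / (tp + fn)
    return 2 * r * p / (r + p)

def macro_f(per_topic_counts):
    """Macroaveraging: mean of the per-topic F-measures (e.g. over the 102 RCV1 topics)."""
    scores = [f_measure(tp, fp, fn) for tp, fp, fn in per_topic_counts]
    return sum(scores) / len(scores)

print(macro_f([(10, 2, 3), (5, 5, 0), (0, 1, 4)]))  # toy counts for three topics
```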
The results are summarized in Figures 2 and 3. The former only refers to (an average over) the 50
most frequent categories, while the latter includes them all. In Figure 2 (left) we show how SSMD
compares to SSNL, and to its most immediate counterpart, SS. In Figure 2 (right) we compare SSMD
to other algorithms that are known to have good empirical performance, including the second-order
version of the label efficient classifier (SOLE), as described in [11], and the DKMPERC variant of
the DKM algorithm (see, e.g., [16, 20]). DKMPERC differs from DKM since it adopts a standard
perceptron update rule. The perceptron algorithm (PERC) and its second-order counterpart (SOP)
are reported here as a reference, since they are designed to query all labels. In particular, SOP is
a mistake-driven variant of the algorithm analyzed in Theorem 1. It is reasonable to assume that
in a selective sampling setup we are interested in the performance achieved when the fraction of
queried labels stays below some threshold, say 10%. In this range of sampling rate, SSMD has the
steepest increase in the achieved F -measure, and surpasses any other algorithm. Unsurprisingly, as
the number of queried labels gets larger, SSMD, SOLE and SOP exhibit similar behaviors. Moreover,
the less than ideal plot of SSNL seems to confirm the intuition that querying small margin instances
[Figure 2: two panels plotting F-measure (y-axis, roughly 0–0.75) against the fraction of queried labels (x-axis, 0.01–0.1). Left panel: SSMD, SSNL, SS. Right panel: SSMD, DKMperc, SOLE, SOP, PERC.]
Figure 2: Average F-measure obtained by different algorithms after 40,000 examples, as a function
of the number of queried labels. The average only refers to the 50 most frequent categories. Points
are obtained by repeatedly running each algorithm with different values of parameters (in Figure
1, the relevant parameter is λ). Trend lines are computed as approximate cubic splines connecting
consecutive points.
[Figure 3: two panels over the 102 topics (x-axis: Topics, 0–100, sorted by frequency). Left: number of stored examples and norm of the SVM weight vector, both normalized. Right: F-measure, fraction of positive examples, and fraction of queried labels.]
Figure 3: Left: Correlation between the fraction of stored examples and the difficulty of each binary
task, as measured by the separation margin. Right: F-measure achieved on the different binary
classification tasks compared to the number of positive examples in each topic, and to the fraction of
queried labels (including the stored ones). In both plots, topics are sorted by decreasing frequency
of positive examples. The two plots are produced by SSMD with a specific value of the λ parameter. Varying λ does not significantly alter the reported trend.
provides a significant advantage. Under our test conditions DKMPERC proved ineffective, probably
because most tasks in the RCV1 dataset are not linearly separable. A similar behavior was observed
in [20]. It is fair to remark that DKMPERC is a perceptron-like linear-threshold classifier while the
other algorithms considered here are based on the more computationally intensive ridge-regression-like procedure.
In our selective sampling framework it is important to investigate how harder problems influence
the sampling rate of an algorithm and, for each binary problem, to assess the impact of the number
of positive examples on F-measure performance. Coarsely speaking, we would expect that the hard
topics are the infrequent ones. Here we focus on SSMD since it is reasonably the best candidate,
among our selective samplers, as applied to real-world problems. In Figure 3 (left) we report the
fraction of examples stored by SSMD on each of the 102 binary learning tasks (i.e., on each individual
topic, including the infrequent ones), and the corresponding levels of F-measure and queried labels
(right). Note that in both plots topics are sorted by frequency with the most frequent categories
appearing on the left. We represent the difficulty of a learning task by the norm of the weight vector
obtained by running the C-SVM algorithm on that task.¹ Figure 3 (left) clearly shows that SSMD raises the storage rate on difficult problems. In particular, even if two different tasks have largely
different numbers of positive examples, the storage rate achieved by SSMD on those tasks may be
¹ The actual values were computed using SVM-LIGHT [19] with default parameters. Since the examples in the Reuters Corpus Volume 1 are cosine normalized, the choice of default parameters amounts to indirectly setting the parameter C to approximately 1.0.
similar when the norm of the weight vectors computed by C-SVM is nearly the same. On the other
hand, the right plot shows (to our surprise) that the achieved F-measure is fairly independent of the
number of positive examples, but this independence is obtained at the cost of querying more and
more labels. In other words, SSMD seems to realize the difficulty of learning infrequent topics and,
in order to achieve a good F-measure performance, it compensates by querying many more labels.
References
[1] D. Angluin. Queries revisited. In 12th ALT, pages 12–31. Springer, 2001.
[2] K.S. Azoury and M.K. Warmuth. Relative loss bounds for on-line density estimation with the exponential family of distributions. Machine Learning, 43(3):211–246, 2001.
[3] M.F. Balcan, A. Beygelzimer, and J. Langford. Agnostic active learning. In 23rd ICML, pages 65–72. ACM Press, 2006.
[4] M.F. Balcan, A. Broder, and T. Zhang. Margin-based active learning. In 20th COLT, pages 35–50. Springer, 2007.
[5] P.L. Bartlett, M.I. Jordan, and J.D. McAuliffe. Convexity, classification, and risk bounds. JASA, 101(473):138–156, 2006.
[6] G. Blanchard, O. Bousquet, and L. Zwald. Statistical properties of kernel principal component analysis. Machine Learning, 66:259–294, 2007.
[7] M.L. Braun. Accurate error bounds for the eigenvalues of the kernel matrix. JMLR, 7:2303–2328, 2006.
[8] C. Campbell, N. Cristianini, and A. Smola. Query learning with large margin classifiers. In 17th ICML, pages 111–118. Morgan Kaufmann, 2000.
[9] R. Castro and R.D. Nowak. Minimax bounds for active learning. IEEE Trans. IT, 2008. To appear.
[10] N. Cesa-Bianchi, A. Conconi, and C. Gentile. Learning probabilistic linear-threshold classifiers via selective sampling. In 16th COLT, pages 373–387. Springer, 2003.
[11] N. Cesa-Bianchi, C. Gentile, and L. Zaniboni. Worst-case analysis of selective sampling for linear classification. JMLR, 7:1205–1230, 2006.
[12] N. Cesa-Bianchi and G. Lugosi. Prediction, Learning, and Games. Cambridge University Press, 2006.
[13] D. Cohn, L. Atlas, and R. Ladner. Improving generalization with active learning. Machine Learning, 15(2):201–221, 1994.
[14] R. Cohn, L. Atlas, and R. Ladner. Training connectionist networks with queries and selective sampling. In NIPS 2. MIT Press, 1990.
[15] S. Dasgupta, D. Hsu, and C. Monteleoni. A general agnostic active learning algorithm. In NIPS 20, pages 353–360. MIT Press, 2008.
[16] S. Dasgupta, A.T. Kalai, and C. Monteleoni. Analysis of perceptron-based active learning. In 18th COLT, pages 249–263. Springer, 2005.
[17] Y. Freund, S. Seung, E. Shamir, and N. Tishby. Selective sampling using the query by committee algorithm. Machine Learning, 28(2/3):133–168, 1997.
[18] R. Gilad-Bachrach, A. Navot, and N. Tishby. Query by committee made real. NIPS, 18, 2005.
[19] T. Joachims. Making large-scale SVM learning practical. In B. Schölkopf, C. Burges, and A. Smola, editors, Advances in Kernel Methods: Support Vector Learning. MIT Press, 1999.
[20] C. Monteleoni and M. Kääriäinen. Practical online active learning for classification. In 24th IEEE CVPR, pages 249–263. IEEE Computer Society Press, 2007.
[21] J. Shawe-Taylor, C.K.I. Williams, N. Cristianini, and J. Kandola. On the eigenspectrum of the Gram matrix and the generalization error of kernel-PCA. IEEE Trans. IT, 51(7):2510–2522, 2005.
[22] I. Steinwart and C. Scovel. Fast rates for support vector machines using Gaussian kernels. Annals of Statistics, 35:575–607, 2007.
[23] S. Tong and D. Koller. Support vector machine active learning with applications to text classification. In 17th ICML, pages 999–1006. Morgan Kaufmann, 2000.
[24] A. Tsybakov. Optimal aggregation of classifiers in statistical learning. The Annals of Statistics, 32(1):135–166, 2004.
[25] V. Vovk. Competitive on-line statistics. International Statistical Review, 69:213–248, 2001.
[26] Y. Ying and D.X. Zhou. Online regularized classification algorithms. IEEE Transactions on Information Theory, 52:4775–4788, 2006.
2,732 | 3,478 | Clustering via LP-based Stabilities
Nikos Komodakis
University of Crete
[email protected]
Nikos Paragios
Ecole Centrale de Paris
INRIA Saclay Ile-de-France
[email protected]
Georgios Tziritas
University of Crete
[email protected]
Abstract
A novel center-based clustering algorithm is proposed in this paper. We first formulate clustering as an NP-hard linear integer program and we then use linear
programming and the duality theory to derive the solution of this optimization
problem. This leads to an efficient and very general algorithm, which works in the
dual domain, and can cluster data based on an arbitrary set of distances. Despite
its generality, it is independent of initialization (unlike EM-like methods such as
K-means), has guaranteed convergence, can automatically determine the number
of clusters, and can also provide online optimality bounds about the quality of the
estimated clustering solutions. To deal with the most critical issue in a centerbased clustering algorithm (selection of cluster centers), we also introduce the
notion of stability of a cluster center, which is a well defined LP-based quantity
that plays a key role to our algorithm?s success. Furthermore, we also introduce,
what we call, the margins (another key ingredient in our algorithm), which can be
roughly thought of as dual counterparts to stabilities and allow us to obtain computationally efficient approximations to the latter. Promising experimental results
demonstrate the potentials of our method.
1 Introduction
Clustering is considered as one of the most fundamental unsupervised learning problems. It lies
at the heart of many important tasks in machine learning, patter recognition, computer vision, data
mining, biology, marketing, just to mention a few of its application areas. Most of the clustering
methods are center-based, thus trying to extract a set of cluster centers that best ?describe? the input
data. Typically, this translates into an optimization problem where one seeks to assign each input
data point to a unique cluster center such that the total sum of the corresponding distances is minimized. These techniques are extremely popular and they are thus essential even to other types of
clustering algorithms such as Spectral Clustering methods [1],[2].
Currently, most center-based clustering methods rely on EM-like schemes for optimizing their clustering objective function [3]. K-means is the most characteristic (and perhaps the most widely
used) technique from this class. It keeps greedily refining a current set of cluster centers based on
a simple gradient descent scheme. As a result, it can very easily get trapped to bad local minima
and is extremely sensitive to initialization. It is thus likely to fail in problems with, e.g., a large
number of clusters. A second very important drawback of many center-based clustering methods,
which severely limits their applicability, is that they either require the input data to be of vectorial
form and/or impose strong restrictions on the type of distance functions they can handle. Ideally,
one would like to be able to cluster data based on arbitrary distances. This is an important point
because, by an appropriate choice of these distances, clustering results with completely different
characteristics can be achieved [4]. In addition to that, one would prefer that the number of clusters
is automatically estimated by the algorithm (e.g., as a byproduct of the optimization process) and
not given as input. In contrast to that, however, many algorithms assume that this number is known
a priori.
To circumvent all the issues mentioned above, a novel center-based clustering algorithm is proposed
in this paper. Similarly to other methods, it reduces clustering to a well-defined (but NP-hard)
minimization problem, where, of course, the challenge now is how to obtain solutions of minimum
objective value. To this end, we rely on the fact that the above problem admits a linear integer
programming formulation. By making heavy use of a dual LP relaxation to that program, we then
manage to derive a dual based algorithm for clustering. As in all center-based clustering techniques,
the most critical component in the resulting algorithm is deciding what cluster centers to choose.
To this end, we introduce, what we call, the stability of a data point as a cluster center (this is an
LP-based quantity), which we consider as another contribution of this work. Intuitively, the stability
of a data point as a cluster center tries to measure how much we need to penalize that point (by
appropriately modifying the objective function) such that it can no longer be chosen as a center in
an optimal solution of the modified problem. Obviously, one would like to choose as centers those
points having high stability. For applying this idea in practice, however, a crucial issue that one needs
to deal with is how to efficiently approximate these stability measures. To this end, we introduce,
what we call, the margins, another very important concept in our algorithm and a key contribution of
our work. As we prove in this paper, margins can be considered as dual to stabilities. Furthermore,
they allow us to approximate the latter on the fly, i.e., as our algorithm runs. The outcome is an
efficient and very easily implementable optimization algorithm, which works in the dual domain
by iteratively updating a dual solution via two very simple operations: DISTRIBUTE and PROJECT.
It can cluster data based on an arbitrary set of distances, which is the only input required by the
algorithm (as a result, it can find use in a wide variety of applications, even in case where nonvectorial data need to be used). Furthermore, an important point is that, despite its generality, it does
not get trapped to bad local minima. It is thus insensitive to initialization and can always compute
clusterings of very low cost. Similarly to [5], the number of clusters does not need to be predefined,
but is decided on the fly during the optimization process. However, unlike [5], convergence of the
proposed method is always guaranteed and no parameters? adjustment needs to take place for this.
Finally, an additional advantage of our method is that it can provide online optimality guarantees,
which can be used for assessing the quality of the generated clusterings. These guarantees come in
the form of lower bounds on the cost of the optimal clustering and are computed (for free) by simply
using the cost of the dual solutions generated during the course of the algorithm.
2 Clustering via stabilities based on Linear Programming
Given a set of objects V with distances d = {dpq }, clustering amounts to choosing a set of cluster
centers from V (say {qi }ki=1 ) such that the sum of distances between each object and its closest
center is minimized. To this end, we are going to use the following objective function E(?) (which
will be referred to as the primal cost hereafter):
$$\min_{k,\,\{q_i\}_{i=1}^k} \; E(\{q_i\}_{i=1}^k) \;=\; \sum_{p \in \mathcal{V}} \min_i d_{p q_i} \;+\; \sum_i d_{q_i q_i} \qquad (1)$$
Note that, in this case, we require that each cluster is chosen from the set V. Also note that, besides
{qi }, here we optimize over the number of cluster centers k as well. Of course, to avoid the trivial
solution of choosing all objects as centers, we regularize the problem by assigning a penalty dqq to
each chosen center q. Problem (1) has an equivalent formulation as a 0–1 linear integer program [6], whose relaxation leads to the following LP (denoted by PRIMAL hereafter):
$$\text{PRIMAL} \;\equiv\; \min \sum_{p,q \in \mathcal{V}} d_{pq}\, x_{pq} \qquad (2)$$
$$\text{s.t.}\quad \sum_{q \in \mathcal{V}} x_{pq} = 1, \quad \forall p \in \mathcal{V} \qquad (3)$$
$$x_{pq} \le x_{qq} \qquad (4)$$
$$x_{pq} \ge 0 \qquad (5)$$
To get an equivalent problem to (1), we simply have to replace x_{pq} ≥ 0 with x_{pq} ∈ {0, 1}. In this case, each binary variable x_{pq} with p ≠ q indicates whether object p has been assigned to cluster center q or not, while binary variable x_{qq} indicates whether object q has been chosen as a cluster center or not. Constraints (3) simply express the fact that each object must be assigned to exactly one center, while constraints (4) require that if p has been assigned to q then object q must obviously be chosen as a center.
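A small self-contained sketch of this relaxation is shown below, using scipy's LP solver on a toy two-cluster instance (the dataset and the median-based choice of the penalties d_qq are illustrative assumptions, not prescriptions from the paper):

```python
import numpy as np
from scipy.optimize import linprog
from scipy.spatial.distance import cdist

rng = np.random.default_rng(0)
pts = np.vstack([rng.normal(c, 0.1, size=(5, 2)) for c in ((0, 0), (2, 2))])
n = len(pts)
d = cdist(pts, pts)
np.fill_diagonal(d, np.median(d))          # d_qq: penalty for opening q as a center

c = d.reshape(-1)                          # variable x[p * n + q] stands for x_pq
A_eq = np.zeros((n, n * n))                # constraints (3): sum_q x_pq = 1 for every p
for p in range(n):
    A_eq[p, p * n:(p + 1) * n] = 1.0
rows = []                                  # constraints (4): x_pq - x_qq <= 0 for p != q
for p in range(n):
    for q in range(n):
        if p != q:
            r = np.zeros(n * n)
            r[p * n + q], r[q * n + q] = 1.0, -1.0
            rows.append(r)

res = linprog(c, A_ub=np.vstack(rows), b_ub=np.zeros(len(rows)),
              A_eq=A_eq, b_eq=np.ones(n), bounds=(0, None))
x = res.x.reshape(n, n)
print("LP value:", res.fun, " active centers:", np.where(np.diag(x) > 0.5)[0])
```

On a well-separated instance like this one, the relaxation typically returns a near-integral solution with one active center per cluster.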
Obviously at the core of any clustering problem of this type lies the issue of deciding which objects
will be chosen as centers. To deal with that, a key idea of our approach is to rely on, what we call, the
stability of an object. This will be a well defined measure which, intuitively, tries to quantitatively
answer the following question: ?How much do we need to penalize an object in order to ensure that
it is never selected as an optimal cluster center?? For formalizing this concept, we will make use
of the LP relaxation PRIMAL. We will thus define the stability S(q) of an object q as follows:
S(q) = inf{perturbation s that has to be applied to penalty d_qq (i.e., d_qq → d_qq + s) such that PRIMAL has no optimal solution x with x_qq > 0}  (6)
An object q can be stable or unstable depending on whether it holds S(q) ≥ 0 or S(q) < 0. To
select a set of centers Q, we will then rely on the following observation: a stable object with high
stability is also expected to be, with high probability, an optimal center in (1). The reason is that the
assumption of a high S(q) ? 0 is essentially a very strong requirement (much stronger than simply
requiring q to be active in the relaxed problem P RIMAL): it further requires that q will be active for all
problems P RIMAL(dqq + s)1 as well (where s ? S(q)). Hence, our strategy for generating Q will be
to sequentially select a set of stable objects, trying, at each step, to select an object of approximately
maximum stability (as already explained, there is high chance that this object will be an optimal
center in (1)). Furthermore, each time we insert a stable object q to Q, we reestimate stabilities for
the remaining objects in order to take this fact into account (e.g., an object may become unstable if
we know that it holds x_{q′q′} = 1 for another object q′). To achieve that, we will need to impose extra
constraints to PRIMAL (as we shall see, this will help us to obtain an accurate estimation for the
stabilities of the remaining objects given that objects in Q are already chosen as centers). Of course,
this process repeats until no more stable objects can be found.
2.1 Margins and dual-based clustering
For having a practical algorithm, the most critical issue is how to obtain a rough approximation to
the stability of an object q in a computationally efficient manner. As we shall see, to achieve this
we will need to move to the dual domain and introduce a novel concept that lies at the core of
our approach: the margin of dual solutions. But, first, we need to introduce the dual to problem
PRIMAL, which is the linear program called DUAL in (7)²:
$$\text{DUAL} \;\equiv\; \max\; D(h) = \sum_{p \in \mathcal{V}} h_p \qquad (7)$$
$$\text{s.t.}\quad h_p = \min_{q \in \mathcal{V}} h_{pq}, \quad \forall p \in \mathcal{V} \qquad (8)$$
$$\sum_{p \in \mathcal{V}} h_{pq} = \sum_{p \in \mathcal{V}} d_{pq}, \quad \forall q \in \mathcal{V} \qquad (9)$$
$$h_{pq} \ge d_{pq}, \quad \forall p \ne q \qquad (10)$$
Dual variables hpq can be thought of as representing pseudo-distances between objects, while each
variable h_p represents the minimum pseudo-distance from p (which is, in fact, "thought" by the dual
as an estimation of the actual distance between p and its closest active center).
Given a feasible dual solution h, we can now define its margin Δ_q(h) (with respect to object q) as follows:
$$\Delta_q(h) = \sum_{p:\, h_{pq} = h_p} (\hat h_p - h_p) \;-\; \sum_{p \ne q} \big(h_{pq} - \max(h_p, d_{pq})\big) \;-\; (h_{qq} - h_q), \qquad (11)$$
where (for any h) ĥ_p hereafter denotes the next-to-minimum pseudo-distance from p.
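A direct transcription of (11) into code reads as follows (our own sketch; we read the last term of (11) as the grouped slack −(h_qq − h_q)):

```python
import numpy as np

def margin(h, d, q):
    """Delta_q(h) of Eq. (11); h and d are n x n arrays of pseudo-distances / distances."""
    hp = h.min(axis=1)                         # h_p: minimum pseudo-distance from p
    hp2 = np.partition(h, 1, axis=1)[:, 1]     # next-to-minimum pseudo-distance \hat h_p
    n = h.shape[0]
    gain = sum(hp2[p] - hp[p] for p in range(n) if h[p, q] == hp[p])
    loss = sum(h[p, q] - max(hp[p], d[p, q]) for p in range(n) if p != q)
    return gain - loss - (h[q, q] - hp[q])
```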
There is a very tight connection between margins of dual solutions and stabilities of objects. The
following lemma provides a first indication for this fact and shows that we can actually use margins
to decide whether an object is stable or not and also to lower bound or upper bound its stability
accordingly (see [7] for proofs):
Lemma 1 ([7]). Let h be an optimal dual solution to DUAL.
1. If Δ_q(h) > 0 then S(q) ≥ Δ_q(h).
2. If Δ_q(h) < 0 then S(q) ≤ Δ_q(h).
¹ PRIMAL(z) denotes a modified problem PRIMAL where the penalty for q has been set equal to z.
² Problem DUAL results from the standard dual to PRIMAL after applying a transformation to the dual variables.
In fact, the following fundamental theorem goes even further by proving that stabilities can be fully
characterized solely in terms of margins. Hence, margins and stabilities are two concepts that can
be roughly considered as dual to each other:
Theorem 2 ([7]). The following equalities hold true:
S(q) ≥ 0 ⟹ S(q) = sup{Δ_q(h) | h optimal solution to DUAL},  (12)
S(q) ≤ 0 ⟹ S(q) = inf{Δ_q(h) | h optimal solution to DUAL}.  (13)
Furthermore, it can be shown that:
S(q) = sign(S(q)) · sup{|Δ_q(h)| : h optimal solution to DUAL}.  (14)
What the above theorem essentially tells us is that one can compute S(q) exactly, simply by considering the margins of optimal dual solutions. Based on this fact, it is therefore safe to assume that
solutions h with high (but not necessarily maximum) dual objective D(h) will have margins that
are good approximations to S(q), i.e., it holds:
S(q) ≈ Δ_q(h).  (15)
This is exactly the idea that our clustering algorithm will rely on in order to efficiently discover
objects that are stable. It thus maintains a dual solution h and a set Q containing all stable objects
chosen as centers up to the current point (Q is empty initially). At each iteration, it increases the
dual objective D(h) by updating solution h via an operation called DISTRIBUTE. This operation is
repeatedly applied until a high enough objective value D(h) is obtained such that at least one stable
object is revealed based on the estimated margins of h. At that point, the set Q is expanded and h is
updated (via an operation called PROJECT) to take account of this fact. The process is then repeated
until no more stable objects can be found. A remarkable thing to note in this process is that, as we
shall see, determining how to update h during the DISTRIBUTE operation (i.e., for increasing the
dual objective) also relies critically on the use of margins.
Another technical point that we need to solve comes from the fact that Q gets populated with objects
as the algorithm proceeds, which is something that we certainly need to take into account when
estimating object stabilities. Fortunately, there is a very elegant solution to this problem: since all
objects in Q are assumed to be cluster centers (i.e., it holds xqq = 1, ?q ? Q), instead of working
with problems P RIMAL and D UAL, it suffices that one works with the following primal-dual pair of
LPs called P RIMALQ and D UALQ 3 :
P RIMAL Q = min P RIMAL
s.t. xqq = 1, ?q ? Q
D UALQ = max D UAL
s.t. hpq = dpq , ?{p, q} ? Q =
6 ?
This means, e.g., that stability S(q) is now defined by using P RIMALQ (instead of P RIMAL) in (6).
Likewise, lemma 1 and theorem 2 still continue to hold true provided that D UAL is replaced with
D UALQ in the statement of these theorems. In addition to that, the definition of margin ?q (h) needs
to be modified as follows :
X
X
? p ? hp ) ?
?q (h) =
(h
(hpq ? max(hp , dpq )) ? hqq ? hq . (16)
p?Q:h
/
pq =hp
p?Q?{q}
/
The PROJECT operation: Given this modified definition of margins, we can now update Q at any
iteration in the following manner:
EXPAND: Compute q̂ = argmax_{q∉Q} Δ_q(h) and, if Δ_{q̂}(h) ≥ 0, set Q = Q ∪ {q̂}.  (17)
Based on the fact that margins are used as approximations to the stabilities of objects, the above update simply says that the object q̂ with maximum stability should be chosen as the new center at the current iteration, provided of course that this object q̂ is stable. Furthermore, in this case, we also
³ Actually, to represent the dual of PRIMAL_Q exactly, we need to add a constant in the objective function of DUAL_Q. Since, however, this constant does not affect maximization, it is thus omitted for clarity.
1: h ← d;
2: while max_{q∉Q} Δ_q(h) < 0 do
3:   D_prev ← D(h); h ← DISTRIBUTE(h);
4:   if D_prev = D(h) then exit;
5: end
6: q̂ ← argmax_{q∉Q} Δ_q(h); Q ← Q ∪ {q̂}; h ← PROJECT(h);
7: goto 2;
Fig. 1: Pseudocode of our clustering algorithm.
need to update the current dual solution h in order to take account of the fact that extra constraints have been added to DUAL_Q (these are a result of the extra constraint x_{q̂q̂} = 1 that has been added to PRIMAL_Q). By definition of DUAL_Q, the new constraints are h_{q̂p} = d_{q̂p}, h_{pq̂} = d_{pq̂} for all p ∉ Q and, so, one has to apply the following operation, which simply projects the current dual solution into the feasible set of the updated linear program DUAL_Q:
PROJECT: h_{pp} += h_{q̂p} − d_{q̂p},  h_{q̂p} = d_{q̂p},  h_{pq̂} = d_{pq̂},  ∀p ∉ Q.  (18)
Note that the update h_{pp} += h_{q̂p} − d_{q̂p} is needed for maintaining dual feasibility constraint (9). Essentially, PROJECT is a warm-start operation that allows us to reuse existing information for computing a solution h that has a high dual objective value D(h) and is also feasible to the updated DUAL_Q.
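In code, PROJECT amounts to a simple clamping pass over the entries that touch the new center (a sketch under our indexing convention h[object, center]; here Q is assumed to already contain q̂):

```python
def project(h, d, q_hat, Q):
    """PROJECT of Eq. (18): restore feasibility after q_hat has been inserted into Q."""
    new = h.copy()
    for p in range(h.shape[0]):
        if p in Q:                                  # q_hat itself is in Q and is skipped
            continue
        new[p, p] += new[q_hat, p] - d[q_hat, p]    # compensate column p to keep (9) tight
        new[q_hat, p] = d[q_hat, p]
        new[p, q_hat] = d[p, q_hat]
    return new
```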
The DISTRIBUTE operation: In case it holds Δ_q(h) < 0 for all q ∉ Q, this means that we are unable to find an object with good stability at the current iteration. To counter that, we will thus need to update solution h in order to increase its dual objective value (recall that, by Lemma 1, stable objects will necessarily be revealed at an optimal dual solution, i.e., at a dual solution of maximum objective). Intuitively, what happens is that, as we increase the dual objective D(h), objects not in Q actually try to compete with each other for achieving a large margin. Interestingly enough, in order to increase D(h), we will again have to rely on the margins of the current dual solution. In particular, it turns out that, if Δ_q(h) < 0 holds true for all q ∉ Q, then the following very simple update of h is guaranteed to increase the dual objective:
$$\text{DISTRIBUTE:}\quad \forall p, q \notin Q,\quad h_{pq} = \begin{cases} \max(h_p, d_{pq}), & \text{if } p \ne q \text{ AND } (p \in L_Q \text{ OR } h_p < d_{pq}) \\ h_p - \dfrac{\Delta_q(h)}{|V_q|}, & \text{else if } h_{pq} > h_p \\ \hat h_p - \dfrac{\Delta_q(h)}{|V_q|}, & \text{else if } h_{pq} = h_p \end{cases}$$
In the above update, we denote by L_Q the set of objects whose minimum pseudo-distance h_p is attained at an object from Q, i.e., L_Q = {p ∉ Q | h_p = min_{q∈Q} h_{pq}}, while |V_q| denotes the cardinality of the set V_q = {p ∉ Q ∪ L_Q | h_p ≥ d_{pq}} ∪ {q}. The following theorem then holds true:
Theorem 3. If max_{q∉Q} Δ_q(h) < 0, then the DISTRIBUTE operation maintains feasibility and, unless V = Q ∪ L_Q, it also strictly increases the dual objective.
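A sketch of the update just defined is given below (again our own transcription; it reuses the margin helper from the sketch after Eq. (11), i.e., the Q = ∅ form of the margin — the Q-aware version (16) would restrict the sums accordingly — and is only meant for the branch where all margins are negative):

```python
import numpy as np

def distribute(h, d, Q):
    """One DISTRIBUTE pass over all pseudo-distances h_pq with p, q outside Q."""
    n = h.shape[0]
    out = [p for p in range(n) if p not in Q]
    hp = h.min(axis=1)
    hp2 = np.partition(h, 1, axis=1)[:, 1]
    LQ = {p for p in out if Q and hp[p] == min(h[p, c] for c in Q)}
    new = h.copy()
    for q in out:
        Vq = [p for p in out if p != q and p not in LQ and hp[p] >= d[p, q]] + [q]
        dq = margin(h, d, q)               # Delta_q(h), negative on this branch
        for p in out:
            if p != q and (p in LQ or hp[p] < d[p, q]):
                new[p, q] = max(hp[p], d[p, q])
            elif h[p, q] > hp[p]:
                new[p, q] = hp[p] - dq / len(Vq)
            else:                           # h[p, q] == hp[p]
                new[p, q] = hp2[p] - dq / len(Vq)
    return new
```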
The pseudocode of the resulting algorithm is shown in Fig. 1. As already explained, it is an iterative
algorithm, which keeps updating a dual solution h by using the DISTRIBUTE and PROJECT operations (the latter applied only when needed) until the dual objective can no longer increase. Note also
that, besides maintaining a dual solution h, the algorithm also maintains Q which provides a current
clustering and also has a primal cost E(Q). With respect to this cost, the following theorem can be
shown to hold true:
Theorem 4. If max_{q∉Q} Δ_q(h) > 0, then the EXPAND operation strictly decreases the primal cost E(Q).
This implies that the sequence of primal costs E(Q) generated by the algorithm is decreasing (recall
that we actually want to minimize E(?)). It is worth noting at this point that nowhere have we
tried to enforce this property by explicitly considering the primal cost when updating Q. This
is achieved simply thanks to the requirement of always selecting objects with high stability, thus
showing how powerful this requirement actually is. We also note that the algorithm?s convergence
is always guaranteed: the algorithm terminates when neither the primal cost E(Q) decreases nor the
dual objective D(h) increases during the current iteration. Finally, we note that exactly the same
algorithm applies to the general case where the objects in V form a graph with edges E (distance dpq
is then defined only for pq ? E). In this case, it is easy to verify that the cost of each iteration will be
O(|E|). Furthermore, the algorithm converges extremely fast in practice (i.e. in very few iterations).
3 Related work
Before proceeding, let us briefly mention how our method relates to some state-of-the-art exemplarbased clustering techniques. Affinity propagation [5] is a recently proposed method for clustering,
which relies on minimizing exactly the same objective function (1). This is an iterative algorithm,
which repeatedly updates (through messages) the so-called responsibilities and availabilities. These
can be considered as counterparts to our pseudo-distances hpq . Affinity propagation also estimates
the so-called self-availabilities for measuring the likelihood of an object being a cluster center. On
the contrary, we use for the same purpose the margins that approximate the stability of an object.
Furthermore, compared to affinity propagation, our method offers the following significant advantages: its convergence is always guaranteed, it is parameter-free (no need for adjusting parameters
such as damping factors in order to ensure convergence), it is a descent method (objective function (1) always decreases), and it can make use of the computed dual solutions for deriving online
optimality bounds for free (these can be used for assessing that the derived solutions are almost
optimal). At the same time, our method performs equally well or better in practice. Very recently,
another exemplar-based algorithm has been proposed as well, which relies on solving a convex formulation of clustering [8]. We note, however, that this method is used for solving a different and
much easier problem, which is that of soft clustering. Furthermore, it relies on a convex relaxation
which is known to be much less tight than the LP relaxation PRIMAL we use here (essentially [8] replaces all constraints x_{pq} ≤ x_{qq}, ∀p ∈ V with the much looser constraint Σ_p x_{pq} ≤ |V| · x_{qq}).
As a result, generated solutions are expected to be of much lower quality. We also note that, unlike
EM-like clustering algorithms such as K-means, our method is totally insensitive to initialization
conditions and does not get stuck at bad local minima (thus yielding solutions of much better quality). Also, it is much more efficient than methods like [6], that require solving very large linear
programs.
4 Experimental results
To illustrate the robustness of our algorithm to noise and its insensitivity to initialization, we start
by showing clustering results on synthetic data. The synthetic datasets were generated using the
following procedure: 2D points were sampled from a mixture of gaussian distributions, where the
centers of the gaussians were arranged in an approximately grid-like fashion over the plane. In
addition to that, random outliers were generated uniformly all over the grid, with their number being
equal to half the number of the points drawn from the gaussian distributions. One such dataset
(consisting of 24 gaussians) is displayed in Fig. 2, where colored crosses correspond to samples
from gaussians, while the black dots correspond to outliers. The clustering result produced by our
algorithm is shown in Fig. 2(a). As can be seen from that figure, despite the heavy percentage of
noise, our method has been able to accurately detect all gaussian centers and successfully cluster
this 2D dataset. Note that the number of gaussians was not given as input to our algorithm. Instead,
it was inferred based on a common penalty term dqq for all objects q, which was set roughly equal to
the median distance between points. On the contrary, K-means was unable to produce a good result
for this dataset despite the fact that it was restarted multiple times (100 runs were used in this case).
This is, of course, due to its well known sensitivity to initialization conditions. We repeated multiple
experiments by varying the number of gaussians. Contrary to our algorithm, behavior of K-means
gets even worse as this number increases.
We have also plotted in Fig. 2(c) the primal and dual costs that were generated by our algorithm
when it was applied to the example of Fig. 2(a). These correspond to the solid red and dashed blue
curves respectively. Note that the dual costs represent lower bounds to the optimum value of the
objective function E(?), while the primal costs represent obviously upper bounds. This fact allows
us to obtain online optimality bounds with respect to how far our current primal solution Q is with
respect to the unknown optimum of E(?). These bounds are, of course, refined continuously as the
algorithm proceeds and can be useful for assessing its performance. For instance, in this particular
example, we can be sure that the primal cost of our final solution is within 1% of the unknown
optimum of function E(?), i.e., an approximately optimal solution has been obtained.
Next we show some results from applying our algorithm to the challenging problem of multibody 3D
segmentation, which has several applications in computer vision. As we shall see, a non-Euclidean
distance for clustering will have to be used in this case. According to the 3D segmentation problem,
we are given a set of N pixel correspondences between two images. These correspondences result
[Figure 2: (a) clustering produced by our algorithm and (b) K-means clustering of the 2D synthetic dataset (axes roughly 0.2–1.4); (c) primal cost (solid red) and dual cost (dashed blue) over the iterations.]
Fig. 2: Clustering results for synthetic data. The centers of the big circles represent the points chosen as cluster
centers by the 2 algorithms. The primal and dual costs in (c) verify that the cost of our algorithm's solution is
within 1% of the optimum cost.
from K objects undergoing K 3D rigid-body motions relative to a moving camera. The 3D-motion
segmentation problem is the task of clustering these N pixel pairs according to the K moving objects. We consider the more general and difficult scenario of a fully projective camera model. In this
case, each pixel pair, say, pi = (yi , zi ) that belongs to a moving object k should satisfy an epipolar
constraint:
$$y_i^\top F_k z_i = 0, \qquad (19)$$
where Fk represents the fundamental matrix associated with the k-th 3D motion. Of course, the
matrices Fk corresponding to different motions are unknown to us. Hence, to solve the 3D segmentation problem, we need to estimate both the matrices Fk as well as the association of each pixel
pair p_i = (y_i, z_i) to the correct fundamental matrix F_k. To this end, we sample a large set of fundamental matrices by using a RANSAC-based scheme (we recall that a random set of, e.g., 8 pixel
pairs pi is enough for generating a new fundamental matrix). The resulting matrices, say, {Fk } will
then correspond to cluster centers, whereas all the input pixel pairs {pi } will correspond to objects
that need to be assigned to an active cluster center. A clustering objective function of the form (1)
thus results and by minimizing it we can also obtain a solution to the 3D segmentation problem. Of
course, in this case, the distance function d(pi , Fk ) between an object pi = (yi , zi ) and a cluster
center will not be Euclidean. Instead, based on (19), we can use a distance of the following form:
$$d(p_i, F_k) = |y_i^\top F_k z_i|. \qquad (20)$$
Due to being more robust, a normalized version of the above distance is usually preferred in practice.
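For reference, the sketch below computes both the raw distance (20) and a Sampson-style normalization (the latter is our assumption about what the "normalized version" refers to; the paper does not spell out its choice):

```python
import numpy as np

def epipolar_distance(y, z, F):
    """Raw |y^T F z| of Eq. (20) and its Sampson-normalized variant for a pixel pair (y, z)."""
    yh, zh = np.append(y, 1.0), np.append(z, 1.0)   # homogeneous pixel coordinates
    val = yh @ F @ zh
    Fz, Fty = F @ zh, F.T @ yh
    denom = np.sqrt(Fz[0] ** 2 + Fz[1] ** 2 + Fty[0] ** 2 + Fty[1] ** 2)
    return abs(val), abs(val) / denom
```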
Figure 3 displays 3D motion segmentation results that were obtained by applying our algorithm to
two image pairs (points with different colors correspond to different motions). These examples
were downloaded from a publicly available motion segmentation database [9] with ground-truth.
The ground-truth motion segmentation is also shown for each example and, as can be seen, it is
almost identical with the segmentation estimated by our algorithm.
We next compare our method to Affinity Propagation (AP). Some really impressive results on 4
very challenging datasets have been reported for that algorithm in [5], indicating that it outperforms
any other center-based clustering method. In particular, AP has been used for: clustering images
of faces (using the squared error distance), detecting genes in microarray data (using a distance
based on exons? transcriptions levels), identifying representative sentences in manuscripts (using
Fig. 3: Two 3D motion segmentation results, (a) and (b). For each one we show (left) the ground-truth segmentation of feature points and (right) the estimated segmentation along with the input optical flow vectors.
[Figure 4: panel (a) is the comparison table reconstructed below; panels (b) and (c) show, respectively, our algorithm's clustering of the "fourclouds" dataset and the primal costs generated by affinity propagation on it.]

              Primal cost E(Q)          #clusters
              Ours        AP            Ours    AP
  Faces       13430       13454         60      62
  Genes       -210595     -210539       1301    1290
  Cities      92154       92154         7       7
  Sentences   10234       10241         4       4
Fig. 4: (a) Comparison of our algorithm with affinity propagation [5] on the 4 very challenging datasets "Faces", "Genes", "Cities" and "Sentences" from [5]. Since the goal of both algorithms is to minimize objective function E(Q), for each dataset we report the final value of this function and the number of estimated clusters. We have used exactly the same settings for both methods. (b) Our algorithm's clustering when applied to the "fourclouds" dataset from [1]. The primal costs generated by AP for this dataset (shown in (c)) demonstrate that AP
fails to converge in this case (to prevent that, a properly chosen damping factor has to be used).
the relative entropy as distance), and identifying cities that can be easily accessed by airline travel.
In Fig. 4(a), we compare our method to AP on these publicly available problems. Since both methods
rely on optimizing the same objective function, we list the values obtained by the two methods for
the corresponding problems. Exactly the same settings have been used for both algorithms, with
AP using the parameters proposed in [5]. Note that in all cases our algorithm manages to obtain
a solution of equal or lower value than AP. This is true even, e.g., in the Genes dataset, where
a higher number of clusters is selected by our algorithm (and thus a higher penalty for activating
them is paid). Furthermore, an additional advantage of our algorithm is that, unlike AP, it is always
guaranteed to converge (e.g., see Figs 4(b), 4(c)). We note that, due to lack of space, a running time
comparison with AP, as well as a comparison of our algorithm to the method in [10], are included in
[7].
5 Conclusions
In this paper we have introduced a very powerful and efficient center-based clustering algorithm,
derived from LP duality theory. The resulting algorithm has guaranteed convergence and can handle
data sets with arbitrary distance functions. Furthermore, despite its extreme generality, the proposed
method is insensitive to initialization and computes clusterings of very low cost. As such, and
considering the key role that clustering has in many problems, we believe that our method can find
use in a wide variety of tasks. As another very important (both practical and theoretical) contribution
of this work we also consider the fact of introducing the notions of LP-based stabilities and margins,
two quantities that, as we have proved, are dual to each other and can be used for deciding what
objects should be chosen as cluster centers. We strongly believe that these ideas can be of both
practical and theoretical interest not just for designing center-based clustering algorithms, but also
in many other contexts as well.
References
[1] A. Ng, M. Jordan, and Y. Weiss, "On spectral clustering: Analysis and an algorithm," in NIPS, 2001.
[2] D. Verma and M. Meila, "A comparison of spectral clustering algorithms," Tech. Rep., 2001.
[3] A. Banerjee, S. Merugu, I. S. Dhillon, and J. Ghosh, "Clustering with Bregman divergences," J. Mach. Learn. Res., vol. 6, pp. 1705–1749, 2005.
[4] B. Fischer, V. Roth, and J. Buhmann, "Clustering with the connectivity kernel," in NIPS, 2004.
[5] B. J. Frey and D. Dueck, "Clustering by passing messages between data points," Science, vol. 315, 2007.
[6] M. Charikar, S. Guha, É. Tardos, and D. B. Shmoys, "A constant-factor approximation algorithm for the k-median problem," J. Comput. Syst. Sci., vol. 65, no. 1, pp. 129–149, 2002.
[7] N. Komodakis, N. Paragios, and G. Tziritas, "Clustering via LP-based Stabilities," Tech. Report, 2009.
[8] D. Lashkari and P. Golland, "Convex clustering with exemplar-based models," in NIPS, 2008.
[9] R. Tron and R. Vidal, "A benchmark for the comparison of 3-d motion segmentation algorithms," in CVPR, 2007.
[10] M. Leone, Sumedha, and M. Weigt, "Clustering by soft-constraint affinity propagation: applications to gene-expression data," Bioinformatics, vol. 23, no. 20, pp. 2708–2715, 2007.
2,733 | 3479 | MAS: a multiplicative approximation scheme for probabilistic inference
Christopher Meek
Microsoft Research
Redmond, WA 98052
[email protected]
Ydo Wexler
Microsoft Research
Redmond, WA 98052
[email protected]
Abstract
We propose a multiplicative approximation scheme (MAS) for inference problems in graphical models, which can be applied to various inference algorithms. The method uses ε-decompositions, which decompose functions used throughout the inference procedure into functions over smaller sets of variables with a known error ε. MAS translates these local approximations into bounds on the accuracy of the results. We show how to optimize ε-decompositions and provide a fast closed-form solution for an L2 approximation. Applying MAS to the Variable Elimination inference algorithm, we introduce an algorithm we call DynaDecomp which is extremely fast in practice and provides guaranteed error bounds on the result. The superior accuracy and efficiency of DynaDecomp is demonstrated.
1 Introduction
Probabilistic graphical models gained popularity in the recent decades due to their intuitive representation and because they enable the user to query about the value distribution of variables of
interest [19]. Although very appealing, these models suffer from the problem that performing inference in the model (e.g. computing marginal probabilities or its likelihood) is NP-hard [6].
As a result, a variety of approximate inference methods have been developed. Among these methods are loopy message propagation algorithms [24], variational methods [16, 12], mini buckets [10],
edge deletion [8], and a variety of Monte Carlo sampling techniques [13, 19, 21, 4, 25]. Approximation algorithms that have useful error bounds and speedup while maintaining high accuracy, include
the work of Dechter and colleagues [2, 3, 10, 17], which provide both upper and lower bounds on
probabilities, upper bounds suggested by Wainwright et al. [23], and variational lower bounds [16].
In this paper we present an approximation scheme called the Multiplicative Approximation Scheme
(MAS), that provides error bounds for the computation of the likelihood of evidence, marginal probabilities, and the Most Probable Explanation (MPE) in discrete directed and undirected graphical
models. The approximation is based on a local operation called an ε-decomposition, which decomposes functions used in the inference procedure into functions over smaller subsets of variables, with
a guarantee on the error introduced. The main difference from existing approximations is the ability
to translate the error introduced in the local decompositions performed during execution of the algorithm into bounds on the accuracy of the entire inference procedure. We note that this approximation
can be also applied to the more general class of multiplicative models introduced in [27].
We explore optimization of ε-decompositions and provide a fast optimal closed-form solution for
the L2 norm. We also show that for the Kullback-Leibler divergence the optimization problem can
be solved using variational algorithms on local factors. MAS can be applied to various inference
algorithms. As an example we show how to apply MAS to the Variable Elimination (VE) algorithm [9, 20], and present an algorithm called DynaDecomp, which dynamically decomposes functions in the VE algorithm. In the results section we compare the performance of DynaDecomp with
that of Mini-buckets [10], GMF [28] and variational methods [26] for various types of models. We
find that our method achieves orders of magnitude better accuracy on all datasets.
2 Multiplicative Approximation Scheme (MAS)
We propose an approximation scheme, called the Multiplicative Approximation Scheme (MAS) for
inference problems in graphical models. The basic operations of the scheme are local approximations called ε-decompositions that decouple the dependency of variables. Every such local decomposition has an associated error ε that our scheme combines into an error bound on the result.
Consider a graphical model for n variables X = {X_1, ..., X_n} that encodes a probability distribution P(X) = ∏_j φ_j(D_j), where the D_j ⊆ X are sets determined by the model. Throughout the paper we denote variables and sets of variables with capital letters and denote a value assigned to them with lowercase letters. We denote the observed variables in the model by E = X \ H, where E = e. To simplify the proofs we assume φ_j(d_j) > 1. When this is not the case, as in BNs, every function φ_j can be multiplied by a constant z_j such that the assumption holds, and the result is obtained after dividing by ∏_j z_j. Thus, here we assume positivity but discuss how this can be relaxed below.

In addition to approximating the functions φ by which the original model is defined, we also may wish to approximate other functions, such as intermediate functions created in the course of an inference algorithm. We can write the result of marginalizing out a set of hidden variables as a factor of functions f_i. The log of the probability distribution the model encodes after such marginalization can then be written as

\log P(A, E) = \log \prod_i f_i(U_i) = \sum_i \psi_i(U_i) \qquad (1)

where A ⊆ H. When A = H we can choose sets U_i = D_i and functions f_i(U_i) = φ_i(D_i).
Definition 1 (ε-decomposition) Given a set of variables W, and a function ψ(W) that assigns real values to every instantiation W = w, a set of m functions ψ̃_l(W_l), l = 1 ... m, where W_l ⊆ W, is an ε-decomposition if ∪_l W_l = W, and

\frac{1}{1+\epsilon} \;\le\; \frac{\sum_l \tilde{\psi}_l(w_l)}{\psi(w)} \;\le\; 1 + \epsilon \qquad (2)

for some ε ≥ 0, where w_l is the projection of w on W_l.
Note that an ε-decomposition is not well defined for functions ψ that equal zero or are infinite for some instantiations. These functions can still be ε-decomposed for certain choices of subsets W_l by defining 0/0 = 1 and ∞/∞ = 1. We direct the interested reader to the paper of Geiger et al. [12] for a discussion on choosing such subsets. We also note that when approximating models in which some assignments have zero probability, the theoretical error bounds can be arbitrarily bad; yet, in practice the approximation can sometimes yield good results.
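For concreteness, the sketch below (ours; the function name and table encoding are assumptions, not from the paper) computes the smallest ε of Eq. 2 for a candidate decomposition by exhaustively scanning all instantiations:

```python
import itertools
import math

def epsilon_of_decomposition(psi, domains, parts):
    """Smallest eps >= 0 satisfying Eq. 2 for a candidate decomposition.

    psi     : dict mapping each full instantiation w (a tuple over all
              variables 0..n-1) to psi(w), assumed strictly positive.
    domains : list of per-variable domain sizes.
    parts   : list of (vars_l, psi_l) pairs; vars_l is a tuple of variable
              indices (the set W_l) and psi_l maps projections w_l to
              psi_l(w_l).  The union of the vars_l must cover all variables.
    """
    eps = 0.0
    for w in itertools.product(*[range(d) for d in domains]):
        approx = sum(psi_l[tuple(w[v] for v in vars_l)]
                     for vars_l, psi_l in parts)
        ratio = approx / psi[w]
        if ratio <= 0:
            return math.inf  # Eq. 2 cannot hold for any finite eps
        # Eq. 2 requires 1/(1+eps) <= ratio <= 1+eps.
        eps = max(eps, ratio - 1.0, 1.0 / ratio - 1.0)
    return eps
```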
The following theorems show that using ε-decompositions the log-likelihood log P(e), the log of marginal probabilities, the log of the Most Probable Explanation (MPE) and the log of the Maximum Aposteriori Probability (MAP) can all be approximated within a multiplicative factor using a set of ε-decompositions.

Lemma 1 Let A ⊆ H, and let P(A, E) factor according to Eq. 1. Then the log of the joint probability P(a, e) can be approximated within a multiplicative factor of 1 + ε_max using a set of ε_i-decompositions, where ε_max = max_i {ε_i}.
Proof:

\log \tilde{P}(a,e) \le \log \prod_{i,l} e^{\tilde{\psi}_{il}(u_{il})} = \sum_{i,l} \tilde{\psi}_{il}(u_{il}) \le \sum_i (1+\epsilon_i)\,\psi_i(u_i) \le (1+\epsilon_{\max}) \log P(a,e)

\log \tilde{P}(a,e) \ge \log \prod_{i,l} e^{\tilde{\psi}_{il}(u_{il})} = \sum_{i,l} \tilde{\psi}_{il}(u_{il}) \ge \sum_i \frac{1}{1+\epsilon_i}\,\psi_i(u_i) \ge \frac{1}{1+\epsilon_{\max}} \log P(a,e)
Theorem 1 For a set A' ⊆ A, the expression log ∑_{a'} P(a, e) can be approximated within a multiplicative factor of 1 + ε_max using a set of ε_i-decompositions.
Proof: Recall that \sum_j (c_j)^r \le \left(\sum_j c_j\right)^r for any set of numbers c_j ≥ 0 and r ≥ 1. Therefore, using Lemma 1, summing out any set of variables A' ⊆ A does not increase the error:

\log \sum_{a'} \tilde{P}(a,e) \le \log \sum_{a'} \left(\prod_i e^{\psi_i(u_i)}\right)^{1+\epsilon_{\max}} \le \log \left(\sum_{a'} \prod_i e^{\psi_i(u_i)}\right)^{1+\epsilon_{\max}} = (1+\epsilon_{\max}) \log \sum_{a'} P(a,e)

Similarly, for the upper bound approximation we use the fact that \left(\sum_j c_j\right)^r \le \sum_j (c_j)^r for any set of numbers c_j ≥ 0 and 0 < r ≤ 1.
Note that whenever E = ∅, Theorem 1 claims that the log of all marginal probabilities can be approximated within a multiplicative factor of 1 + ε_max. In addition, for any E ⊆ X, by setting A' = A the log-likelihood log P(e) can be approximated with the same factor.

A similar analysis can also be applied with minor modifications to the computation of related problems like the MPE and MAP. We adopt the simplification of the problems suggested in [10], reducing the problem of the Most Probable Explanation (MPE) to computing P(h*, e) = max_h P(h, e), and the problem of the Maximum Aposteriori Probability (MAP) to computing P(a*, e) = max_a ∑_{H\A} P(h, e) for a set A ⊆ H.
Denote the operator ⊕ as either a sum or a max operator. Then, similar to Eq. 1, for a set H' ⊆ H we can write

\log \bigoplus_{h'} P(h, e) = \log \prod_i f_i(U_i) = \sum_i \psi_i(U_i) \qquad (3)
Theorem 2 Given a set A ⊆ H, the log of the MAP probability log max_a ∑_{H\A} P(h, e) can be approximated within a multiplicative factor of 1 + ε_max using a set of ε_i-decompositions.
Proof: The proof follows that of Theorem 1 with the addition of the fact that max_j (c_j)^r = (max_j c_j)^r for any set of real numbers c_j ≥ 0 and r ≥ 0.
An immediate conclusion from Theorem 2 is that the MPE probability can also be approximated
with the same error bounds, by choosing A = H.
2.1 Compounded Approximation

The results on using ε-decompositions assume that we decompose functions f_i as in Eqs. 1 and 3. Here we consider decompositions of any function created during the inference procedure, and in particular compounded decompositions of functions that were already decomposed. Suppose that a function ψ̃(W), which already incurs an error ε_1 compared to a function ψ(W), can be decomposed with an error ε_2. Then, according to Eq. 2, this results in a set of functions ψ̃_l(W_l) such that the error of ∑_l ψ̃_l(W_l) is (1 + ε_1)·(1 + ε_2) with respect to ψ(W).
To understand what the guaranteed error for an entire inference procedure is, consider a directed graph where the nodes represent functions of the inference procedure, and each node v has an associated error r_v. The nodes representing the initial potential functions of the model φ_i have no parents in the model and are associated with zero error (r_v = 1). Every multiplication operation is denoted by edges directed from the nodes S, representing the multiplied functions, to a node t representing the resulting function, the error of which is r_t = max_{s∈S} r_s. An ε-decomposition, on the other hand, has a single source node s with an associated error r_s, representing the decomposed function, and several target nodes T, with an error r_t = (1 + ε)·r_s for every t ∈ T. The guaranteed error for the entire inference procedure is then the error associated with the sink function in the graph. In Figure 1 we illustrate such a graph for an inference procedure that starts with four functions (f_a, f_b, f_c and f_d) and decomposes three functions, f_a, f_g and f_j, with errors ε_1, ε_2 and ε_3 respectively. In this example we assume that ε_1 > ε_2 and that 1 + ε_1 < (1 + ε_2)(1 + ε_3).
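This bookkeeping is mechanical enough to state in a few lines of Python (our illustration; the class and function names are hypothetical): a product inherits the worst factor among its operands, and an ε-decomposition multiplies the source's factor by (1 + ε). The guarantee for a whole run is the factor attached to the sink function.

```python
class TrackedFunction:
    """A function of the inference procedure together with its
    accumulated error factor r (Section 2.1)."""
    def __init__(self, table, r=1.0):
        self.table = table  # the numeric table itself (representation abstract)
        self.r = r          # accumulated multiplicative error factor

def track_multiply(operands, product_table):
    # A multiplication node takes the maximum factor of its parents.
    return TrackedFunction(product_table, r=max(f.r for f in operands))

def track_decompose(source, part_tables, eps):
    # Each target of an eps-decomposition carries (1 + eps) times the
    # factor of the decomposed source function.
    return [TrackedFunction(t, r=source.r * (1.0 + eps))
            for t in part_tables]
```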
2.2 ε-decomposition Optimization

ε-decompositions can be utilized in inference algorithms to reduce the computational cost by parsimoniously approximating factors that occur during the course of computation. As we discuss in Section 3, both the selection of the form of the ε-decomposition (i.e., the sets W_i) and the choice of which factors to approximate impact the overall accuracy and runtime of the algorithm. Here we consider the problem of optimizing the approximating functions ψ̃_i given a selected factorization W_i.
Given a function f(W) = e^{ψ(W)} and the sets W_i, the goal is to optimize the functions ψ̃_i(W_i) in order to minimize the error ε_f introduced in the decomposition. The objective function is therefore

\min_{(\tilde{\psi}_1,\ldots,\tilde{\psi}_m)} \; \max_{w \in W} \left\{ \frac{\sum_i \tilde{\psi}_i(w_i)}{\psi(w)}, \; \frac{\psi(w)}{\sum_i \tilde{\psi}_i(w_i)} \right\} \qquad (4)
This problem can be formalized as a convex problem using the following notations. Let

t = \max_{w \in W} \left\{ \frac{\sum_i \tilde{\psi}_i(w_i)}{\psi(w)}, \; \frac{\psi(w)}{\sum_i \tilde{\psi}_i(w_i)} \right\} \quad \text{and} \quad S_w = \frac{\psi(w)}{\sum_i \tilde{\psi}_i(w_i)}.

Now we can reformulate the problem as

\min_{(\tilde{\psi}_1,\ldots,\tilde{\psi}_m)} t \quad \text{s.t.} \quad \forall (W = w): \; S_w \le t \;\; \text{and} \;\; S_w^{-1} \le t \qquad (5)
This type of problem can be solved with geometric programming techniques, and in particular using interior-point methods [18]. Unfortunately, in the general case solving this problem requires O(m³|W|³) time, and hence can be too expensive for functions over a large domain. On the other hand, functions defined over a small domain often cannot be decomposed without introducing a large error. Thus, when trying to limit the error introduced, a significant amount of time is needed for such optimization. To reduce the computational cost of the optimization we resort to minimizing similar measures, in the hope that they will lead to a small error ε_f. Note that by deviating from Eq. 4 to choose the functions ψ̃_i we may increase the worst-case penalty error but not necessarily the actual error achieved by the approximation. In addition, even when using different measures for the optimization we can still compute ε_f exactly.
2.2.1 Minimizing the L2 Norm

An alternative minimization measure, the L2 norm, is closely related to that in Eq. 4 and is given as:

\min_{(\tilde{\psi}_1,\ldots,\tilde{\psi}_m)} \sqrt{\sum_{w \in W} \left[ \left(\sum_i \tilde{\psi}_i(w_i)\right) - \psi(w) \right]^2} \qquad (6)
We give a closed form analytic solution for this minimization problem when the sets Wi are disjoint,
but first we can remove the square root from the optimization formula due to the monotonicity of
the square root for positive values. Hence we are left with the task of minimizing:
Figure 1: A schematic description of an inference procedure along with the associated error. The procedure starts with four functions (f_a, f_b, f_c and f_d) and decomposes three functions, f_a, f_g and f_j, with errors ε_1, ε_2 and ε_3 respectively. In this example we assume that ε_1 > ε_2, which results in an error r_k = 1 + ε_1, and assume that 1 + ε_1 < (1 + ε_2)(1 + ε_3), which results in the errors r_m = r_o = (1 + ε_2)(1 + ε_3).

Figure 2: An irreducible minor graph of a 4×4 Ising model that can be obtained via VE without creating functions of more than 3 variables. Applying MAS, only one function over three variables needs to be decomposed into two functions over overlapping sets of variables in order to complete inference using only functions over three or less variables.
"
min
?1 ,...,?
?m )
(?
#2
!
X
X
w?W
i
??i (wi )
? ?(w)
(7)
We use the notation w ∼ w_k to denote an instantiation W = w that is consistent with the instantiation W_k = w_k. To find the optimal value of ψ̃_i(w_i) we differentiate Eq. 7 with respect to each ψ̃_k(w_k) and set to zero. Choosing the constraint ∑_w ψ̃_i(w_i) = (∑_w ψ(w))/m in the resulting underconstrained set of linear equations, we get

\tilde{\psi}_k(w_k) = \frac{\sum_{w \sim w_k} \psi(w)}{\prod_{i \ne k} |W_i|} - \sum_{i \ne k} \frac{\sum_w \psi(w)}{m \prod_j |W_j|}

As the last term is independent of the index i, we finally obtain

\tilde{\psi}_k(w_k) = \frac{\sum_{w \sim w_k} \psi(w)}{\prod_{i \ne k} |W_i|} - \frac{(m-1) \sum_w \psi(w)}{m\,|W|} \qquad (8)

The second term of Eq. 8 is computed once for a decomposition operation. Denoting |W| = N, this term can be computed in O(N) time. Computing the first term of Eq. 8 also takes O(N) time, but it needs to be computed for every resulting function ψ̃_k, hence taking an overall time of O(Nm).
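A minimal sketch of this closed-form rule (ours; it assumes ψ is stored as a dense numpy array whose axes are the variables, and that the blocks form a disjoint partition of the axes):

```python
import numpy as np

def l2_decompose(psi, blocks):
    """Closed-form L2-optimal disjoint decomposition of Eq. 8.

    psi    : numpy array holding psi(w); axis j corresponds to variable j.
    blocks : partition of the axes into the sets W_k, e.g. [(0, 1), (2,)].
    Returns one table per block, psi_k(w_k), following Eq. 8.
    """
    m = len(blocks)
    total = psi.sum()   # sum_w psi(w), computed once (the O(N) term)
    size = psi.size     # |W|
    tables = []
    for block in blocks:
        others = tuple(ax for ax in range(psi.ndim) if ax not in block)
        # First term of Eq. 8: sum over instantiations consistent with w_k,
        # divided by prod_{i != k} |W_i| (the number of such instantiations).
        denom = np.prod([psi.shape[ax] for ax in others]) if others else 1
        tables.append(psi.sum(axis=others) / denom
                      - (m - 1) * total / (m * size))
    return tables
```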
2.2.2 Minimizing the KL Divergence

The Kullback-Leibler (KL) divergence is another common alternative measure used for optimization:

\min_{(\tilde{\psi}_1,\ldots,\tilde{\psi}_m)} \sum_{w \in W} \left[ \sum_i \tilde{\psi}_i(w_i) \right] \log \frac{\sum_i \tilde{\psi}_i(w_i)}{\psi(w)} \qquad (9)
Although no closed form solution is known for this minimization problem, iterative algorithms were
devised for variational approximation, which start with arbitrary functions ??i (Wi ) and converge
to a local minimum [16, 12]. Despite the drawbacks of unbounded convergence time and lack of
guarantee to converge to the global optimum, these methods have proven quite successful. In our
context this approach has the benefit of allowing overlapping sets Wi .
3 Applying MAS to Inference Algorithms

Our multiplicative approximation scheme offers a way to reduce the computational cost of inference by decoupling variables via ε-decompositions. The fact that many existing inference algorithms compute and utilize multiplicative factors during the course of computation means that the scheme can be applied widely. The approach does require a mechanism to select functions to decompose; however, the flexibility of the scheme allows a variety of alternative mechanisms. One simple cost-focused strategy is to decompose a function whenever its size exceeds some threshold. An alternative quality-focused strategy is to choose an ε and search for ε-decompositions W_i. Below we consider the application of our approximation scheme to variable elimination with yet another selection strategy. We note that heuristics for choosing approximate factorizations exist for the selection of disjoint sets [28] and for overlapping sets [5] and could be utilized. The ideal application of our scheme is likely to depend both on the specific inference algorithm and the application of interest.
3.1 Dynamic Decompositions
One family of decomposition strategies of particular interest is that which allows for dynamic decompositions during the inference procedure. In this dynamic framework, MAS can be incorporated into known exact inference algorithms for graphical models, provided that local functions can be bounded according to Eq. 2. A dynamic decomposition strategy applies ε-decompositions to
functions in which the original model is defined and to intermediate functions created in the course
of the inference algorithm, according to Eq. 1 or Eq. 3, based on the current state of the algorithm,
and the accuracy introduced by the possible decompositions.
Unlike other approximation methods, such as the variational approach [16] or the edge deletion approach [8], dynamic decomposition has the capability of decoupling two variables in some contexts
while maintaining their dependence in others. If we wish to restrict ourselves to functions over three
or less variables when performing inference on a 4×4 Ising model, the model in Figure 2 is an
inevitable minor, and from this point of the elimination, approximation is mandatory. In the variational framework, an edge in the graph should be removed, disconnecting the direct dependence
between two or more variables (e.g. removing the edge A-C would result in breaking the set ABC
into the sets AB and BC and breaking the set ACD into AD and CD). The same is true for the edge
deletion method, with the difference in the new potentials associated with the new sets. Dynamic
decompositions allow for a more refined decoupling, where the dependence is removed only in some
of the functions. In our example breaking the set ABC into AB and BC while keeping the set ACD
intact is possible and is also sufficient for reducing the complexity of inference to functions of no
more than three variables (the elimination order would be: A, B, F, H, C, E, D, G). Moreover, if decomposing the set ABC can be done with an error ε_ABC, as defined in Eq. 2, then we are guaranteed not
to exceed this error for the entire approximate inference procedure. An extreme example will be the
functions for the sets ABC and ACD as appear in the tables of Figure 2. It is possible to decompose
the function over the set ABC into two functions over the sets AB and BC with an arbitrarily small
error, while the same is not possible for the function over the set ACD. Hence, in this example the
result of our method will be nearly equal to the solution of exact inference on the model, and the
theoretical error bounds will be arbitrarily small, while other approaches, such as the variational
method, can yield arbitrarily bad approximations.
We discuss how to incorporate MAS into the Variable Elimination (VE) algorithm for computing the likelihood of a graphical model [9, 20]. In this algorithm variables V ∈ H are summed out iteratively after multiplying all existing functions that include V, yielding intermediate functions f(W ⊆ X) where V ∉ W. MAS can be incorporated into the VE algorithm by identifying decompositions for some of the intermediate functions f. This results in the elimination of f from the pool of functions and adding instead the functions f̃_i(W_i) = e^{ψ̃_i(W_i)}. Note that the sets W_i are not necessarily disjoint and can have common variables. Using ε-decompositions reduces the computational complexity, as some variables are decoupled at specific points during execution of the algorithm. Throughout the algorithm the maximal error ε_max introduced by the decompositions
Table 1: Accuracy and speedup for grid-like models. Upper panel: attractive Ising models; middle panel: repulsive Ising models; lower panel: Bayesian network grids with random probabilities.

Model    Num Values   Accuracy   Bounds   Speedup   DD time (secs)
10×10    5            2.4e-4     0.0096   49.2      0.04
10×10    2            2.1e-4     0.0094   2.5       0.01
15×15    5            1.2e-4     0.0099   223.3     0.21
15×15    2            2.2e-4     0.0096   8.3       0.04
20×20    2            1.2e-4     0.0095   12.9      0.08
25×25    2            2.6e-5     0.0092   20.9      0.10
30×30    2            5.7e-4     0.0097   236.7     0.11
10×10    5            3.2e-4     0.0099   38.2      0.04
10×10    2            3.5e-4     0.0098   2.3       0.01
15×15    5            3.2e-3     0.0099   568.4     0.12
15×15    2            8.6e-4     0.0094   7.2       0.05
20×20    2            4.5e-4     0.0091   14.3      0.10
25×25    2            3.1e-5     0.0094   22.8      0.11
30×30    2            8.1e-5     0.0099   218.7     0.10
10×10    2            3.0e-3     0.0098   1.1       0.01
12×12    2            8.1e-3     0.0096   11.3      0.02
15×15    2            1.7e-3     0.0098   201.4     0.05
18×18    2            3.0e-4     0.0090   1782.8    0.15
20×20    2            1.8e-3     0.0097   7112.9    1.30
10×10    5            2.8e-5     0.0095   49.3      0.03
12×12    5            5.5e-4     0.0096   458.6     0.05
7×7      10           1.8e-4     0.0093   7.8       0.03
8×8      10           1.4e-4     0.0098   8.4       0.15

Algorithm 1: DynaDecomp
Input: A model for n variables X = {X_1, ..., X_n} and functions φ_i(D_i ⊆ X) that encode P(X) = ∏_i φ_i(D_i); a set E = X \ H of observed variables and their assignment E = e; an elimination order R over the variables in H; scalars M and δ.
Output: The log-likelihood log P(e); an error ε.
Initialize: ε = 0; F ← {φ_i(D_i)}; I(φ_i) = true;
for i = 1 to n do
    k ← R[i];
    T ← {f : f contains X_k, f ∈ F};
    F ← F \ T;
    f' ← ∑_{x_k} Π(T);
    I(f') = ∧_{f∈T} I(f);
    if |f'| ≥ M and I(f') = true then
        (ε_{f'}, F̃) ← Δ(f');
        if ε_{f'} ≤ δ then
            ∀ f̃ ∈ F̃: I(f̃) = false;
            F ← F ∪ F̃;
            ε = max{ε, ε_{f'}};
        else
            F ← F ∪ {f'};
    else
        F ← F ∪ {f'};
multiply all constant functions in F and put the product in p;
return log p, ε;
can be easily computed by associating functions with errors, as explained in Section 2.1. In our experiments we restrict attention to non-compounded decompositions. Our algorithm decomposes a function only if it is over a given size M, and if it introduces an error of no more than δ. The approximating functions in this algorithm are strictly disjoint, of size no more than M, and with the variables assigned randomly to the functions. We call this algorithm DynaDecomp (DD) and provide pseudo-code in Algorithm 1. There we use the notation Π(T) to denote multiplication of the functions f ∈ T, and Δ(f) to denote decomposition of the function f. The outcome of Δ(f) is a pair (ε, F̃) where the functions f̃_i ∈ F̃ are over disjoint sets of variables.
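To make the control flow of Algorithm 1 concrete, here is a toy, self-contained rendition in Python (ours, not the authors' implementation). It decomposes in log-space with the closed-form L2 rule of Eq. 8, splitting the variables of a large intermediate function into two arbitrary halves, and it assumes all tables are strictly greater than 1 so that Eq. 2 applies; the defaults for M and δ are also ours.

```python
import numpy as np

def expand(v, t, vs):
    # Align table t (axes named by v) with the variable ordering vs,
    # inserting singleton axes for missing variables so tables broadcast.
    order = [x for x in vs if x in v]
    t = np.transpose(t, [v.index(x) for x in order])
    return t.reshape([t.shape[order.index(x)] if x in v else 1 for x in vs])

def multiply(f1, f2):
    (v1, t1), (v2, t2) = f1, f2
    vs = tuple(dict.fromkeys(v1 + v2))
    return vs, expand(v1, t1, vs) * expand(v2, t2, vs)

def decompose(vs, t):
    # Split the variables into two halves, fit each half by Eq. 8 applied
    # to psi = log t, then measure the exact eps of Eq. 2 for the result.
    psi = np.log(t)  # requires t > 1 so that psi > 0
    half = len(vs) // 2
    m, total, size = 2, psi.sum(), psi.size
    parts = []
    for block in (tuple(range(half)), tuple(range(half, len(vs)))):
        others = tuple(ax for ax in range(psi.ndim) if ax not in block)
        marg = psi.sum(axis=others) / np.prod([psi.shape[a] for a in others])
        parts.append((tuple(vs[a] for a in block),
                      np.exp(marg - (m - 1) * total / (m * size))))
    ratio = np.log(multiply(parts[0], parts[1])[1]) / psi
    return max(ratio.max() - 1.0, 1.0 / ratio.min() - 1.0), parts

def dynadecomp(factors, order, M=64, delta=0.01):
    """factors: list of (vars, table) pairs with all table entries > 1,
    already restricted to the evidence; order: elimination order over
    the hidden variables.  Returns (log-likelihood estimate, eps)."""
    eps, pool = 0.0, [(vs, t, True) for vs, t in factors]
    for x in order:
        bucket = [f for f in pool if x in f[0]]
        pool = [f for f in pool if x not in f[0]]
        (vs, t), ok = bucket[0][:2], all(f[2] for f in bucket)
        for f in bucket[1:]:
            vs, t = multiply((vs, t), f[:2])
        ax = vs.index(x)
        vs, t = vs[:ax] + vs[ax + 1:], t.sum(axis=ax)
        if t.size >= M and ok and len(vs) >= 2:
            e, parts = decompose(vs, t)
            if e <= delta:
                # flag the pieces so they are never decomposed again
                pool += [(pv, pt, False) for pv, pt in parts]
                eps = max(eps, e)
                continue
        pool.append((vs, t, ok))
    return sum(float(np.log(t)) for _, t, _ in pool), eps
```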
We note that MAS can also be used on top of other common algorithms for exact inference in
probabilistic models which are widely used, thus gaining similar benefits as those algorithms. For
example, applying MAS to the junction tree algorithm [14] a decomposition can decouple variables in messages sent from one node in the junction tree to another, and approximate all marginal
distributions of single variables in the model in a single run, with similar guarantees on the error.
This extension is analogous to how the mini-clusters algorithm [17] extends the mini-bucket algorithm [10].
4 Results

We demonstrate the power of MAS by reporting the accuracy and theoretical bounds for our DynaDecomp algorithm for a variety of models. Our empirical study focuses on approximating the likelihood of evidence, except when comparing to the results of Xing et al. [28] on grid models. The quality of approximation is measured in terms of accuracy and speedup. The accuracy is reported as

\max\left\{ \frac{\log \tilde{L}}{\log L}, \; \frac{\log L}{\log \tilde{L}} \right\} - 1

where L is the likelihood and L̃ is the approximate likelihood achieved by DynaDecomp. We also report the theoretical accuracy, which is the maximum error introduced by decomposition operations. The speedup is reported as a ratio of run-times for obtaining the approximated and exact solutions, in addition to the absolute time of approximation. In all experiments a random partition was used to decompose the functions, and the L2 norm optimization introduced in Section 2.2.1 was applied to minimize the error. The parameter M was set to 10,000 and the guaranteed accuracy δ was set to 1%; however, as is evident from the results, the algorithm usually achieves better accuracy.
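The accuracy measure itself is a one-liner; for reference, a helper of our own:

```python
def accuracy(log_l, log_l_approx):
    # max{ log L~ / log L , log L / log L~ } - 1, as defined above
    return max(log_l_approx / log_l, log_l / log_l_approx) - 1.0
```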
We compared the performance of DynaDecomp with the any-time Mini-Buckets (MB) algorithm [10]. The parameters i and m, which are the maximal number of variables and functions in a mini-bucket, were initially set to 3 and 1 respectively. The parameter ε was set to zero, not constraining the possible accuracy. Generally we allowed MB to run for the same time it took DynaDecomp to approximate the model, but not less than one iteration (with the initial parameters).
We used two types of grid-like models. The first is an Ising model with random attractive or repulsive pair-wise potentials, as was used in [28]. When computing likelihood in these models we randomly assigned values to 10% of the variables in the model. The other kind of grids were Bayesian networks where every variable X_ij at position (i, j) in the grid has the variables X_{i-1,j} and X_{i,j-1} as parents in the model. In addition, every variable X_ij has a corresponding observed variable Y_ij connected to it. Probabilities in these models were uniformly distributed between zero and one. Inference on these models, often used in computer vision [11], is usually harder than on Ising models, due to reduced factorization. We used models where the variables had either two, five or ten values. The results are shown in Table 1. In addition, we applied DynaDecomp to two 100×100 Ising grid models with binary variables. Inference in these models is intractable. We estimate the time for exact computation using VE on current hardware to be 3×10^15 seconds. This is longer than the time since the disappearance of the dinosaurs. Setting δ to 2%, DynaDecomp computed the approximated likelihood in 7.09 seconds for the attractive model and 8.14 seconds for the repulsive one.
Comparing our results with those obtained by the MB algorithm with an equivalent amount of computation, we find that on average the accuracy of MB across all models in Table 1 is 0.198 while the average accuracy of DynaDecomp is 9.8e-4, more than 200 times better than that of MB. In addition, the theoretical guarantees are more than 30% for MB and 0.96% for DynaDecomp, a 30-fold improvement. As a side note, the MB algorithm performed significantly better on attractive Ising models than on repulsive ones. To compare our results with those reported in [28] we computed all the marginal probabilities (without evidence) and calculated the L1-based measure

\sum_{i,j} \sum_{x_{ij}} \left| P(x_{ij}) - \tilde{P}(x_{ij}) \right|

Running on the Ising models DynaDecomp obtained an average of 1.86e-5, compared to 0.003 for generalized belief propagation (GBP) and 0.366 for generalized mean field (GMF). Although the run times are not directly comparable due to differences in hardware, DynaDecomp's average run-time was less than 0.1 seconds, while the run-times of GBP and GMF were previously reported [28] to be 140 and 1.6 seconds respectively, on 8×8 grids.
We applied our method to probabilistic phylogenetic models. Inference on these large models,
which can contain tens of thousands of variables, is used for model selection purposes. Previous
works [15, 26] have obtained upper and lower bounds on the likelihood of evidence in the models
suggested in [22] using variational methods, reporting an error of 1%. Using the data as in [26],
we achieved less than 0.01% error on average within a few seconds, which improves over previous
results by two orders of magnitude both in terms of accuracy and speedup.
In addition, we applied DynaDecomp to 24 models from the UAI'06 evaluation of probabilistic inference repository [1] with δ = 1%. Only models that did not have zeros and that our exact inference algorithm could solve in less than an hour were used. The average accuracy of DynaDecomp
on these models was 0.0038 with an average speedup of 368.8 and average run-time of 0.79 seconds.
We also applied our algorithm to two models from the CPCS benchmark (cpcs360b and cpcs422b).
DynaDecomp obtained an average accuracy of 0.008 versus 0.056 obtained by MB. We note that
the results obtained by MB are consistent with those reported in [10] for the MPE problem.
References
[1] Evaluation of probabilistic inference systems: http://tinyurl.com/3k9l4b, 2006.
[2] Bidyuk and Dechter. An anytime scheme for bounding posterior beliefs. AAAI 2006.
[3] Bidyuk and Dechter. Improving bound propagation. In ECAI 342-346, 2006.
[4] Cheng and Druzdzel. AIS-BN: An adaptive importance sampling algorithm for evidential reasoning in large Bayesian networks. JAIR 13:155-188, 2000.
[5] Choi and Darwiche. A variational approach for approximating Bayesian networks by edge deletion. UAI 2006.
[6] Cooper. The computational complexity of probabilistic inference using Bayesian belief networks. AI 42(2-3):393-405, 1990.
[7] Dagum and Luby. Approximating probabilistic inference in Bayesian belief networks is NP-hard. AI 60(1):141-153, 1993.
[8] Darwiche, Chan, and Choi. On Bayesian network approximation by edge deletion. UAI 2005.
[9] Dechter. Bucket elimination: A unifying framework for reasoning. AI 113(1-2):41-85, 1999.
[10] Dechter and Rish. Mini-buckets: A general scheme for bounded inference. J. ACM 50:107-153, 2003.
[11] W. Freeman, W. Pasztor, and O. Carmichael. Learning low-level vision. IJCV 40:25-47, 2000.
[12] Geiger, Meek, and Wexler. A variational inference procedure allowing internal structure for overlapping clusters and deterministic constraints. JAIR 27:1-23, 2006.
[13] Henrion. Propagating uncertainty in Bayesian networks by probabilistic logic sampling. UAI 1988.
[14] Jensen, Lauritzen, and Olesen. Bayesian updating in causal probabilistic networks by local computations. Comp. Stat. Quarterly 4:269-282, 1990.
[15] Jojic, Jojic, Meek, Geiger, Siepel, Haussler, and Heckerman. Efficient approximations for learning phylogenetic HMM models from data. ISMB 2004.
[16] Jordan, Ghahramani, Jaakkola, and Saul. An introduction to variational methods for graphical models. Machine Learning 37(2):183-233, 1999.
[17] Mateescu, Dechter, and Kask. Partition-based anytime approximation for belief updating. 2001.
[18] Boyd and Vandenberghe. Convex Optimization. Cambridge University Press, 2004.
[19] Pearl. Probabilistic Reasoning in Intelligent Systems. Morgan Kaufmann, 1988.
[20] Shachter, D'Ambrosio, and Del Favero. Symbolic probabilistic inference in belief networks. AAAI 1990.
[21] Shachter and Peot. Simulation approaches to general probabilistic inference on belief networks. UAI 1989.
[22] Siepel and Haussler. Combining phylogenetic and HMMs in biosequence analysis. RECOMB 2003.
[23] Wainwright, Jaakkola, and Willsky. A new class of upper bounds on the log partition function. IEEE Trans. Info. Theory 51(7):2313-2335, 2005.
[24] Weiss. Belief propagation and revision in networks with loops. Technical Report AIM-1616, 1997.
[25] Wexler and Geiger. Importance sampling via variational optimization. UAI 2007.
[26] Wexler and Geiger. Variational upper bounds for probabilistic phylogenetic models. RECOMB 2007.
[27] Wexler and Meek. Inference for multiplicative models. UAI 2008.
[28] Xing, Jordan, and Russell. Graph partition strategies for generalized mean field inference. UAI 2004.
2,734 | 348 | Generalization Dynamics in LMS Trained Linear Networks
Yves Chauvin*
Psychology Department
Stanford University
Stanford, CA 94305
Abstract
For a simple linear case, a mathematical analysis of the training and generalization (validation) performance of networks trained by gradient descent
on a Least Mean Square cost function is provided as a function of the learning parameters and of the statistics of the training data base. The analysis
predicts that generalization error dynamics are very dependent on a priori initial weights. In particular, the generalization error might sometimes
weave within a computable range during extended training. In some cases,
the analysis provides bounds on the optimal number of training cycles for
minimal validation error. For a speech labeling task, predicted weaving
effects were qualitatively tested and observed by computer simulations in
networks trained by the linear and non-linear back-propagation algorithm.
1 INTRODUCTION
Recent progress in network design demonstrates that non-linear feedforward neural networks can perform impressive pattern classification for a variety of real-world
applications (e.g., Le Cun et al., 1990; Waibel et al., 1989). Various simulations and
relationships between the neural network and machine learning theoretical literatures also suggest that too large a number of free parameters ("weight overfitting")
could substantially reduce generalization performance (e.g., Baum & Haussler, 1989).
A number of solutions have recently been proposed to decrease or eliminate the
overfitting problem in specific situations. They range from ad hoc heuristics to
theoretical considerations (e.g., Le Cun et al., 1990; Chauvin, 1990a; Weigend et al.,
* Also with Thomson-CSF, Inc., 630 Hansen Way, Suite 250, Palo Alto, CA 94304.
In Press). For a phoneme labeling application, Chauvin showed that the overfitting
phenomenon was actually observed only when networks were overtrained far beyond
their "optimal" performance point (Chauvin, 1990b). Furthermore, generalization
performance of networks seemed to be independent of the size of the network during
early training but the rate of decrease in performance with overtraining was indeed
related to the number of weights.
The goal of this paper is to better understand training and generalization error dynamics in Least-Mean-Square trained linear networks. As we will see, gradient descent training on linear networks can actually generate surprisingly rich and insightful validation dynamics. Furthermore, in numerous applications, even non-linear
networks tend to function in their linear range, as if the networks were making use
of non-linearities only when necessary (Weigend et al., In Press; Chauvin, 1990a).
In Section 2, I present a theoretical illustration yielding a better understanding of
training and validation error dynamics. In Section 3, numerical solutions to obtained analytical results make interesting predictions for validation dynamics under
overtraining. These predictions are tested for a phonemic labeling task. The obtained simulations suggest that the results of the analysis obtained with the simple
theoretical framework of Section 2 might remain qualitatively valid for non-linear
complex architectures.
2
2 THEORETICAL ILLUSTRATION

2.1 ASSUMPTIONS
connected by a n.n weight matrix W . Let us suppose the network is trained to
reproduce a noiseless output "signal" from a noisy input "signal" (the network can
be seen as a linear filter). 'Ve write F as the "signal", N the noise, X the input, Y
the output, and D the desired output. For the considered case, we have X = F+N,
Y = W X and D = F.
The statistical properties of the data base are the following. The signal is zero-mean
with covariance matrix CF. 'Ve write Ai and ei as the eigenvalues and eigenvectors
of C F (ei are the so-called principal components; we will call Ai the "signal ~ower
spectrum"). The noise is assumed to be zero-mean, with covariance matrix CN =
v.I where I is the identity matrix. We assume the noise is uncorrelated with the
signal: CFN
O. We suppose two sets of patterns have been sampled for training
and for validation. We write CF, CN and CFN the resulting covariance matrices for
the training set and CF, C N~nd CFN the corresp_onding matrices for the validation
set. We assume C F ~ C p ~ C F , CFN ~ C PN ~ CFN = 0, CN = v.I and C N= v'.I
with v' > v. (N umerous of these assumptions are made for the sake of clarity of
explanation: they can be relaxed without changing the resulting implications.)
=
The problem considered is much simpler than typical realistic applications. However, we will see below that (i) a formal analysis becomes complex very quickly
(ii) the validation dynamics are rich, insightful and can be mapped to a number
of results observed in simulations of realistic applications and (iii) an interesting
number of predictions can be obtained.
2.2 LEARNING
The network is trained by gradient descent on the Least Mean Square (LMS) error: dW = -η ∇_W E, where η is the usual learning rate and, in the case considered, E = ∑_p (F_p - Y_p)^T (F_p - Y_p). We can write the gradient as a function of the various covariance matrices: ∇_W E = (I - W)ĈF + (I - 2W)ĈFN - WĈN. From the general assumptions, we get:

\nabla_W E \approx \hat{C}_F - W\hat{C}_F - W\hat{C}_N \qquad (1)
We assume now that the principal components e_i are also eigenvectors of the weight matrix W at iteration k, with corresponding eigenvalue α_ik: W_k e_i = α_ik e_i. We can then compute the image of each eigenvector e_i at iteration k + 1:

W_{k+1} e_i = \eta \lambda_i e_i + \alpha_{ik} [1 - \eta(\lambda_i + v)] e_i \qquad (2)
Therefore, e_i is also an eigenvector of W_{k+1} and α_{i,k+1} satisfies the induction:

\alpha_{i,k+1} = \eta \lambda_i + \alpha_{ik} [1 - \eta(\lambda_i + v)] \qquad (3)
(3)
0, we can compute the alpha-dynamics of the weight matrix W:
A?
A ' [1-(I-1J(Ai+ v ))k]
(4)
,+v
As k goes to infinity, provided 1J < 1/ AM + v, Qi approaches Ai/(A, + Vi), which
corresponds to the optimal (Wiener) value of the linear filter implemented by the
network. We will write the convergence rates ai I-1JA, -1JV. These rates depend
on the signal "power spectrum", on the noise power and on the learning rate 1J.
Qik=
=
If we now assume W_0 e_i = α_i0 e_i with α_i0 ≠ 0 (this assumption can be made more general), we get:

\alpha_{ik} = \frac{\lambda_i}{\lambda_i + v} \left( 1 - b_i a_i^k \right) \qquad (5)

where b_i = 1 - α_i0 - α_i0 v/λ_i. Figure 1 represents possible alpha dynamics for arbitrary values of λ_i with α_i0 = α_0 ≠ 0.
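Eqs. 4-5 are straightforward to evaluate numerically; a minimal sketch of our own, using numpy:

```python
import numpy as np

def alpha(k, lam, v, eta, alpha0=0.0):
    """alpha_ik of Eqs. 4-5: the eigenvalue of W_k along e_i after k
    training cycles (Eq. 4 is recovered with alpha0 = 0)."""
    b = 1.0 - alpha0 - alpha0 * v / lam        # b_i of Eq. 5
    a = 1.0 - eta * lam - eta * v              # convergence rate a_i
    return lam / (lam + v) * (1.0 - b * a ** np.asarray(k, dtype=float))

# e.g., curves as in Figure 1: alpha(np.arange(101), lam, v, eta=0.01)
```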
We can now compute the learning error dynamics by expanding the LMS error term E at time k. Using the general assumptions on the covariance matrices, we find:

E_k = \sum_{i=1}^{n} E_{ik} = \sum_{i=1}^{n} \lambda_i (1 - \alpha_{ik})^2 + v \alpha_{ik}^2 \qquad (6)

Therefore, training error is a sum of error components, each of them being a quadratic function of α_i. Figure 2 represents a training error component E_i as a function of α. Knowing the alpha-dynamics, we can write these error components as a function of k:

E_{ik} = \frac{\lambda_i (v + \lambda_i b_i^2 a_i^{2k})}{\lambda_i + v} \qquad (7)

It is easy to see that E is a monotonic decreasing function (generated by gradient descent) which converges to the bottom of the quadratic error surface, yielding the residual asymptotic error:

E_{i\infty} = \frac{\lambda_i v}{\lambda_i + v} \qquad (8)
Figure 1: Alpha dynamics for different values of λ_i with η = .01 and α_i0 = α_0 ≠ 0 (y-axis: α_ik from 0 to 1; x-axis: number of cycles, 0 to 100). The solid lines represent the optimal values of α_i for the training data set. The dashed lines represent corresponding optimal values for the validation data set.
Figure 2: Training and validation error dynamics as a function of α_i. The dashed curved lines represent the error dynamics for the initial conditions α_i0. Each training error component follows the gradient of a quadratic learning curve (bottom). Note the overtraining phenomenon (top curve) between α_i* = λ_i/(λ_i + v') (optimal for validation) and α_i∞ = λ_i/(λ_i + v) (optimal for training).
2.3 GENERALIZATION
Considering the general assumptions on the statistics of the data base, we can compute the validation error E' (note that "validation error" strictly applies to the validation data set; "generalization error" can qualify the validation data set or the whole population, depending on context):

E'_k = \sum_{i=1}^{n} E'_{ik} = \sum_{i=1}^{n} \lambda_i (1 - \alpha_{ik})^2 + v' \alpha_{ik}^2 \qquad (9)
where the alpha-dynamics are imposed by gradient descent learning on the training data set. Again, the validation error is a sum of error components E'_i, quadratic functions of α_i. However, because the alpha-dynamics are adapted to the training sample, they might generate complex dynamics which will strongly depend on the initial values α_i0 (Figure 1). Consequently, the resulting error components E'_i are not monotonic decreasing functions anymore. As seen in Figure 2, each of the validation error components might (i) decrease, (ii) decrease then increase (overtraining), or (iii) increase as a function of α_i0. For each of these components, in the case of overtraining, it is possible to compute the value of α_ik at which training should be stopped to get minimal validation error:
k_i^* = \frac{\operatorname{Log} \frac{v'-v}{\lambda_i+v'} - \operatorname{Log} \frac{\lambda_i - \alpha_{i0}(\lambda_i+v)}{\lambda_i}}{\operatorname{Log}(1 - \eta\lambda_i - \eta v)} \qquad (10)
However, the validation error dynamics become much more complex when we consider sums of these components. If we assume α_i0 = 0, the minimum (or minima) of E' can be found to correspond to possible intersections of hyper-ellipsoids and power curves. In general, it is possible to show that there exists at least one such minimum. It is also possible to find simple bounds on the optimal training time for minimal validation error:

\min_i k_i^* \;\le\; k^* \;\le\; \max_i k_i^* \qquad (11)
These bounds are tight when the noise power is small compared to the signal "power spectrum". For α_i0 ≠ 0, a formal analysis of the validation error dynamics becomes intractable. Because some error components might increase while others decrease, it is possible to imagine multiple minima and maxima for the total validation error (see simulations below). Considering each component's dynamics, it is nonetheless possible to compute bounds within which E' might vary during training:

\sum_i \frac{\lambda_i v'}{\lambda_i + v'} \;\le\; E'_k \;\le\; \sum_i \frac{\lambda_i (v^2 + v' \lambda_i)}{(\lambda_i + v)^2} \qquad (12)
Because of the "exponential" nature of training (Figure 1), it is possible to imagine
that this "weaving" effect might still be observed after a long training period, when
the training error itself has become stable. Furthermore, whereas the training error
will qualitatively show the same dynamics, validation error will very much depend
on α_i0: for sufficiently large initial weights, validation dynamics might be very
dependent on particular simulation "runs".
Figure 3: Training (bottom curves) and validation (top curves) error dynamics in a two-dimensional case for λ_1 = 17, λ_2 = 1.7, v = 2, v' = 10, α_10 = 0 as α_20 varies from 0 to 1.6 (bottom-up) in .2 increments.

3 SIMULATIONS

3.1 CASE STUDY
Equations 7 and 9 were simulated for a two-dimensional case (n = 2) with λ_1 = 17, λ_2 = 1.7, v = 2, v' = 10 and α_10 = 0. The values of α_20 determined the relative dominance of the two error components during training. Figure 3 represents training and validation dynamics as a function of k for a range of values of α_20. As shown analytically, training dynamics are basically unaffected by the initial conditions of the weight matrix W_0. However, a variety of validation dynamics can be observed as α_20 varies from 0 to 1.6. For 1.6 ≥ α_20 ≥ 1.4, the validation error is monotonically decreasing and looks like a typical "gradient descent" training error. For 1.2 ≥ α_20 ≥ 1.0, each error component in turn imposes a descent rate: the validation error looks like two "connected descents". For .8 ≥ α_20 ≥ .6, E'_2 is monotonically decreasing with a slow convergence rate, forcing the validation error to decrease long after E'_1 has become stable. This creates a minimum, followed by a maximum, followed by a minimum for E'. Finally, for .4 ≥ α_20 ≥ 0, both error components have a single minimum during training and generate a single minimum for the total validation error E'.
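Using the errors helper from the sketch above, this two-dimensional case is easy to reproduce; the learning rate is not stated for Figure 3, so the η below is our assumption:

```python
import numpy as np

k = np.arange(0, 500)
for a20 in np.arange(0.0, 1.8, 0.2):
    train, valid = errors(k, [17.0, 1.7], v=2.0, v_val=10.0,
                          eta=0.01, alpha0=np.array([0.0, a20]))
    # plot or inspect `valid` to see the minima/maxima discussed above
```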
3.2 PHONEMIC LABELING
One of the main predictions obtained from the analytical results and from the
previous case study is that validation dynamics can demonstrate multiple local
minima and maxima. To my knowledge, this phenomenon has not been described in
the literature. However, the theory also predicts that the phenomenon will probably
appear very late in training, well after the training error has become stable, which
might explain the absence of such observations. The predictions were tested for a
phonemic labeling task with spectrograms as input patterns and phonemes as output
patterns. Various architectures were tested (direct connections or back-propagation
networks with linear or non-linear hidden layers). Due to the limited length of
this article, the complete simulations will be reported elsewhere. In all cases, as
predicted, multiple minima/maxima were observed for the validation dynamics,
provided the networks were trained way beyond usual training times. Furthermore,
these generalization dynamics were very dependent on the initial weights (provided
sufficient variance on the initial weight distribution).
4 DISCUSSION

It is sometimes assumed that optimal learning is obtained when validation error starts to increase during the course of training. Although for the theoretical study presented, the first minimum of E' is probably always a global minimum, independently of α_i0, simulations of the speech labeling task show it is not always the case with more complex architectures: late validation minima can sometimes (albeit rarely) be deeper than the first "local" minimum. These observations and a lack of theoretical understanding of statistical inference under limited data sets raise the question of the significance of a validation data set. As a final comment, we are not really interested in minimal validation error (E') but in minimal generalization error (Ē'). Understanding the dynamics of the "population" error as a function of training and validation errors necessitates, at least, an evaluation of the sample statistics as a function of the number of training and validation patterns. This is beyond the scope of this paper.
Acknowledgements
Thanks to Pierre Baldi and Julie Holmes for their helpful comments.
References
Baum, E. B. & Haussler, D. (1989). What size net gives valid generalization? Neural Computation, 1, 151-160.

Chauvin, Y. (1990a). Dynamic behavior of constrained back-propagation networks. In D. S. Touretzky (Ed.), Neural Information Processing Systems (Vol. 2) (pp. 642-649). San Mateo, CA: Morgan Kaufmann.

Chauvin, Y. (1990b). Generalization performance of overtrained back-propagation networks. In L. B. Almeida & C. J. Wellekens (Eds.), Lecture Notes in Computer Science (Vol. 412) (pp. 46-55). Berlin, Germany: Springer-Verlag.

Le Cun, Y., Boser, B., Denker, J. S., Henderson, D., Howard, R. E., Hubbard, W., & Jackel, L. D. (1990). Handwritten digit recognition with a back-propagation network. In D. S. Touretzky (Ed.), Neural Information Processing Systems (Vol. 2) (pp. 396-404). San Mateo, CA: Morgan Kaufmann.

Waibel, A., Sawai, H., & Shikano, K. (1989). Modularity and scaling in large phonemic neural networks. IEEE Transactions on Acoustics, Speech and Signal Processing, ASSP-37, 1888-1898.

Weigend, A. S., Huberman, B. A., & Rumelhart, D. E. (In Press). Predicting the future: a connectionist approach. International Journal of Neural Systems.
2,735 | 3,480 | Spectral Clustering with Perturbed Data
Ling Huang
Intel Research
Donghui Yan
UC Berkeley
Michael I. Jordan
UC Berkeley
Nina Taft
Intel Research
[email protected]
[email protected]
[email protected]
[email protected]
Abstract
Spectral clustering is useful for a wide-ranging set of applications in areas such as
biological data analysis, image processing and data mining. However, the computational and/or communication resources required by the method in processing
large-scale data are often prohibitively high, and practitioners are often required to
perturb the original data in various ways (quantization, downsampling, etc) before
invoking a spectral algorithm. In this paper, we use stochastic perturbation theory
to study the effects of data perturbation on the performance of spectral clustering.
We show that the error under perturbation of spectral clustering is closely related
to the perturbation of the eigenvectors of the Laplacian matrix. From this result
we derive approximate upper bounds on the clustering error. We show that this
bound is tight empirically across a wide range of problems, suggesting that it can
be used in practical settings to determine the amount of data reduction allowed in
order to meet a specification of permitted loss in clustering performance.
1 Introduction
A critical problem in machine learning is that of scaling: Algorithms should be effective computationally and statistically as various dimensions of a problem are scaled. One general tool for
approaching large-scale problems is that of clustering or partitioning, in essence an appeal to the
principle of divide-and-conquer. However, while the output of a clustering algorithm may yield a
set of smaller-scale problems that may be easier to tackle, clustering algorithms can themselves be
complex, and large-scale clustering often requires the kinds of preprocessing steps that are invoked
for other machine learning algorithms [1], including proto-clustering steps such as quantization,
downsampling and compression. Such preprocessing steps also arise in the distributed sensing and
distributed computing setting, where communication and storage limitations may preclude transmitting the original data to centralized processors.
A number of recent works have begun to tackle the issue of determining the tradeoffs that arise
under various "perturbations" of data, including quantization and downsampling [2, 3, 4]. Most of
these analyses have been undertaken in the context of well-studied domains such as classification,
regression and density estimation, for which there are existing statistical analyses of the effect of
noise on performance. Although extrinsic noise differs conceptually from perturbations to data
imposed by a data analyst to cope with resource limitations, the mathematical issues arising in the
two cases are similar and the analyses of noise have provided a basis for the study of the tradeoffs
arising from perturbations.
In this paper we focus on spectral clustering, a class of clustering methods that are based on eigendecompositions of affinity, dissimilarity or kernel matrices [5, 6, 7, 8]. These algorithms often outperform traditional clustering algorithms such as the K-means algorithm or hierarchical clustering.
To date, however, their impact on real-world, large-scale problems has been limited; in particular,
a distributed or "in-network" version of spectral clustering has not yet appeared. Moreover, there
has been little work on the statistical analysis of spectral clustering, and thus there is little theory to
guide the design of distributed algorithms. There is an existing literature on numerical techniques for
Procedure SpectralClustering(x_1, ..., x_n)
Input: n data samples \{x_i\}_{i=1}^n, x_i \in R^d
Output: Bipartition S and \bar{S} of the input data
1. Compute the similarity matrix K: K_{ij} = \exp(-\|x_i - x_j\|^2 / 2\sigma_k^2), \forall x_i, x_j
2. Compute the diagonal degree matrix D: D_i = \sum_{j=1}^n K_{ij}
3. Compute the normalized Laplacian matrix: L = I - D^{-1} K
4. Find the second eigenvector v_2 of L
5. Obtain the two partitions using v_2: S = \{[i] : v_{2i} > 0\}, \bar{S} = \{[i] : v_{2i} \le 0\}
Figure 1: A spectral bipartitioning algorithm.
[Diagram: data error (Assumption A) -> similarity matrix error dK (Lemma 3 or 4) -> Laplacian matrix error dL (Lemma 2 and Eqns. (7)-(13)) -> eigenvector error \|\hat{v}_2 - v_2\|^2 (Eqns. (5), (6)) -> mis-clustering rate (Proposition 1).]
Figure 2: Perturbation analysis: from clustering error to data perturbation error.
scaling spectral clustering (including downsampling [9, 10] and the relaxation of precision requirements for the eigenvector computation [7]), but this literature does not provide end-to-end, practical
bounds on error rates as a function of data perturbations.
In this paper we present the first end-to-end analysis of the effect of data perturbations on spectral
clustering. Our focus is quantization, but our analysis is general and can be used to treat other kinds
of data perturbation. Indeed, given that our approach is based on treating perturbations as random
variables, we believe that our methods will also prove useful in developing statistical analyses of
spectral clustering (although that is not our focus in this paper).
The paper is organized as follows. In Section 2, we provide a brief introduction to spectral clustering.
Section 3 contains the main results of the paper; specifically we introduce the mis-clustering rate
\rho, and present upper bounds on \rho due to data perturbations. In Section 4, we present an empirical
evaluation of our analyses. Finally, in Section 5 we present our conclusions.
2 Spectral clustering and data perturbation
2.1 Background on spectral clustering algorithms
Given a set of data points \{x_i\}_{i=1}^n, x_i \in R^{1 \times d}, and some notion of similarity between all pairs of data
points xi and xj , spectral clustering attempts to divide the data points into groups such that points in
the same group are similar and points in different groups are dissimilar. The point of departure of a
spectral clustering algorithm is a weighted similarity graph G(V, E), where the vertices correspond
to data points and the weights correspond to the pairwise similarities. Based on this weighted graph,
spectral clustering algorithms form the graph Laplacian and compute an eigendecomposition of this
Laplacian [5, 6, 7]. While some algorithms use multiple eigenvectors and find a k-way clustering
directly, the most widely studied algorithms form a bipartitioning of the data by thresholding the
second eigenvector of the Laplacian (the eigenvector with the second smallest eigenvalue). Larger
numbers of clusters are found by applying the bipartitioning algorithm recursively. We present a
specific example of a spectral bipartitioning algorithm in Fig. 1.
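As a concrete reference, the following is a minimal Python/NumPy sketch of the procedure of Fig. 1; the kernel bandwidth sigma_k is the only free parameter, and a general eigensolver is used because D^{-1}K is not symmetric. This is our reading of the figure, not code from the paper.

import numpy as np

def spectral_bipartition(X, sigma_k):
    sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-sq_dists / (2.0 * sigma_k ** 2))   # step 1: similarity matrix
    d = K.sum(axis=1)                              # step 2: degrees
    L = np.eye(len(X)) - K / d[:, None]            # step 3: L = I - D^{-1} K
    eigvals, eigvecs = np.linalg.eig(L)            # step 4: L is not symmetric,
    order = np.argsort(eigvals.real)               # so take real parts
    v2 = eigvecs[:, order[1]].real
    return v2 > 0, v2                              # step 5: threshold v2 at zero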
2.2 Input data perturbation
Let the data matrix X \in R^{n \times d} be formed by stacking n data samples in rows. To this data matrix we assume that perturbation W is applied, such that we obtain a perturbed version \hat{X} of the original data X. We assume that a spectral clustering algorithm is applied to \hat{X} and we wish to compare the results of this clustering with respect to the spectral clustering of X. This analysis captures a number of data perturbation methods, including data filtering, quantization, lossy compression and synopsis-based data approximation [11]. The multi-scale clustering algorithms that use "representative" samples to approximate the original data can be treated using our analysis as well [12].
3 Mis-clustering rate and effects of data perturbation
Let K and L be the similarity and Laplacian matrix on the original data X, and let \hat{K} and \hat{L} be those on the perturbed data. We define the mis-clustering rate \rho as the proportion of samples that have different cluster memberships when computed on the two different versions of the data, X and \hat{X}. We wish to bound \rho in terms of the "magnitude" of the error matrix W = \hat{X} - X, which we now define. We make the following general stochastic assumption on the error matrix W:
A. All elements of the error matrix W are i.i.d. random variables with zero mean, bounded variance \sigma^2 and bounded fourth central moment \mu_4; and are independent of X.
Remark. (i) Note that we do not make i.i.d. assumptions on the elements of the similarity matrix;
rather, our assumption refers to the input data only. (ii) This assumption is distribution free, and
captures a wide variety of practical data collection and quantization schemes. (iii) Certain data
perturbation schemes may not satisfy the independence assumption. We have not yet conducted an
analysis of the robustness of our bounds to lack of independence, but in our empirical work we have
found that the bounds are robust to relatively small amounts of correlation.
We aim to produce practically useful bounds on \rho in terms of \sigma and the data matrix X. The bounds should be reasonably tight so that in practice they could be used to determine the degree of perturbation \sigma given a desired level of clustering performance, or to provide a clustering error guarantee
on the original data even though we have access only to its approximate version.
Fig. 2 outlines the steps in our theoretical analysis. Briefly, when we perturb the input data (e.g., by filtering, quantization or compression), we introduce a perturbation W to the data which is quantified by \sigma^2. This induces an error dK := \hat{K} - K in the similarity matrix, and in turn an error dL := \hat{L} - L in the Laplacian matrix. This further yields an error in the second eigenvector of the Laplacian matrix, which results in mis-clustering error. Overall, we establish an analytical relationship between the mis-clustering rate \rho and the data perturbation error \sigma^2, where \rho is usually monotonically increasing with \sigma^2. Our goal is to allow practitioners to specify a mis-clustering rate \rho^*, and by inverting this relationship, to determine the right magnitude of the perturbation \sigma^* allowed. That is, our work can provide a practical method to determine the tradeoff between data perturbation and the loss of clustering accuracy due to the use of \hat{X} instead of X. When the data perturbation can be related to computational or communications savings, then our analysis yields a practical characterization of the overall resource/accuracy tradeoff.
Practical Applications. Consider in particular a clustering task in a distributed networking system that allows an application to specify a desired clustering error C^* on the distributed data (which is not available to the coordinator). Through a communication protocol similar to that in [4], the coordinator (e.g., network operation center) gets access to the perturbed data \hat{X} for spectral clustering. The coordinator can compute a clustering error bound C using our method. By setting C \le C^*, it determines the tolerable data perturbation error \sigma^* and instructs distributed devices to use appropriate numbers of bits to quantize their data. Thus we can provide guarantees on the achieved error, C \le C^*, with respect to the original distributed data even with access only to the perturbed data.
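As a sketch of how the coordinator can invert the monotone relationship between \sigma and the error bound, assume a routine error_bound(sigma) implementing the chain of Sections 3.2-3.4; the bracket and tolerance below are illustrative, not taken from the paper.

def find_sigma_star(error_bound, c_star, sigma_hi=1.0, tol=1e-4):
    # Bisection for the largest sigma whose bound stays below c_star.
    lo, hi = 0.0, sigma_hi              # assumes error_bound(sigma_hi) > c_star
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if error_bound(mid) <= c_star:  # bound met: can tolerate more noise
            lo = mid
        else:
            hi = mid
    return lo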
3.1 Upper bounding the mis-clustering rate
Little is currently known about the connection between clustering error and perturbations to the Laplacian matrix in the spectral clustering setting. [5] presented an upper bound for the clustering error; however, this bound is usually quite loose and is not viable for practical applications. In this section we propose a new approach based on a water-filling argument that yields a tighter, practical bound. Let v_2 and \hat{v}_2 be the unit-length second eigenvectors of L and \hat{L}, respectively. We derive a relationship between the mis-clustering rate \rho and \eta^2 := \|\hat{v}_2 - v_2\|^2.
The intuition behind our derivation is suggested in Fig. 3. Let a and b denote the sets of components in v_2 corresponding to clusters of size k_1 and k_2, respectively, and similarly for \hat{a} and \hat{b} in the case of \hat{v}_2. If v_2 is changed to \hat{v}_2 due to the perturbation, an incorrect clustering happens whenever a component of v_2 in set a jumps to set \hat{b}, denoted as a \to \hat{b}, or a component in set b jumps to set \hat{a}, denoted as b \to \hat{a}. The key observation is that each flipping of cluster membership in either a \to \hat{b}
[Figure: (left) component values of the second eigenvector plotted against component indices, with the sets a, \hat{a}, b, \hat{b} and the mis-clustered components marked; (right) the mis-clustering rate on the Wisconsin Breast Cancer data as a function of the noise level \sigma, comparing the upper bound of Kannan et al. [5] with our upper bound.]
Figure 3: The second eigenvector v_2 and its perturbed counterpart \hat{v}_2 (denoted by dashed lines).
Figure 4: An example of the tightness of the upper bound for \rho in Eq. (1).
or b \to \hat{a} contributes a fairly large amount to the value of \eta^2, compared to the short-range drifts in a \to \hat{a} or b \to \hat{b}. Given a fixed value of \eta^2, the maximum possible number of flippings (i.e., mis-clusterings) is therefore constrained, and this translates into an upper bound for \rho.
We make the following assumptions on the data X and its perturbation:
B1. The components of v_2 form two clusters (with respect to the spectral bipartitioning algorithm in Fig. 1). The size of each cluster is comparable to n.
B2. The perturbation is small, with the total number of mis-clusterings m < min(k_1, k_2), and the components of \hat{v}_2 form two clusters. The size of each cluster is comparable to n.
B3. The perturbations of individual components of v_2 in each set of a \to \hat{a}, a \to \hat{b}, b \to \hat{a} and b \to \hat{b} have identical (not necessarily independent) distributions with bounded second moments, respectively, and they are uncorrelated with the components in v_2.
Our perturbation bound can now be stated as follows:
Proposition 1. Under assumptions B1, B2 and B3, the mis-clustering rate \rho of the spectral bipartitioning algorithm under the perturbation satisfies \rho \le \eta^2 = \|\hat{v}_2 - v_2\|^2. If we further assume that all components of \hat{v}_2 - v_2 are independent, then
\rho \le (1 + o_p(1)) E\|\hat{v}_2 - v_2\|^2.    (1)
The proof of the proposition is provided in the Appendix.
Remarks. (i) Assumption B3 was motivated by our empirical work. Although it is difficult to
establish general necessary and sufficient conditions for B3 to hold, in the Appendix we present
some special cases that allow B3 to be verified a priori. It is also worth noting that B3 appears
to hold (approximately) across a range of experiments presented in Section 4. (ii) If we assume
piecewise constancy for v2 , then we can relax the uncorrelated assumption in B3. (iii) Our bound
has a different flavor than that obtained in [5]. Although the bound in Theorem 4.3 in [5] works for
k-way clustering, it assumes a block-diagonal Laplacian matrix and requires the gap between the
k-th and (k + 1)-th eigenvalues to be greater than 1/2, which is unrealistic in many data sets. In the
setting of 2-way spectral clustering and a small perturbation, our bound is much tighter than that
derived in [5]; see Fig. 4 in particular.
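A small sketch of how Proposition 1 can be checked empirically on a given data set, using spectral_bipartition from the earlier sketch: cluster the original and perturbed data, align the sign ambiguity of the eigenvectors, and compare \rho with \|\hat{v}_2 - v_2\|^2.

import numpy as np

def check_proposition(X, sigma, sigma_k, rng=np.random.default_rng(0)):
    labels, v2 = spectral_bipartition(X, sigma_k)
    X_hat = X + sigma * rng.standard_normal(X.shape)  # Assumption A noise
    labels_hat, v2_hat = spectral_bipartition(X_hat, sigma_k)
    # Eigenvectors are defined only up to sign; align before comparing.
    if v2_hat @ v2 < 0:
        v2_hat, labels_hat = -v2_hat, ~labels_hat
    rho = np.mean(labels != labels_hat)               # mis-clustering rate
    eta_sq = np.sum((v2_hat - v2) ** 2)               # ||v2_hat - v2||^2
    return rho, eta_sq                                # expect rho <= eta_sq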
3.2 Perturbation on the second eigenvector of the Laplacian matrix
We now turn to the relationship between the perturbation of the eigenvectors and that of the matrix itself. One approach is to simply draw on the classical domain of matrix perturbation theory; in particular, applying Theorem V.2.8 from [13], we have the following bound on the (small) perturbation of the second eigenvector:
\|\hat{v}_2 - v_2\| \le \frac{4 \|dL\|_F}{\delta - \sqrt{2} \|dL\|_F},    (2)
where \delta is the gap between the second and the third eigenvalue. However, in our experimental evaluation we found that \delta can be quite small in some data sets, and in these cases the right-hand
[Figure: plots of the left-hand side (LHS) and right-hand side (RHS) of (5) as a function of the noise level \sigma on (a) Wisconsin Breast Cancer, (b) Waveform, and (c) Pen-digits data.]
Figure 5: Experimental examples of the fidelity of the approximation in Eq. (5). We add i.i.d. zero-mean Gaussian noise to the input data with different \sigma, and we see that the right-hand side (RHS) of (5) approximately upper bounds the left-hand side (LHS).
side of (2) can be quite large even for a small perturbation. Thus the bound given by (2) is often not
useful in practical applications.
To derive a more practically useful bound, we begin with a well-known first-order Taylor expansion to compute the perturbation on the second eigenvector of a Laplacian matrix as follows:
\hat{v}_2 - v_2 = \sum_{j=1, j \ne 2}^n \frac{v_j^T \, dL \, v_2}{\lambda_2 - \lambda_j} \, v_j + O(dL^2) \approx \sum_{j=1, j \ne 2}^n \frac{v_j}{\lambda_2 - \lambda_j} \sum_{p=1}^n \sum_{q=1}^n v_{pj} \, v_{q2} \, dL_{pq}
= \sum_{p=1}^n \left( \sum_{q=1}^n v_{q2} \, dL_{pq} \right) \left( \sum_{j=1, j \ne 2}^n \frac{v_{pj} \, v_j}{\lambda_2 - \lambda_j} \right) = \sum_{p=1}^n \alpha_p u_p,    (3)
where \alpha_p = \sum_{q=1}^n v_{q2} \, dL_{pq} is a random variable determined by the effect of the perturbation on the Laplacian matrix L, and the vector u_p = \sum_{j=1, j \ne 2}^n v_{pj} v_j / (\lambda_2 - \lambda_j) is a constant determined by the eigendecomposition of the Laplacian matrix L. Then we have
E\|\hat{v}_2 - v_2\|^2 \approx E \left\| \sum_{p=1}^n \alpha_p u_p \right\|^2 = \sum_{p=1}^n E\|\alpha_p u_p\|^2 + 2 \sum_{i=1}^n \sum_{j=i+1}^n E\left[ \alpha_i \alpha_j \, u_i^T u_j \right].    (4)
In our experimental work we have found that for i \ne j, \alpha_i u_i is either very weakly correlated with \alpha_j u_j (i.e., the total sum of all cross terms is typically one or two orders of magnitude less than that of the squared terms), or negatively correlated with \alpha_j u_j (i.e., the total sum of all cross terms is less than zero). This empirical evidence suggests the following approximate bound:
E\|\hat{v}_2 - v_2\|^2 \lesssim \sum_{p=1}^n E\alpha_p^2 \cdot \|u_p\|^2.    (5)
Examples of the fidelity of this approximation for particular data sets are shown in Fig. 5.
Finally, E\alpha_p^2 is related to dL_{pq}, and can be upper bounded by
E\alpha_p^2 = E\left( \sum_{q=1}^n v_{q2} \, dL_{pq} \right)^2 \le \sum_{i=1}^n \sum_{j=1}^n \left[ v_{i2} v_{j2} \cdot E(dL_{pi}) E(dL_{pj}) + |v_{i2} v_{j2}| \, \sigma_{pi} \sigma_{pj} \right],    (6)
where \sigma_{pi}^2 is the variance of dL_{pi}.
Remark. Through Eqs. (5) and (6), we can bound the squared norm of the perturbation on the
second eigenvector in expectation, which in turn bounds the mis-clustering rate. To compute the
bound, we need to estimate the first two moments of dL, which we discuss next.
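Putting (3)-(6) together gives a computable quantity. The sketch below (Python/NumPy) assumes element-wise mean and variance arrays EdL and VdL for dL, and uses the fact that the double sum in (6) factors into two squared inner products per row p; that refactoring is ours, not spelled out in the text.

import numpy as np

def eigvec_perturbation_bound(L, EdL, VdL):
    # Assumes a simple (non-degenerate) second eigenvalue of L.
    eigvals, V = np.linalg.eig(L)
    order = np.argsort(eigvals.real)
    lam, V = eigvals.real[order], V.real[:, order]
    v2 = V[:, 1]
    others = [j for j in range(len(lam)) if j != 1]
    # Rows of U are the constant vectors u_p of Eq. (3).
    C = V[:, others] / (lam[1] - lam[others])
    U = C @ V[:, others].T
    u_norm_sq = (U ** 2).sum(axis=1)
    # Eq. (6): sum_{ij} a_i a_j = (sum_i a_i)^2 applied to both terms.
    SdL = np.sqrt(VdL)
    e_alpha_sq = (EdL @ v2) ** 2 + (SdL @ np.abs(v2)) ** 2
    return float(np.sum(e_alpha_sq * u_norm_sq))   # RHS of Eq. (5)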
3.3 Perturbation on the Laplacian matrix
Let D be the diagonal matrix with D_i = \sum_j K_{ij}. We define the normalized Laplacian matrix as L = I - D^{-1} K. Letting \Delta = \hat{D} - D and dK = \hat{K} - K, we have the following approximation for dL = \hat{L} - L:
Lemma 2. If the perturbation dK is small compared to K, then
dL = (1 + o(1)) \left( \Delta D^{-2} K - D^{-1} dK \right).    (7)
Then, element-wise, the first two moments of dL can be estimated as
E(dL) \approx E(\Delta) D^{-2} K - D^{-1} E(dK),    (8)
E(dL^2) \approx E\left[ \Delta D^{-2} K \circ \Delta D^{-2} K - 2 D^{-1} dK \circ \Delta D^{-2} K + D^{-1} dK \circ D^{-1} dK \right]
= E(\Delta^2) D^{-4} \circ K^2 + D^{-2} E(dK^2) - 2 E(\Delta dK) D^{-3} \circ K,    (9)
where \circ denotes the element-wise product. The quantities needed to estimate E(dL) and E(dL^2) can be obtained from moments and correlations among the elements of the similarity matrix \hat{K}_{ij}. In particular, we have
E(dK_{ij}) = E\hat{K}_{ij} - K_{ij}, \qquad E(dK_{ij}^2) = E\hat{K}_{ij}^2 - 2 K_{ij} E\hat{K}_{ij} + K_{ij}^2    (10)
E\Delta_i = E\hat{D}_i - D_i, \quad E\hat{D}_i = \sum_{j=1}^n E\hat{K}_{ij}, \qquad E\Delta_i^2 = E\hat{D}_i^2 - 2 D_i E\hat{D}_i + D_i^2    (11)
E\hat{D}_i^2 = E\left( \sum_{j=1}^n \hat{K}_{ij} \right)^2 = \sum_{j=1}^n E\hat{K}_{ij}^2 + 2 \sum_{j=1}^n \sum_{q=j+1}^n \left[ E\hat{K}_{ij} \, E\hat{K}_{iq} + \rho^k_{ijq} \sigma^k_{ij} \sigma^k_{iq} \right]    (12)
E(\Delta dK)_{ij} = E(\hat{D}_i - D_i)(\hat{K}_{ij} - K_{ij}) = E\hat{D}_i \hat{K}_{ij} - D_i E\hat{K}_{ij} - K_{ij} E\Delta_i
= E\left[ \hat{K}_{ij} \left( \hat{K}_{ij} + \sum_{q=1, q \ne j}^n \hat{K}_{iq} \right) \right] - D_i E\hat{K}_{ij} - K_{ij} E\Delta_i
= E\hat{K}_{ij}^2 + \sum_{q=1, q \ne j}^n \left[ E\hat{K}_{ij} \, E\hat{K}_{iq} + \rho^k_{ijq} \sigma^k_{ij} \sigma^k_{iq} \right] - D_i E\hat{K}_{ij} - K_{ij} E\Delta_i,    (13)
where \sigma^k_{ij} is the standard deviation of \hat{K}_{ij} and -1 \le \rho^k_{ijq} \le 1 is the correlation coefficient between \hat{K}_{ij} and \hat{K}_{iq}. Estimating all the \rho^k_{ijq} would require an intensive effort. For simplicity, we could set \rho^k_{ijq} to 1 in Eq. (12) and to -1 in Eq. (13), and obtain an upper bound for E(dL^2). This bound could optionally be tightened by using a simulation method to estimate the values of \rho^k_{ijq}. However, in our experimental work we have found that our results are insensitive to the values of \rho^k_{ijq}, and setting \rho^k_{ijq} = 0.5 usually achieves good results.
Remark. Eqs. (8)-(13) allow us to estimate (i.e., to upper bound) the first two moments of dL using those of dK, which are computed using Eq. (15) or (16) in Section 3.4.
3.4 Perturbation on the similarity matrix
The similarity matrix \hat{K} on the perturbed data \hat{X} is
\hat{K}_{ij} = \exp\left( - \frac{\|x_i - x_j + \epsilon_i - \epsilon_j\|^2}{2\sigma_k^2} \right),    (14)
where \sigma_k is the kernel bandwidth. Then, given data X, the first two moments of dK_{ij} = \hat{K}_{ij} - K_{ij}, the error in the similarity matrix, can be determined by one of the following lemmas.
Lemma 3. Given X, if all components of \epsilon_i and \epsilon_j are i.i.d. Gaussian N(0, \sigma^2), then
E\hat{K}_{ij} = M_{ij}\left( -\frac{\sigma^2}{\sigma_k^2} \right), \qquad E\hat{K}_{ij}^2 = M_{ij}\left( -\frac{2\sigma^2}{\sigma_k^2} \right),    (15)
where M_{ij}(t) = \exp\left[ \frac{\theta_{ij} t}{1 - 2t} \right] / (1 - 2t)^{d/2}, and \theta_{ij} = \|x_i - x_j\|^2 / 2\sigma^2.
[Figure: scatterplots of the three synthetic data sets: (a) Gaussian data, (b) Sin-sep data, (c) Concentric data.]
Figure 6: Synthetic data sets illustrated in two dimensions.
Lemma 4. Under Assumption A, given X and for large values of the dimension d, the first two moments of \hat{K}_{ij} can be computed approximately as follows:
E\hat{K}_{ij} = M_{ij}\left( -\frac{1}{2\sigma_k^2} \right), \qquad E\hat{K}_{ij}^2 = M_{ij}\left( -\frac{1}{\sigma_k^2} \right),    (16)
where M_{ij}(t) = \exp\left[ (\theta_{ij} + 2d\sigma^2) t + (d\mu_4 + d\sigma^4 + 4\sigma^2 \theta_{ij}) t^2 \right], and \theta_{ij} = \|x_i - x_j\|^2.
Remark. (i) Given the data perturbation error \sigma, kernel bandwidth \sigma_k and data X, the first two moments of dK_{ij} can be estimated directly using (15) or (16). (ii) Through Eqs. (1)-(16), we have established a relationship between the mis-clustering rate \rho and the data perturbation magnitude \sigma. By inverting this relationship (e.g., using binary search), we can determine a \sigma^* for a given \rho^*.
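A small sketch of Lemma 3 in code, using the noncentral chi-square moment generating function M_{ij}; it applies off the diagonal, where \epsilon_i and \epsilon_j are independent.

import numpy as np

def khat_moments_gaussian(X, sigma, sigma_k):
    d = X.shape[1]
    theta = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1) / (2.0 * sigma ** 2)

    def M(t):  # MGF of a noncentral chi-square, d dof, noncentrality theta
        return np.exp(theta * t / (1.0 - 2.0 * t)) / (1.0 - 2.0 * t) ** (d / 2.0)

    t1 = -sigma ** 2 / sigma_k ** 2
    EK = M(t1)           # E[K_hat_ij],   Eq. (15); valid for i != j
    EK2 = M(2.0 * t1)    # E[K_hat_ij^2], Eq. (15)
    return EK, EK2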
4 Evaluation
In this section we present an empirical evaluation of our analysis on 3 synthetic data sets (see Fig. 6)
and 6 real data sets from the UCI repository [14]. The data domains are diverse, including image, medicine, agriculture, etc., and the different data sets impose different difficulty levels on the
underlying spectral clustering algorithm, demonstrating the wide applicability of our analysis.
In the experiments, we use data quantization as the perturbation scheme to evaluate the upper bound
provided by our analysis on the clustering error. Fig. 7 plots the mis-clustering rate and the upper
bound for data sets subject to varying degrees of quantization. As expected, the mis-clustering
rate increases as one decreases the number of quantization bits. We find that the error bounds are
remarkably tight, which validate the assumptions we make in the analysis. It is also interesting to
note that even when using as few as 3-4 bits, the clustering degrades very little in both real error and
as assessed by our bound. The effectiveness of our bound should allow the practitioner to determine
the right amount of quantization given a permitted loss in clustering performance.
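For concreteness, a sketch of a b-bit quantizer for data normalized to [0, 1]; under a uniform quantization-error model the per-entry standard deviation is step/\sqrt{12}, which is consistent with the \sigma values printed above the x-axes in Fig. 7 (e.g., 3 bits gives \sigma \approx 0.036), although the paper does not state its quantizer explicitly.

import numpy as np

def quantize(X, bits):
    step = 2.0 ** -bits                        # bin width on [0, 1]
    X_hat = np.clip((np.floor(X / step) + 0.5) * step, 0.0, 1.0)  # mid-bin value
    sigma = step / np.sqrt(12.0)               # std of a uniform error on one bin
    return X_hat, sigma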
5 Conclusion
In this paper, we proposed a theoretical analysis of the clustering error for spectral clustering in the
face of stochastic perturbations. Our experimental evaluation has provided support for the assumptions made in the analysis, showing that the bound is tight under conditions of practical interest. We
believe that our work, which provides an analytical relationship between the mis-clustering rate and
the variance of the perturbation, constitutes a critical step towards enabling a large class of applications that seek to perform clustering of objects, machines, data, etc. in a distributed environment.
Many networks are bandwidth constrained, and our methods can guide the process of data thinning
so as to limit the amount of data transmitted through the network for the purpose of clustering.
References
[1] L. Bottou and O. Bousquet, "The tradeoffs of large scale learning," in Advances in Neural Information Processing Systems 20, 2007.
[2] A. Silberstein, G. P. A. Gelfand, K. Munagala, and J. Yang, "Suppression and failures in sensor networks: A Bayesian approach," in Proceedings of VLDB, 2007.
[3] X. Nguyen, M. J. Wainwright, and M. I. Jordan, "Nonparametric decentralized detection using kernel methods," IEEE Transactions on Signal Processing, vol. 53, no. 11, pp. 4053-4066, 2005.
[Figure: nine panels plotting the measured mis-clustering rate ("Test Value") and our upper bound ("Upper Bound") against the number of quantization bits, for (a) Sin-sep, (b) Concentric Circle, (c) Gaussian, (d) Image Segmentation, (e) Pen-digits, (f) Wine, (g) Iris, (h) Wisconsin Breast Cancer, and (i) Waveform data.]
Figure 7: Upper bounds of clustering error on approximate data obtained from quantization as a function of the number of bits. (a-c) Simulated data sets (1000 sample size; 2, 2, 10 features, respectively); (d) Statlog image segmentation data (2310 sample size, 19 features); (e) Handwritten digits data (10992 sample size, 16 features); (f) Wine data (178 sample size, 13 features); (g) Iris data (150 sample size, 4 features); (h) Wisconsin breast cancer data (569 sample size, 30 features); (i) Waveform data (5000 sample size, 21 features). The x-axis shows the number of quantization bits and (above the axis) the corresponding data perturbation error \sigma. Error bars are derived from 25 replications. In the experiments, all data values are normalized in range [0, 1]. For data sets with more than two clusters, we choose two of them for the experiments.
[4] L. Huang, X. Nguyen, M. Garofalakis, A. D. Joseph, M. I. Jordan, and N. Taft, "In-network PCA and anomaly detection," in Advances in Neural Information Processing Systems (NIPS), 2006.
[5] R. Kannan, S. Vempala, and A. Vetta, "On clusterings: Good, bad and spectral," Journal of the ACM, vol. 51, no. 3, pp. 497-515, 2004.
[6] A. Y. Ng, M. Jordan, and Y. Weiss, "On spectral clustering: Analysis and an algorithm," in Advances in Neural Information Processing Systems (NIPS), 2002.
[7] J. Shi and J. Malik, "Normalized cuts and image segmentation," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no. 8, pp. 888-905, 2000.
[8] U. von Luxburg, M. Belkin, and O. Bousquet, "Consistency of spectral clustering," Annals of Statistics, vol. 36, no. 2, pp. 555-586, 2008.
[9] P. Drineas and M. W. Mahoney, "On the Nystrom method for approximating a Gram matrix for improved kernel-based learning," in Proceedings of COLT, 2005, pp. 323-337.
[10] C. Fowlkes, S. Belongie, F. Chung, and J. Malik, "Spectral grouping using the Nystrom method," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 26, no. 2, 2004.
[11] G. Cormode and M. Garofalakis, "Sketching streams through the net: Distributed approximate query tracking," in Proceedings of VLDB, 2005, pp. 13-24.
[12] D. Kushnir, M. Galun, and A. Brandt, "Fast multiscale clustering and manifold identification," Pattern Recognition, vol. 39, no. 10, pp. 1876-1891, 2006.
[13] G. W. Stewart and J. Guang Sun, Matrix Perturbation Theory. Academic Press, 1990.
[14] A. Asuncion and D. Newman, "UCI Machine Learning Repository, Department of Information and Computer Science," 2007, http://www.ics.uci.edu/~mlearn/MLRepository.html.
2,736 | 3,481 | Nonparametric sparse hierarchical models
describe V1 fMRI responses to natural images
Pradeep Ravikumar, Vincent Q. Vu and Bin Yu
Department of Statistics
University of California, Berkeley
Berkeley, CA 94720-3860
Thomas Naselaris, Kendrick N. Kay and Jack L. Gallant
Department of Psychology
University of California, Berkeley
Berkeley, CA
Abstract
We propose a novel hierarchical, nonlinear model that predicts brain activity in
area V1 evoked by natural images. In the study reported here brain activity was
measured by means of functional magnetic resonance imaging (fMRI), a noninvasive technique that provides an indirect measure of neural activity pooled over
a small volume (approximately 2mm cube) of brain tissue. Our model, which we call the
V-SPAM model, is based on the reasonable assumption that fMRI measurements
reflect the (possibly nonlinearly) pooled, rectified output of a large population of
simple and complex cells in V1. It has a hierarchical filtering stage that consists
of three layers: model simple cells, model complex cells, and a third layer in
which the complex cells are linearly pooled (called "pooled-complex" cells). The
pooling stage then obtains the measured fMRI signals as a sparse additive model
(SpAM) in which a sparse nonparametric (nonlinear) combination of model complex cell and model pooled-complex cell outputs is summed. Our results show
that the V-SPAM model predicts fMRI responses evoked by natural images better than a benchmark model that only provides linear pooling of model complex
cells. Furthermore, the spatial receptive fields, frequency tuning and orientation
tuning curves of the V-SPAM model estimated for each voxel appear to be consistent with the known properties of V1, and with previous analyses of this data
set. A visualization procedure applied to the V-SPAM model shows that most of
the nonlinear pooling consists of simple compressive or saturating nonlinearities.
1 Introduction
An important step toward understanding the neural basis of vision is to develop computational models that describe how complex visual stimuli are mapped onto evoked neuronal responses. This task
is made challenging in part by the inherent difficulty of obtaining neurophysiological recordings
from single neurons in vivo. An alternative approach is to base models on brain activity measured
by means of functional magnetic resonance imaging (fMRI). fMRI measures changes in blood oxygenation and flow throughout the brain that occur as a consequence of metabolic demands. Although
the relationship between measured fMRI activity and the spiking activity of neurons is rather complex, as a first-order approximation the fMRI signal can be considered to be monotonically related
to the pooled activity of the underlying neural population.
In this paper we consider the task of predicting fMRI brain activity evoked by a series of grayscale natural images. Natural images are a useful stimulus set for efficiently probing the visual
system, because they are likely to evoke response from both early visual areas and from more central,
highly nonlinear visual areas. The fMRI scanner provides a three-dimensional image of the brain
with a spatial resolution of a few cubic millimeters and fairly low temporal resolution (about 0.5-1
Hz). After pre-processing the fMRI signals are represented as a vector of three-dimensional volume
elements called voxels. Here we restrict our analysis to voxels sampled from visual area V1, the
primary visual area in humans.
There are two problems that make predicting evoked responses of fMRI voxels difficult. First, fMRI
signals are noisy and non-stationary in time. Second, each voxel reflects the combined influence of
hundreds of thousands of neurons [4]. fMRI scans of a single voxel in human V1 likely reflect the
nonlinearly-pooled, rectified outputs of two functionally distinct classes of neurons: simple cells that
are sensitive to spatial phase, and phase-invariant complex cells [2]. Even if an accurate predictive
model is obtained, there remains the issue of interpretability. It is not sufficient to construct a model
that provides good predictions but whose function remains opaque (i.e., a black box). In order for
a predictive model to advance our understanding of the brain, the function of any predictive model
must be conceptually interpretable.
In this paper we propose a new model that aims to overcome some of these problems. Our V-SPAM
model is a hierarchical and sparse nonparametric additive model. It combines a biologically-inspired
hierarchical filtering scheme with a nonlinear (nonparametric) pooling of the outputs from various
levels of the hierarchical filtering stage. The model is estimated separately for each recorded fMRI
voxel using a fit data set, and then its predictions are evaluated against an entirely separate data set
reserved for this purpose.
The filtering component of the model consists of three distinct layers: simple cells, complex cells,
and linear combinations of the complex cells (here called pooled-complex cells). The fMRI response
is then modeled as a sparse additive combination of nonlinear (nonparametric) functions of the
complex and pooled-complex cell model outputs. This last step automatically learns the optimal
combinatorial output nonlinearity of the hierarchical filtering stage, and so permits us to model
nonlinear V1 responses not captured by the simple and complex cell model components alone [6].
The fMRI dataset used in this paper was collected as part of an earlier study by [5]. That study
also used a filtering model to describe the relationship between natural images and evoked fMRI
signals, and used the estimated models in turn to decode (identify) images. However, the earlier
study only provided linear pooling of model complex cell filters. Our results show that the V-SPAM
model predicts fMRI responses evoked by natural images better than does the earlier linear pooling
model. Furthermore, the spatial receptive fields, frequency tuning and orientation tuning curves of
the V-SPAM model estimated for each voxel appear to be consistent with the known properties of
V1, and with the previous results [5].
2 Background
2.1 Sparse Additive Models
The regression task consists of estimating the regression function E(Y | X) for a real-valued response Y \in R and a predictor-vector X = (X_1, ..., X_p) \in R^p from data \{(X_i, Y_i), i = 1, ..., n\}. In the nonparametric regression model, the response Y_i = m(X_i) + \epsilon_i, where m is a general smooth function. Estimating this function (i.e., smoothing) becomes challenging when the number of predictors p is large. Even estimating linear models of the form Y_i = \beta^T X_i + \epsilon_i is challenging in these high-dimensional settings. For linear models however, when the vector \beta is sparse, Tibshirani [8] and others have shown that the \ell_1-penalized estimator (also called the Lasso), \hat{\beta} = \arg\min_\beta \sum_i (Y_i - \beta^T X_i)^2 + \lambda \sum_{j=1}^p |\beta_j|, can estimate a sparse model and has strong theoretical properties.
The sparse additive model (SpAM) framework of Ravikumar et al. [7] extends these sparse linear models to the nonparametric domain. In additive models, introduced by Hastie and Tibshirani [3], the response Y is an additive combination of functions of the predictors, Y = \sum_{j=1}^p f_j(X_j) + \epsilon. Here the functions \{f_j\} are constrained to lie in a class of smooth functions, such as the space of functions with square integrable second derivatives (i.e., the Sobolev space of order two). A sparse additive model then imposes a sparsity constraint on the set J = \{j : f_j \not\equiv 0\} of functions f_j that are nonzero.
2.2 Fitting Algorithm for Sparse Additive Models
The paper [7] proposes a fitting procedure for sparse additive models that has good statistical properties even in the large p small n regime. Their SpAM fitting algorithm is summarized in Figure 1.
It performs a coordinate descent (in the L_2(P_n) space, with P_n the sample distribution). At each
step the algorithm performs nonparametric regression of the current residual onto a single predictor,
and then does a soft threshold.
Input: Data (X_i, Y_i), regularization parameter \lambda.
Initialize f_j = f_j^{(0)}, for j = 1, ..., p.
Iterate until convergence:
  For each j = 1, ..., p:
    Compute the residual: R_j = Y - \sum_{k \ne j} f_k(X_k);
    Estimate the conditional expectation P_j = E[R_j | X_j] by smoothing: \hat{P}_j = S_j R_j;
    Set s_j = [ n^{-1} \sum_{i=1}^n \hat{P}_j^2(i) ]^{1/2};
    Soft-threshold: f_j = [1 - \lambda / s_j]_+ \hat{P}_j;
    Center: f_j \leftarrow f_j - mean(f_j).
Output: Component functions f_j and estimator \hat{m}(X_i) = \sum_j f_j(X_{ij}).
Figure 1: THE SPAM BACKFITTING ALGORITHM
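A compact Python sketch of this backfitting loop, with a Nadaraya-Watson kernel smoother standing in for S_j; the fixed bandwidth and iteration cap are placeholders, not the plug-in bandwidth and convergence rules used in the paper.

import numpy as np

def spam_backfit(X, y, lam, bandwidth=0.1, n_iter=20):
    n, p = X.shape
    y0 = y.mean()
    r_target = y - y0
    F = np.zeros((n, p))                     # f_j evaluated at the sample points
    S = []                                   # precompute smoother matrices S_j
    for j in range(p):
        W = np.exp(-0.5 * ((X[:, j, None] - X[None, :, j]) / bandwidth) ** 2)
        S.append(W / W.sum(axis=1, keepdims=True))
    for _ in range(n_iter):                  # "iterate until convergence"
        for j in range(p):
            R = r_target - F.sum(axis=1) + F[:, j]   # residual without f_j
            P = S[j] @ R                             # smooth against X_j
            s = np.sqrt(np.mean(P ** 2))
            f = max(0.0, 1.0 - lam / s) * P if s > 0 else np.zeros(n)
            F[:, j] = f - f.mean()                   # soft-threshold, center
    return y0, F                             # fitted values: y0 + F.sum(axis=1)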
3 A model for pooled neural activity of voxels
Our V-SPAM model combines a biologically-inspired filtering scheme and a novel algorithm that
permits nonlinear pooling of the outputs of the filtering stage. The filtering stage itself consists of
three distinct layers, arranged hierarchically: simple cells, complex cells, and linear combinations
of the complex cells (here called pooled-complex cells). The output of this filtering operation is then
fed to an algorithm that estimates a nonlinear pooling function that optimizes predictive power.
3.1 Simple Cell Model
The first stage of the hierarchical filter is inspired by simple cells that are known to exist in area V1.
The receptive fields of V1 simple cells are known to be generally consistent with a Gabor wavelet
model [6]. Most importantly, they are spatially localized, oriented, spatial frequency band-pass and
phase selective (see Figure 2).
Figure 2: Gabor wavelets. Each row shows a family of Gabor wavelets that share a common spatial
location and frequency, but differ in orientation. This is only a small fraction of all of the wavelets
in the pyramid.
In our model the simple cell filter bank was implemented as a Gabor wavelet pyramid, as follows. Let I denote an image, and d the number of pixels. It can thus be represented as a pixel vector in R^d. Denote by \psi_j a Gabor wavelet sampled on a grid the size of the image, so that it too can be represented as a vector in R^d. Then our simple cell model, for the activation given the image I as stimulus, is given by X_j(I) = [\langle \psi_j, I \rangle]_+, where \langle \cdot, \cdot \rangle is the Euclidean inner product and [\cdot]_+ is a non-negative rectification. (See Figure 3.) Correspondingly, [-\langle \psi_j, I \rangle]_+ gives the activation of the 180-degree spatial phase counterpart.
[Figure: image -> Gabor wavelet -> non-negative rectification -> output.]
Figure 3: Simple cell model. The activation of a model simple cell given an image is the inner product of the image with a Gabor wavelet, followed by a non-negative rectification.
3.2 Complex Cell Model
The second stage of the hierarchical filter is inspired by complex cells that are also known to exist in
area V1. Complex cells are similar to simple cells, except they are not sensitive to spatial phase. In
our model the complex cell filter bank was implemented by taking the sum of squares of the outputs
of four simple cells (corresponding to the wavelet pairs that are identical up to phase), followed by
a fixed output nonlinearity. The activation of the model complex cell given an image I is given by,
X_j(I) = \log\left(1 + [\langle \psi_j, I \rangle]_+^2 + [-\langle \psi_j, I \rangle]_+^2 + [\langle \psi'_j, I \rangle]_+^2 + [-\langle \psi'_j, I \rangle]_+^2\right)    (1)
       = \log\left(1 + \langle \psi_j, I \rangle^2 + \langle \psi'_j, I \rangle^2\right),    (2)
where \psi_j and \psi'_j are Gabor wavelets identical up to phase (also called a quadrature pair; see Figure 4).
[Figure: image -> Gabor wavelet quadrature pair -> squaring -> (+) -> fixed nonlinearity -> output.]
Figure 4: Complex cell model. The activation of a model complex cell given an image is the sum of squares of the inner products of the image with a quadrature pair of Gabor wavelets followed by a nonlinearity. This is equivalently modeled by summing the squares of 4 simple cell model outputs, followed by a nonlinearity.
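The two cell models reduce to a few lines of code. In the sketch below, psi and psi_q are assumed to be a quadrature pair of Gabor wavelets flattened to pixel vectors, like the image vector img; this is our illustration, not code from the paper.

import numpy as np

def simple_cell(psi, img):
    return max(np.dot(psi, img), 0.0)        # [<psi, I>]_+

def complex_cell(psi, psi_q, img):
    a, b = np.dot(psi, img), np.dot(psi_q, img)
    return np.log1p(a ** 2 + b ** 2)         # Eq. (2); algebraically equals Eq. (1)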
3.3 Pooled-complex Cell Model
The hierarchical filtering component of our model also includes a third filtering stage, linear pooling
of complex cells sharing a common spatial location and frequency. This stage has no direct biological interpretation in terms of area V1, but has been included to improve representational power of
the model: a linear combination of complex cells (the pooled-complex cell), followed by a nonlinearity, cannot be expressed as an additive combination of nonlinear functions of individual complex
cells. Note that this element might be particularly useful for modeling responses in higher visual
areas beyond V1.
If \{X_{j_1}, ..., X_{j_k}\} correspond to complex cells with the same spatial location and frequency, then the corresponding pooled-complex cell (which thus sums over different orientations) is given by Z_{j_1 \cdots j_k} = \sum_{l=1}^k X_{j_l}. (See Figure 5.)
[Figure: image -> complex cells -> (+) -> output.]
Figure 5: Pooled-complex cell model. Subsets of complex cells that share a common spatial location and frequency are summed.
3.4 V-SPAM model
Finally, the predicted fMRI response Y is obtained as a sparse additive combination of complex cell and pooled-complex cell outputs. Denote the complex cell outputs by \{X_1, ..., X_p\}, and the pooled-complex cell outputs by \{Z_1, ..., Z_q\}. Then the fMRI response Y is modeled as a sparse additive (nonparametric) model, Y = \sum_{j=1}^p f_j(X_j) + \sum_{l=1}^q g_l(Z_l) + \epsilon. Figure 6 summarizes the entire V-SPAM model, including both filtering and pooling components.
[Figure: image -> simple cell outputs -> complex cell outputs -> pooled-complex cell outputs -> nonlinearities -> (+) -> fMRI voxel response.]
Figure 6: V-SPAM model. The fMRI voxel response is modeled as the summation of nonlinear functions of complex and pooled-complex cell outputs. The connections and components in the dashed region are to be estimated from the data under the assumption that many of them are null.
4 Experiments
4.1 Data description
The data set analyzed in this paper consists of a total of 1,294 voxels recorded from area V1 of
one human observer. A 4T Varian MRI scanner provided voxels of size 2mm x 2mm x 2.5mm
at a frequency of 1Hz. The visual stimuli used in the experiment consisted of 1,750 20-by-20
degree grayscale natural images, masked by a circular aperture. A two-stage procedure was used for
data collection. In the first stage, 1,750 natural images were presented to the subject 2 times each.
This data set was used to fit the model. In the second stage, 120 additional natural images were
presented 13 times each. This data set was used for model validation. (Note that the images used for
estimation and validation were distinct.) In all cases images were flashed briefly 3 times during a 1
second display period, and there was a blank period of 3 seconds between successive images. After
acquisition the fMRI signals were pre-processed to reduce temporal non-stationarity and increase
signal-to-noise [5]. Complete details of the fMRI experiment can be found in [5].
4.2 V-SPAM model fitting
The V-SPAM model was fitted separately for each of the 1,294 voxels using the training set of
1,750 images and the evoked fMRI responses. The fitting procedure can be conceptualized in four
successive stages that roughly parallel the hierarchical layers of the model itself.
In the first stage, the model complex cell outputs are computed according to equation (2) using a
pyramid (or family) of Gabor wavelets sampled on a grid of 128 x 128 pixels. The pyramid includes
6 spatial frequencies (or scales): 1, 2, 4, 8, 16, and 32 cycles/field of view (consistent with the quadrature-pair count below, since 8 \times (1 + 4 + 16 + 64 + 256 + 1024) = 10,920). At each spatial frequency \omega the wavelets are positioned evenly on an \omega \times \omega grid covering the image. All combinations of 8 orientations and 2 phases occur at each of the \omega \times \omega positions. In total, the pyramid consists of
10,920 quadrature pairs plus 1 constant wavelet (corresponding to mean luminance).
In the second stage, the model complex cell outputs are pre-screened in order to eliminate complex
cell outputs that are unrelated to a voxel's response, and to reduce the computational complexity
of successive stages of fitting. This is accomplished by considering the squared-correlation of the
response of each complex cell with the evoked voxel response, using the 1,750 images in the training
set. Only the top k complex cells are retained. In pilot studies we found empirically that k = 100
was enough to give good statistical and computational performance (data not shown).
In the third stage, pooled-complex cells (see Section 3) are formed from the complex cell outputs
that passed the pre-screening in fitting stage 2.
In the fourth and final stage, the complex and pooled-complex cell responses to the images in the
training set are used as predictors in the SpAM fitting algorithm (see Figure 1), and this is optimized
to fit the voxel responses evoked by the same 1,750 images in the training set. The smoothing is done
by means of Gaussian kernel regression with plug-in bandwidth, and the regularization parameter is
selected by the Akaike information criterion (AIC).
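In outline, stages 2 and 4 of this procedure look as follows (spam_backfit refers to the earlier sketch; the pooled-complex grouping of stage 3 and the AIC selection are elided, and the lambda grid is only a stand-in for that selection rule).

import numpy as np

def fit_voxel(features, y, k=100, lam_grid=(0.01, 0.03, 0.1)):
    # Stage 2: screen for the k complex cells with the largest squared
    # correlation with the evoked voxel response.
    fz = (features - features.mean(0)) / (features.std(0) + 1e-12)
    yz = (y - y.mean()) / y.std()
    r2 = (fz.T @ yz / len(y)) ** 2
    top = np.argsort(r2)[-k:]
    # Stage 4: SpAM fit on the retained features over candidate lambdas.
    fits = [spam_backfit(features[:, top], y, lam) for lam in lam_grid]
    return top, fits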
4.3 Model validation
For each voxel, we evaluate the fitted V-SPAM models by computing the predictive R^2 (squared
correlation) of the predicted and actual fMRI responses evoked by each of the 120 images in the
validation set.
To permit a more complete evaluation of the V-SPAM model, we used the same data to fit a simpler
model more directly comparable to the one used in earlier work with this data set [5]. The sparse
linear pooling model aims to predict each voxel's response as a linear combination of all 10,921 estimated complex cell outputs. This model has the form, Y(I) = \beta_0 + \sum_{j=1}^p \beta_j X_j(I) + \epsilon, where the X_j(I) are the complex cell outputs estimated according to (2), with the p = 10,921 Gabor wavelets described in Section 4.2. The coefficients \beta_j, j = 0, ..., p, were estimated by L2 Boosting
[1] with the stopping criterion determined by 5-fold cross-validation within the same data set. This
model is a sparsified version of the one used in [5], and has comparable prediction performance.
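For reference, componentwise L2 Boosting repeatedly fits the current residual against the single best predictor and takes a shrunken step; a minimal sketch follows (the step size nu and iteration count are illustrative, with stopping by cross-validation as in the text, and columns are assumed pre-centered).

import numpy as np

def l2_boost(X, y, n_steps=500, nu=0.1):
    n, p = X.shape
    beta = np.zeros(p)
    resid = y - y.mean()
    col_ss = (X ** 2).sum(axis=0)
    for _ in range(n_steps):                 # in practice: stop by 5-fold CV
        coefs = X.T @ resid / col_ss         # per-column least-squares fits
        j = np.argmax(coefs ** 2 * col_ss)   # largest residual-SS reduction
        beta[j] += nu * coefs[j]
        resid -= nu * coefs[j] * X[:, j]
    return y.mean(), beta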
5 Results
Figure 7 (left) shows a scatterplot comparing the performance of the V-SPAM model with that of
the sparse linear pooling model for all 1,294 voxels. The vertical axis gives performance of the
V-SPAM model, and the horizontal axis the sparse linear pooling model. Each point corresponds
to a single voxel. The inset region contains 429 voxels for which both models had some predictive
power (R^2 \ge 0.1). For these voxels, the relative improvement of the V-SPAM model over the sparse
linear pooling model is shown in the histogram to the right. The predictions of the V-SPAM model
were on average 14% better than those of the sparse linear pooling model (standard deviation 17%).
5.1 Estimated receptive fields and tuning curves
Figure 8 shows the spatial receptive-fields (RFs) and joint frequency and orientation tuning curves estimated using the V-SPAM model for 3 voxels. These voxels were chosen because they had high predictive power (R^2's of 0.65, 0.59, and 0.63, respectively from left to right) and so were modeled accurately. The upper row of the figure shows the spatial RF of each voxel. The intensity at each
[Figure: (left) scatterplot of predictive R^2 for the SpAM V1 model (vertical axis) against the sparse linear pooling model (horizontal axis), with an inset region marked; (right) histogram of the relative improvement (%) for the voxels in the inset (mean = 14, SD = 17, median = 12, IQR = 17).]
Figure 7: Predictive R^2 of the fitted V-SPAM model compared against the fitted sparse linear pooling model. (Left) Each of the 1,294 points in the scatterplot corresponds to a single voxel. (Right) Relative performance for the 429 voxels contained in the inset region on the left.
location in the spatial RF represents the standardized predicted response of the voxel to an image
stimulus consisting of a single pixel at that location. The spatial RFs of these voxels are clearly
localized in space, consistent with the known retinotopic organization of V1 and previous fMRI
results [9]. The lower row of Figure 8 shows the joint frequency and orientation tuning properties
of these same 3 voxels. Here the tuning curves were estimated by computing the predicted response
of the fitted voxel model to cosine gratings of varying orientation (degrees) and spatial frequency
(cycles/field of view). All of the voxels are tuned to spatial frequencies above about 8 cycles/field of
view, while orientation tuning varies from voxel to voxel. The joint spatial frequency and orientation
tuning of all 3 voxels appears to be non-separable (i.e. their orientation tuning is not a constant
function of frequency).
[Figure: top row, spatial receptive fields of three voxels; bottom row, joint frequency (0-32 cycles/field of view) and orientation (0-180 degrees) tuning maps for the same voxels.]
Figure 8: (upper) Spatial receptive-fields (RFs) and (lower) joint frequency and orientation tuning curves estimated by the V-SPAM model for 3 voxels with high predictive power (R^2's of 0.65, 0.59, 0.63, left to right). Each location in the spatial RF shows the standardized predicted response of the voxel to an image consisting of a single pixel at that location. The tuning curves show the standardized predicted response of the voxel to cosine gratings of varying orientation (degrees) and spatial frequency (cycles/field of view).
5.2 Nonlinearities
One of the potential advantages of the V-SPAM model over other approaches is that it can reveal
novel nonlinear tuning and pooling properties, as revealed by the nonlinear summation occurring
in the final stage of the V-SPAM model. Figure 9 illustrates some of these functions estimated for
a typical voxel with high predictive power (R2 of 0.63). These correspond to the nonlinearities
appearing in the final stage of the V-SPAM model (see Figure 6). Here the horizontal axis is the
input in standard units of the corresponding model complex or pooled-complex cell outputs, and
the vertical axis is the output in standard units of predicted responses. For this voxel, these are the
4 largest (ranked by L2 norm) nonlinearities. All 4 of these nonlinearities are compressive. The
remaining 75 nonlinearities present in the voxel's fitted model have similar shapes, but are much
smaller and hence contribute less to the predicted response. They are overlaid in the final panel of
Figure 9.
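The ranking criterion is straightforward to reproduce. Below is a minimal sketch (our illustration, not the study's code) that ranks fitted component functions by a discretised L2 norm on a sampled grid; the function list and grid are assumptions made for the example.

import numpy as np

def rank_by_l2(component_fns, grid):
    # Approximate the L2 norm of each fitted component on a uniform grid
    # and return indices sorted from largest to smallest norm.
    dx = grid[1] - grid[0]
    norms = [np.sqrt(np.sum(f(grid) ** 2) * dx) for f in component_fns]
    return np.argsort(norms)[::-1]

# Hypothetical compressive curves of decreasing amplitude.
grid = np.linspace(-1.0, 3.0, 200)
fns = [lambda x, s=s: s * np.tanh(x) for s in (0.2, 0.15, 0.1, 0.08, 0.01)]
print(rank_by_l2(fns, grid))  # -> [0 1 2 3 4]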
[Figure 9 graphics: four output-vs-input nonlinearity curves, with the remaining 75 overlaid in the final panel.]
Figure 9: Nonlinearities estimated in the V-SPAM model for a voxel with high predictive power
(R2 : 0.63). The 4 largest (ranked by L2 norm) are shown left to right by the thick lines. The other
75 nonlinearities for this voxel (overlaid in the right panel) are smaller and contribute less to the
predicted response.
6
Discussion and conclusions
Our V-SPAM model provides better predictions of fMRI activity evoked by natural images than
does a sparse linear model similar to that used in an earlier study of this data set [5]. This increased
predictive power of the V-SPAM model reflects the fact that it can describe explicitly the nonlinear
pooling that likely occurs among the many neurons whose pooled activity contributes to measured
fMRI signals. These pooled output nonlinearities are likely a critical component of nonlinear computation across the visual hierarchy. Therefore, the SpAM framework may be particularly useful for
modeling neurons or fMRI signals recorded in higher and more nonlinear stages of visual processing
beyond V1.
References
[1] Peter Bühlmann and Bin Yu. Boosting with the L2 loss: Regression and classification. Journal of the
American Statistical Association, 98(462):324-339, 2003.
[2] R.L. De Valois and K.K. De Valois. Spatial Vision. Oxford University Press, 1990.
[3] Trevor Hastie and Robert Tibshirani. Generalized Additive Models. Chapman & Hall Ltd., 1999.
[4] D.J. Heeger, A.C. Huk, W.S. Geisler, and D.G. Albrecht. Spikes versus BOLD: what does neuroimaging
tell us about neuronal activity? Nat Neurosci, 3(7):631-633, 2000.
[5] Kendrick N. Kay, Thomas Naselaris, Ryan J. Prenger, and Jack L. Gallant. Identifying natural images from
human brain activity. Nature, 452(7185):352-355, 2008.
[6] Bruno A. Olshausen and David J. Field. Emergence of simple-cell receptive field properties by learning a
sparse code for natural images. Nature, 381(6583):607-609, June 1996.
[7] Pradeep Ravikumar, Han Liu, John Lafferty, and Larry Wasserman. SpAM: Sparse additive models. Neural
Information Processing Systems, 2007.
[8] R. Tibshirani. Regression shrinkage and selection via the lasso. J. Royal Statist. Soc. B, 58(1):267-288,
1996.
[9] Brian A. Wandell, Serge O. Dumoulin, and Alyssa A. Brewer. Visual field maps in human cortex. Neuron,
56(2):366-383, 2007.
Ilya Sutskever and Geoffrey Hinton
University of Toronto
{ilya, hinton}@cs.utoronto.ca
Abstract
We describe a way of learning matrix representations of objects and relationships.
The goal of learning is to allow multiplication of matrices to represent symbolic
relationships between objects and symbolic relationships between relationships,
which is the main novelty of the method. We demonstrate that this leads to excellent generalization in two different domains: modular arithmetic and family
relationships. We show that the same system can learn first-order propositions
such as (2, 5) ∈ +3 or (Christopher, Penelope) ∈ has wife, and higher-order
propositions such as (3, +3) ∈ plus and (+3, −3) ∈ inverse or (has husband,
has wife) ∈ higher oppsex. We further demonstrate that the system understands
how higher-order propositions are related to first-order ones by showing that it can
correctly answer questions about first-order propositions involving the relations
+3 or has wife even though it has not been trained on any first-order examples
involving these relations.
1
Introduction
It is sometimes possible to find a way of mapping objects in a "data" domain into objects in a "target"
domain so that operations in the data domain can be modelled by operations in the target domain.
If, for example, we map each positive number to its logarithm, multiplication in the data domain can
be modelled by addition in the target domain. When the objects in the data and target domains are
more complicated than single numbers, it may be difficult to find good mappings using inspiration
alone. If we consider a continuous space of possible mappings and if we define a smooth measure of
how well any particular mapping works, it is possible to use gradient search to find good mappings
between the data and target domains.
Paccanaro and Hinton [10] introduced a method called "Linear Relational Embedding" (LRE) that
uses multiplication of vectors by matrices in the target domain to model pairwise relations between
objects in the data domain. LRE applies to a finite set of objects Ω and a finite set of relations
R, where every relation R ∈ R is a set of pairs of objects, so R ⊆ Ω × Ω. Given the objects
and relations, LRE finds a column-vector representation A of each object A ∈ Ω, and a matrix
representation R of each relation R ∈ R, such that the product RA is close to B for all pairs
(A, B) that are members of the relation R, and far from C for all pairs (A, C) that are not members
of R. LRE learns the vectors and matrices by performing gradient descent in a cost function C that
measures the similarities between RA and all B such that (A, B) ∈ R relative to the similarities
between RA and the vector representations of all the objects in the set of known objects Ω:

$$C = -\sum_{R}\;\sum_{(A,B)\in R} \log \frac{\exp(-\|RA - B\|^2)}{\sum_{C\in\Omega} \exp(-\|RA - C\|^2)} \qquad (1)$$
The cost function in Eq. 1 is "discriminative" because it compares the distance from RA to each
correct answer with the distances from RA to all possible answers. This prevents trivial solutions
in which RA and B are always zero, but it also causes the cost function to be nonconvex, making
it hard to optimize. We can view exp(−‖RA − B‖²) as the unnormalized probability density of
B under a spherical Gaussian centered at RA. The cost function then represents the sum of the
negative log probabilities of picking the correct answers to questions of the form (A, ?) ∈ R if we
pick answers stochastically in proportion to their probability densities under the spherical Gaussian
centered at RA.
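To make Eq. 1 concrete, the following NumPy sketch (ours, not the authors' released code) evaluates the discriminative cost for toy dictionaries of object vectors and relation matrices; the container layout is an assumption.

import numpy as np

def lre_cost(objects, relations, triples):
    # objects:   dict name -> d-dimensional vector
    # relations: dict name -> (d x d) matrix
    # triples:   list of (A, B, R) with (A, B) a member of relation R
    names = list(objects)
    cost = 0.0
    for A, B, R in triples:
        ra = relations[R] @ objects[A]
        # unnormalised log-scores -||RA - C||^2 against every candidate C
        scores = np.array([-np.sum((ra - objects[C]) ** 2) for C in names])
        cost -= scores[names.index(B)] - np.logaddexp.reduce(scores)
    return cost

The log-sum-exp over all candidates in the denominator is what makes the cost discriminative: lowering the score of a wrong answer C raises the probability of the correct answer B.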
We say that LRE accurately models a set of objects and relations if its answers to queries of the
form (A, ?) ∈ R are correct, which means that for each object A and relation R such that there are
k objects X satisfying (A, X) ∈ R, the vector representation X of each such object X must be
among the k closest vector representations to RA. The definition of correctness implies that LRE's
answer to a query (A, ?) ∈ R that has no solutions is always trivially correct. More refined versions
of LRE handle such unsatisfiable queries more explicitly [9].
It may not be obvious how to determine if the representation found by LRE is good. One way is
to check if LRE?s representation generalizes to test data. More specifically, if LRE has not been
informed that B is an answer to the query (A, ?) ? R that has k correct answers (that is, (A, B) was
removed from R during LRE?s learning), yet LRE answers the query (A, ?) ? R correctly by placing
B among the k closest object representations to RA, then we can claim that LRE?s representation
generalizes. Such generalization can occur only if LRE learned the ?right? representations A, B,
and R from the other propositions, which can happen only if the true relation is plausible according
to LRE?s inductive bias that determines the subjective plausibility of every possible set of objects
and relations (see, e.g., [6]). If the representation is high-dimensional, then LRE can easily represent
any set of relations that is not too large, so its inductive bias finds all sets of relations plausible, which
prevents generalization from being good. However, if the representation is low-dimensional, then
LRE must make use of regularities in the training set in order to accurately model the data, but if
it succeeds in doing so, generalization will be good. Paccanaro and Hinton [10] show that low-dimensional LRE exhibits excellent generalization on datasets such as the family relations task. In
general, the dimensionality of the representation should grow with the total numbers of objects and
relations, because when there are few objects and relations, a high-dimensional representation easily
overfits, but if the number of objects and relations is large then the dimensionality can be higher,
without overfitting. The best dimensionality depends on the "fit" between LRE and the data, and is
mainly an empirical question.
A drawback of LRE is that the square matrices it uses to represent relations are quadratically more
cumbersome than the vectors it uses to represent objects. This causes the number of free parameters
to grow rapidly when the dimensionality of the representations is increased. More importantly, it
also means that relations cannot themselves be treated as objects. Paccanaro and Hinton [10], for
example, describe a system that learns propositions of the form: (2, 5) ∈ +3 where +3 is a relation
that is represented by a learned matrix, but their system does not understand that the learned matrix
for +3 has anything in common with the learned vector that is used to model the number 3 in
propositions like (5, 3) ∈ −2.
In this paper we describe "Matrix Relational Embedding" (MRE), which is a version of LRE that
uses matrices as the representation for objects as well as for relations.¹ MRE optimizes the same
cost function as LRE (Equation 1), with the difference that RA − C is now a matrix rather than a
vector and ‖RA − C‖² denotes the sum of the squares of the entries of the matrix. This choice
of matrix norm makes MRE a direct generalization of LRE. All distances between matrices will be
computed using this norm.
Although MRE is a simple variation of LRE, it has two important advantages.
The first advantage of MRE is that when using an N ? N matrix to represent each object it is
possible to make N much smaller than when using an N -dimensional vector, so MRE can use about
the same number of parameters as LRE for each object but many fewer parameters than LRE for
each relation, which is useful for ?simple? relations.
¹ We have also experimented with a version of LRE that learns to generate a learned matrix representation of
a relation from a learned vector representation of the relation. This too makes it possible to treat relations as objects because they both have vector representations. However, it is less straightforward than simply representing
objects by matrices and it does not generalize quite as well.
The second advantage of MRE, which is also the main novelty of this paper, is that MRE is
capable of representing higher-order relations, instances of which are (+3, −3) ∈ inverse or
(has husband, has wife) ∈ higher oppsex. It can also represent relations involving an object
and a relation, for instance (3, +3) ∈ plus. Formally, we are given a finite set of higher-order
relations R̃, where a higher-order relation R̃ ∈ R̃ is a relation whose arguments can be relations as well
as objects, which we formalize as R̃ ⊆ R × R or R̃ ⊆ Ω × R (R is the set of the basic relations).
The matrix representation of MRE allows it to treat relations in (almost) the same way it treats basic
objects, so there is no difficulty representing relations whose arguments are also relations.
We show that MRE can answer questions of the form (4, ?) ∈ +3 even though the training set
contains no examples of the basic relation +3. It can do this because it is told what +3 means by
being given higher-order information about +3. It is told that (3, +3) ∈ plus and it figures out what
plus means from higher-order examples of the form (2, +2) ∈ plus and basic examples of the form
(3, 5) ∈ +2. This enables MRE to understand a relation from an "analogical definition": if it is
told that has father to has mother is like has brother to has sister, etc., then MRE can answer
queries involving has father based on this analogical information alone. Finally, we show that
MRE can learn new relations after an initial set of objects and relations has already been learned and
the learned matrices have been fixed. This shows that MRE can add new knowledge to previously
acquired propositions without the need to relearn the original propositions. We believe that MRE
is the first gradient-descent learning system that can learn new relations from definitions, including
learning the meanings of the terms used in the definitions. This significantly extends the symbolic
learning abilities of connectionist-type learning algorithms.
Some of the existing connectionist models for representing and learning relations and analogies
[2, 4] are able to detect new relations and to represent hierarchical relations of high complexity.
They differ by using temporal synchrony for explicitly representing the binding of the relations to
objects, and, more importantly, do not use distributed representations for representing the relations
themselves.
2
The modular arithmetic task
Paccanaro and Hinton [10] describe a very simple modular arithmetic task in which the 10 objects
are the numbers from 0 to 9 and the 9 relations are +0 to +4 and −1 to −4. Linear Relational
Embedding easily learns this task using two-dimensional vectors for the numbers and 2 × 2 matrices
for the relations. It arranges the numbers in a circle centered at the origin and uses rotation matrices
to implement the relations. We used base 12 modular arithmetic, thus there are 12 objects, and made
the task much more difficult by using both the twelve relations +0 to +11 and the twelve relations ×0
to ×11. We did not include subtraction and division because in modular arithmetic every proposition
involving subtraction or division is equivalent to one involving addition or multiplication.
There are 288 propositions in the modular arithmetic task. We tried matrices of various sizes and
discovered that 4 × 4 matrices gave the best generalization when some of the cases are held-out. We
held-out 30, 60, or 90 test cases chosen at random and used the remaining cases to learn the real-valued entries of the 12 matrices that represent numbers and the 24 matrices that represent relations.
The learning was performed by gradient descent in the cost function in Eq. 1. We repeated this five
times with a different random selection of held-out cases each time. Table 1 shows the number of
errors on the held-out test cases.
3
Details of the learning procedure
To learn the parameters, we used the conjugate gradient optimization algorithm available in the
"scipy" library of the Python programming language with the default optimization parameters. We
computed the gradient of the cost function on all of the training cases before updating the parameters,
and initialized the parameters by a random sample from a spherical Gaussian with unit variance
on each dimension. We also included "weight-decay" by adding $0.01 \sum_i w_i^2$ to the cost function,
where i indexes all of the entries in the matrices for objects and relations. The variance of the
results is due to the nonconvexity of the objective function. The implementation is available in
[www.cs.utoronto.ca/~ilya/code/2008/mre.tar.gz].
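A minimal sketch of the optimisation set-up just described: conjugate gradients over one flat parameter vector with the 0.01 Σ_i w_i² penalty. The flattening and the toy data cost standing in for Eq. 1 are our assumptions; the authors' actual code is at the URL above.

import numpy as np
from scipy.optimize import fmin_cg

def penalised(w, data_cost, decay=0.01):
    # data cost plus the weight-decay term 0.01 * sum_i w_i^2
    return data_cost(w) + decay * np.sum(w ** 2)

rng = np.random.RandomState(0)
w0 = rng.randn(1000)                         # spherical Gaussian initialisation
toy_cost = lambda w: np.sum((w - 1.0) ** 2)  # stand-in for the cost of Eq. 1
w_star = fmin_cg(lambda w: penalised(w, toy_cost), w0, disp=False)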
Test results for the basic modular arithmetic:
        errors on 5 test sets     mean test error
(30)    0   0   0   0   0         0.0
(60)    29  4   0   1   0         6.8
(90)    27  23  16  31  23        24.0
Table 1: Test results on the basic modular arithmetic task. Each entry shows the number of errors
on the randomly held-out cases. There were no errors on the training set. Each test query has 12
possible answers of which 1 is correct, so random guessing should be incorrect on at least 90% of
the test cases. The number of held-out cases of each run is written in brackets.
[Figure 1 graphics: (a) two isomorphic family trees, one English (Christopher = Penelope, Andrew = Christine, ...) and one Italian (Aurelio = Maria, Bortolo = Emma, ...); (b) diagram of the matrix RA and candidate answers B, C, D.]
Figure 1: (a) Two isomorphic family trees (b) An example of a situation in which the discriminative
cost function in Eq. 1 causes the matrix RA produced by MRE to be far from the correct answer,
B (see section 5).
In an attempt to improve generalization, we tried constraining all of the 4 × 4 matrices by setting
half of the elements of each matrix to zero so that they were each equivalent to two independent
2 × 2 matrices. Separate experiments showed that 2 × 2 matrices were sufficient for learning either
the mod 3 or the mod 4 version of our modular arithmetic task, so the mod 12 version can clearly be
done using a pair of 2 × 2 matrices for each number or relation. However, the gradient optimization
gets stuck in poor local minima.
4
The standard family trees task
The "standard" family trees task defined in [3] consists of the two family trees shown in figure
1(a) where the relations are {has husband, has wife, has son, has daughter, has father, has mother,
has brother, has sister, has nephew, has niece, has uncle, has aunt}. Notice that for the last four
relations there are people in the families in figure 1(a) for whom there are two different correct
answers to the question (A, ?) ∈ R. When there are N correct answers, the best way to maximize
the sum of the log probabilities of picking the correct answer on each of the N cases is to produce
an output matrix that is equidistant from the N correct answers and far from all other answers. If
the designated correct answer on such a case is not among the N closest, we treat that case as an
error. If we count cases with two correct answers as two different cases the family trees task has 112
cases.
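The error criterion for queries with N correct answers reduces to a nearest-matrices test; a small sketch with assumed names:

import numpy as np

def is_error(output, candidates, designated, n_correct):
    # True if the designated answer is not among the n_correct matrices
    # closest (in summed squared entries) to the classifier's output.
    dists = {name: np.sum((output - M) ** 2) for name, M in candidates.items()}
    closest = sorted(dists, key=dists.get)[:n_correct]
    return designated not in closest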
We used precisely the same learning procedure and weight-decay as for the modular arithmetic
task. We held-out 10, 20, or 30 randomly selected cases as test cases, and we repeated the random
selection of the test cases five times. Table 2 shows the number of errors on the test cases when 4 × 4
matrices are learned for each person and for each relation. MRE generalizes much better than the
Test results for the basic family trees task:
        errors on 5 test sets    mean test error
(10)    0  0  0  0  2            0.4
(20)    6  0  0  0  0            1.2
(30)    0  2  4  0  4            2.0
Table 2: Test results on the basic family trees task. Each entry shows the number of errors on the
randomly held-out cases. There were no errors on the training set. The same randomly selected test
sets were used for the 4 × 4 matrices. Each test query has 24 possible answers, of which at most 2
objects are considered correct. As there are 24 objects, random guessing is incorrect on at least 90%
of the cases.
feedforward neural network used by [3] which typically gets one or two test cases wrong even when
only four test cases are held-out. It also generalizes much better than all of the many variations
of the learning algorithms used by [8] for the family trees task. These variations cannot achieve
zero test errors even when only four test cases are held-out and the cases are chosen to facilitate
generalization.
5
The higher-order modular arithmetic task
We used a version of the modular arithmetic task in which the only basic relations were
{+0, +1, . . . , +11}, but we also included the higher-order relations plus, minus, and inverse, consisting of 36 propositions, examples of which are (3, +3) ∈ plus; (3, +9) ∈ minus; (+3, +9) ∈ inverse.
We then held-out all of the examples of one of the basic relations and trained 4 × 4 matrices on all
of the other basic relations plus all of the higher-order relations.
Our first attempt to demonstrate that MRE could generalize from higher-order relations to basic
relations failed: the generalization was only slightly better than chance. The failure was caused by
a counter-intuitive property of the discriminative objective function in Eq. 1 [9]. When learning the
higher-order training case (3, +3) ∈ plus it is not necessary for the product of the matrix representing
3 and the matrix representing plus to be exactly equal to the matrix representing +3. The product
only needs to be closer to +3 than to any of the other matrices. In cases like the one shown in figure
1(b), the relative probability of the point B under a Gaussian centered at RA is increased by moving
RA up, because this lowers the unnormalized probabilities of C and D by a greater proportion than
it lowers the unnormalized probability of B. The discriminative objective function prevents all of the
representations collapsing to the same point, but it does not force the matrix products to be exactly
equal to the correct answer. As a result, the representation of +3 produced by the product of 3 and
plus does not work properly when it is applied to a number.
To overcome this problem, we modified the cost function for training the higher-order relations so
that it is minimized when R̃A is exactly equal to B:

$$C = \sum_{\tilde{R} \in \tilde{\mathcal{R}}}\;\sum_{(A,B)\in \tilde{R}} \|\tilde{R}A - B\|^2, \qquad (2)$$

where R̃ ranges over the set of all higher-order relations, and A and B can be either relations or
basic objects, depending on R̃'s domain.
Even when using this non-discriminative cost function for training the higher-order relations, the
matrices could not all collapse to zero because the discriminative cost function was still being used
for training the basic relations. With this modification, the training caused the product of 3 and plus
to be very close to +3 and, as a result, there was often good generalization to basic relations even
when all of the basic relations involving +3 were removed from MRE's training data and all it was
told about +3 was that (3, +3) ∈ plus, (9, +3) ∈ minus, and (+9, +3) ∈ inverse (see Table 3).
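The non-discriminative objective of Eq. 2 is a plain sum of squared matrix differences; a sketch with assumed names:

import numpy as np

def higher_order_cost(reps, higher_triples):
    # reps: dict mapping every object/relation name to its matrix
    # higher_triples: list of (A, B, R) with (A, B) in higher-order relation R
    return sum(np.sum((reps[R] @ reps[A] - reps[B]) ** 2)
               for A, B, R in higher_triples)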
Test results for the higher-order arithmetic task:
          errors on 5 test sets    mean test error
+1 (12)   5  0  0  0  0            1.0
+4 (12)   0  0  6  6  1            2.6
+6 (12)   0  6  4  4  0            2.8
+10 (12)  3  8  0  0  7            3.6
Table 3: Test results on the higher-order arithmetic task. Each row shows the number of incorrectly
answered queries involving a relation (i.e., +1, +4, +6, or +10) all of whose basic examples were
removed from MRE's training data, so MRE's knowledge of this relation was entirely from the
other higher-order relations. Learning was performed 5 times starting from different initial random
parameters. There were no errors on the training set for any of the runs. The number of test cases is
written in brackets.
Test results for the higher-order family trees task:
                 errors on 5 test sets    mean test error
has father (12)  0  12  0  0  0           2.4
has aunt (8)     4  8   4  0  4           4.0
has sister (6)   2  0   0  0  0           0.4
has nephew (8)   0  0   8  0  0           1.6
Table 4: Test results for the higher-order family trees task. In each row, all basic propositions
involving a relation are held-out (i.e., has father, has aunt, has sister, or has nephew). Each row
shows the number of errors MRE makes on these held-out propositions on 5 different learning runs
from different initial random parameters. The only information MRE has on these relations is in the
form of a single higher-order relation, higher oppsex. There were no errors on the training sets for
any of the runs. The number of held-out cases is written in brackets.
6
The higher-order family trees task
To demonstrate that similar performance is obtained on the family trees task when higher-order relations
are used, we included, in addition to the 112 basic propositions, the higher-order relation higher oppsex.
To define higher oppsex we observe that many relations have natural male and natural female
versions, as in: mother-father, nephew-niece, uncle-aunt, brother-sister, husband-wife, and son-daughter. We say that (A, B) ∈ higher oppsex for relations A and B if A and B can be seen as
natural counterparts in this sense. Four of the twelve examples of higher oppsex are given below:
1. (has father, has mother) ∈ higher oppsex
2. (has mother, has father) ∈ higher oppsex
3. (has brother, has sister) ∈ higher oppsex
4. (has sister, has brother) ∈ higher oppsex
We performed a test analogous to that in the previous section on the higher-order modular arithmetic
task, using exactly the same learning procedure and learning parameters. For the results, see Table 4.
The family trees task and its higher-order variant may appear difficult for systems such as MRE or
LRE because of the logical nature of the task, which is made apparent by hard rules such as (A, B) ∈
has father, (A, C) ∈ has brother ⇒ (C, B) ∈ has father. However, MRE does not perform any explicit logical deduction based on explicitly inferred rules, as would be done in an Inductive Logic
Programming system (e.g., [7]). Instead, it "precomputes the answers" to all queries during training,
by finding the matrix representation that models its training set. Once the representation is found,
many correct facts become "self-evident" and do not require explicit derivation. Humans may be
using a somewhat analogous mechanism (though not necessarily one with matrix multiplications),
since when mastering a new and complicated set of concepts, some humans start by relying heavily
on relatively explicit reasoning using the definitions. With experience, however, many nontrivial
correct facts may become intuitive to such an extent that experts can make true conjectures whose
explicit derivation would be long and difficult. New theorems are easily discovered when the representations of all the concepts make the new theorem intuitive and self-evident.
The sequential higher-order arithmetic task:
          errors on 5 test sets     mean test error
+1 (12)   0   0   0   2   4         1.2
+4 (12)   10  8   8   0   3         5.8
+6 (12)   0   0   4   9   0         2.6
+10 (12)  0   4   8   0   10        4.4

The sequential higher-order family trees task:
                 errors on 5 test sets     mean test error
has father (12)  0   0   0   10  0         2.0
has aunt (8)     0   0   0   8   0         1.6
has sister (6)   0   0   0   0   0         0.0
has nephew (8)   0   0   0   0   0         0.0
Table 5: Test results for the higher-order arithmetic task (top) and the higher-order family trees task
(bottom) when a held-out basic relation is learned from higher-order propositions after the rest of the
objects and relations have been learned and fixed. There were no errors on the training propositions.
Each entry shows the number of test errors, and the number of test cases is written in brackets.
Figure 2: A neural network that is equivalent to Matrix Relational Embedding (see text for details).
This is analogous to the idea that humans can avoid a lot of explicit search when playing chess
by "compiling" the results of previous searches into a more complex evaluation function that uses
features which make the value of a position immediately obvious.
This does not mean that MRE can deal with general logical data of this kind, because MRE will fail
when there are many relations that have many special cases. The special cases will prevent MRE
from finding low-dimensional matrices that fit the data well and cause it to generalize much more
poorly.
7
Adding knowledge incrementally
The previous section shows that MRE can learn to apply a basic relation correctly even though the
training set only contains higher-order propositions about the relation. We now show that this can be
achieved incrementally. After learning some objects, basic relations, and higher-order relations, we
freeze the weights in all of the matrices and learn the matrix for a new relation from a few higher-order propositions. Table 5 shows that this works about as well as learning all of the propositions at
the same time.
8
An equivalent neural network
Consider the neural network shown in Figure 2. The input vectors R and A represent a relation and
an object using a one-of-N encoding. If the outgoing weights from the two active input units are
set to R and A, these localist representations are converted into activity patterns in the first hidden
layer that represent the matrices R and A. The central part of the network consists of "sigma-pi"
units [12], all of whose incoming and outgoing connections have fixed weights of 1. The sigma-pi
units perform a matrix multiplication by first taking the products of pairs of activities in the first
hidden layer and then summing the appropriate subsets of these products. As a result, the activities
in the next layer represent the matrix RA. The output layer uses a "softmax" function to compute
the probability of each possible answer and we now show that if the weights and biases of the output
units are set correctly, this is equivalent to picking answers with a probability that is proportional to
their probability density under a spherical Gaussian centered at RA. Consider a particular output
unit that represents the answer B. If the weights into this unit are set to 2B and its bias is set to
−‖B‖², the total input to this unit will be:

$$\text{Total input} = -\|B\|^2 + 2\sum_{ij} (RA)_{ij} B_{ij} \qquad (3)$$
The probability that the softmax assigns to B will therefore be:
$$p(B|A,R) = \frac{e^{-\|B\|^2 + 2\sum_{ij}(RA)_{ij}B_{ij}}}{\sum_C e^{-\|C\|^2 + 2\sum_{ij}(RA)_{ij}C_{ij}}} = \frac{e^{-\|B\|^2 + 2\sum_{ij}(RA)_{ij}B_{ij} - \|RA\|^2}}{\sum_C e^{-\|C\|^2 + 2\sum_{ij}(RA)_{ij}C_{ij} - \|RA\|^2}} = \frac{e^{-\|RA-B\|^2}}{\sum_C e^{-\|RA-C\|^2}} \qquad (4)$$
Maximizing the log probability of p(B|R, A) is therefore equivalent to minimizing the cost function
given in Eq. 1.
The fact that MRE generalizes much better than a standard feedforward neural network on the family
trees task is due to two features. First, it uses the same representational scheme (i.e., the same
matrices) for the inputs and the outputs, which the standard net does not; a similar representational
scheme was used in [1] to accurately model natural language. Second, it uses "sigma-pi" units that
facilitate multiplicative interactions between representations. It is always possible to approximate
such interactions in a standard feedforward network, but it is often much better to build them into
the model [13, 5, 11].
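The equivalence established by Eqs. 3-4 is easy to verify numerically. The sketch below (toy data, assumed names) compares the softmax route (weights 2B, bias −‖B‖²) with the Gaussian posterior centred at RA.

import numpy as np

rng = np.random.RandomState(0)
RA = rng.randn(3, 3)                   # product of relation and object matrices
answers = [rng.randn(3, 3) for _ in range(5)]

# softmax route: total input -||B||^2 + 2<RA, B> per output unit (Eq. 3)
inputs = np.array([-np.sum(B ** 2) + 2 * np.sum(RA * B) for B in answers])
softmax = np.exp(inputs - inputs.max())
softmax /= softmax.sum()

# Gaussian route: exp(-||RA - B||^2), normalised over the candidates (Eq. 4)
gauss = np.array([np.exp(-np.sum((RA - B) ** 2)) for B in answers])
gauss /= gauss.sum()

assert np.allclose(softmax, gauss)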
Acknowledgments
We would like to thank Alberto Paccanaro and Dafna Shahaf for helpful discussions. This research
was supported by NSERC and CFI. GEH holds a Canada Research Chair in Machine Learning and
is a fellow of the Canadian Institute for Advanced Research.
References
[1] Y. Bengio, R. Ducharme, P. Vincent, and C. Janvin. A neural probabilistic language model. The Journal
of Machine Learning Research, 3:1137-1155, 2003.
[2] L.A.A. Doumas, J.E. Hummel, and C.M. Sandhofer. A theory of the discovery and predication of
relational concepts. Psychological Review, 115(1):1, 2008.
[3] G.E. Hinton. Learning distributed representations of concepts. Proceedings of the Eighth Annual Conference of the Cognitive Science Society, pages 1-12, 1986.
[4] J.E. Hummel and K.J. Holyoak. A symbolic-connectionist theory of relational inference and generalization. Psychological Review, 110(2):220-264, 2003.
[5] R. Memisevic and G.E. Hinton. Unsupervised learning of image transformations. Proceedings of IEEE
Conference on Computer Vision and Pattern Recognition, 2007.
[6] T.M. Mitchell. The need for biases in learning generalizations. Readings in Machine Learning. Morgan
Kaufmann, 1991.
[7] S. Muggleton and L. De Raedt. Inductive logic programming: Theory and methods. Journal of Logic
Programming, 19(20):629-679, 1994.
[8] R.C. O'Reilly. The LEABRA Model of Neural Interactions and Learning in the Neocortex. PhD thesis,
Carnegie Mellon University, 1996.
[9] A. Paccanaro. Learning Distributed Representations of Relational Data Using Linear Relational Embedding. PhD thesis, University of Toronto, 2002.
[10] A. Paccanaro and G. Hinton. Learning distributed representations of concepts using Linear Relational
Embedding. IEEE Transactions on Knowledge and Data Engineering, 13(2):232-245, 2001.
[11] R.P.N. Rao and D.H. Ballard. Development of localized oriented receptive fields by learning a translation-invariant code for natural images. Network: Computation in Neural Systems, 9(2):219-234, 1998.
[12] D.E. Rumelhart, G.E. Hinton, and J.L. McClelland. A general framework for parallel distributed processing. MIT Press Computational Models of Cognition and Perception Series, pages 45-76, 1986.
[13] J.B. Tenenbaum and W.T. Freeman. Separating style and content with bilinear models. Neural Computation, 12(6):1247-1283, 2000.
MCBoost: Multiple Classifier Boosting for Perceptual
Co-clustering of Images and Visual Features
Tae-Kyun Kim*
Sidney Sussex College
University of Cambridge
Cambridge CB2 3HU, UK
[email protected]
Roberto Cipolla
Department of Engineering
University of Cambridge
Cambridge CB2 1PZ, UK
[email protected]
Abstract
We present a new co-clustering problem of images and visual features. The problem involves a set of non-object images in addition to a set of object images and
features to be co-clustered. Co-clustering is performed in a way that maximises
discrimination of object images from non-object images, thus emphasizing discriminative features. This provides a way of obtaining perceptual joint-clusters
of object images and features. We tackle the problem by simultaneously boosting multiple strong classifiers which compete for images by their expertise. Each
boosting classifier is an aggregation of weak-learners, i.e. simple visual features.
The obtained classifiers are useful for object detection tasks which exhibit multimodalities, e.g. multi-category and multi-view object detection tasks. Experiments on a set of pedestrian images and a face data set demonstrate that the
method yields intuitive image clusters with associated features and is much superior to conventional boosting classifiers in object detection tasks.
1
Introduction
It is known that visual cells (visual features) selectively respond to imagery patterns in perception.
The learning process may be associated with co-clusters of visual features and imagery data in a way
that facilitates image data perception. We formulate this in the context of boosting classifiers with
simple visual features for the object detection task [3]. There are two sets of images: a set of object
images and a set of non-object images, labelled as positive and negative class members respectively.
There are also a huge number of simple image features, only a small fraction of which are selected to
discriminate the positive class from the negative class by $H(x) = \sum_t \alpha_t h_t(x)$, where x is an input
vector and $\alpha_t$, $h_t$ are the weight and the score of the t-th weak-learner using a single feature. As object
images typically exhibit multi-modalities, a single aggregation of simple features often does not
dichotomise all object images from non-object images. Our problem is to find subsets of object
images, each of which is associated with a set of features for maximising classification. Note that
image clusters to be obtained are coupled with selected features and likewise features to be selected
are dependent on image clusters, requiring a concurrent clustering of images and features.
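For concreteness, a weak-learner here is one feature response compared against a threshold; a minimal sketch (assumed names) of the strong score H(x) = Σ_t α_t h_t(x):

import numpy as np

def stump(x, feature, threshold):
    # weak-learner h_t: +1/-1 from a single feature response
    return 1.0 if x[feature] > threshold else -1.0

def strong_score(x, learners):
    # H(x) = sum_t alpha_t * h_t(x) over (alpha, feature, threshold) triples
    return sum(a * stump(x, f, th) for a, f, th in learners)

x = np.array([0.3, 0.8, 0.1])
print(strong_score(x, [(0.7, 1, 0.5), (0.4, 0, 0.6)]))  # 0.7*1 + 0.4*(-1) = 0.3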
See Figure 1 for an example where subsets of face images are pose-wise obtained with associated
features by the proposed method (Section 3). Features are placed around the eyes, nose, mouth, etc.,
as cues for discriminating faces from background. As such facial features are distributed differently mainly according to face pose, the obtained pose-wise face clusters are, therefore, intuitive
and desirable in perception. Note the challenges in achieving this: the input set of face images is
mixed up by different faces and lighting conditions as well as pose. Some are photographs of real faces
and the others are drawings. Desired image clusters are not observable in input space. See Figure 2
* Webpage: http://mi.eng.cam.ac.uk/~tkk22
[Figure 1 diagram: a face image set, a visual feature set and a random image set feed the method, which outputs face cluster-1 with feature set-1, face cluster-2 with feature set-2, and so on.]
Figure 1: Perceptual co-clusters of images and visual features. Given a set of face and random images
and simple visual features, the proposed method finds perceptual joint-clusters of face images and features,
which facilitates classification of face images from random images. Face clusters are pose-wise obtained.
for the result of the traditional unsupervised method (k-means clustering) applied to the face images.
Images of the obtained clusters are almost random with respect to pose. To obtain perceptual face
clusters, a method requires a discriminative process and part-based representations (like the simple
features used). Technically, we must be able to cope with an arbitrary initialisation of image clusters
(as target clusters are hidden) and feature selection among a huge number of simple visual features.
Figure 2: Image sets obtained by the k-means clustering method (two face clusters shown).

The proposed method (Section 3) has potential for wide applications in perceptual data exploration.
It generally solves a new co-clustering problem of a data set (e.g. a set of face images) and a feature
set (e.g. simple visual features) in a way to maximise discrimination of the data set from another
data set (e.g. a set of random images). The method is also useful for object detection tasks. Boosting
a classifier with simple features [3] is a state-of-the-art in object detection tasks. It delivers high
accuracy and is very time-efficient. Conventionally, multiple boosting classifiers are separately learnt
for multiple categories and/or multiple views of object images [6]. It is, however, tedious to manually
label category/pose for a large data set and, importantly, it is not clear how to define object categories
and the scope of each pose. Would there be a better partitioning for learning multiple boosting
classifiers? We let this be a part of automatic learning in the proposed method. It simultaneously
boosts multiple strong classifiers, each of which has expertise on a particular set of object images by
a set of weak-learners.
The remainder of this paper is arranged as follows: we briefly review the previous work in Section 2
and present our solution in Section 3. Experiments and conclusions are drawn in Section 4 and
Section 5 respectively.
2
Related work
Existing co-clustering work (e.g. [1]) is formulated as an unsupervised learning task. It simultaneously clusters rows and columns of a co-occurrence table by e.g. maximising mutual information
between the cluster variables. Conversely, we make use of class labels for discriminative learning.
Using a co-occurrence table in prior work is also prohibitive due to a huge number of visual features
that we consider.
Mixture of Experts [2] (MoE) jointly learns multiple classifiers and data partitions. It strongly emphasises local experts and is suitable when input data can be naturally divided into homogeneous
subsets, which is, however, often not possible as observed in Figure 2. In practice, it is difficult to
establish a good initial data partition and to perform expert selection based on localities. Note that
EM in MoE resorts to a local optimum. Furthermore, the data partitions of MoE could be undesirably affected by a large background class in our problem and the linear transformations used in MoE
are limited for delivering a meaningful part-based representation of images.
[Figure 3 graphics: (left) risk map over two-class data; (right) state diagram in which classifiers 1-3 exchange samples A, B, C over steps 1-5.]
Figure 3: (left) Risk map for given two-class data (circle and cross). The weak-learners (either a vertical or
horizontal line) found by the Adaboost method [7] are placed on high-risk regions. (right) State diagram for the
concept of MCBoost.
Boosting [7] is a sequential method of aggregating multiple (weak) classifiers. It finds new weak-learners
to correctly classify the samples on which previous weak-learners erred. While MoE makes a decision by
dynamically selected local experts, all weak-learners contribute to a decision with learnt weights in
boosting classifier. As afore-mentioned, expert selection is a difficult problem when an input space
is not naturally divided into sub-regions (clusters). Boosting classifier solves various non-linear
classification problems but cannot solve XOR problems where only half the data can be correctly
classified by a set of weak-learners. Two disjoint sets of weak-learners, i.e. two boosting classifiers, are required to conquer each half of the data by a set of weak-learners.
Torralba et al. have addressed joint-learning of multiple boosting classifiers for multiple category
and multiple view object detection [4]. The complexity of resulting classifiers is reduced by sharing
visual features among classifiers. Each classifier in their method is based on each of category-wise
or pose-wise clusters of object images, which requires manual labels for cateogry/pose, whereas we
optimise image clusters and boosting classifiers simultaneously.
3
MCBoost: multiple strong classifier boosting
Our formulation considers K strong classifiers, each of which is represented by a linear combination
of weak-learners as
$$H_k(x) = \sum_t \alpha_{kt} h_{kt}(x), \qquad k = 1, \ldots, K, \qquad (1)$$
where $\alpha_{kt}$ and $h_{kt}$ are the weight and the score of the t-th weak-learner of the k-th strong classifier. Each
strong classifier is devoted to a subset of input patterns (allowing repetition), and each weak-learner
in a classifier comprises a single visual feature and a threshold. For aggregating multiple strong
classifiers, we formulate Noisy-OR as
$$P(x) = 1 - \prod_k \left(1 - P_k(x)\right), \qquad (2)$$
where $P_k(x) = \frac{1}{1+\exp(-H_k(x))}$. It assigns samples to a positive class if any of the classifiers does and
assigns samples to a negative class if every classifier does. Conventional design in object detection
study [6] also favours OR decision as it does not require classifier selection. An individual classifier
is learnt from a subset of positive samples and all negative samples, enforcing a positive sample
to be accepted by one of the classifiers and a negative sample to be rejected by all. Our derivation
builds on the previous Noisy-OR Boost algorithm [5], which has been proposed for multiple instance
learning.
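A sketch of the Noisy-OR aggregation of Eq. 2 over the K strong-classifier scores (names assumed):

import numpy as np

def noisy_or(scores):
    # P(x) = 1 - prod_k (1 - P_k(x)), with P_k(x) = sigmoid(H_k(x))
    p_k = 1.0 / (1.0 + np.exp(-np.asarray(scores)))
    return 1.0 - np.prod(1.0 - p_k)

print(noisy_or([3.0, -4.0, -5.0]))   # high: one expert accepts the sample
print(noisy_or([-4.0, -4.0, -4.0]))  # low: every classifier rejects it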
The sample weights are initialised by random partitioning of positive samples, i.e. $w_{ki} = 1$ if $x_i \in k$
and $w_{ki} = 0$ otherwise, where i and k denote the i-th sample and the k-th classifier respectively. We set
$w_{ki} = 1/K$ for all k's for negative samples. For given weights, the method finds K weak-learners
Algorithm 1. MCBoost
Input: A data set (x_i, y_i) and a set of pre-defined weak-learners
Output: Multiple boosting classifiers H_k(x) = Σ_{t=1..T} α_kt h_kt(x), k = 1, ..., K
1. Compute a reduced set of weak-learners H by the risk map (4) and randomly initialise the weights w_ki.
2. Repeat for t = 1, ..., T:
3.   Repeat for k = 1, ..., K:
4.     Find weak-learners h_kt that maximise Σ_i w_ki · h_kt(x_i), h_kt ∈ H.
5.     Find the weak-learner weights α_kt that maximise J(H + α_kt h_kt).
6.     Update the weights by w_ki = (y_i − P(x_i)) / P(x_i) · P_k(x_i).
7.   End
8. End
Figure 4: Pseudocode of MCBoost algorithm
at the t-th round of boosting, to maximise

$$\sum_i w_{ki} \cdot h_{kt}(x_i), \qquad h_{kt} \in \mathcal{H}, \qquad (3)$$
where $h_{kt} \in \{-1, +1\}$ and $\mathcal{H}$ is a reduced set of weak-learners for speeding up the proposed
multiple classifier boosting. The reduced set is obtained by restricting the location of weak-learners
around the expected decision boundary. Each weak-learner, $h(x) = \mathrm{sign}(a^T x + b)$, where $a$ and $b$
represent a simple feature and its threshold respectively, can be represented by $a^T (x - x_o)$, where
$x_o$ is interpreted as the location of the weak-learner. By limiting $x_o$ to the data points that have
high risk to be misclassified, the complexity of searching weak-learners at each round of boosting is
greatly reduced. The risk is defined as
$$R(x_i) = \exp\left\{-\frac{\sum_{j \in N_i^B} \|x_i - x_j\|^2}{1 + \sum_{j \in N_i^W} \|x_i - x_j\|^2}\right\} \qquad (4)$$
where $N_i^B$ and $N_i^W$ are the sets of a predefined number of nearest neighbours of $x_i$ in the opposite
class and in the same class of $x_i$ (see Figure 3). The weak-learner weights $\alpha_{kt}$, $k = 1, \ldots, K$, are then
found to maximise $J(H + \alpha_{kt} h_{kt})$ by a line search. Following the AnyBoost method [8], we set the
sample weights as the derivative of the cost function with respect to the classifier score. For the cost
function $J = \log \prod_i P(x_i)^{y_i} (1 - P(x_i))^{(1-y_i)}$, where $y_i \in \{0, 1\}$ is the label of the i-th sample, the
weight of the k-th classifier over the i-th sample is updated by
$$w_{ki} = \frac{\partial J}{\partial H_k(x_i)} = \frac{y_i - P(x_i)}{P(x_i)} \cdot P_k(x_i). \qquad (5)$$
See Figure 4 for the pseudocode of the proposed method.
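Putting Eqs. 2, 3 and 5 together, the sketch below is our compact reading of the training loop in Figure 4, with axis-aligned stumps as weak-learners and a brute-force line search for α; it is illustrative rather than the authors' implementation, and the risk-map reduction of Eq. 4 is omitted (the full stump set is searched).

import numpy as np

def sigmoid(h):
    return 1.0 / (1.0 + np.exp(-h))

def joint_p(H):
    # Noisy-OR joint probability P(x_i) from the (K, n) strong scores
    return 1.0 - np.prod(1.0 - sigmoid(H), axis=0)

def log_lik(H, y):
    # J = sum_i y_i log P(x_i) + (1 - y_i) log(1 - P(x_i))
    P = np.clip(joint_p(H), 1e-12, 1.0 - 1e-12)
    return np.sum(y * np.log(P) + (1.0 - y) * np.log(1.0 - P))

def mcboost(X, y, K=3, T=30, seed=0):
    # X: (n, d) inputs; y: (n,) labels in {0, 1}; returns K stump lists.
    rng = np.random.RandomState(seed)
    n, d = X.shape
    H = np.zeros((K, n))                 # strong scores H_k(x_i)
    classifiers = [[] for _ in range(K)]
    # random partition of positives; negatives weighted 1/K everywhere
    part = rng.randint(K, size=n)
    W = np.where(y == 1, (part[None, :] == np.arange(K)[:, None]) * 1.0, 1.0 / K)
    for _ in range(T):
        for k in range(K):
            best = None
            for f in range(d):           # exhaustive stump search (step 4)
                for th in np.unique(X[:, f]):
                    for s in (1.0, -1.0):
                        h = s * np.where(X[:, f] > th, 1.0, -1.0)
                        score = np.sum(W[k] * h)
                        if best is None or score > best[0]:
                            best = (score, h, (f, th, s))
            _, h, params = best
            # crude line search over alpha for J (step 5)
            alphas = np.linspace(0.01, 2.0, 50)
            trial = H.copy()
            js = []
            for a in alphas:
                trial[k] = H[k] + a * h
                js.append(log_lik(trial, y))
            a = alphas[int(np.argmax(js))]
            H[k] += a * h
            classifiers[k].append((a,) + params)
            # weight update of Eq. 5 (step 6)
            Pk = sigmoid(H)
            P = np.maximum(joint_p(H), 1e-12)
            W = (y - P) / P * Pk
    return classifiers

The exhaustive stump search plays the role of step 4 of Algorithm 1; the paper instead restricts candidate weak-learner locations with the risk map of Eq. 4 to keep this step cheap.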
3.1 Data clustering

We propose a new data clustering method which assigns a positive sample x_i to the classifier (or
cluster) that has the highest P_k(x_i).
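In code, with the per-classifier probabilities stacked row-wise in a (K, m) array (names ours), this assignment is a one-liner:

import numpy as np

def assign_clusters(Pk):
    """Pk[k, i] = P_k(x_i); returns the cluster index of each positive sample."""
    return np.argmax(Pk, axis=0)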
The sample weight of the k-th classifier in (5) is determined by the joint probability P(x) and the
probability of the k-th classifier, P_k(x). For a negative class (y_i = 0), the weights only depend on the
probability of the k-th classifier: the classifier gives high weights to the negative samples that are misclassified by itself, independently of the other classifiers. For a positive class, high weights are assigned
to the samples that are misclassified jointly (i.e. the left term in (5)) but may be correctly classified
by the k-th classifier at the next rounds (i.e. high P_k(x)). That is, classifiers concentrate on samples in
their expertise through the rounds of boosting. This can be interpreted as data partitioning.
3.2 Examples

Figure 3 (right) illustrates the concept of the MCBoost algorithm. The method iterates two main
steps: learning weak-learners and updating sample weights. States in the figure represent the samples that are correctly classified by weak-learners at each step. The sample weighting (5) is represented by data re-allocation. Assume that a positive class has samples of three target clusters denoted
by A, B and C. Samples of more than two target clusters are initially assigned to every classifier.
Weak-learners are found to classify the dominant samples (bold letter) in each classifier (step 1). Classifiers then re-assign samples according to their expertise (step 2): samples C that are misclassified
by all are given more importance (bold letter). Samples B are moved to the third classifier as the
expert on B. The first classifier learns its next weak-learners for classifying samples C while the second
and third classifiers focus on samples A and B respectively (step 3). Similarly, samples A and C are
moved to their respective most expert classifiers (step 4) and all re-allocated samples are correctly classified
by weak-learners (step 5).

[Figure 5: Example of learning on an XOR classification problem. For a given random initialisation (three
different color blobs in the left), the method learns three classifiers that nicely settle into desired clusters and
decision boundaries (middle). The weak-learner weights (right) show the convergence. Plot data omitted.]
We present an example of an XOR classification problem (see Figure 5). The positive class (circles),
comprising three sub-clusters, and the negative class (crosses) in the background make the XOR configuration. Any single or double boosting classifier, therefore, cannot successfully dichotomise the
classes. We exploit vertical or horizontal lines as weak-learners and set the number of classifiers K
to three. We performed random partitioning of the positive samples (shown on the left by three different color blobs) for initialising the sample weights. The final decision boundaries and the tracks of
the data cluster centres of the three boosting classifiers are shown in the middle. Despite the mixed-up
initialisation, the method learns three classifiers that nicely settle into the target clusters after a
bit of jittering in the first few rounds. The weak-learner weights (on the right) show the convergence
of the three classifiers. Note that the method does not exploit any distance information between input
data points, by which conventional clustering methods could apparently yield the same data clusters
in this example. As exemplified in Figure 2, obtaining desired data clusters by conventional means
is, however, difficult in practice. The proposed method works well with random initialisations and
desirably exhibits quicker convergence when a better initialisation is given.
3.3 Discussion on mixture of experts and future work

The existing local optimisation method, MoE, suffers from the absence of a good initialisation solution, but has nice properties once a good initialisation exists. We have implemented MoE in the
AnyBoost framework. The sample probability in MoE is

    P(x_i) = 1 / (1 + \exp(-\sum_k Q_k(x_i) \cdot H_k(x_i)))

where Q_k(x_i) is the responsibility of the k-th classifier over x_i. Various clustering methods can define
the function Q_k(x_i). By taking the derivative of the cost function, the sample weight of the k-th classifier is given as w_{ki} = (y_i - P(x_i)) \cdot Q_k(x_i). An EM-like algorithm iterates each round of boosting
and the update of Q_k(x_i). Dynamic selection of local experts helps time-efficient classification as it
does not use all experts.
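For contrast with the Noisy-OR model of Eq. (2), the MoE sample probability and weights above read in code as (a sketch with our own naming):

import numpy as np

def moe_probability(Q, H):
    """Q, H: (K, m) arrays of responsibilities Q_k(x_i) and scores H_k(x_i)."""
    return 1.0 / (1.0 + np.exp(-np.sum(Q * H, axis=0)))

def moe_weights(y, Q, H):
    """AnyBoost-style weights w_ki = (y_i - P(x_i)) * Q_k(x_i)."""
    return (y - moe_probability(Q, H))[None, :] * Q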
Useful future studies on the MCBoost method include the development of a method to automatically
determine K, the number of classifiers. At the moment, we first try a large K and decide the right
number as the number of visually heterogeneous clusters obtained (see Section 4). A post-corrective
step on initial weak-learners would be useful for more efficient classification. When the classifiers
start from wrong initial clusters and oscillate between clusters until settling down, some initial weak-learners are wrong and others may be wasted making up for the wrong ones. Once the classifiers
find the right clusters, they exhibit convergence by decreasing the weak-learner weights.

[Figure 6: Perceptual clusters of pedestrian and face images: image cluster centres for pedestrian images
(K = 5) and face images (K = 3 and K = 9), shown with random images and simple visual features. Clusters
are found to maximise the discrimination power of pedestrian and face images from random images by simple
visual features. Image panels omitted.]
4 Experiments

We performed experiments using a set of INRIA pedestrian data [10] and PIE face data [9]. The
INRIA set contains 618 pedestrian images as a positive class and 2436 random images as a negative
class in training, and 589 pedestrian and 9030 random images in testing. The pedestrian images
show wide variations in background, human pose and shape, clothes and illumination (Figure 6).
The PIE data set involves 900 face images as a positive class (20 persons, 9 poses and 5 lighting
conditions) and 2436 random images as a negative class in training, and 900 face and 12180 random
images in testing. The 9 poses are distributed from left profile to right profile of the face, and the 5
lighting conditions make sharp changes in face appearance, as shown in Figure 6. Some facial parts
are not visible depending on both pose and illumination. All images are cropped and resized into
24×24 pixel images. A total number of 21780 simple rectangle features (as shown in Figure 1) were
exploited.
MCBoost learning was performed with the initial weights obtained by the k-means clustering method. Avoiding the case where any of the k-means clusters is too small (or zero) in size
helped quick convergence of the proposed method. We set the portion of high-risk data to
20% of the total samples for speeding up. The number of classifiers was set as K \in \{2, 3, 4, 5\} and
K \in \{3, 5, 7, 9\} for the INRIA and PIE data sets respectively. For all cases, every classifier converged
within 50 boosting rounds.
Figure 6 shows the cluster centres obtained by the proposed method. The object images were partitioned into K clusters (or classifiers) by assigning them to the classifier that has the highest P_k(x).
For the given pedestrian images, the first three cluster centres look unique and the last two are rather
redundant. The three pedestrian clusters obtained are intuitive. They emphasise the direction of
intensity changes at contours of the human body as discriminating cues of pedestrian images against
random images. It is interesting to see the distinction of upper and lower body in the second cluster,
which may be due to different clothes. For the PIE data set, the obtained face clusters reflect both
pose and illumination changes, which is somewhat different from our initial expectation of getting
purely pose-wise clusters as in the case of Figure 1. This result is, however, also reasonable when considering the strong illumination conditions that cause shadowing of face parts. For example, frontal
faces whose right-half side is not visible due to the lighting cannot share any features with those having
the left-half side not visible. Certain profile faces rather share more facial features (e.g. one eye, eye
brow and half a mouth) with the half-shadowed frontal faces, jointly making a cluster. All 9 face
clusters seem to capture unique characteristics of the face images.
We have also evaluated the proposed method in terms of classification accuracy. Figure 7 shows
false-negative versus false-positive curves of the MCBoost method and the AdaBoost method [7].

[Figure 7: ROC curves (false positives vs. false negatives, both on [0, 0.5]) for the pedestrian data (top four
panels, K = 2, 3, 4, 5) and the face data (bottom four panels, K = 3, 5, 7, 9, plus AdaBoost with the manual
pose label). MCBoost significantly outperformed the AdaBoost method for both data sets and all cluster
numbers K. MCBoost is also much superior to the AdaBoost method learnt with the manual pose label
(bottom right). Curve data omitted.]

We set all
conditions (e.g. the number of weak-learners) equivalent in both methods. The k-means clustering
method was applied to the positive samples. In the AdaBoost method, boosting classifiers were individually learnt from the positive samples of each cluster and all negative samples. The clusters obtained by the
k-means method were exploited as the initialisation of the MCBoost method. For the PIE data set, we
also performed data partitioning by the manual pose label and learnt boosting classifiers separately
for each pose with the AdaBoost method. For both the pedestrian and face experiments and all different
numbers of classifiers K, MCBoost significantly outperformed the AdaBoost method by finding optimal
data clusters and associated feature sets. Our method is also much superior to AdaBoost learnt
with manual pose labels (bottom right).
In the AdaBoost method, increasing the number of clusters deteriorated the accuracy for the
pedestrian data, whereas it increased the performance for the face data. This may be
explained by the number of meaningful data clusters. We observed in Figure 6 that there
are only three heterogeneous pedestrian clusters while there are more than nine face clusters. In
general, a smaller number of positive samples in each classifier (i.e. a larger K) causes performance degradation if it is not counteracted by finding meaningful clusters. We deduce, by a similar reason, that the performance of our method
was not much boosted when the number of classifiers was increased (although it tended to gradually
improve the accuracy for both data sets).

[Figure 8: Example pedestrian detection result. Image omitted.]
Figure 8 shows an example pedestrian detection result. Scanning the example image yields a total
number of 172,277 image patches to classify. Our method ran in 3.6 seconds as non-optimised
Matlab code on a 3GHz CPU PC.
5 Conclusions

We have introduced a discriminative co-clustering problem of images and visual features and have
proposed a method of multiple classifier boosting called MCBoost. It simultaneously learns image
clusters and boosting classifiers, each of which has expertise on an image cluster. The method
works well with either random initialisation or initialisation by conventional unsupervised clustering
methods. We have shown in the experiments that the proposed method yields perceptual co-clusters
of images and features. In object detection tasks, it significantly outperforms two conventional
designs that individually learn multiple boosting classifiers from the clusters obtained by the k-means
clustering method and by pose labels.

We will apply MCBoost to various other co-clustering problems in the future. Some useful studies
on the MCBoost method have also been discussed in Section 3.3. Learning with a more exhaustive
training set would improve the performance of the method in object detection tasks.
Acknowledgements
The authors are grateful to many people who have helped by proofreading drafts and providing
comments and suggestions. They include Z. Ghahramani, B. Stenger, T. Woodley, O. Arandjelovic,
F. Viola and J. Kittler. T-K. Kim is financially supported by the research fellowship of the Sidney
Sussex College of the University of Cambridge.
References
[1] I.S. Dhillon, S. Mallela and D.S. Modha, Information-theoretic co-clustering, Proc. ACM SIGKDD Int'l Conf. on Knowledge Discovery and Data Mining, pages 89-98, 2003.
[2] M.I. Jordan and R.A. Jacobs, Hierarchical mixtures of experts and the EM algorithm, Neural Computation, 6(2):181-214, 1994.
[3] P. Viola and M. Jones, Robust real-time object detection, Int'l J. Computer Vision, 57(2):137-154, 2002.
[4] A. Torralba, K.P. Murphy and W.T. Freeman, Sharing visual features for multiclass and multiview object detection, IEEE Trans. on Pattern Analysis and Machine Intelligence, 29(5):854-869, 2007.
[5] P. Viola, J.C. Platt and C. Zhang, Multiple instance boosting for object detection, Proc. Advances in Neural Information Processing Systems, pages 1417-1426, 2006.
[6] S.Z. Li and Z. Zhang, FloatBoost learning and statistical face detection, IEEE Trans. on Pattern Analysis and Machine Intelligence, 26(9):1112-1123, 2004.
[7] R. Schapire, The strength of weak learnability, Machine Learning, 5(2):197-227, 1990.
[8] L. Mason, J. Baxter, P. Bartlett and M. Frean, Boosting algorithms as gradient descent, Proc. Advances in Neural Information Processing Systems, pages 512-518, 2000.
[9] T. Sim, S. Baker and M. Bsat, The CMU Pose, Illumination, and Expression Database, IEEE Trans. on Pattern Analysis and Machine Intelligence, 25(12):1615-1618, 2003.
[10] N. Dalal and B. Triggs, Histograms of oriented gradients for human detection, Proc. IEEE Conf. Computer Vision and Pattern Recognition, pages 886-893, 2005.
2,739 | 3,484 | Mind the Duality Gap:
Logarithmic regret algorithms for online optimization
Sham M. Kakade
Toyota Technological Institute at Chicago
[email protected]
Shai Shalev-Shwartz
Toyota Technological Institute at Chicago
[email protected]
Abstract
We describe a primal-dual framework for the design and analysis of online
strongly convex optimization algorithms. Our framework yields the tightest
known logarithmic regret bounds for Follow-The-Leader and for the gradient descent algorithm proposed in Hazan et al. [2006]. We then show that one can interpolate between these two extreme cases. In particular, we derive a new algorithm
that shares the computational simplicity of gradient descent but achieves lower
regret in many practical situations. Finally, we further extend our framework for
generalized strongly convex functions.
1 Introduction

In recent years, online regret minimizing algorithms have become widely used and empirically successful algorithms for many machine learning problems. Notable examples include efficient learning algorithms for structured prediction and ranking problems [Collins, 2002, Crammer et al., 2006].
Most of these empirically successful algorithms are based on algorithms which are tailored to general convex functions, whose regret is O(\sqrt{T}). Rather recently, there is a growing body of work
providing online algorithms for strongly convex loss functions, with regret guarantees that are only
O(log T). These algorithms have potential to be highly applicable since many machine learning
optimization problems are in fact strongly convex, either with strongly convex loss functions (e.g.
log loss, square loss) or, indirectly, via strongly convex regularizers (e.g. L2 or KL based regularization). Note that in this latter case, the loss function itself may only be convex, but a strongly
convex regularizer effectively makes this a strongly convex optimization problem (e.g. the SVM optimization problem uses the hinge loss with L2 regularization). The aim of this paper is to provide
a template for deriving a wider class of regret-minimizing algorithms for online strongly convex
programming.

Online convex optimization takes place in a sequence of consecutive rounds. At each round, the
learner predicts a vector w_t \in S \subseteq R^n, and the environment responds with a convex loss function,
\ell_t : S \to R. The goal of the learner is to minimize the difference between his cumulative loss and
the cumulative loss of the optimal fixed vector,

    \sum_{t=1}^T \ell_t(w_t) - \min_{w \in S} \sum_{t=1}^T \ell_t(w).

This is termed "regret" since it measures how "sorry" the learner is, in retrospect, not to have predicted the optimal
vector.
Roughly speaking, the family of regret minimizing algorithms (for general convex functions) can be
seen as varying on two axes, the "style" and the "aggressiveness" of the update. In addition to online
algorithms' relative simplicity, the empirical successes are also due to having these two knobs to tune
for the problem at hand (which determine the nature of the regret bound). By style, we mean updates
which favor either rotational invariance (such as gradient descent like update rules) or sparsity (like
the multiplicative updates). Of course there is a much richer family here, including the Lp updates.
By the aggressiveness of the update, we mean how much the algorithm moves its decision to be
consistent with the most recent loss functions. For example, the perceptron algorithm makes no update
when there is no error. In contrast, there is a family of algorithms which update more aggressively
when there is a margin mistake. These algorithms are shown to have improved performance
(see for example the experimental study in Shalev-Shwartz and Singer [2007b]).

While historically much of the analysis of these algorithms has been done on a case by case basis,
in retrospect, the proof techniques have become somewhat boilerplate, which has led to a growing
body of work to unify these analyses (see Cesa-Bianchi and Lugosi [2006] for a review). Perhaps the
most unified view of these algorithms is the "primal-dual" framework of Shalev-Shwartz and Singer
[2006], Shalev-Shwartz [2007], for which the gamut of these algorithms can be largely viewed as
special cases. Two aspects are central in providing this unification. First, the framework works with
a complexity function, which determines the style of algorithm and the nature of the regret guarantee
(if this function is the L2 norm, then one obtains gradient like updates, and if this function is the KL-distance, then one obtains multiplicative updates). Second, the algorithm maintains both "primal"
and "dual" variables. Here, the primal objective function is \sum_{t=1}^T \ell_t(w) (where \ell_t is the loss
function provided at round t), and one can construct a dual objective function D_t(\cdot), which only
depends on the loss functions \ell_1, \ell_2, ..., \ell_{t-1}. The algorithm works by incrementally increasing the
dual objective value (in an online manner), which can be done since each D_t is only a function of the
previous loss functions. By weak duality, this can be seen as decreasing the duality gap. The level
of aggressiveness is seen to be how fast the algorithm is attempting to increase the dual objective
value.
This paper focuses on extending the duality framework for online convex programming to the case
of strongly convex functions. This analysis provides a more unified and intuitive view of the extant
algorithms for online strongly convex programming. An important observation we make is that any
\sigma-strongly convex loss function can be rewritten as \ell_i(w) = f(w) + g_i(w), where f is a fixed \sigma-strongly convex function (i.e. f does not depend on i), and g_i is a convex function. Therefore, after t
online rounds, the amount of intrinsic strong convexity we have in the primal objective \sum_{i=1}^t \ell_i(w)
is at least \sigma t. In particular, this explains the learning rate of 1/(\sigma t) proposed in the gradient descent
algorithm of Hazan et al. [2006]. Indeed, we show that our framework includes the gradient descent
algorithm of Hazan et al. [2006] as an important special case, in which the aggressiveness level
is minimal. At the most aggressive end, our framework yields the Follow-The-Leader algorithm.
Furthermore, the template algorithm serves as a vehicle for deriving new algorithms (which enjoy
logarithmic regret guarantees).

The remainder of the paper is outlined as follows. We first provide background on convex duality. As
a warmup, in Section 3, we present an intuitive primal-dual analysis of Follow-The-Leader (FTL),
when f is the Euclidean norm. This naturally leads to a more general primal-dual algorithm (for
which FTL is a special case), which we present in Section 4. Next, we further generalize our
algorithmic framework to include strongly convex complexity functions f with respect to arbitrary
norms \|\cdot\|. We note that the introduction of a complexity function was already provided in Shalev-Shwartz and Singer [2007a], but the analysis is rather specialized and does not have a knob which
can tune the aggressiveness of the algorithm. Finally, in Sec. 6 we conclude with a side-by-side
comparison of our algorithmic framework for strongly convex functions and the framework for
(non-strongly) convex functions given in Shalev-Shwartz [2007].
2 Mathematical Background

We denote scalars with lower case letters (e.g. w and \sigma), and vectors with bold face letters (e.g. w
and \lambda). The inner product between vectors x and w is denoted by \langle x, w \rangle. To simplify our notation,
given a sequence of vectors \lambda_1, ..., \lambda_t or a sequence of scalars \sigma_1, ..., \sigma_t, we use the shorthand

    \lambda_{1:t} = \sum_{i=1}^t \lambda_i   and   \sigma_{1:t} = \sum_{i=1}^t \sigma_i.

Sets are designated by upper case letters (e.g. S). The set of non-negative real numbers is denoted
by R_+. For any k \geq 1, the set of integers \{1, ..., k\} is denoted by [k]. A norm of a vector x is
denoted by \|x\|. The dual norm is defined as \|\lambda\|_* = \sup\{\langle x, \lambda \rangle : \|x\| \leq 1\}. For example, the
Euclidean norm, \|x\|_2 = (\langle x, x \rangle)^{1/2}, is dual to itself, and the L1 norm, \|x\|_1 = \sum_i |x_i|, is dual to
the L_\infty norm, \|x\|_\infty = \max_i |x_i|.
FOR t = 1, 2, ..., T:
  Define w_t = -(1/\sigma_{1:(t-1)}) \lambda^t_{1:(t-1)}
  Receive a function \ell_t(w) = (\sigma_t/2)\|w\|^2 + g_t(w) and suffer loss \ell_t(w_t)
  Update \lambda^{t+1}_1, ..., \lambda^{t+1}_t s.t. the following holds:
    (\lambda^{t+1}_1, ..., \lambda^{t+1}_t) \in argmax_{\lambda_1, ..., \lambda_t} D_{t+1}(\lambda_1, ..., \lambda_t)

Figure 1: A primal-dual view of Follow-the-Leader. Here the algorithm's decision w_t is the best decision
with respect to the previous losses. This presentation exposes the implicit role of the dual variables. Slightly
abusing notation, \lambda_{1:0} = 0, so that w_1 = 0. See text.
We next recall a few definitions from convex analysis. A function f is \sigma-strongly convex if

    f(\alpha u + (1-\alpha) v) \leq \alpha f(u) + (1-\alpha) f(v) - (\sigma/2) \alpha (1-\alpha) \|u - v\|_2^2.

In Sec. 5 we generalize the above definition to arbitrary norms. If a function f is \sigma-strongly convex,
then the function g(w) = f(w) - (\sigma/2)\|w\|^2 is convex.

The Fenchel conjugate of a function f : S \to R is defined as

    f^*(\lambda) = \sup_{w \in S} \langle w, \lambda \rangle - f(w).

If f is closed and convex, then the Fenchel conjugate of f^* is f itself (a function is closed if for
all \alpha > 0 the level set \{w : f(w) \leq \alpha\} is a closed set). It is straightforward to verify that the
function f(w) = (1/2)\|w\|_2^2 is conjugate to itself. The definition of f^* also implies that for c > 0 we have
(c f)^*(\lambda) = c f^*(\lambda/c).

A vector \lambda is a sub-gradient of a function f at w if for all w' \in S, we have that f(w') - f(w) \geq
\langle w' - w, \lambda \rangle. The differential set of f at w, denoted \partial f(w), is the set of all sub-gradients of f at
w. If f is differentiable at w, then \partial f(w) consists of a single vector which amounts to the gradient
of f at w and is denoted by \nabla f(w).

The Fenchel-Young inequality states that for any w and \lambda we have that f(w) + f^*(\lambda) \geq \langle w, \lambda \rangle.
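As a concrete worked example (ours, not from the paper), the conjugate of a scaled squared norm follows directly from this definition:

f(\mathbf{w}) = \tfrac{c}{2}\|\mathbf{w}\|_2^2
\quad\Longrightarrow\quad
f^*(\boldsymbol{\lambda}) = \sup_{\mathbf{w}}\, \langle \mathbf{w}, \boldsymbol{\lambda}\rangle - \tfrac{c}{2}\|\mathbf{w}\|_2^2
= \tfrac{1}{2c}\|\boldsymbol{\lambda}\|_2^2,

with the supremum attained at w = \lambda / c; this is consistent with the rule (c f)^*(\lambda) = c f^*(\lambda/c) applied to f(w) = (1/2)\|w\|_2^2.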
Sub-gradients play an important role in the definition of the Fenchel conjugate. In particular, the
following lemma, whose proof can be found in Borwein and Lewis [2006], states that if \lambda \in \partial f(w)
then the Fenchel-Young inequality holds with equality.

Lemma 1  Let f be a closed and convex function and let \partial f(w') be its differential set at w'. Then,
for all \lambda' \in \partial f(w'), we have f(w') + f^*(\lambda') = \langle \lambda', w' \rangle.
We make use of the following variant of Fenchel duality (see the appendix for more details):

    \max_{\lambda_1, ..., \lambda_T} -f^*(-\sum_{t=1}^T \lambda_t) - \sum_{t=1}^T g_t^*(\lambda_t) \leq \min_w f(w) + \sum_{t=1}^T g_t(w).    (1)

3 Warmup: A Primal-Dual View of Follow-The-Leader
In this section, we provide a dual analysis of the FTL algorithm. The dual view of FTL will help
us to derive a family of logarithmic regret algorithms for online convex optimization with strongly
convex functions.

Recall that the FTL algorithm is defined as follows:

    w_t = argmin_w \sum_{i=1}^{t-1} \ell_i(w).    (2)

For each i \in [t-1] define g_i(w) = \ell_i(w) - (\sigma_i/2)\|w\|^2, where \sigma_i is the largest scalar such that g_i is
still a convex function. The assumption that \ell_i is \sigma-strongly convex guarantees that \sigma_i \geq \sigma. We can
therefore rewrite the objective function on the right-hand side of Eq. (2) as

    P_t(w) = (\sigma_{1:(t-1)}/2) \|w\|^2 + \sum_{i=1}^{t-1} g_i(w)    (3)

(recall that \sigma_{1:(t-1)} = \sum_{i=1}^{t-1} \sigma_i). The Fenchel dual optimization problem (see Sec. 2) is to maximize
the following dual objective function:

    D_t(\lambda_1, ..., \lambda_{t-1}) = -(1/(2\sigma_{1:(t-1)})) \|\lambda_{1:(t-1)}\|^2 - \sum_{i=1}^{t-1} g_i^*(\lambda_i).    (4)

Let (\lambda^t_1, ..., \lambda^t_{t-1}) be the maximizer of D_t. The relation between the optimal dual variables and
the optimal primal vector is given by (see again Sec. 2)

    w_t = -(1/\sigma_{1:(t-1)}) \lambda^t_{1:(t-1)}.    (5)
Throughout this section we assume that strong duality holds (i.e. Eq. (1) holds with equality). See
the appendix for sufficient conditions. In particular, under this assumption, we have that the above
setting for w_t is in fact a minimizer of the primal objective, since (\lambda^t_1, ..., \lambda^t_{t-1}) maximizes the dual
objective (see the appendix). The primal-dual view of Follow-the-Leader is presented in Figure 1.
Denote

    \Delta_t = D_{t+1}(\lambda^{t+1}_1, ..., \lambda^{t+1}_t) - D_t(\lambda^t_1, ..., \lambda^t_{t-1}).    (6)
To analyze the FTL algorithm, we first note that (by strong duality)

    \sum_{t=1}^T \Delta_t = D_{T+1}(\lambda^{T+1}_1, ..., \lambda^{T+1}_T) = \min_w P_{T+1}(w) = \min_w \sum_{t=1}^T \ell_t(w).    (7)

Second, the fact that (\lambda^{t+1}_1, ..., \lambda^{t+1}_t) is the maximizer of D_{t+1} implies that for any \lambda we have

    \Delta_t \geq D_{t+1}(\lambda^t_1, ..., \lambda^t_{t-1}, \lambda) - D_t(\lambda^t_1, ..., \lambda^t_{t-1}).    (8)

The following central lemma shows that there exists \lambda such that the right-hand side of the above is
sufficiently large.
Lemma 2  Let (\lambda_1, ..., \lambda_{t-1}) be an arbitrary sequence of vectors. Denote w = -(1/\sigma_{1:(t-1)}) \lambda_{1:(t-1)},
let v \in \partial \ell_t(w), and let \lambda = v - \sigma_t w. Then, \lambda \in \partial g_t(w) and

    D_{t+1}(\lambda_1, ..., \lambda_{t-1}, \lambda) - D_t(\lambda_1, ..., \lambda_{t-1}) = \ell_t(w) - \|v\|^2 / (2 \sigma_{1:t}).
Proof  We prove the lemma for the case t > 1. The case t = 1 can be proved similarly. Since
\ell_t(w) = (\sigma_t/2)\|w\|^2 + g_t(w) and v \in \partial \ell_t(w), we have that \lambda \in \partial g_t(w). Denote
\bar{\Delta}_t = D_{t+1}(\lambda_1, ..., \lambda_{t-1}, \lambda) - D_t(\lambda_1, ..., \lambda_{t-1}). Simple algebraic manipulations yield

  \bar{\Delta}_t = -(1/(2\sigma_{1:t})) \|\lambda_{1:(t-1)} + \lambda\|^2 + (1/(2\sigma_{1:(t-1)})) \|\lambda_{1:(t-1)}\|^2 - g_t^*(\lambda)

       = (\|\lambda_{1:(t-1)}\|^2 / 2) (1/\sigma_{1:(t-1)} - 1/\sigma_{1:t}) + \langle w, \lambda \rangle (\sigma_{1:(t-1)}/\sigma_{1:t}) - \|\lambda\|^2/(2\sigma_{1:t}) - g_t^*(\lambda)

       = (\sigma_t \|w\|^2 / 2) (1 - \sigma_t/\sigma_{1:t}) + \langle w, \lambda \rangle (1 - \sigma_t/\sigma_{1:t}) - \|\lambda\|^2/(2\sigma_{1:t}) - g_t^*(\lambda)

       = [ (\sigma_t \|w\|^2)/2 + \langle w, \lambda \rangle - g_t^*(\lambda) ]  -  [ (\sigma_t^2 \|w\|^2)/(2\sigma_{1:t}) + (\sigma_t \langle w, \lambda \rangle)/\sigma_{1:t} + \|\lambda\|^2/(2\sigma_{1:t}) ]
               \________________ A ________________/     \_______________________________ B _______________________________/

Since \lambda \in \partial g_t(w), Lemma 1 thus implies that \langle w, \lambda \rangle - g_t^*(\lambda) = g_t(w). Therefore, A = \ell_t(w).
Next, we note that B = \|\sigma_t w + \lambda\|^2 / (2\sigma_{1:t}). We have thus shown that \bar{\Delta}_t = \ell_t(w) - \|\sigma_t w + \lambda\|^2 / (2\sigma_{1:t}). Plugging
the definition of \lambda into the above, we conclude our proof.
FOR t = 1, 2, ..., T:
  Define w_t = -(1/\sigma_{1:(t-1)}) \lambda^t_{1:(t-1)}
  Receive a function \ell_t(w) = (\sigma_t/2)\|w\|^2 + g_t(w) and suffer loss \ell_t(w_t)
  Update \lambda^{t+1}_1, ..., \lambda^{t+1}_t s.t. the following holds:
    \exists \lambda_t \in \partial g_t(w_t)  s.t.  D_{t+1}(\lambda^{t+1}_1, ..., \lambda^{t+1}_t) \geq D_{t+1}(\lambda^t_1, ..., \lambda^t_{t-1}, \lambda_t)

Figure 2: A primal-dual algorithmic framework for online convex optimization. Again, w_1 = 0.

Combining Lemma 2 with Eq. (7) and Eq. (8) we obtain the following:
Corollary 1  Let \ell_1, ..., \ell_T be a sequence of functions such that for all t \in [T], \ell_t is \sigma_t-strongly
convex. Assume that the FTL algorithm runs on this sequence and for each t \in [T], let v_t be in
\partial \ell_t(w_t). Then,

    \sum_{t=1}^T \ell_t(w_t) - \min_w \sum_{t=1}^T \ell_t(w) \leq (1/2) \sum_{t=1}^T \|v_t\|^2 / \sigma_{1:t}.    (9)

Furthermore, let L = \max_t \|v_t\| and assume that for all t \in [T], \sigma_t \geq \sigma. Then, the regret is
bounded by (L^2 / (2\sigma)) (\log(T) + 1).
If we are dealing with the square loss \ell_t(w) = \|w - \theta_t\|_2^2 (where nature chooses \theta_t), then it is
straightforward to see that Eq. (8) holds with equality, and this leads to the previous regret bound
holding with equality. This equality is the underlying reason that the FTL strategy is a minimax
strategy (see Abernethy et al. [2008] for a proof of this claim).
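The following self-contained numerical sketch (entirely ours) illustrates Corollary 1 on such squared losses, for which FTL reduces to a running mean:

import numpy as np

rng = np.random.default_rng(0)
T, n = 1000, 5
thetas = rng.normal(size=(T, n))
loss = lambda w, th: 0.5 * np.sum((w - th) ** 2)   # 1-strongly convex in w

w, run_sum, cum = np.zeros(n), np.zeros(n), 0.0
for t in range(T):
    cum += loss(w, thetas[t])
    run_sum += thetas[t]
    w = run_sum / (t + 1)            # FTL: argmin of the first t losses
best = thetas.mean(axis=0)           # best fixed comparator in hindsight
regret = cum - sum(loss(best, th) for th in thetas)
print(regret / np.log(T))            # stays bounded: the regret grows like O(log T)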
4 A Primal-Dual Algorithm for Online Strongly Convex Optimization

In the previous section, we provided a dual analysis of FTL. Here, we extend the analysis and derive
a more general algorithmic framework for online optimization.

We start by examining the analysis of the FTL algorithm. We first make the important observation
that Lemma 2 is not specific to the FTL algorithm and in fact holds for any configuration of dual
variables. Consider an arbitrary sequence of dual variables: (\lambda^2_1), (\lambda^3_1, \lambda^3_2), ..., (\lambda^{T+1}_1, ..., \lambda^{T+1}_T),
and denote \Delta_t as in Eq. (6). Using weak duality, we can replace the equality in Eq. (7) with the
following inequality that holds for any sequence of dual variables:

    \sum_{t=1}^T \Delta_t = D_{T+1}(\lambda^{T+1}_1, ..., \lambda^{T+1}_T) \leq \min_w P_{T+1}(w) = \min_w \sum_{t=1}^T \ell_t(w).    (10)

A summary of the algorithmic framework is given in Fig. 2.
The following theorem, a direct corollary of the previous equation and Lemma 2, shows that all
instances of the framework achieve logarithmic regret.

Theorem 1  Let \ell_1, ..., \ell_T be a sequence of functions such that for all t \in [T], \ell_t is \sigma_t-strongly
convex. Then, any algorithm that can be derived from Fig. 2 satisfies

    \sum_{t=1}^T \ell_t(w_t) - \min_w \sum_{t=1}^T \ell_t(w) \leq (1/2) \sum_{t=1}^T \|v_t\|^2 / \sigma_{1:t},

where v_t \in \partial \ell_t(w_t).

Proof  Let \Delta_t be as defined in Eq. (6). The last condition in the algorithm implies that

    \Delta_t \geq D_{t+1}(\lambda^t_1, ..., \lambda^t_{t-1}, v_t - \sigma_t w_t) - D_t(\lambda^t_1, ..., \lambda^t_{t-1}).    (11)

The proof follows directly by combining the above with Eq. (10) and Lemma 2.

We conclude this section by deriving several algorithms from the framework.
Example 1 (Follow-The-Leader)  As we have shown in Sec. 3, the FTL algorithm (Fig. 1) is equivalent to fully optimizing the dual variables at each online round. This update clearly satisfies the condition
in Fig. 2 and is therefore a special case.
Example 2 (Gradient-Descent)  Following Hazan et al. [2006], Bartlett et al. [2007] suggested the
following update rule for differentiable strongly convex functions:

    w_{t+1} = w_t - (1/\sigma_{1:t}) \nabla \ell_t(w_t).    (12)

Using a simple inductive argument, it is possible to show that the above update rule is equivalent to
the following update rule of the dual variables:

    (\lambda^{t+1}_1, ..., \lambda^{t+1}_t) = (\lambda^t_1, ..., \lambda^t_{t-1}, \nabla \ell_t(w_t) - \sigma_t w_t).    (13)

Clearly, this update rule satisfies the condition in Fig. 2. Therefore our framework encompasses this
algorithm as a special case.
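In code, the update (12) together with its dual form (13) looks as follows (a sketch with our own naming; grad_loss(t, w) is assumed to return the gradient of the t-th loss at w):

import numpy as np

def online_gradient_descent(grad_loss, sigmas, w0, T):
    """Primal update (12); the equivalent dual variables of (13) are kept for illustration."""
    w, sig_sum, lambdas, iterates = w0.copy(), 0.0, [], [w0.copy()]
    for t in range(T):
        g = grad_loss(t, w)
        lambdas.append(g - sigmas[t] * w)   # dual update (13)
        sig_sum += sigmas[t]                # sigma_{1:t}
        w = w - g / sig_sum                 # primal update (12)
        iterates.append(w.copy())
    return iterates, lambdas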
Example 3 (Online Coordinate-Dual-Ascent)  The FTL and Gradient-Descent updates are
two extreme cases of our algorithmic framework. The former makes the largest possible increase of
the dual while the latter makes the smallest possible increase that still satisfies the sufficient dual
increase requirement. Intuitively, the FTL method should have smaller regret as it consumes more
of its potential earlier in the optimization process. However, its computational complexity is large
as it requires a full blown optimization procedure at each online round. A possible compromise is
to fully optimize the dual objective but only with respect to a small number of dual variables. In the
extreme case, we optimize only with respect to the last dual variable. Formally, we let

    \lambda^{t+1}_i = \lambda^t_i                                                        if i < t,
    \lambda^{t+1}_i = argmax_{\lambda_t} D_{t+1}(\lambda^t_1, ..., \lambda^t_{t-1}, \lambda_t)   if i = t.

Clearly, the above update satisfies the requirement in Fig. 2 and therefore enjoys a logarithmic regret
bound. The computational complexity of performing this update is often small as we optimize over
a single dual vector. Occasionally, it is possible to obtain a closed-form solution of the optimization
problem, and in these cases the computational complexity of the coordinate-dual-ascent update is
identical to that of the gradient-descent method.
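A minimal sketch of one round of this update, maximizing D_{t+1} over the single new dual vector numerically (the names and the use of a generic scipy optimizer are our own choices; in practice one would exploit a closed form when available):

import numpy as np
from scipy.optimize import minimize

def coordinate_dual_ascent_step(D_next, lambdas_prev, n):
    """Keep lambda_1..lambda_{t-1} fixed; maximize D_{t+1} over the last dual vector.

    D_next(lams) evaluates D_{t+1} on a list of dual vectors; n is the dimension.
    """
    neg_dual = lambda lam_t: -D_next(lambdas_prev + [lam_t])
    res = minimize(neg_dual, np.zeros(n))
    return lambdas_prev + [res.x]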
5 Generalized strongly convex functions

In this section, we extend our algorithmic framework to deal with generalized strongly convex functions. We first need the following generalized definition of strong convexity.

Definition 1  A continuous function f is \sigma-strongly convex over a convex set S with respect to a
norm \|\cdot\| if S is contained in the domain of f and for all v, u \in S and \alpha \in [0, 1] we have

    f(\alpha v + (1-\alpha) u) \leq \alpha f(v) + (1-\alpha) f(u) - (\sigma/2) \alpha (1-\alpha) \|v - u\|^2.    (14)
It is straightforward to show that the function f(w) = (1/2)\|w\|_2^2 is strongly convex with respect to the
Euclidean norm. Less trivial examples are given below.

Example 4  The function f(w) = \sum_{i=1}^n w_i \log(w_i / (1/n)) is strongly convex over the probability simplex, S = \{w \in R^n_+ : \|w\|_1 = 1\}, with respect to the L1 norm. Its conjugate function is
f^*(\theta) = \log((1/n) \sum_{i=1}^n \exp(\theta_i)).

Example 5  For q \in (1, 2), the function f(w) = (1/(2(q-1))) \|w\|_q^2 is strongly convex over S = R^n with
respect to the L_q norm. Its conjugate function is f^*(\theta) = (1/(2(p-1))) \|\theta\|_p^2, where p = (1 - 1/q)^{-1}.
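As a quick numerical sanity check of Example 5 (our own snippet), the closed-form conjugate can be compared with a direct maximization of \langle w, \theta \rangle - f(w):

import numpy as np
from scipy.optimize import minimize

q = 1.5
p = 1.0 / (1.0 - 1.0 / q)                                   # here p = 3
f = lambda w: np.linalg.norm(w, q) ** 2 / (2 * (q - 1))
f_star = lambda th: np.linalg.norm(th, p) ** 2 / (2 * (p - 1))

theta = np.random.default_rng(0).normal(size=4)
res = minimize(lambda w: f(w) - w @ theta, np.zeros(4))
print(-res.fun, f_star(theta))    # the two values should approximately agree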
For proofs, see for example Shalev-Shwartz [2007]. In the appendix, we list several important
properties of strongly convex functions. In particular, the Fenchel conjugate of a strongly convex
function is differentiable.
[Left: general convex optimization]
INPUT: A strongly convex function f
FOR t = 1, 2, ..., T:
 1) Define w_t = \nabla f^*(-\lambda^t_{1:(t-1)} / \sqrt{t})
 2) Receive a function \ell_t
 3) Suffer loss \ell_t(w_t)
 4) Update \lambda^{t+1}_1, ..., \lambda^{t+1}_t s.t. there exists \lambda_t \in \partial \ell_t(w_t) with
    D_{t+1}(\lambda^{t+1}_1, ..., \lambda^{t+1}_t) \geq D_{t+1}(\lambda^t_1, ..., \lambda^t_{t-1}, \lambda_t)

[Right: strongly convex optimization]
INPUT: A \sigma-strongly convex function f
FOR t = 1, 2, ..., T:
 1) Define w_t = \nabla f^*(-\lambda^t_{1:(t-1)} / \sigma_{1:t})
 2) Receive a function \ell_t = \sigma_t f + g_t
 3) Suffer loss \ell_t(w_t)
 4) Update \lambda^{t+1}_1, ..., \lambda^{t+1}_t s.t. there exists \lambda_t \in \partial g_t(w_t) with
    D_{t+1}(\lambda^{t+1}_1, ..., \lambda^{t+1}_t) \geq D_{t+1}(\lambda^t_1, ..., \lambda^t_{t-1}, \lambda_t)

Figure 3: Primal-dual template algorithms for general online convex optimization (left) and online strongly
convex optimization (right). Here a_{1:t} = \sum_{i=1}^t a_i, and for notational convenience, we implicitly assume that
a_{1:0} = 0. See text for description.
Consider the case where for all t, \ell_t can be written as \sigma_t f + g_t, where f is 1-strongly convex with
respect to some norm \|\cdot\| and g_t is a convex function. We also make the simplifying assumption
that \sigma_t is known to the forecaster before he defines w_t.

For each round t, we now define the primal objective to be

    P_t(w) = \sigma_{1:(t-1)} f(w) + \sum_{i=1}^{t-1} g_i(w).    (15)

The dual objective is (see again Sec. 2)

    D_t(\lambda_1, ..., \lambda_{t-1}) = -\sigma_{1:(t-1)} f^*(-\lambda_{1:(t-1)} / \sigma_{1:(t-1)}) - \sum_{i=1}^{t-1} g_i^*(\lambda_i).    (16)

An algorithmic framework for online optimization in the presence of general strongly convex functions is given on the right-hand side of Fig. 3.
The following theorem provides a logarithmic regret bound for the algorithmic framework given on
the right-hand side of Fig. 3.

Theorem 2  Let \ell_1, ..., \ell_T be a sequence of functions such that for all t \in [T], \ell_t = \sigma_t f + g_t, for f
strongly convex w.r.t. a norm \|\cdot\| and g_t a convex function. Then, any algorithm that can
be derived from Fig. 3 (right) satisfies

    \sum_{t=1}^T \ell_t(w_t) - \min_w \sum_{t=1}^T \ell_t(w) \leq (1/2) \sum_{t=1}^T \|v_t\|_*^2 / \sigma_{1:t},    (17)

where v_t \in \partial g_t(w_t) and \|\cdot\|_* is the norm dual to \|\cdot\|.

The proof of the theorem is given in Sec. B.
6 Summary

In this paper, we extended the primal-dual algorithmic framework for general convex functions from
Shalev-Shwartz and Singer [2006], Shalev-Shwartz [2007] to strongly convex functions. The template algorithms are outlined in Fig. 3. The left algorithm is the primal-dual algorithm for general
convex functions from Shalev-Shwartz and Singer [2006], Shalev-Shwartz [2007]. Here, f is the
complexity function, (\lambda^t_1, ..., \lambda^t_t) are the dual variables at time t, and D_t(\cdot) is the dual objective
function at time t (which is a lower bound on the primal value). The function \nabla f^* is the gradient of the
conjugate function of f, which can be viewed as a projection of the dual variables back into the primal space. At the least aggressive extreme, in order to obtain \sqrt{T} regret, it is sufficient to set \lambda^t_i (for
all i) to be a subgradient of the loss, \partial \ell_t(w_t). We then recover an online "mirror descent" algorithm
[Beck and Teboulle, 2003, Grove et al., 2001, Kivinen and Warmuth, 1997], which specializes to
gradient descent when f is the squared 2-norm, or to the exponentiated gradient descent algorithm when
f is the relative entropy. At the most aggressive extreme, where D_t is maximized at each round, we
have "Follow the Regularized Leader", which is w_t = argmin_w \sum_{i=1}^{t-1} \ell_i(w) + \sqrt{t} f(w). Intermediate algorithms can also be devised, such as the "passive-aggressive" algorithms [Crammer et al.,
2006, Shalev-Shwartz, 2007].
The right algorithm in Figure 3 is our new contribution for strongly convex functions. Any \sigma-strongly convex loss function can be decomposed into \ell_t = \sigma f + g_t, where g_t is convex. The
algorithm for strongly convex functions is different in two ways. First, the effective learning rate is
now 1/\sigma_{1:t} rather than 1/\sqrt{t} (see Step 1 in both algorithms). Second, more subtly, the condition on the
dual variables (in Step 4) is only determined by the subgradient of g_t, rather than the subgradient of
\ell_t. At the most aggressive end of the spectrum, where D_t is maximized at each round, we have the
"Follow the Leader" (FTL) algorithm: w_t = argmin_w \sum_{i=1}^{t-1} \ell_i(w). At the least aggressive end,
we have the gradient descent algorithm of Hazan et al. [2006] (which uses learning rate 1/\sigma_{1:t}). Furthermore, we provide algorithms which lie in between these two extremes; it is these algorithms
which have the potential for most practical impact.

Empirical observations suggest that algorithms which most aggressively close the duality gap tend
to perform most favorably [Crammer et al., 2006, Shalev-Shwartz and Singer, 2007b]. However, at
the FTL extreme, this is often computationally prohibitive to implement (as one must solve a full
blown optimization problem at each round). Our template algorithm suggests a natural compromise,
which is to optimize the dual objective but only with respect to a small number of dual variables
(say the most current dual variable); we coin this algorithm online coordinate-dual-ascent. In
fact, it is sometimes possible to obtain a closed-form solution of this optimization problem, so that
the computational complexity of the coordinate-dual-ascent update is identical to that of a vanilla
gradient-descent method. This variant update still enjoys a logarithmic regret bound.
References
J. Abernethy, P. Bartlett, A. Rakhlin, and A. Tewari. Optimal strategies and minimax lower bounds for online convex games. In Proceedings of the Nineteenth Annual Conference on Computational Learning Theory, 2008.
P. L. Bartlett, E. Hazan, and A. Rakhlin. Adaptive online gradient descent. In Advances in Neural Information Processing Systems 21, 2007.
A. Beck and M. Teboulle. Mirror descent and nonlinear projected subgradient methods for convex optimization. Operations Research Letters, 31:167-175, 2003.
J. Borwein and A. Lewis. Convex Analysis and Nonlinear Optimization. Springer, 2006.
S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, 2004.
N. Cesa-Bianchi and G. Lugosi. Prediction, Learning, and Games. Cambridge University Press, 2006.
M. Collins. Discriminative training methods for hidden Markov models: Theory and experiments with perceptron algorithms. In Conference on Empirical Methods in Natural Language Processing, 2002.
K. Crammer, O. Dekel, J. Keshet, S. Shalev-Shwartz, and Y. Singer. Online passive aggressive algorithms. Journal of Machine Learning Research, 7:551-585, March 2006.
A. J. Grove, N. Littlestone, and D. Schuurmans. General convergence results for linear discriminant updates. Machine Learning, 43(3):173-210, 2001.
E. Hazan, A. Kalai, S. Kale, and A. Agarwal. Logarithmic regret algorithms for online convex optimization. In Proceedings of the Nineteenth Annual Conference on Computational Learning Theory, 2006.
J. Kivinen and M. Warmuth. Exponentiated gradient versus gradient descent for linear predictors. Information and Computation, 132(1):1-64, January 1997.
S. Shalev-Shwartz. Online Learning: Theory, Algorithms, and Applications. PhD thesis, The Hebrew University, 2007.
S. Shalev-Shwartz and Y. Singer. Convex repeated games and Fenchel duality. In Advances in Neural Information Processing Systems 20, 2006.
S. Shalev-Shwartz and Y. Singer. Logarithmic regret algorithms for strictly convex repeated games. Technical report, The Hebrew University, 2007a. Available at http://www.cs.huji.ac.il/~shais.
S. Shalev-Shwartz and Y. Singer. A unified algorithmic approach for efficient online label ranking. In AISTATS, 2007b.
2,740 | 3,485 | On the Efficient Minimization of Classification
Calibrated Surrogates
Richard Nock
CEREGMIA - Univ. Antilles-Guyane
97275 Schoelcher Cedex, Martinique, France
[email protected]

Frank Nielsen
LIX - Ecole Polytechnique
91128 Palaiseau Cedex, France
[email protected]
Abstract

Bartlett et al (2006) recently proved that a ground condition for convex surrogates,
classification calibration, ties up the minimization of the surrogates and classification risks, and left as an important problem the algorithmic questions about the
minimization of these surrogates. In this paper, we propose an algorithm which
provably minimizes any classification calibrated surrogate that is strictly convex and differentiable (a set whose losses span the exponential, logistic and squared losses),
with boosting-type guaranteed convergence rates under a weak learning assumption. A particular subclass of these surrogates, that we call balanced convex
surrogates, has a key rationale that ties it to maximum likelihood estimation, zero-sum games and the set of losses that satisfy some of the most common requirements for losses in supervised learning. We report experiments on more than 50
readily available domains with 11 flavors of the algorithm, which shed light on new
surrogates and on the potential of data dependent strategies to tune surrogates.
1 Introduction

A very active supervised learning trend has been flourishing over the last decade: it studies functions
known as surrogates (upper bounds of the empirical risk, generally with particular convexity properties) whose minimization remarkably impacts on empirical / true risk minimization [3, 4, 10].
Surrogates play fundamental roles in some of the most successful supervised learning algorithms,
including AdaBoost, additive logistic regression, decision tree induction, and Support Vector Machines
[13, 7, 10]. As their popularity has been rapidly spreading, authors have begun to stress the need to
set in order the huge set of surrogates, and to better understand their properties. Statistical consistency
properties have been shown for a wide set containing most of the surrogates relevant to learning,
classification calibrated surrogates (CCS) [3]; other important properties, like the algorithmic questions about minimization, have been explicitly left as important problems to settle [3].

In this paper, we address and solve this problem for all strictly convex differentiable CCS, a set
referred to as strictly convex surrogates (SCS). We propose a minimization algorithm, ULS, which
outputs linear separators, with two key properties: it provably achieves the optimum of the surrogate,
and meets boosting-type convergence under a weak learning assumption. There is more, as we
show that SCS strictly contains another set of surrogates of important rationale, balanced convex
surrogates (BCS). This set, which contains the logistic and squared losses but not the exponential
loss, coincides with the set of losses satisfying three common requirements about losses in learning.
In fact, BCS spans a large subset of the expected losses for zero-sum games of [9], by which ULS may
also be viewed as an efficient learner for decision making (in simple environments, though).

Section 2 gives preliminary definitions; Section 3 presents surrogate losses and risks; Sections 4 and
5 present ULS and its properties; Section 6 discusses experiments with ULS; Section 7 concludes.
2
Preliminary definitions
Unless otherwise stated, bold-faced variables like w denote vectors (components are w i , i =
1, 2, ...), calligraphic upper-cases like S denote sets, and blackboard faces like O denote subsets
of R, the set of real numbers. We let set O denote a domain (Rn , [0, 1]n , etc., where n is the
number of description variables), whose elements are observations. An example is an ordered pair
(o, c) ? O ? {c? , c+ }, where {c? , c+ } denotes the set of classes (or labels), and c+ (resp. c? ) is
the positive class (resp. negative class). Classes are abstracted by a bijective mapping to one of two
other sets:
+
c ? {c? , c+ } y ? ? {?1, +1} y ? {0, 1} .
(1)
The convention is c+ ↔ +1 ↔ 1 and c− ↔ −1 ↔ 0. We thus have three distinct notations for an example: (o, c), (o, y*), (o, y), that shall be used without ambiguity. We suppose given a set of m examples, S = {(o_i, c_i), i = 1, 2, ..., m}. We wish to build a classifier H, which can either be a function H : O → O ⊆ R (hereafter, this image O is assumed to be symmetric with respect to 0), or a function H : O → [0, 1]. Following a convention of [6], we compute to which extent the outputs of H and the labels in S disagree, ε(S, H), by summing a loss which quantifies pointwise disagreements:
ε(S, H) .= Σ_i ℓ(c_i, H(o_i)) .    (2)
The fundamental loss is the 0/1 loss, ℓ^{0/1}(c, H) (to ease readability, the second argument is written H instead of H(o)). It takes on two forms depending on im(H):
ℓ_R^{0/1}(y*, H) .= 1_{y* ≠ σ∘H} if im(H) = O ⊆ R , or ℓ_{[0,1]}^{0/1}(y, H) .= 1_{y ≠ τ∘H} if im(H) = [0, 1] .    (3)
The following notations are introduced in (3): for a clear distinction of the output of H, we put in index to ℓ and ε an indication of the loss' domain of parameters: R, meaning it is actually some O ⊆ R, or [0, 1]. The exponent to ℓ gives the indication of the loss name. Finally, 1_π is the indicator variable that takes value 1 iff predicate π is true, and 0 otherwise; σ : R → {−1, +1} is +1 iff x ≥ 0 and −1 otherwise; τ : [0, 1] → {0, 1} is 1 iff x ≥ 1/2, and 0 otherwise.
Both losses ℓ_R and ℓ_{[0,1]} are defined simultaneously via popular transforms on H, such as the logit transform logit(p) .= log(p/(1 − p)), ∀p ∈ [0, 1] [7]. We have indeed ℓ_{[0,1]}^{0/1}(y, H) = ℓ_R^{0/1}(y*, logit(H)) and ℓ_R^{0/1}(y*, H) = ℓ_{[0,1]}^{0/1}(y, logit^{−1}(H)). We have implicitly closed the domain of the logit, adding two symbols ±∞ to ensure that the eventual infinite values for H can be mapped back to [0, 1].
In supervised learning, the objective is to carry out the minimization of the expectation of the 0/1 loss in generalization, the so-called true risk. Very often however, this task can be relaxed to the minimization of the empirical risk of H, which is (2) with the 0/1 loss [6]:
ε^{0/1}(S, H) .= Σ_i ℓ^{0/1}(c_i, H(o_i)) .    (4)
The main classifiers we investigate are linear separators (LS). In this case, H(o) .= Σ_t α_t h_t(o) for features h_t with im(h_t) ⊆ R and leveraging coefficients α_t ∈ R.
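The two forms of the 0/1 risk, and the logit identity that ties them, are easy to check numerically. The sketch below is our own (all function names included) and only illustrates (2)-(4); it assumes the conventions σ(0) = +1 and τ(1/2) = 1 stated above:

    import numpy as np

    def zero_one_risk_real(y_star, H):
        # epsilon^{0/1}(S, H) for im(H) in R: sum of 1[y* != sigma(H)], sigma(x) = +1 iff x >= 0
        return np.sum(y_star != np.where(H >= 0, 1, -1))

    def zero_one_risk_unit(y, H01):
        # same risk for im(H) = [0,1]: sum of 1[y != tau(H)], tau(x) = 1 iff x >= 1/2
        return np.sum(y != (H01 >= 0.5).astype(int))

    rng = np.random.default_rng(0)
    H = rng.normal(size=100)                  # real-valued outputs of some H
    y_star = rng.choice([-1, 1], size=100)    # labels in {-1, +1}
    y = (y_star + 1) // 2                     # the bijection (1): -1 -> 0, +1 -> 1
    # logit^{-1} maps the real-valued H back to [0,1]; both risks then coincide:
    assert zero_one_risk_real(y_star, H) == zero_one_risk_unit(y, 1 / (1 + np.exp(-H)))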
3 Losses and surrogates
A serious alternative to directly minimizing (4) is to rather focus on the minimization of a surrogate risk [3]. This is a function ε(S, H) as in (2) whose surrogate loss ℓ(c, H(o)) satisfies ℓ^{0/1}(c, H(o)) ≤ ℓ(c, H(o)). Four are particularly important in supervised learning, defined via the following surrogate losses:
ℓ_R^{exp}(y*, H) .= exp(−y*H) ,    (5)
ℓ_R^{log}(y*, H) .= log(1 + exp(−y*H)) ,    (6)
ℓ_R^{sqr}(y*, H) .= (1 − y*H)² ,    (7)
ℓ_R^{hinge}(y*, H) .= max{0, 1 − y*H} .    (8)
(5) is the exponential loss, (6) is the logistic loss, (7) is the squared loss and (8) is the hinge loss.
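A direct transcription of (5)-(8), with a numerical check of the upper-bound property; for the logistic loss the bound holds once logs are taken base 2, the usual normalization. The dictionary keys and function name below are our own:

    import numpy as np

    def surrogate_losses(y_star, H):
        m = y_star * H                            # the margin y* H
        return {"exp":   np.exp(-m),              # (5)
                "log":   np.log(1 + np.exp(-m)),  # (6)
                "sqr":   (1 - m) ** 2,            # (7)
                "hinge": np.maximum(0.0, 1 - m)}  # (8)

    m = np.linspace(-3, 3, 601)
    zero_one = (m < 0).astype(float)              # 0/1 loss as a function of the margin
    losses = surrogate_losses(np.ones_like(m), m)
    losses["log"] = losses["log"] / np.log(2)     # base-2 logistic loss
    for name, val in losses.items():
        assert np.all(val >= zero_one), name      # each surrogate upper-bounds the 0/1 loss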
Definition 1 A Strictly Convex Loss (SCL) is a strictly convex function ϕ : X → R+ differentiable on int(X), with X a symmetric interval with respect to zero, s.t. ∇ϕ(0) < 0.
φ(x)                                        a_φ   im(H)     F_φ(y*H) = (φ̄⋆(−y*H) − a_φ)/b_φ           Pr̂[c = c+ | H; o] = ∇φ̄^{−1}(H)
φ_α(x) .= α + (1−α)√(x(1−x)), α ∈ (0,1)     α     R         (−y*H + √((1−α)² + (y*H)²)) / (1−α)       1/2 + H / (2√((1−α)² + H²))
φ_M(x) .= √(x(1−x))                         0     R         −y*H + √(1 + (y*H)²)                      1/2 + H / (2√(1 + H²))
φ_Q(x) .= −x log x − (1−x) log(1−x)         0     R         log(1 + exp(−y*H))                        exp(H) / (1 + exp(H))
φ_B(x) .= x(1−x)                            0     [−1, 1]   (1 − y*H)²                                1/2 + H/2

Table 1: permissible functions, their corresponding BCLs and the matching [0, 1] predictions.
∇ is the gradient notation (here, the derivative). Any surrogate risk built from an SCL is called a Strictly Convex Surrogate (SCS). From Theorem 4 in [3], it comes that SCL contains all classification calibrated losses (CCL) that are strictly convex and differentiable, such as (5), (6), (7).
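The closed forms in Table 1 can be recovered numerically from Definition 2 below by brute-forcing the conjugate of φ̄ = −φ on a grid. The sketch below is our own code, restricted to φ_M and φ_B, whose b_φ scalings are unambiguous; recall that φ_B requires im(H) = [−1, 1]:

    import numpy as np

    def F_from_signature(phi, x, a_phi=0.0):
        # F_phi(x) = (bar_phi*(-x) - a_phi)/b_phi, with bar_phi = -phi and the
        # Legendre conjugate taken by brute force over a fine grid on [0, 1]
        p = np.linspace(1e-6, 1 - 1e-6, 100001)
        b_phi = phi(0.5) - a_phi
        conj = np.max(-x * p + phi(p))        # sup_p { (-x) p - bar_phi(p) }
        return (conj - a_phi) / b_phi

    phi_M = lambda p: np.sqrt(p * (1 - p))    # Matsushita
    phi_B = lambda p: p * (1 - p)             # Gini
    for x in [-2.0, -0.5, 0.0, 0.5, 2.0]:     # phi_M allows im(H) = R
        assert abs(F_from_signature(phi_M, x) - (-x + np.sqrt(1 + x * x))) < 1e-3
    for x in [-1.0, -0.5, 0.0, 0.5, 1.0]:     # phi_B needs im(H) = [-1, 1]
        assert abs(F_from_signature(phi_B, x) - (1 - x) ** 2) < 1e-3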
Fix ϕ ∈ SCL. The Legendre conjugate ϕ⋆ of ϕ is ϕ⋆(x) .= sup_{x'∈int(X)} {xx' − ϕ(x')}. Because of the strict convexity of ϕ, the analytic expression of the Legendre conjugate becomes ϕ⋆(x) = x∇ϕ^{−1}(x) − ϕ(∇ϕ^{−1}(x)). ϕ⋆ is also strictly convex and differentiable. A function φ : [0, 1] → R+ is called permissible iff it is differentiable on (0, 1), strictly concave, symmetric about x = 1/2, and with φ(0) = φ(1) = a_φ ≥ 0. We let b_φ .= φ(1/2) − a_φ > 0. Permissible functions with a_φ = 0 span a very large subset of generalized entropies [9]. Permissible functions are useful to define the following subclass of SCL, of particular interest (here, φ̄ .= −φ).
Definition 2 Let φ be permissible. The Balanced Convex Loss (BCL) with signature φ, F_φ, is:
F_φ(x) .= (φ̄⋆(−x) − a_φ)/b_φ .    (9)
Balanced Convex Surrogates (BCS) are defined accordingly. All BCL share a common shape. Indeed, φ̄⋆(x) satisfies the following relationships:
φ̄⋆(x) = φ̄⋆(−x) + x ,    (10)
lim_{x→inf im(∇φ̄)} φ̄⋆(x) = a_φ .    (11)
[Figure 1: Bold curves depict plots of φ̄⋆(−x) for the φ in Table 1 (φ = φ_B, φ_M, φ_{α=1/3}, φ_Q); thin dotted half-lines are its asymptotes.]
Noting that F_φ(0) = 1 and ∇F_φ(0) = −(1/b_φ)∇φ̄^{−1}(0) < 0, it follows that BCS ⊊ SCS, where the strict inequality comes from the fact that (5) is an SCL but not a BCL. It also follows lim_{x→sup im(∇φ̄)} F_φ(x) = 0 from (11), and lim_{x→inf im(∇φ̄)} F_φ(x) = −x/b_φ from (10). We get that the asymptotes of any BCL can be summarized as ℓ(x) .= x(σ(x) − 1)/(2b_φ). When b_φ = 1, this is the linear hinge loss [8], a generalization of (8) for which x .= y*H − 1. Thus, while hinge loss is not a BCL, it is the limit behavior of any BCL (see Figure 1).
Table 1 (left column) gives some examples of permissible φ. When scaled so that φ(1/2) = 1, some confound with popular choices: φ_B with the Gini index, φ_Q with the bit-entropy, and φ_M with Matsushita's error [10, 11]. Table 1 also gives the expressions of F_φ along with the im(H) = O ⊆ R allowed by the BCL, for the corresponding permissible function. It is interesting to note the constraint on im(H) for the squared loss to be a BCL, which makes it monotonous in the interval, but implies rescaling the outputs of classifiers like linear separators to remain in [−1, 1].
4 ULS: the efficient minimization of any SCS
For any strictly convex function ψ : X → R differentiable on int(X), the Bregman Loss Function (BLF) D_ψ with generator ψ is [5]:
D_ψ(x||x') .= ψ(x) − ψ(x') − (x − x')∇ψ(x') .    (12)
The following Lemma states some relationships that are easy to check using ϕ⋆⋆ = ϕ. They are particularly interesting when im(H) = O = R.
Algorithm 1: Algorithm ULS(M, ϕ)
Input: M ∈ R^{m×T}, SCL ϕ with dom(ϕ) = R;
Let α_1 ← 0; let w_0 ← ∇ϕ̃^{−1}(0)·1;
for j = 1, 2, ..., J do
  [WU] (weight update) w_j ← (Mα_j) ⋆ w_0;
  Let T_j ⊆ {1, 2, ..., T}; let δ_j ← 0;
  [LC] (leveraging coefficients) ∀t ∈ T_j, pick δ_{j,t} such that: Σ_{i=1}^m m_{it}((Mδ_j) ⋆ w_j)_i = 0;
  Let α_{j+1} ← α_j + δ_j;
Output: H(x) .= Σ_{t=1}^T α_{J+1,t} h_t(x) ∈ LS
(Here ϕ̃ is defined in (13) below and ⋆ denotes the Legendre dual of (15), applied component-wise.)
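To make the mechanics concrete, here is a minimal Python sketch of ULS (our own code, not the authors' implementation), specialized to the logistic BCL (the φ_Q row of Table 1) under the classical boosting scheme |T_j| = 1. For this loss, ∇ϕ̃ is the logit, so the Legendre dual of (15) below has the closed form x ⋆ p = sigmoid(x + logit(p)), w_0 = (1/2)·1, and step [LC] is a one-dimensional root-finding problem, solved here by bisection. The cycling schedule, bracket and tolerances are arbitrary choices of ours:

    import numpy as np

    def uls_logistic(M, n_rounds=200):
        m, T = M.shape                         # M[i, t] = -y*_i h_t(o_i), see (18)
        alpha = np.zeros(T)
        sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
        for j in range(n_rounds):
            t = j % T                          # T_j = {t}: classical boosting
            w = sigmoid(M @ alpha)             # [WU]: w_j = (M alpha_j) * w_0
            if abs(M[:, t] @ w) < 1e-12:
                continue                       # edge on h_t is already zero
            edge = lambda d: M[:, t] @ sigmoid(M @ alpha + d * M[:, t])
            lo, hi = -50.0, 50.0               # bracket for the bisection
            if edge(lo) * edge(hi) > 0:
                continue                       # Lemma 2's condition fails numerically
            for _ in range(100):               # [LC]: zero the edge on w_{j+1}
                mid = 0.5 * (lo + hi)
                lo, hi = (mid, hi) if edge(lo) * edge(mid) > 0 else (lo, mid)
            alpha[t] += 0.5 * (lo + hi)        # alpha_{j+1} = alpha_j + delta_j
        return alpha                           # H(o) = sum_t alpha_t h_t(o)

By (19), minimizing Σ_i log(1 + exp((Mα)_i)) is exactly the logistic SCS, so each [LC] step here is an exact line search along one coordinate.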
Lemma 1 For any SCL ϕ, ϕ(y*H) = D_{ϕ⋆}(0||∇ϕ(y*H)) − ϕ⋆(0). Furthermore, for any BCL F_φ, D_φ̄(y||∇φ̄^{−1}(H)) = b_φ F_φ(y*H) and D_φ̄(y||∇φ̄^{−1}(H)) = D_φ̄(1||∇φ̄^{−1}(y*H)).
The second equality is important because it ties real predictions (right) with [0, 1] predictions (left). It also separates SCL and BCL, as for any ϕ in SCL, it can be shown that there exists a function ψ such that D_ψ(y||∇ψ^{−1}(H)) = ϕ(y*H) iff ϕ ∈ BCL. We now focus on the minimization of any SCS. We show that there exists an algorithm, ULS, which fits a linear separator H to the minimization of any SCS ε_R^ϕ .= Σ_i ϕ(y_i* H(o_i)), for any SCL ϕ with dom(ϕ) = R, in order not to restrict the LS built. To simplify notations, we let:
ϕ̃(x) .= ϕ⋆(−x) .    (13)
With this notation, the first equality in Lemma 1 becomes:
ϕ(y*H) = D_ϕ̃(0||∇ϕ̃^{−1}(−y*H)) − ϕ̃(0) .    (14)
We let W .= dom(∇ϕ̃) = −im(∇ϕ), where this latter equality comes from ∇ϕ̃(x) = −∇ϕ⋆(−x) = −∇ϕ^{−1}(−x). It also comes im(∇ϕ̃) = R. Because any BLF is strictly convex
in its first argument, we can compute its Legendre conjugate. In fact, we shall essentially need the
argument that realizes the supremum: for any x ? R, for any p ? W, we let:
x ⋆ p .= arg sup_{p'∈W} {xp' − D_ϕ̃(p'||p)} .    (15)
We do not make reference to ϕ̃ in the notation, as it shall be clear from context. We name x ⋆ p the Legendre dual of the ordered pair (x, p), closely following a notation by [6]. The Legendre dual is unique and satisfies:
∇ϕ̃(x ⋆ p) = x + ∇ϕ̃(p) ,    (16)
∀x, x' ∈ R, ∀p ∈ W, x ⋆ (x' ⋆ p) = (x + x') ⋆ p .    (17)
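As a sanity check on (16)-(17): for the exponential loss, ϕ(x) = exp(−x) gives ϕ̃(x) = x log x − x on W = R+, so ∇ϕ̃ = log and the dual has the closed form x ⋆ p = p·e^x, AdaBoost's unnormalized weight update. A small sketch of ours:

    import numpy as np

    dual = lambda x, p: p * np.exp(x)         # x * p for the exponential loss
    x, x2, p = 0.7, -1.3, 2.0
    assert np.isclose(np.log(dual(x, p)), x + np.log(p))        # property (16)
    assert np.isclose(dual(x, dual(x2, p)), dual(x + x2, p))    # property (17)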
To state ULS, we follow the setting of [6] and suppose that we have T features h t (t = 1, 2, ..., T )
known in advance, the problem thus reducing to the computation of the leveraging coefficients. We
define the m × T matrix M with:
m_{it} .= −y_i* h_t(o_i) .    (18)
Given a leveraging coefficient vector α ∈ R^T, we get:
−y_i* H(o_i) = (Mα)_i .    (19)
We can specialize this setting to classical greedy induction frameworks for LS: in classical boosting, at step j, we would fit a single δ_t [6]; in totally corrective boosting, we would rather fit {δ_t, 1 ≤ t ≤ j} [14]. Intermediate schemes may be used as well for T_j, provided they ensure that, at each step j of the algorithm and for any feature h_t, it may be chosen at some j' > j. ULS is displayed in Algorithm
1. In Algorithm 1, notations are vector-based: the Legendre duals are computed component-wise;
furthermore, Tj may be chosen according to whichever scheme underlined above. The following
Theorem provides a first general convergence property for ULS.
Theorem 1 ULS(M, ϕ) converges to a classifier H realizing the minimum of ε_R^ϕ.
Proof sketch: In step [WU] in ULS, (17) brings w_{j+1} = (Mα_{j+1}) ⋆ w_0 = (Mδ_j) ⋆ w_j. After a few derivations involving the choice of δ_j and step [LC] in ULS, we obtain (with vector notations, BLFs are the sum of the component-wise BLFs):
D_ϕ̃(0||w_{j+1}) − D_ϕ̃(0||w_j) = −D_ϕ̃(w_{j+1}||w_j) .    (20)
Let A_ϕ̃(w_{j+1}, w_j) .= −D_ϕ̃(w_{j+1}||w_j), which is just, from (20) and (14), the difference between two successive SCLs in Algorithm 1. Thus, A_ϕ̃(w_{j+1}, w_j) < 0 whenever w_{j+1} ≠ w_j. Should we be able to prove that when ULS has converged, w_· ∈ Ker M^T, this would make A_ϕ̃(w_{j+1}, w_j) an auxiliary function for ULS, which is enough to prove the convergence of ULS towards the optimum [6]. Thus, suppose that w_{j+1} = w_j (ULS has converged). Suppose that T_j is a singleton (e.g. the classical boosting scheme). In this case, δ_j = 0 and so, ∀t = 1, 2, ..., T, Σ_{i=1}^m m_{it}(0 ⋆ w_j)_i = Σ_{i=1}^m m_{it}w_{j,i} = 0, i.e. w_j^T M = w_{j+1}^T M = 0^T, and w_j, w_{j+1} ∈ Ker M^T. The case of totally corrective boosting is simpler, as after the last iteration we would have w_{J+1} ∈ Ker M^T. Intermediate choices for T_j ⊆ {1, 2, ..., T} are handled in the same way.
We emphasize the fact that Theorem 1 proves the convergence towards the global optimum of ε_R^ϕ, regardless of ϕ. The optimum is defined by the LS with features in M that realizes the smallest ε_R^ϕ. Notice that in practice, it may be a tedious task to satisfy exactly (20), in particular for totally corrective boosting [14].
ULS has the flavor of boosting algorithms, repeatedly modifying a set of weights w over the examples. In fact, this similarity is more than syntactical, as ULS satisfies two first popular algorithmic boosting properties, the first of which being that step [LC] in ULS is equivalent to saying that this LS has zero edge on w_{j+1} [14]. The following Lemma shows that this edge condition is sound.
Lemma 2 Suppose that there does not exist some h_t with all m_{it} of the same sign, ∀i = 1, 2, ..., m. Then, for any choice of T_j, step [LC] in ULS always has a finite solution.
Proof: Let
Z .= D_ϕ̃(0||(Mα_{j+1}) ⋆ w_0) .    (21)
We have Z = mϕ̃(0) + Σ_{i=1}^m ϕ(−(M(α_j + δ_j))_i) from (14), a function convex in all leveraging coefficients. Define the |T_j| × |T_j| matrix E with e_{uv} .= ∂²Z/(∂δ_{j,u}∂δ_{j,v}) (for the sake of simplicity, T_j = {1, 2, ..., |T_j|}, where |.| denotes the cardinal). We have e_{uv} = Σ_{i=1}^m m_{iu}m_{iv}/ρ(((Mδ_j) ⋆ w_j)_i), with ρ(x) .= d²ϕ̃(x)/dx², a function strictly positive in int(W) since ϕ̃ is strictly convex. Let q_{i,j} .= 1/ρ(((Mδ_j) ⋆ w_j)_i) > 0. It is easy to show that x^T Ex = Σ_{i=1}^m q_{i,j}⟨x, m̃_i⟩² ≥ 0, ∀x ∈ R^{|T_j|}, with m̃_i ∈ R^{|T_j|} the vector with m̃_{it} .= m_{it}. Thus, E is positive semidefinite; as such, step [LC] in ULS, which is the same as solving ∂Z/∂δ_{j,u} = 0, ∀u ∈ T_j (i.e. minimizing Z), always has a solution.
The condition for the Lemma to work is absolutely not restrictive, as if such an h_t were to exist, we would not need to run ULS: indeed, we would have either ε^{0/1}(S, h_t) = 0 or ε^{0/1}(S, −h_t) = 0. The second property met by ULS is illustrated in the second example below.
We give two examples of specializations of ULS. Take for example ϕ(x) = exp(−x) (5). In this case, W = R+, w_0 = 1 and it is not hard to see that ULS matches real AdaBoost with unnormalized weights [13]. The difference is syntactical: the LS output by ULS and real AdaBoost are the same. Now, take any BCL. In this case, ϕ̃ = −φ, W = [0, 1] (scaling issues underlined for the logit in Section 2 make it desirable to close W), and w_0 = (1/2)·1. In all these cases, where W ⊆ R+, w_j is always a distribution up to a normalization factor, and this would also be the case for any strictly monotonous SCS ϕ. The BCL case brings an appealing display of how the weights behave. Figure 2 displays a typical Legendre dual for a BCL. Consider example (o_i, y_i) and its weight update, w_{j,i} ← ((Mα_j)_i) ⋆ w_{0,i} = (−y_i*H(o_i)) ⋆ w_{0,i}, for the current classifier H. Fix p = w_{0,i} and x = −y_i*H(o_i) in Figure 2. We see that the new weight of the example gets larger iff x > 0, i.e. iff the example is given the wrong class by H, which is the second boosting property met by ULS.
[Figure 2: A typical ∇ϕ̃ (red: strictly increasing, symmetric with respect to the point (1/2, 0)), with the Legendre dual x ⋆ p computed from x and p.]
ULS turns out to meet a third boosting property, and the most important one, as it contributes to root the algorithm in the seminal boosting theory of the early nineties: we have guarantees on its convergence rate under a generalization of the well-known "Weak Learning Assumption" (WLA) [13]. To state the WLA, we plug the iteration in the index of the distribution normalization coefficient in (21), and define Z_j .= ||w_j||_1 (||.||_k is the L_k norm). The WLA is:
(WLA) ∀j, ∃γ_j > 0 : |(1/|T_j|) Σ_{t∈T_j} (1/Z_j) Σ_{i=1}^m m_{it}w_{j,i}| ≥ γ_j .    (22)
This is indeed a generalization of the usual WLA for boosting algorithms, that we obtain taking |T_j| = 1, h_t ∈ {−1, +1} [12]. Few algorithms are known that formally boost the WLA, in the sense that requiring only the WLA implies guaranteed rates for the minimization of ε_R^ϕ. We show that ULS meets this property ∀ϕ ∈ SCL. To state this, we need a few more definitions. Let m_t denote the t-th column vector of M, a_m .= max_t ||m_t||_2 and a_Z .= min_j Z_j. Let a_γ denote the average of γ_j (∀j), and a_ρ .= min_{x∈int(W)} ρ(x) (ρ defined in the proof of Lemma 2).
Theorem 2 Under the WLA, ULS terminates in at most J = O(m a_m²/(a_ρ a_Z² a_γ²)) iterations.
Proof sketch: We use Taylor expansions with Lagrange remainder for ϕ̃, and then the mean-value theorem, and obtain that ∀w, w + δ ∈ W, ∃w' ∈ [min{w + δ, w}, max{w + δ, w}] such that D_ϕ̃(w + δ||w) = δ²ρ(w')/2 ≥ (δ²/2)a_ρ ≥ 0. We use this inequality m times with w = w_{j,i} and δ = (w_{j+1,i} − w_{j,i}), sum the inequalities, combine with the Cauchy-Schwartz and Jensen inequalities, and obtain:
D_ϕ̃(w_{j+1}||w_j) ≥ a_ρ(a_Z γ_j/(2a_m))² .    (23)
Using (20), we obtain that D_ϕ̃(0||w_{J+1}) − mϕ̃(0) equals:
−mϕ̃(0) + D_ϕ̃(0||w_1) + Σ_{j=1}^J (D_ϕ̃(0||w_{j+1}) − D_ϕ̃(0||w_j)) = mϕ̃⋆(0) − Σ_{j=1}^J D_ϕ̃(w_{j+1}||w_j) .    (24)
But, (14) together with the definition of w_j in [WU] (see ULS) yields D_ϕ̃(0||w_{J+1,i}) = ϕ̃(0) + ϕ(y_i*H(o_i)), ∀i = 1, 2, ..., m, which ties up the SCS to (24); the guaranteed decrease in the rhs of (24) by (23) makes that there remains to check when the rhs becomes negative to conclude that ULS has terminated. This gives the bound of the Theorem.
The bound in Theorem 2 is mainly useful to prove that the WLA guarantees a convergence rate of order O(m/a_γ²) for ULS, but it is not the best possible, as in some cases it is far from optimal.
5 ULS, BCL, maximum likelihood and zero-sum games
BCL matches, through the second equality in Lemma 1, the set of losses that satisfy the main requirements about losses used in machine learning. This is a strong rationale for its use. Suppose im(H) ⊆ [0, 1], and consider the following requirements about some loss ℓ_{[0,1]}(y, H):
(R1) The loss is lower-bounded: ∃z ∈ R such that inf_{y,H} ℓ_{[0,1]}(y, H) = z.
(R2) The loss is a proper scoring rule. Consider a singleton domain O = {o}. Then, the best (constant) prediction is arg min_{x∈[0,1]} ε_{[0,1]}(S, x) = p .= Pr̂[c = c+|o] ∈ [0, 1], where p is the relative proportion of positive examples with observation o.
(R3) The loss is symmetric in the following sense: ℓ_{[0,1]}(y, H) = ℓ_{[0,1]}(1 − y, 1 − H).
R1 is standard. For R2, we can write ε_{[0,1]}(S, x) = pℓ_{[0,1]}(1, x) + (1 − p)ℓ_{[0,1]}(0, x) = L(p, x), which is just the expected loss of zero-sum games used in [9] (eq. (8)) with Nature states reduced to the class labels. The fact that the minimum is achieved at x = p makes the loss a proper scoring rule. R3 implies ℓ_{[0,1]}(1, 1) = ℓ_{[0,1]}(0, 0), which is virtually assumed for any domain; otherwise, it scales to H ∈ [0, 1] a well-known symmetry in the cost matrix that holds for domains without class-dependent misclassification costs. For these domains indeed, it is assumed ℓ_{[0,1]}(1, 0) = ℓ_{[0,1]}(0, 1). Finally, we say that loss ℓ_{[0,1]} is properly defined iff dom(ℓ_{[0,1]}) = [0, 1]² and it is twice differentiable on (0, 1)². This is only a technical convenience: even the 0/1 loss coincides on {0, 1} with properly defined losses. In addition, the differentiability condition would be satisfied by many popular losses.
The proof of the following Lemma involves Theorem 3 in [1] and additional facts to handle R3.
Lemma 3 Assume im(H) ⊆ [0, 1]. Loss ℓ_{[0,1]}(., .) is properly defined and meets requirements R1, R2, R3 iff ℓ_{[0,1]}(y, H) = z + D_φ̄(y||H) for some permissible φ.
Thus, φ may be viewed as the "signature" of the loss. The second equality in Lemma 1 makes a tight connection between the predictions of H in [0, 1] and R. Let it be more formal: the matching [0, 1] prediction for some H with im(H) = O is:
Pr̂_φ[c = c+|H; o] .= ∇φ̄^{−1}(H(o)) ,    (25)
With this definition, illustrated in Table 1, Lemma 3 and the second equality in Lemma 1 show that BCL matches the set of losses of Lemma 3. This definition also brings out the true nature of the minimization of any BCS with real-valued hypotheses like linear separators (in ULS). From Lemma 3 and [2], there exists a bijection between BCL and a subclass of the exponential families whose members' pdfs may be written as Pr_φ[y|θ] = exp(−D_φ̄(y||∇φ̄^{−1}(θ)) + φ̄(y) − ν(y)), where θ ∈ R is the natural parameter and ν(.) is used for normalization. Plugging θ = H(o), using (25) and the second equality in Lemma 1, we obtain that any BCS can be rewritten as ε_R^φ = U + Σ_i −log Pr_φ[y_i|H(o_i)], where U does not play a role in its minimization. We obtain the following Lemma, in which we suppose im(H) = O.
Lemma 4 Minimizing any BCS with classifier H yields the maximum likelihood estimation, for each observation, of the natural parameter θ = H(o) of an exponential family defined by signature φ.
In fact, one exponential family is concerned in fine. To see this, we can factor the pdf as Pr[y|θ] = exp(θν(y) − ψ(θ))/z, with ψ = φ̄⋆ the cumulant function, ν(y) the sufficient statistic and z the normalization function. Since y ∈ {0, 1}, we easily end up with Pr_φ[y|θ] = 1/(1 + exp(−θ)), the logistic prediction for a Bernoulli prior. To summarize, minimizing any loss that meets R1, R2 and R3 (i.e. any BCL) amounts to the same ultimate goal; since ULS works for any of the corresponding surrogate risks, the crux of the choice of the BCL relies on data-dependent considerations.
Finally, we can go further in the parallel with game theory developed above for R2: using the notations of [9], the loss function of the decision maker can be written L(X, q) = D_φ̄(1||q(X)). R3 makes it easy to recover losses like the log loss or the Brier score [9], respectively from φ_Q and φ_B (Table 1). In this sense, ULS is also a sound learner for decision making in the zero-sum game of [9]. Notice however that, to work, it requires that Nature has a restricted sample space size ({0, 1}).
6 Experiments
We have compared against each other 11 flavors of ULS, including real AdaBoost [13], on a benchmark of 52 domains (49 from the UCI repository). True risks are estimated via stratified 10-fold cross validation; ULS is run for r (fixed) features h_t, each of which is a Boolean rule: If Monomial then Class = +1 else Class = −1, with at most l (fixed) literals, induced following the greedy minimization of the BCS at hand. Leveraging coefficients ([LC] in ULS) are approximated up to 10^{−10} precision. Figure 3 summarizes the results for two values of the couple (l, r). Histograms are ordered from left to right in increasing average true risk over all domains (shown below histograms). The italic numbers give, for each algorithm, the number of algorithms it beats according to a Student paired t-test over all domains with .1 threshold probability. Out of the 10 flavors of ULS, the first four flavors pick φ in Table 1. The fifth uses another permissible function: φ_ω(x) .= (x(1 − x))^ω, ∀ω ∈ (0, 1). The last five adaptively tune the BCS at hand out of a bag of BCS. The first four fit the BCS at each stage of the inner loop (for j ...) of ULS. Two (noted "F.") pick the BCS which minimizes the empirical risk in the bag; two others (noted "E.") pick the BCS which maximizes the current edge. There are two different bags corresponding to four permissible functions each: the first (index "1") contains the φ in Table 1, the second (index "2") replaces φ_B by φ_ω. We wanted to evaluate φ_B because it forces to renormalize the leveraging coefficients in H each time it is selected, to ensure that the output of H lies in [−1, 1]. The last adaptive flavor, F⋆, "externalizes" the choice of the BCS: it selects for each fold the BCS which yields the smallest empirical risk in a bag corresponding to five φ: those of Table 1 plus φ_ω.
[Figure 3: Summary of our results over the 52 domains for the 11 algorithms (top: l = 2, r = 10; bottom: l = 3, r = 100). Vertical (red) bars show the average rank over all domains (see text). Average true risks, with the number of algorithms beaten in parentheses, ordered from left to right:
top (l = 2, r = 10): F⋆ 14.18 (10), φ_M 14.70 (5), φ_α 14.71 (3), φ_ω 14.83 (2), F2 15.03 (1), φ_Q 15.06 (1), E1 15.22 (1), φ_B 15.25 (1), AdaBoost 15.35 (1), E2 15.36 (1), F1 17.37 (0);
bottom (l = 3, r = 100): F⋆ 12.15 (10), φ_Q 12.39 (3), AdaBoost 12.56 (3), φ_M 12.59 (3), φ_B 12.62 (3), E2 12.63 (3), φ_α 12.74 (2), φ_ω 12.79 (2), F2 13.10 (2), F1 17.57 (1), E1 23.60 (0).]
Three main conclusions emerge from Figure 3. First, F⋆ appears to be superior to all other approaches, but slightly more sophisticated choices for the SCS (i.e. E., F.) fail at improving the results; this is a strong advocacy for a particular treatment of this surrogate tuning problem. Second, Matsushita's BCL, built from φ_M, appears to be a serious alternative to the logistic loss. Third and last, a remark previously made by [10] for decision trees seems to hold as well for linear separators, as stronger concave regimes for φ in BCLs tend to improve performances, at least for small r.
7 Conclusion
In this paper, we have shown the existence of a supervised learning algorithm which minimizes
any strictly convex, differentiable classification calibrated surrogate [3], inducing linear separators.
Since the surrogate is now in the input of the algorithm, along with the learning sample, it opens
the interesting problem of the tuning of this surrogate to the data at hand to further reduce the true
risk. While the strategies we have experimentally tested are, in this respect, a simple primer for eventual solutions, they probably display the potential and the non-triviality of these solutions.
References
[1] A. Banerjee, X. Guo, and H. Wang. On the optimality of conditional expectation as a Bregman predictor. IEEE Trans. on Information Theory, 51:2664–2669, 2005.
[2] A. Banerjee, S. Merugu, I. Dhillon, and J. Ghosh. Clustering with Bregman divergences. Journal of Machine Learning Research, 6:1705–1749, 2005.
[3] P. Bartlett, M. Jordan, and J. D. McAuliffe. Convexity, classification, and risk bounds. Journal of the Am. Stat. Assoc., 101:138–156, 2006.
[4] P. Bartlett and M. Traskin. AdaBoost is consistent. In NIPS*19, 2006.
[5] L. M. Bregman. The relaxation method of finding the common point of convex sets and its application to the solution of problems in convex programming. USSR Comp. Math. and Math. Phys., 7:200–217, 1967.
[6] M. Collins, R. Schapire, and Y. Singer. Logistic regression, AdaBoost and Bregman distances. In COLT'00, pages 158–169, 2000.
[7] J. Friedman, T. Hastie, and R. Tibshirani. Additive logistic regression: a statistical view of boosting. Ann. of Stat., 28:337–374, 2000.
[8] C. Gentile and M. Warmuth. Linear hinge loss and average margin. In NIPS*11, pages 225–231, 1998.
[9] P. Grünwald and P. Dawid. Game theory, maximum entropy, minimum discrepancy and robust Bayesian decision theory. Ann. of Statistics, 32:1367–1433, 2004.
[10] M. J. Kearns and Y. Mansour. On the boosting ability of top-down decision tree learning algorithms. Journal of Comp. Syst. Sci., 58:109–128, 1999.
[11] K. Matsushita. Decision rule, based on distance, for the classification problem. Ann. of the Inst. for Stat. Math., 8:67–77, 1956.
[12] R. Nock and F. Nielsen. A real generalization of discrete AdaBoost. Artif. Intell., 171:25–41, 2007.
[13] R. E. Schapire and Y. Singer. Improved boosting algorithms using confidence-rated predictions. In COLT'98, pages 80–91, 1998.
[14] M. Warmuth, J. Liao, and G. Rätsch. Totally corrective boosting algorithms that maximize the margin. In ICML'06, pages 1001–1008, 2006.
Privacy-preserving logistic regression
Kamalika Chaudhuri
Information Theory and Applications
University of California, San Diego
[email protected]
Claire Monteleoni∗
Center for Computational Learning Systems
Columbia University
[email protected]
Abstract
This paper addresses the important tradeoff between privacy and learnability,
when designing algorithms for learning from private databases. We focus on
privacy-preserving logistic regression. First we apply an idea of Dwork et al. [6]
to design a privacy-preserving logistic regression algorithm. This involves bounding the sensitivity of regularized logistic regression, and perturbing the learned
classifier with noise proportional to the sensitivity.
We then provide a privacy-preserving regularized logistic regression algorithm
based on a new privacy-preserving technique: solving a perturbed optimization
problem. We prove that our algorithm preserves privacy in the model due to [6].
We provide learning guarantees for both algorithms, which are tighter for our new
algorithm, in cases in which one would typically apply logistic regression. Experiments demonstrate improved learning performance of our method, versus the
sensitivity method. Our privacy-preserving technique does not depend on the sensitivity of the function, and extends easily to a class of convex loss functions. Our
work also reveals an interesting connection between regularization and privacy.
1 Introduction
Privacy-preserving machine learning is an emerging problem, due in part to the increased reliance on
the internet for day-to-day tasks such as banking, shopping, and social networking. Moreover, private data such as medical and financial records are increasingly being digitized, stored, and managed
by independent companies. In the literature on cryptography and information security, data privacy
definitions have been proposed, however designing machine learning algorithms that adhere to them
has not been well-explored. On the other hand, data-mining algorithms have been introduced that
aim to respect other notions of privacy that may be less formally justified.
Our goal is to bridge the gap between approaches in the cryptography and information security community, and those in the data-mining community. This is necessary, as there is a tradeoff between
the privacy of a protocol, and the learnability of functions that respect the protocol. In addition to
the specific contributions of our paper, we hope to encourage the machine learning community to
embrace the goals of privacy-preserving machine learning, as it is still a fledgling endeavor.
In this work, we provide algorithms for learning in a privacy model introduced by Dwork et al. [6]. The ε-differential privacy model limits how much information an adversary can gain about a particular private value, by observing a function learned from a database containing that value, even if she knows every other value in the database. An initial positive result [6] in this setting depends on the sensitivity of the function to be learned, which is the maximum amount the function value can change due to an arbitrary change in one input. Using this method requires bounding the sensitivity of the function class to be learned, and then adding noise proportional to the sensitivity. This might be difficult for some functions that are important for machine learning.
∗The majority of this work was done while at UC San Diego.
The contributions of this paper are as follows. First we apply the sensitivity-based method of designing privacy-preserving algorithms [6] to a specific machine learning algorithm, logistic regression.
Then we present a second privacy-preserving logistic regression algorithm. The second algorithm is
based on solving a perturbed objective function, and does not depend on the sensitivity. We prove
that the new method is private in the ε-differential privacy model. We provide learning performance
guarantees for both algorithms, which are tighter for our new algorithm, in cases in which one would
typically apply logistic regression. Finally, we provide experiments demonstrating superior learning
performance of our new method, with respect to the algorithm based on [6]. Our technique may have
broader applications, and we show that it can be applied to certain classes of optimization problems.
1.1 Overview and related work
At first glance, it may seem that anonymizing a data-set, namely stripping it of identifying information about individuals (such as names, addresses, etc.), is sufficient to preserve privacy.
However, this is problematic, because an adversary may have some auxiliary information, which
may even be publicly available, and which can be used to breach privacy. For more details on such
attacks, see [12].
To formally address this issue, we need a definition of privacy which works in the presence of
auxiliary knowledge by the adversary. The definition we use is due to Dwork et al. [6], and has been
used in several applications [4, 11, 2]. We explain this definition and privacy model in more detail
in Section 2.
Privacy and learning. The work most related to ours is [8] and [3]. [8] shows how to find classifiers
that preserve ε-differential privacy; however, their algorithm takes time exponential in d for inputs in R^d. [3] provides a general method for publishing data-sets while preserving ε-differential privacy
such that the answers to all queries of a certain type with low VC-dimension are approximately
correct. However, their algorithm can also be computationally inefficient.
Additional related work. There has been a substantial amount of work on privacy in the literature,
spanning several communities. Much work on privacy has been done in the data-mining community
[1, 7], [14, 10], however the privacy definitions used in these papers are different, and some are susceptible to attacks when the adversary has some prior information. In contrast, the privacy definition
we use avoids these attacks, and is very strong.
2 Sensitivity and the ε-differential privacy model
Before we define the privacy model that we study, we will note a few preliminary points. Both in
that model, and for our algorithm and analyses, we assume that each value in the database is a real
vector with norm at most one. That is, a database contains values x1, ..., xn, where xi ∈ R^d, and ||xi|| ≤ 1 for all i ∈ {1, ..., n}. This assumption is used in the privacy model. In addition,
we assume that when learning linear separators, the best separator passes through the origin. Note
that this is not an assumption that the data is separable, but instead an assumption that a vector's
classification is based on its angle, regardless of its norm.
In both privacy-preserving logistic regression algorithms that we state, the output is a parameter
vector w, which makes prediction SGN(w · x) on a point x. For a vector x, we use ||x|| to denote its Euclidean norm. For a function G(x) defined on R^d, we use ∇G to denote its gradient and ∇²G to denote its Hessian.
Privacy Definition. The privacy definition we use is due to Dwork et al. [6, 5]. In this model, users
have access to private data about individuals through a sanitization mechanism, usually denoted by
M . The interaction between the sanitization mechanism and an adversary is modelled as a sequence
of queries, made by the adversary, and responses, made by the sanitizer. The sanitizer, which is
typically a randomized algorithm, is said to preserve ε-differential privacy if the private value of any one individual in the data set does not affect the likelihood of a specific answer by the sanitizer by more than ε.
More formally, ε-differential privacy can be defined as follows.
Definition 1 A randomized mechanism M provides ε-differential privacy if, for all databases D1 and D2 which differ by at most one element, and for any t,
Pr[M(D1) = t] / Pr[M(D2) = t] ≤ e^ε .
It was shown in [6] that if a mechanism satisfies ε-differential privacy, then an adversary who knows the private value of all the individuals in the data-set, except for one single individual, cannot figure out the private value of the unknown individual, with sufficient confidence, from the responses of the sanitizer. ε-differential privacy is therefore a very strong notion of privacy.
[6] also provides a general method for computing an approximation to any function f while preserving ε-differential privacy. Before we can describe their method, we need a definition.
Definition 2 For any function f with n inputs, we define the sensitivity S(f) as the maximum, over all inputs, of the difference in the value of f when one input of f is changed. That is,
S(f) = max_{(a,a')} |f(x1, ..., xn−1, xn = a) − f(x1, ..., xn−1, xn = a')| .
[6] shows that for any input x1, ..., xn, releasing f(x1, ..., xn) + η, where η is a random variable drawn from a Laplace distribution with mean 0 and standard deviation S(f)/ε, preserves ε-differential privacy.
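As a concrete illustration of this mechanism, here is a one-line sketch of ours; for the density proportional to exp(−ε|η|/S(f)), the Laplace scale parameter is S(f)/ε:

    import numpy as np

    def laplace_release(f_value, sensitivity, epsilon, rng=np.random.default_rng()):
        # Release f(x_1, ..., x_n) + eta with eta ~ Laplace(0, S(f)/epsilon).
        return f_value + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

    # e.g. a counting query has sensitivity 1:
    private_count = laplace_release(f_value=42.0, sensitivity=1.0, epsilon=0.1)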
In [13], Nissim et al. showed that given any input x to a function, and a function f, it is sufficient to draw η from a Laplace distribution with standard deviation SS(f)/ε, where SS(f) is the smoothed sensitivity of f around x. Although this method sometimes requires adding a smaller amount of noise to preserve privacy, in general, the smoothed sensitivity of a function can be hard to compute.
3 A Simple Algorithm
Based on [6], one can come up with a simple algorithm for privacy-preserving logistic regression, which adds noise to the classifier obtained by logistic regression, proportional to its sensitivity. From Corollary 2, the sensitivity of logistic regression is at most 2/(nλ). This leads to Algorithm 1, which obeys the privacy guarantees in Theorem 1.
Algorithm 1:
1. Compute w*, the classifier obtained by regularized logistic regression on the labelled examples (x1, y1), ..., (xn, yn).
2. Pick a noise vector η according to the following density function: h(η) ∝ e^{−(nλε/2)||η||}. To pick such a vector, we choose the norm of η from the Γ(d, 2/(nλε)) distribution, and the direction of η uniformly at random.
3. Output w* + η.
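A minimal Python sketch of Algorithm 1 (ours, not the authors' implementation): step 1 is solved by plain gradient descent on the regularized logistic objective, and the prescribed noise is then added. The solver, its step size, and iteration count are arbitrary choices; the same solver is reused for Algorithm 2 below:

    import numpy as np

    def train_logreg(X, y, lam, b=None, iters=2000, lr=0.5):
        # minimizes (lam/2)||w||^2 + b^T w / n + (1/n) sum_i log(1 + exp(-y_i w^T x_i))
        n, d = X.shape
        b = np.zeros(d) if b is None else b
        w = np.zeros(d)
        for _ in range(iters):
            s = 1.0 / (1.0 + np.exp(y * (X @ w)))   # sigma(-y_i w^T x_i)
            w -= lr * (lam * w + b / n - (X * (y * s)[:, None]).sum(axis=0) / n)
        return w

    def sensitivity_method(X, y, lam, eps, rng=np.random.default_rng()):
        # Algorithm 1: output perturbation; noise norm ~ Gamma(d, 2/(n lam eps))
        n, d = X.shape
        w_star = train_logreg(X, y, lam)
        u = rng.normal(size=d)
        u /= np.linalg.norm(u)                      # uniform direction on the sphere
        return w_star + rng.gamma(shape=d, scale=2.0 / (n * lam * eps)) * u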
Theorem 1 Let (x1, y1), ..., (xn, yn) be a set of labelled points over R^d such that ||xi|| ≤ 1 for all i. Then, Algorithm 1 preserves ε-differential privacy.
Proof: The proof follows by a combination of [6] and Corollary 2, which states that the sensitivity of logistic regression is at most 2/(nλ).
Lemma 1 Let G(w) and g(w) be two convex functions, which are continuous and differentiable at all points. If w1 = argmin_w G(w) and w2 = argmin_w G(w) + g(w), then ||w1 − w2|| ≤ g1/G2. Here, g1 = max_w ||∇g(w)|| and G2 = min_v min_w v^T ∇²G(w)v, for any unit vector v.
The main idea of the proof is to examine the gradient and the Hessian of the functions G and g
around w1 and w2 . Due to lack of space, the full proof appears in the full version of our paper.
Corollary 2 Given a set of n examples x1, ..., xn in R^d, with labels y1, ..., yn, such that for all i, ||xi|| ≤ 1, the sensitivity of logistic regression with regularization parameter λ is at most 2/(nλ).
Proof: We use a triangle inequality and the fact that G2 ≥ λ and g1 ≤ 1/n.
Learning Performance. In order to assess the performance of Algorithm 1, we first try to bound
the performance of Algorithm 1 on the training data. To do this, we need to define some notation.
For a classifier w, we use L(w) to denote the expected loss of w over the data distribution, and L̂(w) to denote the empirical average loss of w over the training data. In other words, L̂(w) = (1/n) Σ_{i=1}^n log(1 + e^{−yi w^T xi}), where (xi, yi), i = 1, ..., n are the training examples.
Further, for a classifier w, we use the notation fλ(w) to denote the quantity (λ/2)||w||² + L(w) and f̂λ(w) to denote the quantity (λ/2)||w||² + L̂(w). Our guarantees on this algorithm can be summarized by the following lemma.
Lemma 3 Given a logistic regression problem with regularization parameter λ, let w1 be the classifier that minimizes f̂λ, and w2 be the classifier output by Algorithm 1, respectively. Then, with probability 1 − δ over the randomness in the privacy mechanism, f̂λ(w2) ≤ f̂λ(w1) + 2d²(1 + λ)log²(d/δ)/(λ²n²ε²).
Due to lack of space, the proof is deferred to the full version.
From Lemma 3, we see that the performance of Algorithm 1 degrades with decreasing λ, and is poor in particular when λ is very small. One question is: can we get a privacy-preserving approximation to logistic regression which has better performance bounds for small λ? To explore this, in the next section, we look at a different algorithm.
4 A New Algorithm
In this section, we provide a new privacy-preserving algorithm for logistic regression. The input to
our algorithm is a set of examples x1, ..., xn over R^d such that ||xi|| ≤ 1 for all i, a set of labels y1, ..., yn for the examples, a regularization constant λ and a privacy parameter ε, and the output is a vector w* in R^d. Our algorithm works as follows.
Algorithm 2:
1. We pick a random vector b from the density function h(b) ∝ e^{−(ε/2)||b||}. To implement this, we pick the norm of b from the Γ(d, 2/ε) distribution, and the direction of b uniformly at random.
2. Given examples x1, ..., xn, with labels y1, ..., yn and a regularization constant λ, we compute w* = argmin_w (λ/2)w^T w + b^T w/n + (1/n) Σ_{i=1}^n log(1 + e^{−yi w^T xi}). Output w*.
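The corresponding sketch of Algorithm 2 (again our own code), reusing the train_logreg routine sketched above; the only differences are the noise scale, now independent of n and λ, and the fact that the noise enters the objective rather than the output:

    def new_method(X, y, lam, eps, rng=np.random.default_rng()):
        # Algorithm 2: objective perturbation; ||b|| ~ Gamma(d, 2/eps), uniform direction
        n, d = X.shape
        u = rng.normal(size=d)
        u /= np.linalg.norm(u)
        b = rng.gamma(shape=d, scale=2.0 / eps) * u
        return train_logreg(X, y, lam, b=b)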
We observe that our method solves a convex programming problem very similar to the logistic
regression convex program, and therefore it has running time similar to that of logistic regression.
In the sequel, we show that the output of Algorithm 2 is privacy preserving.
Theorem 2 Given a set of n examples x1, ..., xn over R^d, with labels y1, ..., yn, where for each i, ||xi|| ≤ 1, the output of Algorithm 2 preserves ε-differential privacy.
Proof: Let a and a' be any two vectors over R^d with norm at most 1, and y, y' ∈ {−1, 1}. For any such (a, y), (a', y'), consider the inputs (x1, y1), ..., (xn−1, yn−1), (a, y) and (x1, y1), ..., (xn−1, yn−1), (a', y'). Then, for any w* output by our algorithm, there is a unique value of b that maps the input to the output. This uniqueness holds because both the regularization function and the loss functions are differentiable everywhere.
Let the values of b for the first and second cases respectively be b1 and b2. Since w* is the value that minimizes both of the optimization problems, the derivative of both optimization functions at w* is 0. This implies that for every b1 in the first case, there exists a b2 in the second case such that:
b1 − ya/(1 + e^{y w*^T a}) = b2 − y'a'/(1 + e^{y' w*^T a'}) .
Since ||a|| ≤ 1, ||a'|| ≤ 1, 0 ≤ 1/(1 + e^{y w*^T a}) ≤ 1, and 0 ≤ 1/(1 + e^{y' w*^T a'}) ≤ 1, for any w*, ||b1 − b2|| ≤ 2. Using the triangle inequality, ||b1|| − 2 ≤ ||b2|| ≤ ||b1|| + 2. Therefore, for any pair (a, y), (a', y'),
Pr[w*|x1, ..., xn−1, y1, ..., yn−1, xn = a, yn = y] / Pr[w*|x1, ..., xn−1, y1, ..., yn−1, xn = a', yn = y'] = h(b1)/h(b2) = e^{−(ε/2)(||b1||−||b2||)} ,
where h(bi) for i = 1, 2 is the density of bi. Since −2 ≤ ||b1|| − ||b2|| ≤ 2, this ratio is at most e^ε. The theorem follows.
We notice that the privacy guarantee for our algorithm does not depend on λ; in other words, for any value of λ, our algorithm is private. On the other hand, as we show in Section 5, the performance of our algorithm does degrade with decreasing λ in the worst case, although the degradation is better than that of Algorithm 1 for λ < 1.
Other Applications. Our algorithm for privacy-preserving logistic regression can be generalized to
provide privacy-preserving outputs for more general convex optimization problems, so long as the
problems satisfy certain conditions. These conditions can be formalized in the theorem below.
Theorem 3 Let X = {x1, ..., xn} be a database containing private data of individuals. Suppose we would like to compute a vector w that minimizes the function F(w) = G(w) + Σ_{i=1}^n l(w, xi), over w ∈ R^d for some d, such that all of the following hold:
1. G(w) and l(w, xi) are differentiable everywhere, and have continuous derivatives
2. G(w) is strongly convex and l(w, xi) are convex for all i
3. ||∇_w l(w, x)|| ≤ κ, for any x.
Let b = B·b̂, where B is drawn from Γ(d, 2κ/ε), and b̂ is drawn uniformly from the surface of a d-dimensional unit sphere. Then, computing w*, where w* = argmin_w G(w) + Σ_{i=1}^n l(w, xi) + b^T w, provides ε-differential privacy.
5 Learning Guarantees
In this section, we show theoretical bounds on the number of samples required by the algorithms to
learn a linear classifier. For the rest of the section, we use the same notation used in Section 3.
First we show that, for Algorithm 2, the values of f̂λ(w2) and f̂λ(w1) are close.
Lemma 4 Given a logistic regression problem with regularization parameter λ, let w1 be the classifier that minimizes f̂λ, and w2 be the classifier output by Algorithm 2, respectively. Then, with probability 1 − δ over the randomness in the privacy mechanism, f̂λ(w2) ≤ f̂λ(w1) + 8d²log²(d/δ)/(λn²ε²).
The proof is in the full version of our paper. As desired, for λ < 1, we have attained a tighter bound using Algorithm 2 than Lemma 3 gives for Algorithm 1.
Now we give a performance guarantee for Algorithm 2.
Theorem 4 Let w0 be a classifier with expected loss L over the data distribution. If the training examples are drawn independently from the data distribution, and if n > C max(||w0||²/ε_g², d log(d/δ)||w0||/(ε_g ε)), for some constant C, then, with probability 1 − δ, the classifier output by Algorithm 2 has loss at most L + ε_g over the data distribution.
Proof: Let w* be the classifier that minimizes fλ(w) over the data distribution, and let w1 and w2 be the classifiers that minimize f̂λ(w) and f̂λ(w) + b^T w/n, respectively. We can use an analysis as in [15] to write that:
L(w2) = L(w0) + (fλ(w2) − fλ(w*)) + (fλ(w*) − fλ(w0)) + (λ/2)||w0||² − (λ/2)||w2||² .    (1)
Notice that from Lemma 4, f̂λ(w2) − f̂λ(w1) ≤ 8d²log²(d/δ)/(λn²ε²). Using this and [16], we can bound the second quantity in Equation 1 as fλ(w2) − fλ(w*) ≤ 16d²log²(d/δ)/(λn²ε²) + O(1/(λn)). By definition of w*, the third quantity in Equation 1 is non-positive. If λ is set to be ε_g/||w0||², then the fourth quantity in Equation 1 is at most ε_g/2. Now, if n > C||w0||²/ε_g² for a suitable constant C, then 1/(λn) ≤ ε_g/4. In addition, if n > C||w0||d log(d/δ)/(ε_g ε), then 16d²log²(d/δ)/(λn²ε²) ≤ ε_g/4. In either case, the total loss of the classifier w2 output by Algorithm 2 is at most L(w0) + ε_g.
The same technique can be used to analyze the sensitivity-based algorithm, using Lemma 3, which yields the following.
Theorem 5 Let w0 be a classifier with expected loss L over the data distribution. If the training examples are drawn independently from the data distribution, and if n > C max(||w0||²/ε_g², d log(d/δ)||w0||/(ε_g ε), d log(d/δ)||w0||²/(ε_g^{3/2} ε)), for some constant C, then, with probability 1 − δ, the classifier output by Algorithm 1 has loss at most L + ε_g over the data distribution.
It is clear that this bound is never lower than the bound for Algorithm 2. Note that for problems in which (non-private) logistic regression performs well, ||w0|| ≥ 1 if w0 has low loss, since otherwise one can show that the loss of w0 would be lower-bounded by log(1 + 1/e). Thus the performance guarantee for Algorithm 2 is significantly stronger than for Algorithm 1, for problems in which one would typically apply logistic regression.
6 Results in Simulation
                      Uniform, margin=0.03   Unseparable (uniform with noise 0.2 in margin 0.1)
Sensitivity method    0.2962 ± 0.0617        0.3257 ± 0.0536
New method            0.1426 ± 0.1284        0.1903 ± 0.1105
Standard LR           0 ± 0.0016             0.0530 ± 0.1105

Figure 1: Test error: mean ± standard deviation over five folds. N = 17,500.
We include some simulations that compare the two privacy-preserving methods, and demonstrate
that using our privacy-preserving approach to logistic regression does not degrade learning performance terribly relative to standard logistic regression. Performance degradation is inevitable,
however, as in both cases, in order to address privacy concerns, we are adding noise, either to the
learned classifier, or to the objective.
In order to obtain a clean comparison between the various logistic regression variants, we first experimented with artificial data that is separable through the origin. Because the classification of a
vector by a linear separator through the origin depends only its angle, not its norm, we sampled the
data from the unit hypersphere. We used a uniform distribution on the hypersphere in 10 dimensions
with zero mass within a small margin (0.03) from the generating linear separator. Then we experimented on uniform data that is not linearly separable. We sampled data from the surface of the unit
ball in 10 dimensions, and labeled it with a classifier through the origin. In the band of margin ? 0.1
with respect to the perfect classifier, we performed random label flipping with probability 0.2. For
our experiments, we used convex optimization software provided by [9].
Figure 1 gives mean and standard deviation of test error over five-fold cross-validation, on 17,500
points. In both simulations, our new method is superior to the sensitivity method, although it incurs more error than standard (non-private) logistic regression. For both problems, we tuned the logistic regression parameter, λ, to minimize the test error of standard logistic regression, using five-fold cross-validation on a holdout set of 10,000 points (the tuned values are λ = 0.01 in both cases).
For each test error computation, the performance of each of the privacy-preserving algorithms was
evaluated by averaging over 200 random restarts, since they are both randomized algorithms.
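A hypothetical replication of the separable setup, using the sketches from Sections 3 and 4 above; the data generation and constants follow the description in the text, but exact error values will differ from Figure 1 since our solver and random seeds are arbitrary:

    import numpy as np

    rng = np.random.default_rng(0)
    d, n = 10, 17500
    w_true = np.zeros(d); w_true[0] = 1.0           # separator through the origin
    X = rng.normal(size=(2 * n, d))
    X /= np.linalg.norm(X, axis=1, keepdims=True)   # uniform on the unit hypersphere
    X = X[np.abs(X @ w_true) > 0.03][:n]            # zero mass within margin 0.03
    y = np.sign(X @ w_true)
    for name, w in [("sensitivity method", sensitivity_method(X, y, 0.01, 0.1, rng)),
                    ("new method",         new_method(X, y, 0.01, 0.1, rng)),
                    ("standard LR",        train_logreg(X, y, 0.01))]:
        print(name, "training error:", np.mean(np.sign(X @ w) != y))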
In Figure 2a)-b) we provide learning curves. We graph the test error after each increment of 1000
points, averaged over five-fold cross-validation. The learning curves reveal that not only does the new method reach a lower final error than the sensitivity method, but it also has better performance at most smaller training set sizes.
[Figure 2: Learning curves: a) Uniform distribution, margin=0.03; b) Unseparable data. Epsilon curves: c) Uniform distribution, margin=0.03; d) Unseparable data. Each panel plots average test error over 5-fold cross-validation (200 random restarts) for our method, the sensitivity method, and (in a-b) standard LR; panels a)-b) vary N (d=10, ε=0.1, λ=0.01), panels c)-d) vary ε (d=10, n=10,000, λ=0.01).]
In order to observe the effect of the level of privacy on the learning performance of the privacy-preserving learning algorithms, in Figure 2c)-d) we vary ε, the privacy parameter to the two algorithms, on both the uniform, low-margin data, and the unseparable data. As per the definition of ε-differential privacy in Section 2, strengthening the privacy guarantee corresponds to reducing ε. Both algorithms' learning performance degrades in this direction. For the majority of values of ε that we tested, the new method is superior in managing the tradeoff between privacy and learning performance. For very small ε, corresponding to extremely stringent privacy requirements, the sensitivity method performs better but also has a prediction accuracy close to chance, which is not useful for machine learning purposes.
7 Conclusion
In conclusion, we show two ways to construct a privacy-preserving linear classifier through logistic
regression. The first one uses the methods of [6], and the second one is a new algorithm. Using the ε-differential privacy definition of Dwork et al. [6], we prove that our new algorithm is
privacy-preserving. We provide learning performance guarantees for the two algorithms, which are
tighter for our new algorithm, in cases in which one would typically apply logistic regression. In
simulations, our new algorithm outperforms the method based on [6].
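As an illustration of the first construction, the following sketch implements output perturbation in the spirit of [6]: train regularized logistic regression, then add noise calibrated to the sensitivity of the minimizer. The sensitivity value 2/(nλ) and the use of scikit-learn are assumptions on our part that match the description above; this is a sketch, not the authors' code.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def private_lr_sensitivity(X, y, lam, eps, rng):
    """Output perturbation: return w* + eta, where eta has density
    proportional to exp(-eps * n * lam * ||eta|| / 2).

    Assumes the L2 sensitivity of the regularized LR minimizer is 2 / (n * lam),
    so the noise norm follows Gamma(d, scale = 2 / (n * lam * eps)).
    """
    n, d = X.shape
    # sklearn's C corresponds to 1 / (n * lam) for the averaged-loss objective.
    clf = LogisticRegression(C=1.0 / (n * lam), fit_intercept=False).fit(X, y)
    w = clf.coef_.ravel()
    direction = rng.standard_normal(d)
    direction /= np.linalg.norm(direction)
    radius = rng.gamma(shape=d, scale=2.0 / (n * lam * eps))
    return w + radius * direction
```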
Our work reveals an interesting connection between regularization and privacy: the larger the regularization constant, the less sensitive the logistic regression function is to any one individual example, and, as a result, the less noise one needs to add to make it privacy-preserving. Therefore,
regularization not only prevents overfitting, but also helps with privacy, by making the classifier less
sensitive. An interesting future direction would be to explore whether other methods that prevent
overfitting also have such properties.
Other future directions would be to apply our techniques to other commonly used machine-learning
algorithms, and to explore whether they can be applied to more general optimization problems.
Theorem 3 shows that our method applies to a class of optimization problems with certain restrictions;
an open question is whether some of these restrictions can be removed.
Acknowledgements. We thank Sanjoy Dasgupta and Daniel Hsu for several pointers.
References
[1] R. Agrawal and R. Srikant. Privacy-preserving data mining. SIGMOD Rec., 29(2):439–450, 2000.
[2] B. Barak, K. Chaudhuri, C. Dwork, S. Kale, F. McSherry, and K. Talwar. Privacy, accuracy, and consistency too: a holistic solution to contingency table release. In PODS, pages 273–282, 2007.
[3] A. Blum, K. Ligett, and A. Roth. A learning theory approach to non-interactive database privacy. In R. E. Ladner and C. Dwork, editors, STOC, pages 609–618. ACM, 2008.
[4] K. Chaudhuri and N. Mishra. When random sampling preserves privacy. In C. Dwork, editor, CRYPTO, volume 4117 of Lecture Notes in Computer Science, pages 198–213. Springer, 2006.
[5] C. Dwork. Differential privacy. In M. Bugliesi, B. Preneel, V. Sassone, and I. Wegener, editors, ICALP (2), volume 4052 of Lecture Notes in Computer Science, pages 1–12. Springer, 2006.
[6] C. Dwork, F. McSherry, K. Nissim, and A. Smith. Calibrating noise to sensitivity in private data analysis. In Theory of Cryptography Conference, pages 265–284, 2006.
[7] A. Evfimievski, J. Gehrke, and R. Srikant. Limiting privacy breaches in privacy preserving data mining. In PODS, pages 211–222, 2003.
[8] S. P. Kasiviswanathan, H. K. Lee, K. Nissim, S. Raskhodnikova, and A. Smith. What can we learn privately? In Proc. of Foundations of Computer Science, 2008.
[9] C. T. Kelley. Iterative Methods for Optimization. SIAM, 1999.
[10] A. Machanavajjhala, J. Gehrke, D. Kifer, and M. Venkitasubramaniam. l-diversity: Privacy beyond k-anonymity. In ICDE, page 24, 2006.
[11] F. McSherry and K. Talwar. Mechanism design via differential privacy. In FOCS, pages 94–103, 2007.
[12] A. Narayanan and V. Shmatikov. Robust de-anonymization of large sparse datasets. In IEEE Symposium on Security and Privacy, pages 111–125. IEEE Computer Society, 2008.
[13] K. Nissim, S. Raskhodnikova, and A. Smith. Smooth sensitivity and sampling in private data analysis. In D. S. Johnson and U. Feige, editors, STOC, pages 75–84. ACM, 2007.
[14] P. Samarati and L. Sweeney. Protecting privacy when disclosing information: k-anonymity and its enforcement through generalization and suppression. In Proc. of the IEEE Symposium on Research in Security and Privacy, 1998.
[15] S. Shalev-Shwartz and N. Srebro. SVM optimization: inverse dependence on training set size. In International Conference on Machine Learning (ICML), 2008.
[16] K. Sridharan, N. Srebro, and S. Shalev-Shwartz. Fast rates for regularized objectives. In Neural Information Processing Systems, 2008.
Sparse Signal Recovery Using Markov Random Fields
Volkan Cevher
Rice University
[email protected]
Marco F. Duarte
Rice University
[email protected]
Chinmay Hegde
Rice University
[email protected]
Richard G. Baraniuk
Rice University
[email protected]
Abstract
Compressive Sensing (CS) combines sampling and compression into a single sub-Nyquist linear measurement process for sparse and compressible signals. In this
paper, we extend the theory of CS to include signals that are concisely represented in terms of a graphical model. In particular, we use Markov Random Fields
(MRFs) to represent sparse signals whose nonzero coefficients are clustered. Our
new model-based recovery algorithm, dubbed Lattice Matching Pursuit (LaMP),
stably recovers MRF-modeled signals using many fewer measurements and computations than the current state-of-the-art algorithms.
1 Introduction
The Shannon/Nyquist sampling theorem tells us that in order to preserve information when uniformly sampling a signal we must sample at least two times faster than its bandwidth. In many
important and emerging applications, the resulting Nyquist rate can be so high that we end up with
too many samples and must compress in order to store or transmit them. In other applications, including imaging systems and high-speed analog-to-digital converters, increasing the sampling rate
or density beyond the current state-of-the-art is very expensive. A transform compression system
reduces the effective dimensionality of an N -dimensional signal by re-representing it in terms of a
sparse expansion in some basis (for example, the discrete cosine transform for JPEG). By sparse we
mean that only K ≪ N of the basis coefficients are nonzero.
The new theory of compressive sensing (CS) combines sampling and compression into a single sub-Nyquist linear measurement process for sparse signals [1, 2]. In CS, we measure not periodic signal
samples but rather inner products with M < N known measurement vectors; random measurement
vectors play a starring role. We then recover the signal by searching for the sparsest signal that
agrees with the measurements. Research in CS to date has focused on reducing both the number
of measurements M (as a function of N and K) and on reducing the computational complexity of
the recovery algorithm. Today's state-of-the-art CS systems can recover K-sparse and more general
compressible signals using M = O(K log(N/K)) measurements using polynomial-time linear
programming or greedy algorithms.
While such sub-Nyquist measurement rates are impressive, our contention in this paper is that for
CS to truly live up to its name, it must more fully leverage concepts from state-of-the-art compression
algorithms. In virtually all such algorithms, the key ingredient is a signal model that goes beyond
simple sparsity by providing a model for the basis coefficient structure. For instance, JPEG does not
only use the fact that most of the DCT coefficients of a natural image are small. Rather, it also exploits the fact
that the values and locations of the large coefficients have a particular structure that is characteristic
of natural images. Coding this structure using an appropriate model enables JPEG and other similar
algorithms to compress images close to the maximum amount possible, and significantly better than
a naive coder that just assigns bits to each large coefficient independently.
In this paper, we extend the theory of CS to include signals that are concisely represented in terms
of a graphical model [3]. We use Markov Random Fields (MRFs) to represent sparse signals whose
nonzero coefficients also cluster together. Our new model-based recovery algorithm, dubbed Lattice
Matching Pursuit (LaMP), performs rapid and numerically stable recovery of MRF-modeled signals
using far fewer measurements than standard algorithms.
The organization of the paper is as follows. In Sections 2 and 3, we briefly review the CS and MRF
theories. We develop LaMP in Section 4 and present experimental results in Section 5 using both
simulated and real world data. We conclude by offering our perspective on the future directions of
model-based CS research in Section 6.
2 Compressive sensing: From sparsity to structured sparsity
Sparse signal recovery. Any signal x ∈ R^N can be represented in terms of N coefficients {θ_i}
in a basis {ψ_i}_{i=1}^N; stacking the ψ_i as columns into the N × N matrix Ψ, we can write succinctly
that x = Ψθ. We say that x has a sparse representation if only K ≪ N entries of θ are nonzero,
and we denote by Ω_K the set of \binom{N}{K} possible supports for such K-sparse signals. We say that x is
compressible if the sorted magnitudes of the entries of θ decay rapidly enough that it can be well
approximated as K-sparse.
In Compressive Sensing (CS), the signal is not acquired by measuring x or θ directly. Rather, we
measure the M < N linear projections y = Φx = ΦΨθ using the M × N matrix Φ. In the
sequel, without loss of generality, we focus on two-dimensional image data and assume that Ψ = I
(the N × N identity matrix), so that x = θ. The most commonly used criterion for evaluating the
quality of a CS measurement matrix is the restricted isometry property (RIP). A matrix Φ satisfies
the K-RIP if there exists a constant δ_K > 0 such that for all K-sparse vectors x,

    (1 − δ_K) ‖x‖_2 ≤ ‖Φx‖_2 ≤ (1 + δ_K) ‖x‖_2.    (1)

The recovery of the set of significant coefficients θ_i is achieved using optimization: we search for
the sparsest θ that agrees with the measurements y. While in principle recovery is possible using a
matrix that has the 2K-RIP with δ_2K < 1, such an optimization is combinatorially complex (NP-complete) and numerically unstable. If we instead use a matrix that has the 3K-RIP with δ_3K < 1/2,
then numerically stable recovery is possible in polynomial time using either a linear program [1, 2]
or a greedy algorithm [4]. Intriguingly, a random Gaussian or Bernoulli matrix works with high
probability, leading to a randomized acquisition protocol instead of uniform sampling.
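The near-isometry in (1) is easy to observe numerically. The following sketch, our own illustration rather than anything from the paper, draws a Gaussian Φ with entries of variance 1/M and records how far ‖Φx‖_2/‖x‖_2 strays from 1 over random K-sparse signals; the problem sizes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
N, K, M = 1000, 20, 200                          # illustrative sizes; M = O(K log(N/K))
Phi = rng.standard_normal((M, N)) / np.sqrt(M)   # so E[||Phi x||^2] = ||x||^2

ratios = []
for _ in range(500):
    x = np.zeros(N)
    support = rng.choice(N, size=K, replace=False)
    x[support] = rng.standard_normal(K)
    ratios.append(np.linalg.norm(Phi @ x) / np.linalg.norm(x))

# The spread of the ratios around 1 gives an empirical lower bound on delta_K.
print(f"min {min(ratios):.3f}, max {max(ratios):.3f}")
```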
Structured sparsity. While many natural and manmade signals and images can be described to
first order as sparse or compressible, their sparse supports (sets of nonzero coefficients) often have an
underlying order. This order plays a central role in the transform compression literature, but it has
barely been explored in the CS context [5, 6]. The theme of this paper is that by exploiting a priori
information on coefficient structure in addition to signal sparsity, we can make CS better, stronger,
and faster.
Figure 1 illustrates a real-world example of structured sparse support in a computer vision application. Figure 1(b) is a background subtracted image computed from a video sequence of a parking
lot with two moving people (one image frame is shown in Figure 1(a)). The moving people form
the foreground (white in (b)), while the rest of the scene forms the background (black in (b)). The
background subtraction was computed from CS measurements of the video sequence. Background
subtracted images play a fundamental role in making inferences about objects and activities in a
scene and, by nature, they have structured spatial sparsity corresponding to the foreground innovations. In other words, compared to the scale of the scene, the foreground innovations are usually not
only sparse but also clustered in a distinct way, e.g., corresponding to the silhouettes of humans and
vehicles. Nevertheless, this clustering property is not exploited by current CS recovery algorithms.
Probabilistic RIP. The RIP treats all possible K-sparse supports equally. However, if we incorporate a probabilistic model on our signal supports and consider only the signal supports with the
highest likelihoods, then we can potentially do much better in terms of the number of measurements
required for stable recovery.
We say that Φ satisfies the (K, γ)-probabilistic RIP (PRIP) if there exists a constant δ_K > 0 such
that, for a K-sparse signal x generated by a specified probabilistic signal model, (1) holds with
probability at least 1 − γ over the signal probability space. We propose a preliminary result on the
number of random measurements needed under this new criterion; this is a direct consequence of
Theorem 5.2 of [8]. (See also [9] for related results.)
[Figure 1: (a) A camera surveillance image and (b) the background-subtracted image recovered using compressive measurements of the scene. The background-subtracted image has resolution N = 240 × 320 and sparsity K = 390. (c) A random K = 390-sparse image in N = 240 × 320 dimensions. The probability of image (b) under the Ising model is approximately 10^856 times greater than the probability of image (c).]
Lemma 1. Suppose that M, N, and γ ∈ [0, 1] are given and that the signal x is generated by
a known probabilistic model. Let Ω_{K,γ} ⊆ Ω_K denote the smallest set of supports for which the
probability that a K-sparse signal x has supp(x) ∉ Ω_{K,γ} is less than γ, and denote D = |Ω_{K,γ}|.
If Φ is a matrix with normalized i.i.d. Gaussian or Bernoulli/Rademacher (±1) random entries,
then Φ has the (K, γ)-PRIP with probability at least 1 − e^{−c_2 M} if M ≥ c_1 (K + log D), where
c_1, c_2 > 0 depend only on the PRIP constant δ_K.
To illustrate the significance of the above lemma, consider the following probabilistic model for
an N-dimensional, K-sparse signal. We assume that the locations of the nonzeros follow a homogeneous Poisson process with rate λ = −log(γ/K) N^{−β}, where β ≤ 1. Thus, a particular
nonzero coefficient occurs within a distance of N^β of its predecessor with probability 1 − γ/K. We
determine the size of the likely K-sparse support set Ω_{K,γ} under this particular signal model using
a simple counting argument. The location of the first nonzero coefficient is among the first N^β
indices with probability 1 − γ/K. After fixing the location of the first coefficient, the location of
the second coefficient is among the next N^β indices immediately following the first location with
probability 1 − γ/K. Proceeding this way, after the locations of the first j − 1 coefficients have been
fixed, the j-th nonzero coefficient is among N^β candidate locations with probability
1 − γ/K. In this way, we obtain a set of supports Ω_{K,γ} of size N^{βK} that will occur with probability
(1 − γ/K)^K > 1 − γ. Thus, for the (K, γ)-PRIP to hold for a random matrix, the matrix must have
M = cK(1 + β log N) rows, as compared to the cK log(N/K) rows required for the standard
K-RIP to hold. When β is on the order of (log N)^{−1}, the number of measurements required and the
complexity of the solution method grow essentially linearly in K, which is a considerable improvement over the best possible M = O(K log(N/K)) measurements required without such a priori
information.
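For intuition, the clustered-support model behind this counting argument can be sampled directly by drawing inter-arrival gaps from an exponential distribution with the stated rate. The sketch below is our own illustration; the values of γ and β are arbitrary.

```python
import numpy as np

def sample_clustered_support(N, K, gamma=0.1, beta=0.5, rng=None):
    """Sample a K-sparse support whose nonzero locations follow a homogeneous
    Poisson process with rate lam = -log(gamma / K) * N**(-beta), so that each
    nonzero falls within N**beta of its predecessor w.p. 1 - gamma / K."""
    rng = rng or np.random.default_rng()
    lam = -np.log(gamma / K) * N ** (-beta)
    gaps = rng.exponential(scale=1.0 / lam, size=K)
    locations = np.ceil(np.cumsum(gaps)).astype(int)
    return locations[locations < N]  # may return fewer than K indices near the boundary

support = sample_clustered_support(N=10000, K=50)
print(support[:10], "max gap:", np.diff(support).max() if len(support) > 1 else None)
```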
3 Graphical models for compressive sensing
Clustering of the nonzero coefficients in a sparse signal representation can be realistically captured
by a probabilistic graphical model such as a Markov random field (MRF); in this paper we will
focus for concreteness on the classical Ising model [10].
Support model. We begin with an Ising model for the signal support. Suppose we have a K-sparse
signal x ∈ R^N whose support is represented by s ∈ {−1, 1}^N such that s_i = −1 when x_i = 0 and
s_i = 1 when x_i ≠ 0. The probability density function (PDF) of the signal support can be modeled
using a graph G_s = (V_s, E_s), where V_s = {1, . . . , N} denotes a set of N vertices (one for each
of the support indices) and E_s denotes the set of edges connecting support indices that are spatial
neighbors (see Figure 2(a)). The contribution of the interaction between two elements {s_i, s_j} in
the support of x is controlled by the coefficient λ_{ij} > 0. The contribution of each element s_i is
controlled by a coefficient λ_i, resulting in the following PDF for the sparse support s:

    p(s; λ) = exp{ ∑_{(i,j)∈E_s} λ_{ij} s_i s_j + ∑_{i∈V_s} λ_i s_i − Z_s(λ) },    (2)
where Z_s(λ) is a strictly convex partition function with respect to λ that normalizes the distribution
so that it integrates to one. The parameter vector λ quantifies our prior knowledge regarding the
signal support s and consists of the edge interaction parameters λ_{ij} and the vertex bias parameters
λ_i. These parameters can be learned from data using ℓ1-minimization techniques [11].
[Figure 2: Example graphical models: (a) Ising model for the support, (b) Markov random field model for the resulting coefficients, (c) Markov random field with CS measurements.]
The Ising model enforces coefficient clustering. For example, compare the clustered sparsity of the
real background subtracted image in Figure 1(b) with the dispersed "independent" sparsity of the
random image in Figure 1(c). While both images (b) and (c) are equally sparse, under a trained Ising
model (λ_{ij} = 0.45 and λ_i = 0), the image (b) is approximately 10^856 times more likely than
image (c).
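The likelihood gap between clustered and scattered supports is easy to reproduce: for two supports under the same λ, the partition function Z_s(λ) cancels, so the log-probability ratio is just the difference of the interaction sums in (2). The sketch below is our own small-scale illustration on a 4-connected lattice with λ_{ij} = 0.45 and λ_i = 0; the lattice size and block shape are arbitrary.

```python
import numpy as np

def ising_interaction(s, lam_edge=0.45):
    """Sum of lam_ij * s_i * s_j over 4-connected lattice edges; s is a 2D array in {-1, +1}.
    With lam_i = 0, log p(s) = ising_interaction(s) - Z, and Z cancels in ratios."""
    horiz = (s[:, :-1] * s[:, 1:]).sum()
    vert = (s[:-1, :] * s[1:, :]).sum()
    return lam_edge * (horiz + vert)

rng = np.random.default_rng(2)
side, k = 64, 390
clustered = -np.ones((side, side), dtype=int)
clustered[20:33, 20:50] = 1                      # a 13 x 30 block: 390 clustered nonzeros
scattered = -np.ones((side, side), dtype=int)
scattered.flat[rng.choice(side * side, k, replace=False)] = 1

log_ratio = ising_interaction(clustered) - ising_interaction(scattered)
print(f"log10 probability ratio ~ {log_ratio / np.log(10):.0f}")
```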
Signal model. Without loss of generality, we focus on 2D images that are sparse in the space
domain, as in Figure 1(b). Leveraging the Ising support model from above, we apply the MRF
graphical model in Figure 2(b) for the pixel coefficient values. Under this model, the support is
controlled by an Ising model, and the signal values are independent given the support. We now
develop a joint PDF for the image pixel values x, the support labels s, and the CS measurements y.
We begin with the support PDF p(s) from (2) and assume that we are equipped with a sparsity-promoting PDF p(x|s) for x given s. The most commonly used PDF is the Laplacian density (which
is related to the ℓ1-norm of x); however, other reference priors, such as generalized Gaussians that
are related to the ℓp-norm of x, p < 1, can be more effective [12]. We assume that the measurements
y are corrupted by i.i.d. Gaussian noise, i.e., p(y|x) = N(y | Φx, σ^2 I), where σ^2 is the unknown
noise variance.
From Figure 2(c), it is easy to show that, given the signal x, the signal support s and the compressive
measurements y are independent using the D-separation property of graphs [13]. Hence, the joint
distribution of the vertices in the graph in Figure 2(b) can be written as
    p(z) = p(s, x, y) = p(s, x) p(y|s, x) = p(s) p(x|s) p(y|x),    (3)

where z = [s^T, x^T, y^T]^T. Then, (3) can be explicitly written as
    p(z) ∝ exp{ ∑_{(i,j)∈E_s} λ_{ij} s_i s_j + ∑_{i∈V_s} [λ_i s_i + log p(x_i|s_i)] − (1/(2σ^2)) ‖y − Φx‖_2^2 }.    (4)
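Putting the pieces together, the unnormalized log of (4) can be evaluated directly. In the sketch below we plug in a Laplacian p(x_i | s_i = 1) with an assumed scale b and treat the δ(x_i) factor as a hard constraint; both choices are our own concrete stand-ins for the priors discussed above.

```python
import numpy as np

def joint_log_pdf(s, x, y, Phi, edges, lam_edge, lam_vert, sigma, b=1.0, tol=1e-8):
    """Unnormalized log p(s, x, y) from Eq. (4). `edges` is a list of index pairs,
    s in {-1,+1}^N; p(x_i|s_i=1) is Laplacian(b), p(x_i|s_i=-1) forces x_i ~ 0."""
    ising = sum(lam_edge * s[i] * s[j] for i, j in edges) + np.sum(lam_vert * s)
    on = s == 1
    log_px = -np.sum(np.abs(x[on])) / b - on.sum() * np.log(2 * b)
    if np.any(np.abs(x[~on]) > tol):   # a zero-state coefficient with nonzero value
        return -np.inf
    resid = y - Phi @ x
    log_py = -resid @ resid / (2 * sigma ** 2)
    return ising + log_px + log_py
```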
4 Lattice matching pursuit
Using the coefficient graphical model from Section 3, we are now equipped to develop a new modelbased CS signal recovery algorithm. Lattice Matching Pursuit (LaMP) is a greedy algorithm for
signals on 2D lattices (images) in which the likelihood of the signal support is iteratively evaluated
and optimized under an Ising model. By enforcing a graphical model, (i) partial knowledge of
the sparse signal support greatly decreases the ambiguity and thus size of the search space for the
remaining unknown part, accelerating the speed of the algorithm; and (ii) signal supports of the same
size but different structures result in different likelihoods (recall Figure 1(b) and (c)), decreasing the
required number of CS measurements and increasing the numerical stability.
Algorithm. The LaMP pseudocode is given in Algorithm 1. Similar to other greedy recovery algorithms such as matching pursuit and CoSaMP [4], each iteration of LaMP starts by estimating a
data residual r^{(k)} given the current estimate of the signal x^{(k−1)} (Step 1). After calculating the
residual, LaMP calculates a temporary signal estimate (Step 2), denoted by x_t^{(k)}. This signal estimate is the sum of the previous estimate x^{(k−1)} and Φ^T r^{(k)}, accounting for the current residual.
Using this temporary signal estimate as a starting point, LaMP then maximizes the likelihood (4)
over the support via optimization (Step 3). This can be efficiently solved using graph cuts with
Algorithm 1: LaMP (Lattice Matching Pursuit)
Input: y, Φ, x^{(0)} = 0, s^{(0)} = −1, and K̃ (desired sparsity).
Output: A K̃-sparse approximation x of the acquired signal.
Algorithm:
repeat {Matching Pursuit Iterations}
  Step 1. Calculate the data residual:
    r^{(k)} = y − Φ x^{(k−1)};
  Step 2. Propose a temporary target signal estimate:
    x_t^{(k)} = Φ^T r^{(k)} + x^{(k−1)};
  Step 3. Determine the MAP estimate of the support using graph cuts:
    s^{(k)} = argmax_{s ∈ {−1,+1}^N} [ ∑_{(i,j)∈E_s} λ_{ij} s_i s_j + ∑_{i∈V_s} ( λ_i s_i + log p([x_t^{(k)}]_i | s_i) ) ];
  Step 4. Estimate the target signal:
    t = 0;  t[s^{(k)} = 1] = (Φ[:, s^{(k)} = 1])† y;  x^{(k)} = Prune(t; K̃);
  Step 5. Iterate:
    k = k + 1;
until maximum iterations or ‖r^{(k)}‖_2 < threshold;
Return x = x^{(k)}.
[Figure 3: Geometrical approximations of p(x_i | s_i = ±1) and log p(x_i | s_i = ±1), parameterized by the heights γ_1, γ_2, γ_3, the slack parameter τ, and the dynamic range L, together with the corresponding utility functions U_{−1}(x_i; τ) and U_{+1}(x_i; τ).]
O(N) complexity [14]. In particular, for planar Ising models, the global minimum of the problem
can be obtained. Once a likely signal support s^{(k)} is obtained in Step 3, LaMP obtains an updated signal estimate x^{(k)} using least squares with the selected columns of the measurement matrix,
Φ[:, s^{(k)} = 1], and pruning back to the largest K̃ signal coefficients (Step 4). Hence, the parameter
K̃ controls the sparsity of the approximation. In Step 4, a conjugate gradient method is used for
efficiently performing the product by a pseudoinverse. If the graphical model includes dependencies
between the signal values x_i, we then replace the pseudoinverse product by a belief propagation
algorithm to efficiently solve for the signal values x^{(k)} within Step 4.
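A compact version of the LaMP loop looks as follows. We leave the Step 3 MAP problem to a caller-supplied `support_map` routine (graph cuts in the paper; a surrogate such as iterated conditional modes would also fit this interface), so this is a simplified sketch of Algorithm 1 under that assumption, not the authors' implementation.

```python
import numpy as np

def lamp(y, Phi, support_map, K_tilde, max_iter=20, tol=1e-6):
    """Sketch of Algorithm 1 (LaMP). `support_map(x_t)` must return the MAP support
    estimate in {-1,+1}^N for the temporary signal x_t (Step 3)."""
    M, N = Phi.shape
    x = np.zeros(N)
    for _ in range(max_iter):
        r = y - Phi @ x                      # Step 1: data residual
        x_t = Phi.T @ r + x                  # Step 2: temporary estimate
        s = support_map(x_t)                 # Step 3: MAP support (graph cuts / ICM)
        idx = np.flatnonzero(s == 1)
        t = np.zeros(N)
        if idx.size:                         # Step 4: least squares on selected columns
            t[idx] = np.linalg.lstsq(Phi[:, idx], y, rcond=None)[0]
        keep = np.argsort(np.abs(t))[-K_tilde:]   # prune back to the K_tilde largest
        x = np.zeros(N)
        x[keep] = t[keep]
        if np.linalg.norm(y - Phi @ x) < tol:
            break
    return x
```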
Signal log-likelihood log p(x|s). The correct signal PDF p(x|s) to use given the support is
problem-dependent. Here, we provide one approximation that mimics ℓ0 minimization for CS
recovery under the signal graphical model in Figure 2(c); we also use this in our experiments in Section 5. The state s_i = 1 represents a nonzero coefficient; thus, all nonzero values of x_i should
have equal probability, and the value x_i = 0 should have zero probability. Similarly, the state
s_i = −1 represents a zero-valued coefficient; thus, the mass of its probability function is concentrated at zero. Hence, we use the approximations, for x_i ∈ [−L, L] (a restricted dynamic range):
p(x_i | s_i = −1) = δ(x_i) and p(x_i | s_i = 1) = (1 − δ(x_i))/(2L). However, the optimization over
the joint PDF in (4) requires a "smoothing" of these PDFs for two reasons: (i) to obtain robustness
against noise and numerical issues; and (ii) to extend the usage of the algorithm from sparse to
compressible signals.
We approximate log p(x_i | s_i = ±1) using the parametric form illustrated in Figure 3. Here, the
constant τ is a slack parameter that separates large and small signal coefficients, and γ_1, γ_2, and γ_3 are
chosen according to τ and L to normalize each PDF. We also denote a = γ_3 L, with a ≤ 1. Using
the normalization constraints, it is possible to show that, as the dynamic range increases,

    lim_{L→∞} − (log γ_2)/(log γ_1) = 1/a    and    lim_{L→∞} − (log γ_3)/(log γ_1) = 0.
Hence, we approximate the likelihoods using utility functions U_{s_i}(x; τ) that follow this form.
The optimization problem used by Step 3 of LaMP to determine the support is then approximately
equivalent to the following problem:

    s^{(k+1)} = argmax_{s ∈ {−1,+1}^N} ∑_{(i,j)∈E_s} λ̃_{ij} s_i s_j + ∑_{i∈V_s} [ λ̃_i s_i + U_{s_i}([x_t^{(k+1)}]_i ; τ) ],    (5)

where λ̃ = −λ / log γ_1. If the signal values are known to be positive, then the definitions of U_{s_i} can
be changed to enforce positivity during estimation. The choice of λ̃_{ij} is related to the desired
sparseness on the lattice structure.
To enforce a desired sparsity K̃ on the lattice structure, we apply statistical mechanics results on
the 2D Ising model and choose λ̃_{ij} = 0.5 arcsinh((1 − m^8)^{−1/4}), where m is the average
magnetization. In our recovery problem, the average magnetization and the desired signal sparsity
have a simple relationship: m = [(+1) · K̃ + (−1) · (N − K̃)] / N. We set λ̃_i = 0 unless there
is prior information on the signal support. The threshold τ is chosen adaptively at each iteration by
sorting the magnitudes of the temporary target signal estimate coefficients and selecting the 5K̃-th
largest as the threshold; this gives preference to the largest 5K̃ coefficients to attain states s_i = 1,
unless the cost incurred by enforcing the lattice structure is too large. The pruning operation in
Step 4 of LaMP then enforces the desired sparsity K̃.
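Both parameter rules are mechanical to implement; the sketch below is simply our transcription of them (the arcsinh formula and the 5K̃ thresholding rule), with no claims beyond what is stated above.

```python
import numpy as np

def lattice_edge_weight(K_tilde, N):
    """Edge weight from the desired sparsity via the 2D Ising magnetization
    m = (K_tilde - (N - K_tilde)) / N and lam = 0.5 * arcsinh((1 - m**8) ** (-0.25))."""
    m = (2 * K_tilde - N) / N
    return 0.5 * np.arcsinh((1 - m ** 8) ** (-0.25))

def adaptive_threshold(x_t, K_tilde):
    """tau = magnitude of the (5 * K_tilde)-th largest temporary-estimate coefficient."""
    mags = np.sort(np.abs(x_t))[::-1]
    return mags[min(5 * K_tilde, len(mags)) - 1]
```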
5 Experiments
We now use several numerical simulations to demonstrate that for spatially clustered sparse signals,
which have high likelihood under our MRF model, LaMP requires far fewer measurements and
fewer computations for robust signal recovery than state-of-the-art greedy and optimization techniques.1
Experiment 1: Shepp-Logan phantom. Figure 4 (top left) shows the classical N = 100 × 100
Shepp-Logan phantom image. Its sparsity in the space domain is K = 1740. We obtained
compressive measurements of this image, which were then immersed in additive white Gaussian
noise to an SNR of 10dB. The top row of Figure 4 illustrates the iterative image estimates obtained
using LaMP from just M = 2K = 3480 random Gaussian measurements of the noisy target.
Within 3 iterations, the support of the image is accurately determined; convergence occurs at the 5th
iteration.
Figure 4 (bottom) compares LaMP to CoSaMP [4], a state-of-the-art greedy recovery algorithm, and
fixed-point continuation (FPC) [17], a state-of-the-art ℓ1-norm minimization recovery algorithm using the same set of measurements. Despite the presence of high noise (10 dB SNR), LaMP perfectly
recovers the signal support from only a small number of measurements. It also outperforms both
CoSaMP and FPC in terms of speed.
Experiment 2: Numerical stability. We demonstrate LaMP's stability in the face of substantial
measurement noise. We tested both LaMP and FPC with a number of measurements that gave close
to perfect recovery of the Shepp-Logan phantom in the presence of a small amount of noise; for
LaMP, setting M = 1.7K suffices, while FPC requires M = 4K. We then studied the degradation
of the recovery quality as a function of the noise level for both algorithms. For reference, a value
of σ = 20 corresponds to a measurement-to-noise ratio of just 6 dB. The results in Figure 5(a)
demonstrate that LaMP is stable for a wide range of measurement noise levels. Indeed, the rate of
increase of the LaMP recovery error as a function of the noise variance σ (a measure of the stability
to noise) is comparable to that of FPC, while using far fewer measurements.
Experiment 3: Performance on real background subtracted images. We test the recovery
algorithms over a set of background subtraction images. The images were obtained from a test
video sequence, one image frame of which is shown in Figure 1, by choosing at random two frames
from the video and subtracting them in a pixel-wise fashion. The large-valued pixels in the resulting
images are spatially clustered and thus are well-modeled by the MRF enforced by LaMP. We created
100 different test images; for each image, we define the sparsity K as the number of coefficients
1 We use the GCOptimization package [14–16] to solve the support recovery problem in Step 3 of Algorithm 1 in our implementation of LaMP.
[Figure 4: Top: LaMP recovery of the Shepp-Logan phantom (N = 100 × 100, K = 1740, SNR = 10 dB) from M = 2K = 3480 noisy measurements; panels show the noise-free target and LaMP iterations #1 through #5. Bottom: recoveries from LaMP (0.9 s), CoSaMP (6.2 s), and FPC (6.5 s), including running times on the same computer.]
[Figure 5: Performance of LaMP. (a) Maximum recovery error over 1000 noise iterations as a function of the input noise variance σ, for LaMP (M = 1.7K) and FPC (M = 4K and M = 5K); LaMP has the same robustness to noise as the FPC algorithm. (b) Performance over the background subtraction dataset of 100 images, as a function of M/K; LaMP achieves the best performance at M ≈ 2.5K, while both FPC and CoSaMP require M > 5K to achieve the same performance.]
that contain 97% of the image energy. We then performed recovery of the image using the LaMP,
CoSaMP, and FPC algorithms under varying number of measurements M , from 0.5K to 5K. An
example recovery is shown in Figure 6.
For each test and algorithm, we measured the magnitude of the estimation error normalized by the
magnitude of the original image. Figure 5(b) shows the mean and standard deviations of the normalized error magnitudes for the three algorithms. LaMP's graphical model reduces the number of
measurements necessary for acceptable recovery quality to M ≈ 2.5K, while the standard algorithms require M > 5K measurements to achieve the same quality.
6 Conclusions
We have presented an initial study of model-based CS signal recovery using an MRF model to capture the structure of the signal's sparse coefficients. As demonstrated in our numerical simulations,
for signals conforming to our model, the resulting LaMP algorithm requires significantly fewer CS
measurements, has lower computational complexity, and has equivalent numerical stability to the
current state-of-the-art algorithms. We view this as an initial step toward harnessing the power of
modern compression and data modeling methods for CS reconstruction.
Much work needs to be done, however. We are working to precisely quantify the reduction in the
required number of measurements (our numerical experiments suggest that M = O(K) is sufficient
for stable recovery) and computations. We also assert that probabilistic signal models hold the key
to formulating inference problems in the compressive measurement domain since in many signal
processing applications, signals are acquired merely for the purpose of making an inference such as
a detection or classification decision.
[Figure 6: Example recoveries (target, LaMP, CoSaMP, FPC) for background subtraction images, using M = 3K for each image.]
Acknowledgements. We thank Wotao Yin for helpful discussions, and Aswin Sankaranarayanan
for the data used in Experiment 3. This work was supported by grants NSF CCF-0431150 and CCF-0728867, DARPA/ONR N66001-08-1-2065, ONR N00014-07-1-0936 and N00014-08-1-1112,
AFOSR FA9550-07-1-0301, ARO MURI W911NF-07-1-0185, and the TI Leadership Program.
References
[1] D. L. Donoho. Compressed sensing. IEEE Trans. Info. Theory, 52(4):1289–1306, Sept. 2006.
[2] E. J. Candès. Compressive sampling. In Proc. International Congress of Mathematicians, volume 3, pages 1433–1452, Madrid, Spain, 2006.
[3] S. L. Lauritzen. Graphical Models. Oxford University Press, 1996.
[4] D. Needell and J. Tropp. CoSaMP: Iterative signal recovery from incomplete and inaccurate samples. Applied and Computational Harmonic Analysis, June 2008. To appear.
[5] C. La and M. N. Do. Tree-based orthogonal matching pursuit algorithm for signal reconstruction. In IEEE Int. Conf. Image Processing (ICIP), pages 1277–1280, Atlanta, GA, Oct. 2006.
[6] M. F. Duarte, M. B. Wakin, and R. G. Baraniuk. Wavelet-domain compressive signal reconstruction using a hidden Markov tree model. In ICASSP, pages 5137–5140, Las Vegas, NV, April 2008.
[7] V. Cevher, A. Sankaranarayanan, M. F. Duarte, D. Reddy, R. G. Baraniuk, and R. Chellappa. Compressive sensing for background subtraction. In ECCV, Marseille, France, Oct. 2008.
[8] R. G. Baraniuk, M. Davenport, R. A. DeVore, and M. B. Wakin. A simple proof of the restricted isometry property for random matrices. 2006. To appear in Const. Approx.
[9] T. Blumensath and M. E. Davies. Sampling theorems for signals from the union of linear subspaces. 2007. Preprint.
[10] B. M. McCoy and T. T. Wu. The Two-Dimensional Ising Model. Harvard Univ. Press, 1973.
[11] M. J. Wainwright, P. Ravikumar, and J. D. Lafferty. High-dimensional graphical model selection using ℓ1-regularized logistic regression. In Proc. of Advances in NIPS, 2006.
[12] D. P. Wipf and B. D. Rao. Sparse Bayesian learning for basis selection. IEEE Trans. Sig. Proc., 52(8):2153–2164, August 2004.
[13] J. Pearl. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. Morgan Kaufmann Publishers, 1988.
[14] V. Kolmogorov and R. Zabih. What energy functions can be minimized via graph cuts? IEEE Trans. on Pattern Anal. and Mach. Int., 26(2):147–159, 2004.
[15] Y. Boykov, O. Veksler, and R. Zabih. Efficient approximate energy minimization via graph cuts. IEEE Trans. on Pattern Anal. and Mach. Int., 20(12):1222–1239, Nov. 2001.
[16] Y. Boykov and V. Kolmogorov. An experimental comparison of min-cut/max-flow algorithms for energy minimization in vision. IEEE Trans. on Pattern Anal. and Mach. Int., 26(9):1124–1137, Sept. 2004.
[17] E. T. Hale, W. Yin, and Y. Zhang. A fixed-point continuation method for ℓ1-regularized minimization with applications to compressed sensing. Technical Report TR07-07, Rice University, CAAM Dept., 2007.
Semi-supervised Learning with Weakly-Related
Unlabeled Data: Towards Better Text Categorization
Liu Yang
Machine Learning Dept.
Carnegie Mellon University
5000 Forbes Avenue
Pittsburgh, PA 15213
[email protected]
Rong Jin
Dept. of Computer Sci. and Eng.
3115 Engineering Building
Michigan State University
East Lansing, MI 48824
[email protected]
Rahul Sukthankar
Intel Research Pittsburgh
and Carnegie Mellon Univ.
4720 Forbes Avenue, #410
Pittsburgh, PA 15213
[email protected]
Abstract
The cluster assumption is exploited by most semi-supervised learning (SSL) methods. However, if the unlabeled data is merely weakly related to the target classes,
it becomes questionable whether driving the decision boundary to the low density
regions of the unlabeled data will help the classification. In such case, the cluster assumption may not be valid; and consequently how to leverage this type of
unlabeled data to enhance the classification accuracy becomes a challenge. We
introduce "Semi-supervised Learning with Weakly-Related Unlabeled Data"
(SSLW), an inductive method that builds upon the maximum-margin approach,
towards a better usage of weakly-related unlabeled information. Although the
SSLW could improve a wide range of classification tasks, in this paper, we focus
on text categorization with a small training pool. The key assumption behind this
work is that, even with different topics, the word usage patterns across different
corpora tend to be consistent. To this end, SSLW estimates the optimal word-correlation matrix that is consistent with both the co-occurrence information derived from the weakly-related unlabeled documents and the labeled documents.
For empirical evaluation, we present a direct comparison with a number of state-of-the-art methods for inductive semi-supervised learning and text categorization.
We show that SSLW results in a significant improvement in categorization accuracy, equipped with a small training set and an unlabeled resource that is weakly
related to the test domain.
1 Introduction
Semi-supervised Learning (SSL) takes advantage of a large amount of unlabeled data to enhance
classification accuracy. Its application to text categorization is stimulated by the easy availability of
an overwhelming number of unannotated web pages, in contrast to the limited number of annotated
ones. Intuitively, corpora with different topics may not be related in content; however, word usage
exhibits consistent patterns within a language. The question, then, is what would be an effective SSL
strategy to extract these valuable word usage patterns embedded in the unlabeled corpus? In this
paper, we aim to identify a new data representation that is, on one hand, informative to the target
class (category) and, on the other hand, consistent with the feature coherence patterns exhibited in
the weakly related unlabeled data. We further turn this into a convex optimization problem, and solve
it efficiently by an approximate approach. In this section, we first review the two types of semi-supervised learning: transductive SSL and inductive SSL. Then we state SSL with weakly related
unlabeled data as a new challenge. Finally, we provide a strategy of how to address this challenge in
the domain of text categorization, as well as a brief summary of related work in text categorization.
A variety of methods have been developed for transductive SSL [14, 21]. These methods can
be grouped as: EM with generative mixture models, bootstrapping methods (self-training, co-training, and the Yarowsky algorithm), discriminative models (Transductive Support Vector Machines (TSVM) [2]), and data-based methods, including Manifold Regularization [1], Information
Regularization [17], and Low Density Separation (LDS) [11]. Specifically, TSVM extends the maximum margin principle of SVM to unlabeled data. It combines the regularization of SVMs on the
labeled points with the cluster assumption on the unlabeled points, to enforce the decision boundary to lie in low-density regions. Data-based methods discover an inherent geometry in the data,
and exploit it in finding a good classifier, to which additional regularization based on unlabeled
data is added to avoid overfitting. Manifold Regularization uses the combinatorial Laplacian as a
smoothness term. Based on the assumption that different classes usually form separate manifolds,
it constructs decision functions that vary little along the data manifolds. Information Regularization seeks a good conditional Pr(y|x), assuming that the decision boundary lies in a low density
area and Pr(y|x) only varies a little in the area of high density. Low Density Separation makes a
similar assumption as Manifold Regularization and Information Regularization. In addition, it further computes a new data representation based on the unlabeled data, which often results in better
classification performance for SSL.
Not many inductive SSL approaches have been presented. In general, the essential distinction between transductive learning and inductive learning is that transductive learning produces labels only
for the available unlabeled data; while inductive learning not only produces labels for the unlabeled
data, but also learns a classifier that can be used to predict labels for new data. In this sense, some
SSL algorithms, though named "transductive", have an inductive nature. For example, TSVM
is an inductive learner, because it learns a classifier from a mixture of labeled and unlabeled data.
Similarly, as the inductive component of Low Density Separation (LDS) [11], ∇TSVM learns the
SVM classification model in the primal, which can be used for predicting new data. However, the
graph part of LDS is transductive, because the kernel and the graph distances are addressed by a
prior eigen-decomposition and re-representation (MDS); thus, it is unclear how to make a prediction
of a new test point other than by rebuilding the graph with the new test point. Manifold Regularization [1] also has an implementation with inductive nature. Harmonic Mixtures [22] is a recent work
that aims to overcome the limitations of non-inductive inference. It models the data by a generative
mixture of Gaussians, and adds discriminative regularization using the graph Laplacian.
In this paper, we focus on inductive SSL. In contrast to previous work in this area, we focus on the
following important problem that has been overlooked before. As stated in [11], either directly or
indirectly, all successful semi-supervised algorithms typically make the cluster assumption, which
puts the decision boundary in low density areas without crossing the high density regions. Note that
the cluster assumption is only meaningful when the labeled and unlabeled data are somehow closely
related. When the unlabeled data comes from arbitrary data sources, its input patterns may not
be closely related to those of the labeled ones. As a result, the labeled and unlabeled data could be well
separated, which makes it difficult, if not impossible, to exploit the cluster assumption. Hence, the
key challenge is how to leverage the seemingly unrelated unlabeled data to improve the classification accuracy of the target classes. Analogous to transfer learning in which information from one
category may be generalized to the others, we propose a scheme that helps the categorization of one
data source, by making use of information from other unlabeled data sources with little relevance.
Our study stands in contrast to the previous ones in that we aim to make maximum use of unlabeled data that is only weakly related to the test bed. We refer to this problem as "SSL with weakly
related unlabeled data", or SSLW for short. We first build a maximum-margin framework for
SSL with weakly related unlabeled data. We then cast the framework into a Second Order Cone
Programming (SOCP) problem that can be efficiently solved.
A typical approach for semi-supervised learning with weakly related unlabeled data, presented in
the recent study [13], is to first derive a new data representation from the unlabeled data, and then apply
a supervised learning technique to the derived representation. In [13], the authors proposed
an SSL scheme termed self-taught learning, which essentially conducts unsupervised dimension reduction using sparse coding [10]. The new dimensions derived from the unlabeled data can
then be used to represent the labeled data points for supervised learning. Notably, self-taught learning [13] performs coding and classification in two separate stages. In contrast, in our method, the
construction of a good data representation is combined with the training of a maximum margin classifier under a unified framework. In particular, the data representation generated by our method
exploits both labeled and unlabeled data, which differentiates the proposed framework from self-taught learning.
In general, SSLW could improve a wide range of classification tasks. However, in this study, we
focus on text categorization with a small training set. Text categorization has been actively studied
in the communities of Web data mining, information retrieval and statistical learning [9, 20]. A
number of statistical learning techniques have been applied to text categorization [19], including
the K Nearest Neighbor approaches, decision trees, Bayesian classifiers, inductive rule learning,
neural networks, support vector machines (SVM), and logistic regression. Empirical studies [7]
have shown that support vector machines (SVM) is the leading technique for text categorization.
Given the limited amount of labeled documents, the key of semi-supervised text categorization is to
exploit the unlabeled documents. The popular implementations of semi-supervised SVMs in [8, 15]
are considered to be state-of-the-art in text categorization.
For text categorization with a small training pool, it is very likely that a large portion of words used
by the testing documents are unseen in the training set, which could lead to a poor estimation of the
similarity between documents. If we can identify the coherence information of words (e.g., word
correlation) from both the labeled and unlabeled documents, we will be able to more accurately
estimate the document similarity, particularly for documents sharing few or no common words, thus
improving the overall classification accuracy. A straightforward approach is to utilize the word co-occurrence information for computing document similarity. However, this straightforward approach
may not serve the best interests of word correlation, because not all of the co-occurrence patterns
are useful. Some co-occurrence patterns (e.g., co-occurrence with common words) do not reflect
the semantic relations among words, and some are not related to the target class. Consequently,
it is critical to identify a subset of co-occurrence patterns that are most informative to the target
classification problems. To address this problem, SSLW explicitly estimates the optimal word-correlation matrix for the target document categorization problem. The rest of the paper is organized
as follows. Section 2 introduces the basic notations and gives a brief review of the SVM dualism.
In Section 3, we propose the framework of SSL with weakly-related unlabeled data, followed by an
efficient algorithm for its computation in Section 4. Section 5 evaluates SSLW; and in section 6 we
provide some insights into the experimental evidence and discuss future work.
2 Preliminaries
We introduce the notation used throughout this paper and briefly review the SVM dual formulation.
Denote by L = {(x_1, y_1), ..., (x_l, y_l)} the collection of labeled documents, where y_i is +1 when document x_i belongs to a given document category and -1 when it does not (the text categorization problem for multi-labeled documents can be treated as a set of independent binary classification problems). Let U = {x_{l+1}, ..., x_n} be the unlabeled collection of documents, and let V denote the size of the vocabulary. Importantly, as an SSL task with weakly-related unlabeled data, U comes from an external resource that is weakly related to the test domain. To facilitate our discussion, we denote the document-word matrix on L by D = (d_1, d_2, ..., d_l), where d_i ∈ N^V represents the word-frequency vector of document d_i. The word-document matrix on L + U is denoted by G = (g_1, g_2, ..., g_V), where g_i = (g_{i,1}, g_{i,2}, ..., g_{i,n}) represents the occurrences of the ith word in all n documents. Recall the dual formalism of the SVM:

    max_α   α^T e - (1/2) (α ∘ y)^T K (α ∘ y)
    s.t.    α^T y = 0
            0 ≤ α_i ≤ C,  i = 1, 2, ..., n,                                   (1)
where α = (α_1, α_2, ..., α_n) are the weights assigned to the training documents, e is the vector with all elements equal to 1, and the symbol ∘ denotes the element-wise product between two vectors. K ∈ R^{n×n} is the kernel matrix representing pairwise document similarity, with K = D^T D.
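For concreteness, the quantities in Eqn. (1) can be assembled directly from term-frequency data. The following is a minimal sketch assuming NumPy; the matrix D, the labels y, and the candidate weights alpha are hypothetical placeholders used only to illustrate the notation, not part of the method's specification.

```python
import numpy as np

rng = np.random.default_rng(0)
V, l = 500, 8                                      # hypothetical vocabulary and sample sizes
D = rng.poisson(0.3, size=(V, l)).astype(float)    # columns d_i: word-frequency vectors
y = np.array([1, 1, 1, 1, -1, -1, -1, -1], float)  # labels in {+1, -1}

K = D.T @ D                                        # document kernel used in Eqn. (1)

def dual_objective(alpha, K, y):
    """SVM dual objective: alpha^T e - (1/2) (alpha o y)^T K (alpha o y)."""
    ay = alpha * y                                 # 'o' denotes the element-wise product
    return alpha.sum() - 0.5 * ay @ K @ ay

alpha = np.full(l, 0.1)                            # feasible here: alpha^T y = 0 (balanced y)
print(dual_objective(alpha, K, y))
```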
3 The Framework of Semi-supervised Learning with Weakly-Related
Unlabeled Data
In this section, we present the algorithm of Semi-supervised Learning with Weakly-Related Unlabeled Data (SSLW). As analyzed in Section 1, the kernel similarity measure in the standard SVM dual formalism, K = D^T D, is problematic in the sense that the similarity between two documents is zero if they share no common words, even when a large collection of documents reveals a pairwise relationship between the seen words and the unseen ones. To solve this problem, we take a word-correlation matrix into account when computing the kernel similarity matrix, and we search for an optimal word-correlation matrix that maximizes the categorization margin. Specifically, we define the kernel matrix as K = D^T R D, introducing the word-correlation matrix R ∈ R^{V×V}, where each element R_{i,j} represents the correlation between the ith and the jth words. Note that G^T G is not a desirable choice for R, because it is improper to assign a high correlation to two words simply because of their high co-occurrence; the two words may not be closely related as judged by the maximum-margin criterion. Therefore, it is important to search for the optimal word-correlation matrix R, in addition to the maximization in Eqn. (1), so as to maximize the
categorization margin. We denote the optimal value of the objective function in Eqn. (1) as ω(K):

    ω(K) = max_α  α^T e - (1/2) (α ∘ y)^T K (α ∘ y),                            (2)

where the maximization is subject to the constraints of Eqn. (1). Given that ω(K) is inversely related to the categorization margin [4], minimizing ω(K) is equivalent to maximizing the categorization margin.
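As a side illustration, ω(K) can be evaluated numerically for any fixed kernel by solving the dual in Eqn. (1) with an off-the-shelf SVM and reading off the objective value. A minimal sketch, assuming scikit-learn's SVC with a precomputed kernel; this is only a way to probe the definition of ω, not a component of the SSLW algorithm, which optimizes over R jointly.

```python
import numpy as np
from sklearn.svm import SVC

def omega(K, y, C=1.0):
    """Approximate omega(K) of Eqn. (2) by solving the SVM dual on a precomputed kernel.
    y must take values in {+1, -1}."""
    svc = SVC(C=C, kernel="precomputed")
    svc.fit(K, y)
    sv = svc.support_               # indices of the support vectors
    ay = svc.dual_coef_.ravel()     # equals alpha_i * y_i on the support vectors
    alpha = np.abs(ay)              # alpha_i >= 0, so alpha_i = |alpha_i * y_i|
    return alpha.sum() - 0.5 * ay @ K[np.ix_(sv, sv)] @ ay
```

Evaluating omega(D.T @ R @ D) for different candidate matrices R makes the dependence of the margin on the word-correlation matrix explicit, which is exactly what the optimization below exploits.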
Now we consider how to make maximum use of the weakly-related source U. The matrix G is crucial for capturing word-correlation information from the weakly-related external source U. Thus, to incorporate the external source into the learning of the word-correlation matrix R, we regularize R according to G by introducing an internal representation of words W = (w_1, w_2, ..., w_V), where the vector w_i is the internal representation of the ith word (this idea is similar to non-negative matrix factorization (NMF) [6]). We expect W to carry an amount of information equivalent to that of G, i.e., G and W are roughly equivalent representations of words. Since there exists a matrix U such that G can be recovered from W by the linear transformation G = UW, the word-correlation matrix can be computed as R = W^T W. Further, the constraints G = UW and R = W^T W can be combined into the following positive semi-definite constraint:

    [ R    G^T ]
    [ G    T   ]  ⪰ 0,                                                          (3)
where T = UU^T [18]. Another strategy we use to involve the unlabeled data in the learning of word correlation is to construct the word-correlation matrix R as a non-negative linear combination of the top p right eigenvectors of G, i.e.,

    R = η I_V + Σ_{i=1}^{p} (γ_i - η) s_i s_i^T,                                (4)

where the s_i denote the right eigenvectors of the matrix G, sorted in descending order of their eigenvalues λ_i, I_V is the V × V identity matrix, and γ_i ≥ 0, i = 1, ..., p, and η ≥ 0 are non-negative combination weights. Note that introducing η I_V ensures the non-singularity of the matrix R, which is important when computing the expression for the matrix T. This simplification of R allows us to effectively extract and utilize the word co-occurrence information in the external source U. Additionally, the positive semi-definite constraint R ⪰ 0 is converted into the simple non-negative constraints η ≥ 0 and γ_i ≥ 0, i = 1, ..., p. The number of variables in R, originally O(V^2), is thus reduced to p + 1. A further insight into the combination weights reveals that both the straightforward co-occurrence matrix G^T G and Manifold Regularization give predefined weights for the eigenvector combination, and thus can be seen as special cases of SSLW. Precisely speaking, the straightforward co-occurrence matrix G^T G directly uses the eigenvalues as the weights. Manifold Regularization does slightly better by defining the weights as a fixed function of the eigenvalues. Different from both, SSLW is given complete freedom to learn the weights from the data. In this sense, SSLW generalizes these two methods.
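The parameterization in Eqn. (4) is easy to materialize with a truncated SVD of G. A minimal sketch, assuming NumPy; the weights η and γ_i (here `eta` and `gamma`) would in practice be produced by the optimization of Section 4, so the values below are placeholders.

```python
import numpy as np

def build_R(G, gamma, eta):
    """Eqn. (4): R = eta*I_V + sum_i (gamma_i - eta) s_i s_i^T, with s_i the top-p
    right singular vectors of the n x V word-document matrix G."""
    p = len(gamma)
    _, _, Vt = np.linalg.svd(G, full_matrices=False)
    S = Vt[:p].T                                   # V x p matrix whose columns are s_1..s_p
    return eta * np.eye(G.shape[1]) + S @ np.diag(gamma - eta) @ S.T

G = np.random.rand(100, 40)                        # hypothetical word-document statistics
R = build_R(G, gamma=np.linspace(1.0, 0.1, 20), eta=0.05)
assert np.all(np.linalg.eigvalsh(R) > 0)           # eigenvalues are gamma_i or eta, all > 0 here
```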
Based on the above analysis, we reformulate an extension of the SVM dual in Eqn. (1) that searches for an optimal word-correlation matrix R, exploiting the word co-occurrence information in the external source U under the maximum-margin criterion, i.e.,

    min_{R ∈ Ω, U, W}  ω(D^T R D),                                              (5)

where the word-correlation matrix R is restricted to the domain Ω, defined as

    Ω = { R ∈ S_+^{V×V}  :  [ R, G^T ; G, T ] ⪰ 0 }                             (6)
if we use (3) for R, and

    Ω = { R = η I_V + Σ_{i=1}^{p} (γ_i - η) s_i s_i^T  :  η ≥ 0, γ_i ≥ 0, i = 1, ..., p }    (7)
if we use Eqn. (4) for R. Given the definition of ω in Eqn. (2), Eqn. (5) is the following min-max problem, which has no analytic solution:

    min_{R ∈ Ω, U, W}  max_α  α^T e - (1/2) (α ∘ y)^T (D^T R D) (α ∘ y).        (8)
4 An Efficient Algorithm for SSLW

This section provides a computationally efficient and scalable algorithm for solving the min-max problem in Eqn. (8), with the domain Ω defined in (6). We first rewrite the maximization problem in Eqn. (1) as a minimization by computing its dual form:

    min_{t, λ, δ, μ}   t + 2C δ^T e
    s.t.   [ t              (ν ∘ y + λe)^T ]
           [ ν ∘ y + λe          K         ]  ⪰ 0,
           ν = e + δ - μ,
           δ_i ≥ 0,  μ_i ≥ 0,  i = 1, 2, ..., n.                                (9)
Then, by plugging Eqn. (9) back into Eqn. (5), we transform the min-max problem in Eqn. (8) into the following minimization problem:

    min_{t, λ, δ, μ, R}   t + 2C δ^T e + C_t tr(T) + C_r tr(R)
    s.t.   [ t              (ν ∘ y + λe)^T ]
           [ ν ∘ y + λe      D^T R D       ]  ⪰ 0,
           ν = e + δ - μ,   δ_i ≥ 0,  μ_i ≥ 0,  i = 1, 2, ..., n,
           [ R    G^T ]
           [ G    T   ]  ⪰ 0.                                                   (10)

Note that since our goal is to compute R and T, any valid (W, U) is sufficient, and no uniqueness constraints are imposed on W and U.
In Eqn. (10), C_t tr(T) and C_r tr(R) serve as sparse regularizers for R and T. They are added to improve the stability of the optimal solution and to favor a simpler model over more sophisticated ones. The parameters C_t and C_r weigh the importance of the two regularization terms. The trace heuristic has been widely used to encourage a low-rank matrix by minimizing its trace in place of its rank. In the generalization of the trace heuristic presented in [5], the trace norm, the dual of the spectral norm, is shown to be the convex envelope of the rank on the set of matrices with spectral norm at most one; the rank objective can therefore be replaced with the trace for rank minimization. In other words, the trace is the best convex surrogate one can use for rank minimization.
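A quick numerical sanity check of the trace heuristic, assuming NumPy: for a symmetric positive semi-definite matrix, the trace coincides with the sum of the singular values (the nuclear norm), which is the convex envelope of the rank on the unit spectral-norm ball.

```python
import numpy as np

rng = np.random.default_rng(1)
B = rng.standard_normal((6, 2))
M = B @ B.T                                   # PSD matrix of rank 2
sv = np.linalg.svd(M, compute_uv=False)
assert np.isclose(np.trace(M), sv.sum())      # trace equals the nuclear norm for PSD matrices
print(np.linalg.matrix_rank(M), np.trace(M))  # small trace pressure favors small rank
```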
Eqn. (10) is a Semi-Definite Programming (SDP) problem [3], and in general it can be solved with SDP packages such as SeDuMi [16]. However, solving an SDP is computationally expensive and does not easily scale to a large number of training examples. [18] recently provided an elegant scheme for rewriting an SDP problem as a Second-Order Cone Programming (SOCP) problem, which can be solved much more efficiently [3]. We adopt this procedure and rewrite Eqn. (10) as a standard SOCP problem. Given the estimated word-correlation matrix R and K = D^T R D, the example weights α of the SVM model can be estimated from the KKT conditions as α = (yy^T ∘ K)^{-1}(e + δ - μ + λy), and the threshold b of the SVM can be obtained by solving the primal SVM with linear programming.
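The recovery of α from the SOCP solution is a single linear solve. The sketch below is a direct transcription of the KKT step above, assuming NumPy; the vectors delta and mu and the scalar lam stand in for the quantities returned by the SOCP solver and are hypothetical here.

```python
import numpy as np

def recover_alpha(K, y, delta, mu, lam):
    """KKT step: alpha = (y y^T o K)^{-1} (e + delta - mu + lam * y).
    Assumes (y y^T) o K is nonsingular, which the eta*I_V term in R helps ensure."""
    A = np.outer(y, y) * K                 # element-wise product (y y^T) o K
    rhs = np.ones(len(y)) + delta - mu + lam * y
    return np.linalg.solve(A, rhs)         # a linear solve is preferable to an explicit inverse
```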
5 Evaluation
In this section, we evaluate SSLW on text categorization with limited training data. The experimental setup is purely inductive, i.e., the testing feature space is invisible during the training phase. As in an SSL task with weakly-related unlabeled data, the provided unlabeled data have little relevance to the test domain. We show that SSLW achieves noticeable gains over state-of-the-art methods in both inductive SSL and text categorization, and we provide insight into why this happens. Following [18],
our implementation of SSLW selects the top 200 right eigenvectors of the word-document matrix G to construct the matrix R. As defined in Section 3, G covers both the training set and the weakly-related external collection.
Evaluation Datasets Two standard datasets for text categorization are used as the evaluation test bed: the Reuters-21578 dataset and the WebKB dataset. For computational simplicity, 1000 documents are randomly selected from the TREC AP88 dataset and used as an external information source for both datasets. The AP88 dataset is a collection of news documents reported by the Associated Press in 1988. The same pre-processing and indexing procedure is applied to all three datasets, using the Lemur Toolkit (http://www.lemurproject.org/). For the Reuters-21578 dataset, among the 135 TOPICS categories, the 10 categories with the largest number of documents are selected (see Table 1), resulting in a collection of 9,400 documents. For the WebKB dataset, which has seven categories (student, faculty, staff, department, course, project, and other), we discard the category 'other' due to its unclear definition (see Table 2), resulting in 4,518 data samples. The Reuters-21578 dataset and the TREC AP88 dataset have very limited topical relevance, and the WebKB dataset and the TREC AP88 dataset are even less related content-wise.
Category     earn   acq    money-fx  crude  grain  trade  interest  wheat  ship  corn
# Samples    3987   2448   801       634    628    552    513       306    305   254

Table 1: The ten categories of the Reuters-21578 dataset with the largest number of documents.
Category     course  department  faculty  project  staff  student
# Samples    930     182         1124     504      137    1641

Table 2: The six categories of the WebKB dataset.
Evaluation Methodology We focus on binary classification. For each class, 4 positive samples and 4 negative samples are randomly selected to form the training set, and the rest of the data serve as the testing set. Because positive examples are rare, the testing data are highly unbalanced. Therefore, we adopt the area under the ROC curve (AUR) [12] as the quantitative measure of binary classification performance for text categorization. AUR is computed from the real-valued scores returned by the classifiers for the testing documents. Each experiment is repeated ten times, and the AUR averaged over these trials is reported.
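The evaluation protocol is straightforward to reproduce with standard tooling. A minimal sketch, assuming scikit-learn; the (y_test, scores) pairs stand in for the real-valued classifier outputs of each repetition.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def mean_aur(trials):
    """Average AUR and its standard deviation over repeated random splits.
    `trials` is an iterable of (y_test, scores) pairs, one per repetition."""
    aurs = [roc_auc_score(y_test, scores) for y_test, scores in trials]
    return np.mean(aurs), np.std(aurs)
```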
Baseline Methods We use six baseline methods to demonstrate the strengths of SSLW from different perspectives. The first two baselines are the standard SVM and the traditional TSVM. The third baseline is ∇TSVM (http://www.kyb.tuebingen.mpg.de/bs/people/chapelle/lds/), the inductive component of LDS, which delivers state-of-the-art SSL performance. The fourth baseline, Manifold Regularization (http://manifold.cs.uchicago.edu/manifold_regularization/software.html), or ManifoldR for short, is included as a state-of-the-art SSL approach of an inductive nature that, more importantly, is able to incorporate word relationships into the regularization. For the fifth baseline, we compare the word-correlation matrix estimated by SSLW with the trivial word-correlation matrix G^T G; we name this baseline COR. Finally, self-taught learning [13] serves as our sixth baseline method, named Self-taught. It uses the unlabeled data to find a low-dimensional representation and then conducts standard classification in this new space.
Text Categorization with Limited Training Data We describe the AUR results on both the Reuters-21578 dataset and the WebKB dataset for the different methods. For the Reuters-21578 dataset, Table 3 summarizes the AUR comparison between the six baseline methods and SSLW; both the mean and the standard deviation of the AUR are shown. We observe that SSLW consistently outperforms the six baselines in AUR across most of the ten categories. In general, a t-test shows that our performance gain is statistically significant compared with all the baselines at a significance level of 0.05. A detailed analysis is provided below. First, TSVM and ∇TSVM overall perform even worse than the standard SVM. This observation reveals that if the unlabeled data are only weakly relevant to the target class, they can
harm the categorization accuracy by simply pushing the decision boundary towards the low-density regions and away from the high-density areas of the unlabeled data. It also supports our hypothesis that the cluster assumption is not valid in this case. Second, the dramatic advantage of SSLW over the COR method confirms our previous analysis: learning a good word-correlation matrix that is jointly determined by the co-occurrence matrix and the classification margin (as SSLW does) achieves significant gains over simply using the trivial form G^T G. Third, we observe that SSLW consistently improves over Manifold Regularization, except on the 'trade' category, where ManifoldR has a slight advantage. Most noticeably, on the 'wheat' and 'ship' categories, the AUR is improved by more than 10% by SSLW. These results demonstrate that SSLW is effective in improving text categorization accuracy with a small amount of training data. We also notice that ∇TSVM outperforms TSVM on some categories but is slightly worse than TSVM on others. The unstable performance of ∇TSVM can possibly be explained by its gradient-descent nature. Finally, our method gains over self-taught learning [13] on most categories. This shows that SSLW is more effective than self-taught learning at using unlabeled data to improve classification. The gains can be attributed to the fact that Self-taught performs coding and classification in two separate stages, while SSLW achieves these two purposes simultaneously.
A more careful examination indicates that SSLW also reduces the standard deviation of the classification accuracy. The standard deviations of SSLW are mostly below 2.5%, while those of the baseline methods are mostly above 2.5%. Over all ten categories except 'money-fx', SSLW always delivers the lowest or second-lowest standard deviation among all the methods. We hypothesize that the large standard deviations of the baseline models are mainly due to the small number of training documents. In this situation, many words appear in only a few training documents; as a result, the association between these words and the class labels cannot be reliably established. In extreme cases where such words do not appear in any of the training documents, no association can be established between them and the class labels. Evidently, test documents related to these unseen words are likely to be classified incorrectly. By contrast, SSLW can resolve this problem by estimating the word correlation: for a missing word, its association with the class label can be reliably estimated through its correlation with words that appear frequently in the training examples.
Table 4 shows the AUR results on the WebKB dataset, which exhibit trends similar to those described above for the Reuters-21578 dataset. SSLW maintains a clear advantage over the six baseline methods across all six categories.
Category    SVM          TSVM         ∇TSVM        ManifoldR    COR          Self-taught  SSLW
earn        82.3 ± 2.1   70.9 ± 4.1   70.1 ± 5.2   86.4 ± 2.1   62.6 ± 5.8   65.9 ± 3.5   89.3 ± 1.6
acq         69.7 ± 3.0   63.1 ± 3.3   59.2 ± 4.1   70.1 ± 3.0   51.2 ± 4.7   68.2 ± 2.6   73.5 ± 3.3
money-fx    71.3 ± 2.6   67.4 ± 3.1   70.0 ± 2.0   74.0 ± 2.6   76.5 ± 4.6   75.7 ± 3.9   82.1 ± 4.4
crude       69.7 ± 3.3   68.6 ± 3.2   59.9 ± 4.7   71.5 ± 3.3   56.0 ± 5.7   67.6 ± 3.1   77.5 ± 1.7
grain       70.7 ± 3.5   68.7 ± 2.3   66.4 ± 3.5   75.1 ± 3.5   62.1 ± 5.4   69.0 ± 2.9   82.7 ± 2.0
trade       82.7 ± 3.4   65.1 ± 5.0   71.5 ± 4.2   85.1 ± 3.4   78.8 ± 5.2   78.5 ± 4.4   84.4 ± 3.9
interest    79.3 ± 1.5   60.2 ± 3.9   70.4 ± 3.1   85.0 ± 1.5   69.4 ± 4.7   76.5 ± 2.5   89.4 ± 1.8
wheat       77.6 ± 3.8   61.9 ± 3.6   64.7 ± 4.6   79.1 ± 3.8   54.4 ± 5.7   67.1 ± 2.6   89.4 ± 1.6
ship        70.4 ± 2.6   64.5 ± 2.9   65.8 ± 3.9   72.3 ± 2.6   52.1 ± 5.0   68.0 ± 2.1   82.8 ± 1.4
corn        80.8 ± 2.9   65.4 ± 2.1   66.5 ± 5.3   77.0 ± 5.0   54.5 ± 5.6   66.8 ± 3.7   86.4 ± 2.3

Table 3: The AUR results (%) on the Reuters-21578 dataset with 8 training examples per category.
Category    SVM          TSVM         ∇TSVM        ManifoldR    COR          Self-taught  SSLW
course      66.8 ± 2.2   61.5 ± 2.0   61.8 ± 2.9   68.4 ± 2.8   63.3 ± 5.4   66.0 ± 3.9   76.2 ± 2.5
dept.       72.2 ± 2.8   58.8 ± 5.2   63.7 ± 3.5   73.4 ± 5.9   58.3 ± 5.1   70.8 ± 3.6   87.6 ± 2.2
faculty     56.7 ± 3.4   56.4 ± 2.6   54.2 ± 3.0   56.9 ± 2.8   53.1 ± 4.6   61.7 ± 3.3   61.6 ± 3.4
project     59.6 ± 2.9   57.0 ± 2.3   60.3 ± 1.4   61.8 ± 3.1   50.0 ± 5.9   58.7 ± 3.0   69.5 ± 3.2
staff       58.1 ± 1.6   53.0 ± 1.1   51.6 ± 1.3   52.9 ± 0.9   46.4 ± 1.6   59.9 ± 1.9   58.3 ± 1.5
student     59.2 ± 2.7   54.0 ± 2.3   55.3 ± 2.7   59.4 ± 3.1   56.0 ± 4.1   61.0 ± 1.9   67.7 ± 2.6

Table 4: The AUR results (%) on the WebKB dataset with 8 training examples per category.
6 Conclusion
This paper explores a new challenge in semi-supervised learning: how to leverage unlabeled information that is only weakly related to the target classes in order to improve classification performance. We propose the algorithm of Semi-supervised Learning with Weakly-Related Unlabeled Data (SSLW) to address this challenge. SSLW extends the theory of support vector machines to effectively identify the co-occurrence patterns that are most informative for the categorization margin and to ignore those that are irrelevant to the categorization task. Applied to text categorization with a limited number of training samples, SSLW automatically estimates the word-correlation matrix by effectively exploiting the word co-occurrences embedded in the weakly-related unlabeled corpus. Empirical studies show that SSLW significantly improves both the accuracy and the reliability of text categorization, given a small training pool and additional unlabeled data that are only weakly related to the test bed. Although SSLW is presented in the context of text categorization, it can potentially facilitate classification tasks in a variety of domains. In future work, we will evaluate the benefits of SSLW on larger datasets and in other domains. We will also investigate SSLW's dependence on the number of eigenvectors used, and its behavior as the number of labeled training examples varies.
Acknowledgments
The work was supported by the National Science Foundation (IIS-0643494) and National Institute
of Health (1R01GM079688-01). Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of NSF and
NIH.
References
[1] M. Belkin, P. Niyogi, and V. Sindhwani. Manifold regularization: A geometric framework for learning
from labeled and unlabeled examples. Technical report, Univ. of Chicago, Dept. of Comp. Sci., 2004.
[2] K. Bennett and A. Demiriz. Semi-supervised support vector machines. In Proc. NIPS, 1998.
[3] S. Boyd and L. Vandenberghe. Convex optimization. Cambridge University Press, 2004.
[4] C. Burges. A tutorial on support vector machines for pattern recognition. Data Mining and Knowledge
Discovery, 2(2), 1998.
[5] M. Fazel, H. Hindi, and S. Boyd. A rank minimization heuristic with application to minimum order
system approximation. In Proc. American Control Conf., 2001.
[6] P. O. Hoyer. Non-negative matrix factorization with sparseness constraints. J. Mach. Learn. Res., 5, 2004.
[7] T. Joachims. Text categorization with support vector machines: learning with many relevant features. In
Proc. ECML, 1998.
[8] T. Joachims. Transductive inference for text classification using support vector machines. In Proc. ICML,
1999.
[9] M. Lan, C. L. Tan, H.-B. Low, and S. Y. Sung. A comprehensive comparative study on term weighting
schemes for text categorization with support vector machines. In Proc. WWW, 2005.
[10] H. Lee, A. Battle, R. Raina, and A. Y. Ng. Efficient sparse coding algorithms. In Proc. NIPS, 2007.
[11] O. Chapelle and A. Zien. Semi-supervised classification by low density separation. In Proc. Inter. Workshop on Artificial Intelligence and Statistics, 2005.
[12] F. Provost, T. Fawcett, and R. Kohavi. The case against accuracy estimation for comparing induction
algorithms. In Proc. ICML, 1998.
[13] R. Raina, A. Battle, H. Lee, B. Packer, and A. Y. Ng. Self-taught learning: transfer learning from unlabeled data. In Proc. ICML, 2007.
[14] M. Seeger. Learning with labeled and unlabeled data. Technical report, Univ. of Edinburgh, 2001.
[15] V. Sindhwani and S. S. Keerthi. Large scale semi-supervised linear support vector machines. In Proc.
ACM SIGIR, 2006.
[16] J. F. Sturm. Using SeDuMi 1.02, a MATLAB toolbox for optimization over symmetric cones. Optimization Methods Software, 11/12(1-4), 1999.
[17] M. Szummer and T. Jaakkola. Information regularization with partially labeled data. In Proc. NIPS, 2002.
[18] L. Yang, R. Jin, C. Pantofaru, and R. Sukthankar. Discriminative cluster refinement: Improving object
category recognition given limited training data. In Proc. CVPR, 2007.
[19] Y. Yang. An evaluation of statistical approaches to text categorization. Journal of Info. Retrieval, 1999.
[20] Y. Yang and J. O. Pedersen. A comparative study on feature selection in text categorization. In Proc.
ICML, 1997.
[21] X. Zhu. Semi-supervised learning literature survey. Technical report, UW-Madison, Comp. Sci., 2006.
[22] X. Zhu, Z. Ghahramani, and J. Lafferty. Semi-supervised learning using gaussian fields and harmonic
functions. In Proc. ICML, 2003.
Rademacher Complexity Bounds
for Non-I.I.D. Processes
Mehryar Mohri
Courant Institute of Mathematical Sciences
and Google Research
251 Mercer Street
New York, NY 10012
Afshin Rostamizadeh
Department of Computer Science
Courant Institute of Mathematical Sciences
251 Mercer Street
New York, NY 10012
[email protected]
[email protected]
Abstract
This paper presents the first Rademacher complexity-based error bounds for non-i.i.d. settings, a generalization of similar existing bounds derived for the i.i.d. case. Our bounds hold in the scenario of dependent samples generated by a stationary β-mixing process, which is commonly adopted in many previous studies of non-i.i.d. settings. They benefit from the crucial advantages of Rademacher complexity
over other measures of the complexity of hypothesis classes. In particular, they are
data-dependent and measure the complexity of a class of hypotheses based on the
training sample. The empirical Rademacher complexity can be estimated from
such finite samples and lead to tighter generalization bounds. We also present
the first margin bounds for kernel-based classification in this non-i.i.d. setting and
briefly study their convergence.
1 Introduction
Most learning theory models such as the standard PAC learning framework [13] are based on the assumption that sample points are independently and identically distributed (i.i.d.). The design of most
learning algorithms also relies on this key assumption. In practice, however, the i.i.d. assumption
often does not hold. Sample points have some temporal dependence that can affect the learning process. This dependence may appear more clearly in time series prediction or when the samples are
drawn from a Markov chain, but various degrees of time-dependence can also affect other learning
problems.
A natural scenario for the analysis of non-i.i.d. processes in machine learning is that of observations drawn from a stationary mixing sequence, a standard assumption adopted in most previous studies, which implies a dependence between observations that diminishes with time [7, 9, 10, 14, 15]. The pioneering work of Yu [15] led to VC-dimension bounds for stationary β-mixing sequences. Similarly, Meir [9] gave bounds based on covering numbers for time series prediction. Vidyasagar [14] studied the extension of PAC learning algorithms to these non-i.i.d. scenarios and proved that under some sub-additivity conditions, a PAC learning algorithm continues to be PAC in these settings. Lozano et al. studied the convergence and consistency of regularized boosting under the same assumptions [7]. Generalization bounds have also been derived for stable algorithms with weakly dependent observations [10]. The consistency of learning under the more general scenario of α-mixing with non-stationary sequences has also been studied by Irle [3] and Steinwart et al. [12].
This paper gives data-dependent generalization bounds for stationary β-mixing sequences. Our bounds are based on the notion of Rademacher complexity. They extend to the non-i.i.d. case the Rademacher complexity bounds derived in the i.i.d. setting [2, 4, 5]. To the best of our knowledge, these are the first Rademacher complexity bounds derived for non-i.i.d. processes. Our proofs make
use of the so-called independent block technique due to Yu [15] and Bernstein and extend the applicability of the notion of Rademacher complexity to non-i.i.d. cases.
Our generalization bounds benefit from all the advantageous properties of Rademacher complexity
as in the i.i.d. case. In particular, since the Rademacher complexity can be bounded in terms of
other complexity measures such as covering numbers and VC-dimension [1], it allows us to derive
generalization bounds in terms of these other complexity measures, and in fact improve on existing
bounds in terms of these other measures, e.g., VC-dimension. But, perhaps the most crucial advantage of bounds based on the empirical Rademacher complexity is that they are data-dependent: they
measure the complexity of a class of hypotheses based on the training sample and thus better capture
the properties of the distribution that has generated the data. The empirical Rademacher complexity can be estimated from finite samples and lead to tighter bounds. Furthermore, the Rademacher
complexity of large hypothesis sets such as kernel-based hypotheses, decision trees, convex neural networks, can sometimes be bounded in some specific ways [2]. For example, the Rademacher
complexity of kernel-based hypotheses can be bounded in terms of the trace of the kernel matrix.
In Section 2, we present the essential notion of a mixing process for the discussion of learning in
non-i.i.d. cases and define the learning scenario. Section 3 introduces the idea of independent blocks
and proves a bound on the expected deviation of the error from its empirical estimate. In Section 4,
we present our main Rademacher generalization bounds and discuss their properties.
2 Preliminaries
This section introduces the concepts needed to define the non-i.i.d. scenario we will consider, which
coincides with the assumptions made in previous studies [7, 9, 10, 14, 15].
2.1 Non-I.I.D. Distributions
The non-i.i.d. scenario we will consider is based on stationary β-mixing processes.

Definition 1 (Stationarity). A sequence of random variables Z = {Z_t}_{t=-∞}^{+∞} is said to be stationary if for any t and any non-negative integers m and k, the random vectors (Z_t, ..., Z_{t+m}) and (Z_{t+k}, ..., Z_{t+m+k}) have the same distribution.

Thus, the index t, or time, does not affect the distribution of a variable Z_t in a stationary sequence (note that this does not imply independence).
Definition 2 (β-mixing). Let Z = {Z_t}_{t=-∞}^{+∞} be a stationary sequence of random variables. For any i, j ∈ Z ∪ {-∞, +∞}, let σ_i^j denote the σ-algebra generated by the random variables Z_k, i ≤ k ≤ j. Then, for any positive integer k, the β-mixing coefficient of the stochastic process Z is defined as

    β(k) = sup_n E_{B ∈ σ_{-∞}^{n}} [ sup_{A ∈ σ_{n+k}^{+∞}} | Pr[A | B] - Pr[A] | ].      (1)

Z is said to be β-mixing if β(k) → 0. It is said to be algebraically β-mixing if there exist real numbers β_0 > 0 and r > 0 such that β(k) ≤ β_0 / k^r for all k, and exponentially mixing if there exist real numbers β_0 and β_1 such that β(k) ≤ β_0 exp(-β_1 k^r) for all k.
Thus, a sequence of random variables is mixing when the dependence of an event on those occurring
k units of time in the past weakens as a function of k.
2.2 Rademacher Complexity
Our generalization bounds will be based on the following measure of the complexity of a class of
functions.
Definition 3 (Rademacher Complexity). Given a sample S ∈ X^m, the empirical Rademacher complexity of a set of real-valued functions H defined over a set X is defined as follows:

    R̂_S(H) = E_σ [ sup_{h ∈ H} | (2/m) Σ_{i=1}^{m} σ_i h(x_i) | ],   where S = (x_1, ..., x_m).      (2)
The expectation is taken over σ = (σ_1, ..., σ_m), where the σ_i are independent uniform random variables taking values in {-1, +1}, called Rademacher random variables. The Rademacher complexity of a hypothesis set H is defined as the expectation of R̂_S(H) over all samples of size m:

    R_m(H) = E_S [ R̂_S(H) | |S| = m ].                                                              (3)
The definition of the Rademacher complexity depends on the distribution according to which the samples S of size m are drawn, which in general is a dependent β-mixing distribution D. In the rare instances where a different distribution D̃ is considered, typically for an i.i.d. setting, we explicitly indicate that distribution as a superscript: R_m^{D̃}(H).

The Rademacher complexity measures the ability of a class of functions to fit noise. The empirical Rademacher complexity has the added advantage that it is data-dependent and can be measured from finite samples. This can lead to tighter bounds than those based on other measures of complexity such as the VC-dimension [2, 4, 5].
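Since Definition 3 only involves an expectation over the Rademacher variables σ, the empirical Rademacher complexity of a finite hypothesis class can be estimated by direct Monte Carlo sampling. A minimal sketch, assuming NumPy; the matrix of hypothesis outputs is a hypothetical stand-in for a concrete class H.

```python
import numpy as np

def empirical_rademacher(H_vals, n_draws=2000, seed=0):
    """Monte Carlo estimate of R_hat_S(H) in Eqn. (2).
    H_vals[k, i] = h_k(x_i): output of hypothesis h_k on sample point x_i."""
    rng = np.random.default_rng(seed)
    m = H_vals.shape[1]
    total = 0.0
    for _ in range(n_draws):
        sigma = rng.choice([-1.0, 1.0], size=m)          # Rademacher variables
        total += np.max(np.abs(H_vals @ sigma)) * 2 / m  # sup_h |(2/m) sum_i sigma_i h(x_i)|
    return total / n_draws
```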
We will denote by R̂_S(h) the empirical average of a hypothesis h : X → R and by R(h) its expectation over a sample S drawn according to a stationary β-mixing distribution:

    R̂_S(h) = (1/m) Σ_{i=1}^{m} h(z_i),        R(h) = E_S[ R̂_S(h) ].                        (4)

The following proposition shows that this expectation is independent of the size of the sample S, as in the i.i.d. case.

Proposition 1. For any sample S of size m drawn from a stationary distribution D, the following holds: E_{S∼D^m}[ R̂_S(h) ] = E_{z∼D}[ h(z) ].
Proof. Let S = (z_1, ..., z_m). By stationarity, E_{z_i∼D}[h(z_i)] = E_{z_j∼D}[h(z_j)] for all 1 ≤ i, j ≤ m; thus, we can write:

    E_S[ R̂_S(h) ] = (1/m) Σ_{i=1}^{m} E_{z_i}[ h(z_i) ] = E_z[ h(z) ].
3 Proof Components
Our proof makes use of McDiarmid's inequality [8] to show that the empirical average closely estimates its expectation. To derive a Rademacher generalization bound, we apply McDiarmid's inequality to the following random variable, which is the quantity we wish to bound:

    Φ(S) = sup_{h ∈ H} ( R(h) - R̂_S(h) ).                                                    (5)

McDiarmid's inequality bounds the deviation of Φ from its mean; thus, we must also bound the expectation E[Φ]. However, we immediately face two obstacles: both McDiarmid's inequality and the standard bound on E[Φ] hold only for samples drawn in an i.i.d. fashion. The main idea behind our proof is to analyze the non-i.i.d. setting and transfer it to a close independent setting. The following sections describe in detail our solution to these problems.
3.1 Independent Blocks

We derive Rademacher generalization bounds for the case where training and test points are drawn from a stationary β-mixing sequence. As in previous non-i.i.d. analyses [7, 9, 10, 15], we use a technique transferring the original problem based on dependent points to one based on a sequence of independent blocks. The method consists of first splitting a sequence S into two subsequences S_0 and S_1, each made of μ blocks of a consecutive points. Given a sequence S = (z_1, ..., z_m) with m = 2aμ, S_0 and S_1 are defined as follows:

    S_0 = (Z_1, Z_2, ..., Z_μ),                    where Z_i = (z_{2(i-1)a+1}, ..., z_{2(i-1)a+a}),       (6)
    S_1 = (Z_1^{(1)}, Z_2^{(1)}, ..., Z_μ^{(1)}),  where Z_i^{(1)} = (z_{(2i-1)a+1}, ..., z_{(2i-1)a+a}).  (7)
Instead of the original sequence of odd blocks S_0, we will be working with a sequence S̃_0 of independent blocks of equal size a, to which standard i.i.d. techniques can be applied: S̃_0 = (Z̃_1, Z̃_2, ..., Z̃_μ) with mutually independent Z̃_k, where the points within each block Z̃_k follow the same distribution as those within Z_k. As stated by the following result of Yu [15, Corollary 2.7], for a sufficiently large spacing a between blocks and a sufficiently fast mixing distribution, the expectation of a bounded measurable function h is essentially unchanged if we work with S̃_0 instead of S_0.

Corollary 1 ([15]). Let h be a measurable function bounded by M ≥ 0 defined over the blocks Z_k. Then the following holds:

    | E_{S_0}[h] - E_{S̃_0}[h] | ≤ (μ - 1) M β(a),                                            (8)

where E_{S_0} denotes the expectation with respect to S_0 and E_{S̃_0} the expectation with respect to S̃_0.
We denote by D̃ the distribution corresponding to the independent blocks Z̃_k. Also, to work with block sequences, we extend some of our definitions: we define the extension h_a : Z^a → R of any hypothesis h ∈ H to a block hypothesis by h_a(B) = (1/a) Σ_{i=1}^{a} h(z_i) for any block B = (z_1, ..., z_a) ∈ Z^a, and we define H_a as the set of all block-based hypotheses h_a generated from h ∈ H.

It will also be useful to define the subsequence S_μ, which consists of μ singleton points separated by a gap of 2a - 1 points. This can be thought of as the sequence constructed from S_0, or S_1, by selecting only the jth point from each block, for any fixed j ∈ {1, ..., a}.
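The constructions of S_0, S_1 in Eqns. (6)-(7) and of the subsampled sequence S_μ amount to a reshape of the original sequence. A minimal sketch, assuming NumPy and the block indexing as reconstructed above (the exact offsets should be checked against the original if they matter):

```python
import numpy as np

def split_blocks(z, a):
    """Return the odd blocks S0, even blocks S1, and subsequence S_mu (one point per
    odd block, here j = 1) for a sequence z of length m = 2 * a * mu."""
    mu = len(z) // (2 * a)
    pairs = np.asarray(z[: 2 * a * mu]).reshape(mu, 2, a)  # (pair index, odd/even, offset)
    S0 = pairs[:, 0, :]     # Z_i      = (z_{2(i-1)a+1}, ..., z_{2(i-1)a+a})
    S1 = pairs[:, 1, :]     # Z_i^(1)  = (z_{(2i-1)a+1}, ..., z_{(2i-1)a+a})
    S_mu = S0[:, 0]         # mu singleton points separated by gaps of 2a - 1 points
    return S0, S1, S_mu
```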
3.2 Concentration Inequality

McDiarmid's inequality requires the sample to be i.i.d. Thus, we first show that Pr[Φ(S) > ε] can be bounded in terms of independent blocks, and then apply McDiarmid's inequality to the independent blocks.

Lemma 1. Let H be a set of hypotheses bounded by M. Let S denote a sample of size m drawn according to a stationary β-mixing distribution, and let S̃_0 denote a sequence of independent blocks. Then, for all a, μ, ε > 0 with 2μa = m and ε > E_{S̃_0}[Φ(S̃_0)], the following bound holds:

    Pr_S[ Φ(S) > ε ] ≤ 2 Pr_{S̃_0}[ Φ(S̃_0) - E_{S̃_0}[Φ(S̃_0)] > ε' ] + 2(μ - 1) β(a),

where ε' = ε - E_{S̃_0}[Φ(S̃_0)].
Proof. We first rewrite the left-hand side probability in terms of even and odd blocks, and then apply Corollary 1 as follows:

    Pr_S[ Φ(S) > ε ]
      = Pr_S[ sup_h ( R(h) - R̂_S(h) ) > ε ]
      = Pr_S[ sup_h ( (R(h) - R̂_{S_0}(h))/2 + (R(h) - R̂_{S_1}(h))/2 ) > ε ]                  (def. of R̂_S(h))
      ≤ Pr_S[ (1/2) sup_h ( R(h) - R̂_{S_0}(h) ) + (1/2) sup_h ( R(h) - R̂_{S_1}(h) ) > ε ]    (convexity of sup)
      = Pr_S[ Φ(S_0) + Φ(S_1) > 2ε ]                                                          (def. of Φ)
      ≤ Pr_{S_0}[ Φ(S_0) > ε ] + Pr_{S_1}[ Φ(S_1) > ε ]                                       (union bound)
      = 2 Pr_{S_0}[ Φ(S_0) > ε ]                                                              (stationarity)
      = 2 Pr_{S_0}[ Φ(S_0) - E_{S̃_0}[Φ(S̃_0)] > ε' ].                                         (def. of ε')

The second inequality holds by the union bound and the fact that Φ(S_0) or Φ(S_1) must surpass ε for their sum to surpass 2ε. To complete the proof, we apply Corollary 1 to the expectation of the indicator variable of the event {Φ(S_0) - E_{S̃_0}[Φ(S̃_0)] > ε'}, which yields

    2 Pr_{S_0}[ Φ(S_0) - E_{S̃_0}[Φ(S̃_0)] > ε' ] ≤ 2 Pr_{S̃_0}[ Φ(S̃_0) - E_{S̃_0}[Φ(S̃_0)] > ε' ] + 2(μ - 1) β(a).
We can now apply McDiarmid?s inequality to the independent blocks of Lemma 1.
Proposition 2. For the same assumptions as in Lemma 1, the following bound holds for all ε > E_{S̃_0}[Φ(S̃_0)]:

    Pr_S[ Φ(S) > ε ] ≤ 2 exp( -2μ ε'^2 / M^2 ) + 2(μ - 1) β(a),

where ε' = ε - E_{S̃_0}[Φ(S̃_0)].
Proof. To apply McDiarmid's inequality, we view each block as an i.i.d. point with respect to h_a. Φ(S̃_0) can be written in terms of h_a as Φ(S̃_0) = R(h_a) - R̂_{S̃_0}(h_a) = R(h_a) - (1/μ) Σ_{k=1}^{μ} h_a(Z̃_k). Thus, changing a block Z̃_k of the sample S̃_0 can change Φ(S̃_0) by at most (1/μ) |h_a(Z̃_k)| ≤ M/μ. By McDiarmid's inequality, the following holds for any ε' > 0:

    Pr_{S̃_0}[ Φ(S̃_0) - E_{S̃_0}[Φ(S̃_0)] > ε' ] ≤ exp( -2ε'^2 / Σ_{i=1}^{μ} (M/μ)^2 ) = exp( -2μ ε'^2 / M^2 ).

Plugging the right-hand side into the statement of Lemma 1 proves the proposition.
3.3 Bound on the Expectation

Here, we give a bound on E_{S̃_0}[Φ(S̃_0)] based on the Rademacher complexity, as in the i.i.d. case [2]. But, unlike the standard case, the proof requires an analysis in terms of independent blocks.

Lemma 2. The following inequality holds for the expectation E_{S̃_0}[Φ(S̃_0)] defined in terms of an independent block sequence: E_{S̃_0}[Φ(S̃_0)] ≤ R_μ^{D̃}(H).
Proof. By the convexity of the supremum function and Jensen's inequality, E_{S̃_0}[Φ(S̃_0)] can be bounded in terms of empirical averages over two samples:

    E_{S̃_0}[Φ(S̃_0)] = E_{S̃_0}[ sup_{h∈H} E_{S̃_0'}[R̂_{S̃_0'}(h)] - R̂_{S̃_0}(h) ] ≤ E_{S̃_0, S̃_0'}[ sup_{h∈H} R̂_{S̃_0'}(h) - R̂_{S̃_0}(h) ].

We now proceed with a standard symmetrization argument, with the independent blocks thought of as i.i.d. points:

    E_{S̃_0}[Φ(S̃_0)]
      ≤ E_{S̃_0, S̃_0'}[ sup_{h∈H} R̂_{S̃_0'}(h) - R̂_{S̃_0}(h) ]
      = E_{S̃_0, S̃_0'}[ sup_{h_a∈H_a} (1/μ) Σ_{i=1}^{μ} ( h_a(Z_i') - h_a(Z_i) ) ]               (def. of R̂)
      = E_{S̃_0, S̃_0', σ}[ sup_{h_a∈H_a} (1/μ) Σ_{i=1}^{μ} σ_i ( h_a(Z_i') - h_a(Z_i) ) ]        (Rademacher variables)
      ≤ E_{S̃_0', σ}[ sup_{h_a∈H_a} (1/μ) Σ_{i=1}^{μ} σ_i h_a(Z_i') ] + E_{S̃_0, σ}[ sup_{h_a∈H_a} (1/μ) Σ_{i=1}^{μ} -σ_i h_a(Z_i) ]   (sub-add. of sup)
      = 2 E_{S̃_0, σ}[ sup_{h_a∈H_a} (1/μ) Σ_{i=1}^{μ} σ_i h_a(Z̃_i) ].

In the second equality, we introduced the Rademacher random variables σ_i: with probability 1/2, σ_i = 1 and the difference h_a(Z_i') - h_a(Z_i) is left unchanged; with probability 1/2, σ_i = -1 and Z_i and Z_i' are permuted. Since the blocks Z_i and Z_i' are independent, taking the expectation over σ leaves the overall expectation unchanged. The inequality follows from the sub-additivity of the supremum function and the linearity of expectation. The final equality holds because S̃_0 and S̃_0' are identically distributed due to the assumption of stationarity, and -σ_i has the same distribution as σ_i.
We now relate the Rademacher block sequence to a sequence over independent points. The right-hand side of the inequality just presented can be rewritten as

    2 E_{S̃_0, σ}[ sup_{h_a∈H_a} (1/μ) Σ_{i=1}^{μ} σ_i h_a(Z̃_i) ] = 2 E_{S̃_0, σ}[ sup_{h∈H} (1/μ) Σ_{i=1}^{μ} σ_i (1/a) Σ_{j=1}^{a} h(z_j^{(i)}) ],

where z_j^{(i)} denotes the jth point of the ith block. For j ∈ [1, a], let S̃_0^j denote the i.i.d. sample constructed from the jth point of each independent block Z̃_i, i ∈ [1, μ]. By reversing the order of the summations and using the convexity of the supremum function, we obtain the following:

    E_{S̃_0}[Φ(S̃_0)]
      ≤ 2 E_{S̃_0, σ}[ sup_{h∈H} (1/a) Σ_{j=1}^{a} (1/μ) Σ_{i=1}^{μ} σ_i h(z_j^{(i)}) ]          (reversing order of sums)
      ≤ (1/a) Σ_{j=1}^{a} E_{S̃_0, σ}[ sup_{h∈H} (2/μ) Σ_{i=1}^{μ} σ_i h(z_j^{(i)}) ]            (convexity of sup)
      = (1/a) Σ_{j=1}^{a} E_{S̃_0^j, σ}[ sup_{h∈H} (2/μ) Σ_{i=1}^{μ} σ_i h(z_j^{(i)}) ]          (marginalization)
      = E_{S̃_μ, σ}[ sup_{h∈H} (2/μ) Σ_{z_i ∈ S̃_μ} σ_i h(z_i) ]  ≤  R_μ^{D̃}(H).

The first equality in this derivation is obtained by marginalizing over the variables that do not appear within the inner sum. The second equality holds since, by stationarity, the choice of j does not change the value of the expectation. The remaining quantity, modulo absolute values, is the Rademacher complexity over μ independent points.
4 Non-i.i.d. Rademacher Generalization Bounds

4.1 General Bounds

This section presents and analyzes our main Rademacher complexity generalization bounds for stationary β-mixing sequences.

Theorem 1 (Rademacher complexity bound). Let H be a set of hypotheses bounded by M ≥ 0. Then, for any sample S of size m drawn from a stationary β-mixing distribution, and for any μ, a > 0 with 2μa = m and δ > 2(μ - 1)β(a), with probability at least 1 - δ, the following inequality holds for all hypotheses h ∈ H:

    R(h) ≤ R̂_S(h) + R_μ^{D̃}(H) + M √( log(2/δ') / (2μ) ),

where δ' = δ - 2(μ - 1)β(a).

Proof. Setting the right-hand side of Proposition 2 to δ and using Lemma 2 to bound E_{S̃_0}[Φ(S̃_0)] by the Rademacher complexity R_μ^{D̃}(H) yields the result.
As pointed out earlier, a key advantage of the Rademacher complexity is that it can be measured from data, assuming that the computation of the minimal empirical error can be carried out effectively and efficiently. In particular, we can closely estimate R̂_{S_μ}(H), where S_μ is a subsample of the sample S drawn from a β-mixing distribution, by considering random samples of σ. The following theorem gives a bound precisely in terms of the empirical Rademacher complexity R̂_{S_μ}.

Theorem 2 (Empirical Rademacher complexity bound). Under the same assumptions as in Theorem 1, for any μ, a > 0 with 2μa = m and δ > 4(μ - 1)β(a), with probability at least 1 - δ, the following inequality holds for all hypotheses h ∈ H:

    R(h) ≤ R̂_S(h) + R̂_{S_μ}(H) + 3M √( log(4/δ') / (2μ) ),

where δ' = δ - 4(μ - 1)β(a).
Proof. To derive this result from Theorem 1, it suffices to bound R_μ^{D̃}(H) in terms of R̂_{S_μ}(H). The application of Corollary 1 to the indicator variable of the event {R_μ^{D̃}(H) - R̂_{S_μ}(H) > ε} yields

    Pr[ R_μ^{D̃}(H) - R̂_{S_μ}(H) > ε ] ≤ Pr[ R_μ^{D̃}(H) - R̂_{S̃_μ}(H) > ε ] + (μ - 1) β(2a - 1).      (9)

Now, we can apply McDiarmid's inequality to R_μ^{D̃}(H) - R̂_{S̃_μ}(H), which is defined over points drawn in an i.i.d. fashion. Changing a point of S̃_μ can affect R̂_{S̃_μ}(H) by at most 2M/μ; thus, McDiarmid's inequality gives

    Pr[ R_μ^{D̃}(H) - R̂_{S_μ}(H) > ε ] ≤ exp( -μ ε^2 / (2M^2) ) + (μ - 1) β(2a - 1).                  (10)

Note that β is a decreasing function, which implies β(2a - 1) ≤ β(a). Thus, with probability at least 1 - δ/2, R_μ^{D̃}(H) ≤ R̂_{S_μ}(H) + M √( 2 log(1/δ') / μ ), with δ' = δ/2 - (μ - 1)β(a), and a fortiori with δ' = δ/4 - (μ - 1)β(a). The result follows from this inequality combined with the statement of Theorem 1 for a confidence parameter δ/2.
This theorem can be used to derive generalization bounds for a variety of hypothesis sets and learning
settings. In the next section, we present margin bounds for kernel-based classification.
4.2 Classification

Let X denote the input space, Y = {-1, +1} the set of target values in classification, and Z = X × Y. For any hypothesis h and margin ρ > 0, let R̂_S^ρ(h) denote the average amount by which yh(x) deviates from ρ over a sample S: R̂_S^ρ(h) = (1/m) Σ_{i=1}^{m} (ρ - y_i h(x_i))_+. Given a positive definite symmetric kernel K : X × X → R, let K denote its Gram matrix for the sample S and H_K the kernel-based hypothesis set {x ↦ Σ_{i=1}^{m} α_i K(x_i, x) : α^T K α ≤ 1}, where α ∈ R^{m×1} denotes the column vector with components α_i, i = 1, ..., m.
Theorem 3 (Margin bound). Let ρ > 0 and let K be a positive definite symmetric kernel. Then, for any μ, a > 0 with 2μa = m and δ > 4(μ - 1)β(a), with probability at least 1 - δ over samples S of size m drawn from a stationary β-mixing distribution, the following inequality holds for all hypotheses h ∈ H_K:

    Pr[ yh(x) ≤ 0 ] ≤ (1/ρ) R̂_S^ρ(h) + (4/(ρμ)) √Tr[K] + 3 √( log(4/δ') / (2μ) ),

where δ' = δ - 4(μ - 1)β(a).
Proof. For any h ∈ H, let h̄ denote the corresponding hypothesis defined over Z by h̄(z) = -yh(x) for all z = (x, y) ∈ Z, and let H̄_K denote the hypothesis set {z ∈ Z ↦ h̄(z) : h ∈ H_K}. Let L denote the loss function associated to the margin loss R̂_S^ρ. Then, Pr[yh(x) ≤ 0] ≤ E[(L ∘ h̄)(z)] = R(L ∘ h̄). Since L - 1 is 1/ρ-Lipschitz and (L - 1)(0) = 0, by Talagrand's lemma [6], R̂_S((L - 1) ∘ H̄_K) ≤ (2/ρ) R̂_S(H̄_K). The result is then obtained by applying Theorem 2 to R((L - 1) ∘ h̄) = R(L ∘ h̄) - 1 with R̂((L - 1) ∘ h̄) = R̂(L ∘ h̄) - 1, and using the known bound for the empirical Rademacher complexity of kernel-based classifiers [2, 11]: R̂_S(H̄_K) ≤ (2/|S|) √Tr[K].
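All three terms of Theorem 3 are computable from a single sample once μ is fixed. A minimal sketch, assuming NumPy; it merely evaluates the right-hand side of the bound and does not check the conditions of the theorem (stationarity, β-mixing, δ > 4(μ - 1)β(a)), which remain the user's responsibility.

```python
import numpy as np

def margin_bound(scores, y, K, rho, mu, delta_prime):
    """Right-hand side of Theorem 3. scores[i] = h(x_i); K is the Gram matrix;
    delta_prime = delta - 4 * (mu - 1) * beta(a) must be positive."""
    margin_loss = np.mean(np.maximum(0.0, rho - y * scores))      # rho-margin empirical loss
    complexity = 4.0 / (rho * mu) * np.sqrt(np.trace(K))          # kernel-trace term
    confidence = 3.0 * np.sqrt(np.log(4.0 / delta_prime) / (2.0 * mu))
    return margin_loss / rho + complexity + confidence
```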
In order to show that this bound converges, we must appropriately choose the parameter μ, or equivalently a, which will depend on the mixing parameter β. In the case of algebraic mixing, and using the straightforward bound Tr[K] ≤ mR^2 for the kernel trace, where R is the radius of a ball containing the data, the following corollary holds.

Corollary 2. With the same assumptions as in Theorem 3, if the distribution is furthermore algebraically β-mixing, with β(a) = β_0 a^{-r}, then, with probability at least 1 - δ, the following bound holds for all hypotheses h ∈ H_K:

    Pr[ yh(x) ≤ 0 ] ≤ (1/ρ) R̂_S^ρ(h) + (8R/ρ) m^{χ_1} + 3 m^{χ_2} √( log(4/δ') ),

where χ_1 = (1/2)(3/(r+2) - 1), χ_2 = (1/2)(3/(2r+4) - 1), and δ' = δ - 2β_0 m^{χ_1}.
This bound is obtained by choosing μ = (1/2) m^{(2r+1)/(2r+4)}, which, modulo a multiplicative constant, is the minimizer of (√m/μ + μβ(a)). Note that for r > 1 we have χ_1, χ_2 < 0, and thus it is clear that the bound converges, while the actual rate depends on the distribution parameter r. A tighter estimate of the trace of the kernel matrix, possibly derived from data, would provide a better bound, as would stronger mixing assumptions, e.g., exponential mixing. Finally, we note that as r → ∞ and β_0 → 0, that is, as the dependence between points vanishes, the right-hand side of the bound approaches O(R̂_S^ρ + 1/√m), which coincides with the asymptotic behavior in the i.i.d. case [2, 4, 5].
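The parameter choice behind Corollary 2 is fully explicit. A minimal sketch, following the reconstructed exponents above; the returned δ' must remain positive for the bound to be non-vacuous.

```python
def algebraic_mixing_params(m, r, beta0, delta):
    """Block count mu, block size a, exponents chi_1 and chi_2, and delta' of Corollary 2."""
    mu = 0.5 * m ** ((2 * r + 1) / (2 * r + 4))   # minimizes sqrt(m)/mu + mu * beta(a)
    a = m / (2 * mu)                              # so that 2 * mu * a = m
    chi1 = 0.5 * (3.0 / (r + 2) - 1.0)
    chi2 = 0.5 * (3.0 / (2 * r + 4) - 1.0)
    delta_prime = delta - 2.0 * beta0 * m ** chi1
    return mu, a, chi1, chi2, delta_prime
```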
5 Conclusion

We presented the first Rademacher complexity error bounds for dependent samples generated by a stationary β-mixing process, a generalization of similar existing bounds derived for the i.i.d. case. We also gave the first margin bounds for kernel-based classification in this non-i.i.d. setting, including explicit bounds for algebraically β-mixing processes. Similar margin bounds can be obtained for the regression setting by using Theorem 2 and the properties of the empirical Rademacher complexity, as in the i.i.d. case. Many non-i.i.d. bounds based on other complexity measures, such as the VC-dimension or covering numbers, can be retrieved from our framework. Our framework and the bounds presented could serve as the basis for the design of regularization-based algorithms for dependent samples generated by a stationary β-mixing process.
Acknowledgements
This work was partially funded by the New York State Office of Science Technology and Academic Research
(NYSTAR).
References
[1] M. Anthony and P. Bartlett. Neural Network Learning: Theoretical Foundations. Cambridge University
Press, Cambridge, UK, 1999.
[2] P. L. Bartlett and S. Mendelson. Rademacher and Gaussian complexities: Risk bounds and structural results. Journal of Machine Learning Research, 3:463-482, 2002.
[3] A. Irle. On the consistency in nonparametric estimation under mixing assumptions. Journal of Multivariate Analysis, 60:123-147, 1997.
[4] V. Koltchinskii and D. Panchenko. Rademacher processes and bounding the risk of function learning. In
High Dimensional Probability II, pages 443-459. Preprint, 2000.
[5] V. Koltchinskii and D. Panchenko. Empirical margin distributions and bounding the generalization error
of combined classifiers. Annals of Statistics, 30, 2002.
[6] M. Ledoux and M. Talagrand. Probability in Banach Spaces: Isoperimetry and Processes. Springer,
1991.
[7] A. Lozano, S. Kulkarni, and R. Schapire. Convergence and consistency of regularized boosting algorithms
with stationary ?-mixing observations. Advances in Neural Information Processing Systems, 18, 2006.
[8] C. McDiarmid. On the method of bounded differences. In Surveys in Combinatorics, pages 148-188.
Cambridge University Press, 1989.
[9] R. Meir. Nonparametric time series prediction through adaptive model selection. Machine Learning,
39(1):5-34, 2000.
[10] M. Mohri and A. Rostamizadeh. Stability bounds for non-iid processes. Advances in Neural Information
Processing Systems, 2007.
[11] J. Shawe-Taylor and N. Cristianini. Kernel Methods for Pattern Analysis. Cambridge University Press,
2004.
[12] I. Steinwart, D. Hush, and C. Scovel. Learning from dependent observations. Technical Report LA-UR06-3507, Los Alamos National Laboratory, 2007.
[13] L. G. Valiant. A theory of the learnable. ACM Press New York, NY, USA, 1984.
[14] M. Vidyasagar. Learning and Generalization: with Applications to Neural Networks. Springer, 2003.
[15] B. Yu. Rates of convergence for empirical processes of stationary mixing sequences. Annals Probability,
22(1):94?116, 1994.
2,745 | 349 | Dynamics of Generalization in Linear Perceptrons
Anders Krogh
Niels Bohr Institute
Blegdamsvej 17
DK-2100 Copenhagen, Denmark
John A. Hertz
NORDITA
Blegdamsvej 17
DK-2100 Copenhagen, Denmark
Abstract
We study the evolution of the generalization ability of a simple linear perceptron with N inputs which learns to imitate a "teacher perceptron". The
system is trained on p = αN binary example inputs and the generalization ability measured by testing for agreement with the teacher on all 2^N
possible binary input patterns. The dynamics may be solved analytically
and exhibits a phase transition from imperfect to perfect generalization
at α = 1. Except at this point the generalization ability approaches its
asymptotic value exponentially, with critical slowing down near the transition; the relaxation time is ∝ (1 - √α)^{-2}. Right at the critical point,
the approach to perfect generalization follows a power law ∝ t^{-1/2}. In
the presence of noise, the generalization ability is degraded by an amount
∝ (√α - 1)^{-1} just above α = 1.
1 INTRODUCTION
It is very important in practical situations to know how well a neural network will
generalize from the examples it is trained on to the entire set of possible inputs. This
problem is the focus of a lot of recent and current work [1-11]. All this work, however, deals with the asymptotic state of the network after training. Here we study
a very simple model which allows us to follow the evolution of the generalization
ability in time under training. It has a single linear output unit, and the weights
obey adaline learning. Despite its simplicity, it exhibits nontrivial behaviour: a dynamical phase transition at a critical number of training examples, with power-law
decay right at the transition point and critical slowing down as one approaches it
from either side.
2 THE MODEL
Our simple linear neuron has an output V = N^{-1/2} Σ_i W_i e_i, where e_i is the i-th input.
It learns to imitate a teacher [1] whose weights are U_i by training on p examples of
input-output pairs (e_i^μ, ζ^μ) with

    ζ^μ = N^{-1/2} Σ_i U_i e_i^μ                                          (1)

generated by the teacher. The adaline learning equation [11] is then

    Ẇ_i = (1/√N) Σ_{μ=1}^{p} (ζ^μ - N^{-1/2} Σ_j W_j e_j^μ) e_i^μ
        = (1/N) Σ_{μ,j} (U_j - W_j) e_j^μ e_i^μ.                          (2)

By introducing the difference between the teacher and the pupil,

    v_i = U_i - W_i,                                                      (3)

and the training input correlation matrix

    A_{ij} = (1/N) Σ_{μ=1}^{p} e_i^μ e_j^μ,                               (4)

the learning equation becomes

    v̇_i = - Σ_j A_{ij} v_j.                                               (5)
We let the example inputs e_i^μ take the values ±1, randomly and independently, but it
is straightforward to generalize it to any distribution of inputs with ⟨e_i^μ e_j^ν⟩_e ∝ δ_{ij} δ_{μν}.
For a large number of examples (p = O(N) → ∞) the resulting generalization ability
will be independent of just which p of the 2^N possible binary input patterns we
choose. All our results will then depend only on the fact that we can calculate the
spectrum of the matrix A.
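Before turning to the analytic treatment, the dynamics is easy to simulate; the sketch below is our addition and not part of the paper. It draws p = αN binary patterns, builds A as in eq. (4), and evaluates the error F(t) of eq. (9) in the eigenbasis of A; for α < 1 the error levels off at 1 - α, as derived in the next section.

```python
import numpy as np

rng = np.random.default_rng(1)
N, alpha = 200, 0.5
p = int(alpha * N)

E = rng.choice([-1.0, 1.0], size=(p, N))      # binary example inputs e_i^mu
A = E.T @ E / N                               # training correlation matrix, eq. (4)
u = rng.normal(size=N)
u *= np.sqrt(N) / np.linalg.norm(u)           # teacher normalized to length sqrt(N)

lam, V = np.linalg.eigh(A)                    # eigenvalues Lambda_r and eigenmodes
u_r = V.T @ u                                 # teacher vector in the eigenbasis
for t in (0.0, 1.0, 5.0, 25.0):
    F = np.sum(u_r**2 * np.exp(-2.0 * lam * t)) / N   # eq. (9), with W(0) = 0
    print(t, F)                               # F(t) -> 1 - alpha = 0.5 here
```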
3 GENERALIZATION ABILITY
To measure the generalization ability, we test whether the output of our perceptron
with weights W_i agrees with that of the teacher with weights U_i on all possible binary
inputs. Our objective function, which we call the generalization error, is just the
square of the error, averaged over all these inputs:

    F = 2^{-N} Σ_{σ} [ N^{-1/2} Σ_i (U_i - W_i) σ_i ]² = (1/N) Σ_i v_i².   (6)

(We used that 2^{-N} Σ_{σ} σ_i σ_j is zero unless i = j.) That is, F is just proportional to
the square of the difference between the teacher and pupil weight vectors. With the
N^{-1} normalization factor F will then vary between 1 (tabula rasa) and 0 (perfect
generalization) if we normalize u to length √N. During learning, W_i and thus v_i
depend on time, so F is a function of t. The complementary quantity 1 - F(t)
could be called the generalization ability.
In the basis where A is diagonal, the learning equation (5) is simply

    v̇_r = -Λ_r v_r                                                        (7)

where Λ_r are the eigenvalues of A. This has the solution

    v_r(t) = v_r(0) e^{-Λ_r t} = u_r e^{-Λ_r t},                           (8)

where it is assumed that the weights are zero at time t = 0 (we will come back to
the more general case later). Thus we find

    F(t) = (1/N) Σ_r v_r²(t) = (1/N) Σ_r u_r² e^{-2Λ_r t}.                 (9)

Averaging over all possible training sets of size p this can be expressed in terms of
the density of eigenvalues of A, ρ(ε):

    F(t) = (|u|²/N) ∫ dε ρ(ε) e^{-2εt}.                                    (10)

In the following it will be assumed that the length of u is normalized to √N, so the
prefactor disappears.
For large N, the eigenvalue density is (see, e.g. [11], where it can be obtained simply
from the imaginary part of the Green's function in eq. (57))

    ρ(ε) = (1/2πε) √((ε₊ - ε)(ε - ε₋)) + (1 - α) θ(1 - α) δ(ε),            (11)

where

    ε± = (1 ± √α)²                                                        (12)

and θ() is the unit step function. The density has two terms: a 'deformed semicircle'
between the roots ε₋ and ε₊, and for α < 1 a delta function at ε = 0 with weight
1 - α. The delta-function term appears because no learning takes place in the
subspace orthogonal to that spanned by the training patterns. For α > 1 the
patterns span the whole space, and therefore the delta-function is absent.
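For reference, the theoretical curve defined by eqs. (10) and (11) can be evaluated by straightforward quadrature. The following sketch is our addition; the grid size is arbitrary and the nearly singular integrand at α = 1 is handled only approximately.

```python
import numpy as np

def F_theory(t, alpha, n_grid=20000):
    """Eq. (10) with the density of eq. (11): bulk integral plus delta weight."""
    lo, hi = (1 - np.sqrt(alpha))**2, (1 + np.sqrt(alpha))**2
    eps = np.linspace(lo + 1e-9, hi - 1e-9, n_grid)
    rho = np.sqrt((hi - eps) * (eps - lo)) / (2 * np.pi * eps)
    bulk = np.trapz(rho * np.exp(-2 * eps * t), eps)
    return bulk + max(1 - alpha, 0.0)     # delta function at eps = 0, weight 1-alpha

for alpha in (0.5, 1.0, 2.0):
    print(alpha, F_theory(25.0, alpha))   # -> ~0.5, slow decay, ~0
```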
The results at infinite time are immediately evident. For α < 1 there is a nonzero
limit, F(∞) = 1 - α, while F(∞) vanishes for α > 1, indicating perfect generalization
(the solid line in Figure 1). While on the one hand it may seem remarkable
that perfect generalization can be obtained from a training set which forms an infinitesimal
fraction of the entire set of possible examples, the meaning of the result
is just that N points are sufficient to determine an (N - 1)-dimensional hyperplane
in N dimensions.
Figure 2 shows F(t) as obtained numerically from (10) and (11). The qualitative
form of the approach to F(∞) can be obtained analytically by inspection. For
α ≠ 1, the asymptotic approach is governed by the smallest nonzero eigenvalue ε₋.
Thus we have critical slowing down, with a divergent relaxation time

    τ = 1/ε₋ = 1/|√α - 1|²                                                (13)

as the transition at α = 1 is approached. Right at the critical point, the eigenvalue
density diverges for small ε like ε^{-1/2}, which leads to the power law

    F(t) ∝ 1/√t                                                           (14)

at long times. Thus, while exactly N examples are sufficient to produce perfect
generalization, the approach to this desirable state is rather slow. A little bit
above α = 1, F(t) will also follow this power law for times t ≪ τ, going over to
(slow) exponential decay at very long times (t > τ). By increasing the training set
size well above N (say, to 2N), one can achieve exponentially fast generalization.
Below α = 1, where perfect generalization is never achieved, there is at least the
consolation that the approach to the generalization level the network does reach is
exponential (though with the same problem of a long relaxation time just below the
transition as just above it).

[Figure 1: The asymptotic generalization error as a function of α. The full line
corresponds to λ = 0, the dashed line to λ = 0.2, and the dotted line to w₀ = 1
and λ = 0.]
4 EXTENSIONS
In this section we briefly discuss some extensions of the foregoing calculation. We
will see what happens if the weights are non-zero at t = 0, discuss weight decay,
and finally consider noise in the learning process.
Weight decay is a simple and frequently-used way to limit the growth of the weights,
which might be desirable for several reasons. It is also possible to approximate the
problem with binary weights using a weight decay term (the so-called spherical
model, see [11]). We consider the simplest kind of weight decay, which comes in as
an additive term, -λW_i = -λ(U_i - v_i), in the learning equation (2), so the equation
(5) for the difference between teacher and pupil is now

    v̇_i = - Σ_j A_{ij} v_j + λ(U_i - v_i) = - Σ_j (A_{ij} + λδ_{ij}) v_j + λU_i.   (15)

[Figure 2: The generalization error F(t) as a function of time for a few different
values of α.]
Apart from the last term this just shifts the eigenvalue spectrum by λ.

In the basis where A is diagonal we can again write down the general solution to
this equation:

    v_r(t) = ( λu_r / (Λ_r + λ) ) (1 - e^{-(Λ_r+λ)t}) + v_r(0) e^{-(Λ_r+λ)t}.   (16)
The square of this is

    v_r² = u_r² [ λ(1 - e^{-(Λ_r+λ)t})/(Λ_r + λ) + e^{-(Λ_r+λ)t} + (w_r(0)/u_r) e^{-(Λ_r+λ)t} ]².   (17)

As in (10) this has to be integrated over the eigenvalue spectrum to find the averaged
generalization error. Assuming that the initial weights are random, so that w̄_r(0) =
0, and that they have a relative variance given by

    ⟨w_r(0)²⟩ = w₀² u_r²,                                                 (18)

the average of F(t) over the distribution of initial conditions now becomes

    F(t) = ∫ dε ρ(ε) [ ( λ(1 - e^{-(ε+λ)t})/(ε + λ) + e^{-(ε+λ)t} )² + w₀² e^{-2(ε+λ)t} ].   (19)

(Again it is assumed the length of u is √N.)
For λ = 0 we see the result is the same as before except for a factor 1 + w₀² in front
of the integral. This means that the asymptotic generalization error is now

    F(∞) = (1 + w₀²)(1 - α)   for α < 1,
    F(∞) = 0                  for α > 1,                                   (20)
which is shown as a dotted line in Figure 1 for w₀ = 1. The excess error can easily
be understood as a contribution to the error from the non-relaxing part of the initial
weight vector in the subspace orthogonal to the space spanned by the patterns. The
relaxation times are unchanged for λ = 0.

For λ > 0 the relaxation times become finite even at α = 1, because the smallest
eigenvalue is shifted by λ, so (13) is now

    τ = 1/(ε₋ + λ) = 1/(|√α - 1|² + λ).                                    (21)

In this case the asymptotic error can easily be obtained numerically from (19), and
is shown by the dashed line in Figure 1. It is smaller than for λ = 0 for w₀² > 1 at
sufficiently small α. This is simply because the weight decay makes the part of w(0)
orthogonal to the pattern space decay away exponentially, thereby eliminating the
excess error due to large initial weight components in this subspace.
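To reproduce the curves of Figure 1 numerically, eq. (19) can be evaluated at large t, treating the ε = 0 delta function separately. The sketch below is our addition; with λ = 0 and w₀ = 0 it gives the solid line 1 - α, with λ = 0, w₀ = 1 the dotted line, and with λ = 0.2 the dashed line.

```python
import numpy as np

def F_asym(alpha, lam=0.0, w0=0.0, t=1e4, n_grid=20000):
    """Asymptotic generalization error: eq. (19) evaluated at large t."""
    lo, hi = (1 - np.sqrt(alpha))**2, (1 + np.sqrt(alpha))**2
    eps = np.linspace(lo + 1e-9, hi - 1e-9, n_grid)
    rho = np.sqrt((hi - eps) * (eps - lo)) / (2 * np.pi * eps)
    d = np.exp(-(eps + lam) * t)
    g = (lam * (1 - d) / (eps + lam) + d) ** 2 + w0**2 * d**2
    # delta function at eps = 0 (weight 1 - alpha, present only for alpha < 1);
    # its bracket in eq. (19) tends to 1 + w0^2 exp(-2 lam t).
    delta = max(1.0 - alpha, 0.0) * (1.0 + w0**2 * np.exp(-2 * lam * t))
    return np.trapz(rho * g, eps) + delta

for a in (0.25, 0.5, 0.75, 1.0, 1.5, 2.0):
    print(a, F_asym(a), F_asym(a, w0=1.0), F_asym(a, lam=0.2))
```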
This phase transition is very sensitive to noise. Consider adding a noise term η_i(t)
to the right-hand side of (2), with

    ⟨η_i(t) η_j(t')⟩ = 2T δ_{ij} δ(t - t').                                (22)

Here we restrict our attention to the case λ = 0. Carrying the extra term through
the succeeding manipulations leads, in place of (7), to

    v̇_r = -Λ_r v_r + η_r(t).                                               (23)

The additional term leads to a correction (after Fourier transforming)

    δv_r(ω) = η_r(ω) / (-iω + Λ_r)                                         (24)

and thus to an extra (time-independent) piece of the generalization error F(t):

    δF = (1/N) Σ_r ∫ (dω/2π) ⟨|η_r(ω)|²⟩ / |-iω + Λ_r|² = (T/N) Σ_r 1/Λ_r.   (25)

For α > 1, where there are no zero eigenvalues, we have

    δF = T ∫_{ε₋}^{ε₊} dε ρ(ε)/ε,                                          (26)

which has the large-α limit T/α, as found in equilibrium analyses (also for threshold
perceptrons [2,3,5,6,7,8,9]). Equation (26) gives a generalization error which
diverges as one approaches the transition at α = 1:

    δF ∝ T ε₋^{-1/2} = T/(√α - 1).                                         (27)

Equation (25) blows up for α < 1, where some of the Λ_r are zero. This divergence
just reflects the fact that in the subspace orthogonal to the training patterns, v feels
only the noise and so exhibits a random walk whose variance diverges as t → ∞.
Keeping more careful track of the dynamics in this subspace leads to

    δF = 2T(1 - α)t + T ∫_{ε₋}^{ε₊} dε ρ(ε)/ε ≈ 2T [ (1 - α)t + O(1/√α) ].   (28)
5 CONCLUSION
Generalization in the linear perceptron can be understood in the following picture.
To get perfect generalization the training pattern vectors have to span the whole
input space - N points (in general position) are enough to specify any hyperplane.
This means that perfect generalization appears only for α > 1. As α approaches
1 the relaxation time - i.e. learning time - diverges, signaling a phase transition,
as is common in physical systems. Noise has a severe effect on this transition. It
leads to a degradation of the generalization ability which diverges as one reduces
the number of training examples toward the critical number.
This model is of course much simpler than most real-life training problems. However, it does allow us to examine in detail the dynamical phase transition separating
perfect from imperfect generalization. Further extensions of the model can also be
solved and will be reported elsewhere.
References
[1] Gardner, E. and B. Derrida: Three Unfinished Works on the Optimal Storage Capacity of Networks. Journal of Physics A 22, 1983-1994 (1989).
[2] Schwartz, D.B., V.K. Samalam, S.A. Solla, and J.S. Denker: Exhaustive Learning. Neural Computation 2, 371-382 (1990).
[3] Tishby, N., E. Levin, and S.A. Solla: Consistent Inference of Probabilities in Layered Networks: Predictions and Generalization. Proc. IJCNN Washington 1989, vol. 2, 403-410, Hillsdale: Erlbaum (1989).
[4] Baum, E.B. and D. Haussler: What Size Net Gives Valid Generalization. Neural Computation 1, 151-160 (1989).
[5] Gyorgyi, G. and N. Tishby: Statistical Theory of Learning a Rule. In Neural Networks and Spin Glasses, eds W.K. Theumann and R. Koeberle. Singapore: World Scientific (1990).
[6] Hansel, D. and H. Sompolinsky: Learning from Examples in a Single-Layer Neural Network. Europhysics Letters 11, 687-692 (1990).
[7] Vallet, F., J. Cailton and P. Refregier: Linear and Nonlinear Extension of the Pseudo-Inverse Solution for Learning Boolean Functions. Europhysics Letters 9, 315-320 (1989).
[8] Opper, M., W. Kinzel, J. Kleinz, and R. Nehl: On the Ability of the Optimal Perceptron to Generalize. Journal of Physics A 23, L581-L586 (1990).
[9] Levin, E., N. Tishby, and S.A. Solla: A Statistical Approach to Learning and Generalization in Layered Neural Networks. AT&T Bell Labs, preprint (1990).
[10] Gyorgyi, G.: Inference of a Rule by a Neural Network with Thermal Noise. Physical Review Letters 64, 2957-2960 (1990).
[11] Hertz, J.A., A. Krogh, and G.I. Thorbergsson: Phase Transitions in Simple Learning. Journal of Physics A 22, 2133-2150 (1989).
2,746 | 3,490 | Learning the Semantic Correlation: An
Alternative Way to Gain from Unlabeled Text
Yi Zhang
Machine Learning Department
Carnegie Mellon University
[email protected]
Jeff Schneider
The Robotics Institute
Carnegie Mellon University
[email protected]
Artur Dubrawski
The Robotics Institute
Carnegie Mellon University
[email protected]
Abstract
In this paper, we address the question of what kind of knowledge is generally transferable from unlabeled text. We suggest and analyze the semantic correlation of words as a generally transferable structure of the
language and propose a new method to learn this structure using an appropriately chosen latent variable model. This semantic correlation contains structural information of the language space and can be used to
control the joint shrinkage of model parameters for any specific task in
the same space through regularization. In an empirical study, we construct 190 different text classification tasks from a real-world benchmark,
and the unlabeled documents are a mixture from all these tasks. We test
the ability of various algorithms to use the mixed unlabeled text to enhance all classification tasks. Empirical results show that the proposed
approach is a reliable and scalable method for semi-supervised learning, regardless of the source of unlabeled data, the specific task to be
enhanced, and the prediction model used.
1 Introduction
The availability of large amounts of unlabeled data such as text on the Internet is a strong
motivation for research in semi-supervised learning [4]. Currently, most of these methods
assume that the unlabeled data belong to the same classes or share the generative distributions with the labeled examples, e.g., generative models [10], low-density separation
[8, 13], and graph-based methods [3]. As indicated in [11], unlabeled data in real-world
applications do not necessarily follow the classes or distribution of labeled examples, and
semi-supervised learning algorithms that give up this assumption have wider applicability
in practice. As a result, some algorithms avoid using unlabeled examples directly in model
training and instead focus on ?changes of representation? that find a more informative representation from unlabeled data and use it to encode the labeled examples [4, 1, 11].
However, even algorithms for learning good features from unlabeled data still make a strong
assumption: those learned high-level features will be relevant to the specific prediction task
at hand. This assumption might be problematic. Many functions can be defined over an
input space and a specific task corresponds to only one of them. The feature extraction
on unlabeled data is an unsupervised process and thus a ?blindly? learned representation
might be irrelevant to a specific task, especially when the unlabeled data are not from
the same task. To tackle this problem, some recent work avoids blind feature extraction by
incorporating external knowledge about the task being enhanced [1]: the high-level features
are learned by principal component analysis on the weights of several models, and these
models are trained from some ?auxiliary? tasks constructed by domain knowledge.
In this paper, we explore the possibility of extracting generally transferable knowledge
from unlabeled text without information about the task to be enhanced. This knowledge
is represented as the semantic correlation structure of the words in the text domain and
is shown to be transferable among documents of different themes. This structure is extracted using a latent topic model combined with a bootstrapping procedure. The rationale
is that the latent topics (or more generally, high-level features) extracted from unlabeled
data might be irrelevant to a particular task, but the word distribution in these topics reveals
the structural information of the language, represented by the semantic correlation among
words. For any specific task defined on the same input space, this information can be used
to control the joint shrinkage of model parameters through informative regularization.
The use of covariance or correlation structure has already been mentioned in transfer learning [12, 9]. A covariance structure can be transferred from a few related tasks to a target
task [12] or inferred from meta-features [9]. In fact, one way to view the present work
is: 1) we automatically construct a large number of diverse but meaningful ?tasks? from
unlabeled text without using external knowledge, where each ?task? is actually extracted
as a latent variable; 2) we propose to learn the semantic correlation structure of the word
space from these dummy tasks and show that this structure is generally transferable regardless of the source of unlabeled data; 3) this structure can be efficiently incorporated into a
broad category of prediction models via regularization, which leads to a very scalable and
applicable semi-supervised learning framework.
2 Semantic Correlation: Transferable Structure from Unlabeled Text
2.1 Latent Topics and Semantic Structure
Latent topics extracted from unlabeled text might be irrelevant to a particular task, but the
composition of these topics in terms of word distribution reveals information about the
semantic structure of the language. Assume a latent topic model [7, 2] of the word space
X, or more generally, a latent variable model characterizing the input space X:
    x = Az,                                                               (1)

where x = [x_1, x_2, . . . , x_p]^T is the p-dimensional vector of input variables, and z =
[z_1, z_2, . . . , z_k]^T represents latent variables in the k-dimensional latent space Z. A is a
p × k matrix, representing a generative process from a probabilistic view or a projection
from a deterministic view. For a latent topic model, x corresponds to the bag-of-words
vector of a document divided by the document length, z is the distribution of k latent topics
in the document, and A is the distribution of p words in k latent topics. Various models fit
in this formula including PCA, ICA, sparse coding, and non-negative matrix factorization.
Different documents have different topic distributions, z, and thus different word distributions, x, but A can be considered an invariant structure of the language. Each pdimensional column vector of A denotes the word distribution in a latent topic, and serves
as an ?observation? in the p dimensional word space, indicating the semantic roles of p
words in this topic. Given a large set of k latent topics represented by k p-dimensional vectors {a(,1) , a(,2) , . . . , a(,k) }, we can define the semantic covariance of p words as follows.
Let A denote the matrix formed by treating each vector a(,t) , t = 1, 2, . . . , k as a column,
and let a(i,) and a(i,t) denote a row vector and an element of this matrix, respectively. The
semantic covariance of word i and word j is defined as:
    cov_s(x_i, x_j) = (1/k) Σ_{t=1}^{k} (a_{it} - ā_{(i,)})(a_{jt} - ā_{(j,)}) = (1/k) Σ_{t=1}^{k} a_{it} a_{jt} - ā_{(i,)} ā_{(j,)},   (2)

where ā_{(i,)} is the mean of the i-th row in A. Naturally, the semantic correlation is:

    corr_s(x_i, x_j) = cov_s(x_i, x_j) / √( cov_s(x_i, x_i) cov_s(x_j, x_j) ).   (3)
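In code, eqs. (2)-(3) amount to a few matrix operations once a topic matrix A is available. The sketch below is our addition; the function name and the small numerical guard are illustrative.

```python
import numpy as np

def semantic_correlation(A):
    """A: p x k array whose column t is the word distribution of latent topic t.
    Returns the p x p semantic correlation matrix of eq. (3)."""
    A = np.asarray(A, dtype=float)
    k = A.shape[1]
    mean = A.mean(axis=1)                              # row means a_(i,)
    cov = A @ A.T / k - np.outer(mean, mean)           # eq. (2)
    d = np.sqrt(np.clip(np.diag(cov), 1e-12, None))    # guard constant rows
    return cov / np.outer(d, d)                        # eq. (3)
```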
2.2 Comparing Semantic Correlation and Data Correlation
Suppose we observe a set of n documents in word space X, denoted by an n × p data
matrix D_X where each document corresponds to a p-dimensional bag-of-words vector of
counts. We refer to the correlation between words computed directly from DX as the data
correlation. This data correlation may not be transferable between tasks since documents
from different themes may have distinct topic distributions and word distributions, which
lead to different word correlations in data space.
Here we show intuitively why we expect the data correlation to have limited use across
distinct tasks, while we expect the semantic correlation to be transferable. Consider the
latent variable model in eq. (1), which relates A to data space X. We focus on semantic
covariance and data covariance, and assume that the bag-of-words vector is divided by the
length of the document so that it corresponds to x in eq. (1). From eq. (1), an input variable
x_i can be written as x_i = Σ_{t=1}^{k} a_{it} z_t, and therefore, the data covariance of word i and
word j can be expressed as:

    cov(x_i, x_j) = E[(x_i - E x_i)(x_j - E x_j)]
                  = E[ Σ_{t=1}^{k} a_{it}(z_t - E z_t) · Σ_{t'=1}^{k} a_{jt'}(z_{t'} - E z_{t'}) ]
                  = Σ_{t=1}^{k} Σ_{t'=1}^{k} a_{it} a_{jt'} E[(z_t - E z_t)(z_{t'} - E z_{t'})]
                  = Σ_{t=1}^{k} Σ_{t'=1}^{k} a_{it} a_{jt'} cov(z_t, z_{t'}).   (4)
Thus, data covariance is directly related to the covariance among latent topics. Documents
from different sources have different topic distributions and thus different covariance terms
cov(z_t, z_{t'}) in latent space. As a result, the data covariance learned from one source of
documents may not be transferable to another class of documents. On the other hand, the
semantic covariance in eq. (2) is completely determined by the structure of A.
Intuitively, the data covariance among words must contain some information about the
semantic relationship of words. This can also be observed from eq. (4). If we ignore
the effect of the covariance among topics by assuming that latent topics are independently
distributed and have the same variance (denoted as σ²), eq. (4) can be written as:

    cov(x_i, x_j) = σ² Σ_{t=1}^{k} a_{it} a_{jt}.                          (5)
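Equation (4) can also be checked empirically by sampling z with a known topic covariance and mapping it through eq. (1): the data covariance of x then matches A cov(z) Aᵀ. A minimal verification (our addition, with arbitrary dimensions):

```python
import numpy as np

rng = np.random.default_rng(2)
p, k, n = 6, 3, 200000
A = rng.random((p, k))                      # word weights of k latent topics
Cz = np.diag([1.0, 2.0, 0.5])               # chosen topic covariance cov(z)
Z = rng.normal(size=(n, k)) @ np.sqrt(Cz)   # latent samples with covariance Cz
X = Z @ A.T                                 # x = Az, eq. (1); one row per document
print(np.allclose(np.cov(X, rowvar=False), A @ Cz @ A.T, atol=0.05))  # eq. (4)
```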
Algorithm 1 Estimation of semantic correlation structure
Input: data D = D_u ∪ D_l, latent variable model M
Output: semantic correlation matrix Σ_s
Parameters: α, k, N
  Initialize V ← ∅
  repeat
    D_samp ← Sampling(D, α)
    {(z_1, a_(,1)), (z_2, a_(,2)), . . . , (z_k, a_(,k))} ← M(k, D_samp)
    V ← V ∪ {a_(,1), a_(,2), . . . , a_(,k)}
  until |V| ≥ kN
  Compute Σ_s: Σ_s(i, j) ← corr_s(x_i, x_j)
Comparing this to the last form in eq. (2), we see the similarity between data and semantic
covariance. In fact, our empirical study shows that data correlation from unlabeled text
does contain useful information, but is not as informative as semantic correlation.
3 Semantic Structure Learning and Informative Regularization
Consider a set of n_l labeled documents D_l = {(x_i^l, y_i^l) ∈ X × Y_l, i = 1, . . . , n_l}, where
X ⊆ R^p is the p-dimensional word space, and Y_l = {-1, 1} for classification and Y_l ⊆ R
for regression. Also assume that a large set of n_u unlabeled documents D_u = {x_i^u ∈
X, i = 1, . . . , n_u} is available. The goal is to learn a good function f_l : X → Y_l, which is
a classifier or a regressor. In this section we introduce a framework to transfer knowledge
from unlabeled text. Section 3.1 proposes an approach to learning the semantic structure
of the word space from a set of unlabeled text. In section 3.2, we discuss how to efficiently
apply the learned structure to a broad category of prediction models through regularization.
3.1 Learning the Semantic Correlation
The semantic correlation among words can be estimated using eq. (3) by observing a large
number of different latent topics. However, obtaining a large set of diverse but meaningful
topics is hard, since the number of meaningful topics extracted by a latent topic model is
usually not very large. To solve this problem, resampling techniques such as bootstrapping
[5] can be combined with a chosen latent variable model, which provides a principled way
to estimate the semantic correlation. The procedure is given in Algorithm 1, which uses
all the available data D = D_u ∪ D_l and a latent variable model M as the input. The
algorithm repeats N iterations. In each iteration it draws an α-percentage sample¹ from
the data and extracts k latent topics from the sample by applying the model M. After N
iterations, the p × p semantic correlation matrix Σ_s is estimated from the kN observations
of word distribution in latent topics. The algorithm requires an appropriate latent variable
model M (e.g., latent dirichlet allocation for text data), and a number k of latent variables
extracted each iteration from the sampled data. The number of iterations N is set as large
as necessary to obtain a reliable estimation.
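A compact implementation of Algorithm 1 is sketched below; it is our addition. It substitutes scikit-learn's NMF for the latent variable model M (the paper uses latent dirichlet allocation), and the parameter names are illustrative: n_iter plays the role of N and frac the role of α.

```python
import numpy as np
from sklearn.decomposition import NMF      # stand-in for the latent model M

def estimate_semantic_correlation(D, k=30, frac=0.5, n_iter=10, seed=0):
    """Algorithm 1. D: n x p nonnegative document-word count matrix (ndarray).
    Returns the p x p semantic correlation matrix Sigma_s. The paper uses
    N = 100 iterations; n_iter is kept small here for a quick run."""
    rng = np.random.default_rng(seed)
    topics = []
    for _ in range(n_iter):
        idx = rng.choice(len(D), size=int(frac * len(D)), replace=False)
        model = NMF(n_components=k, init="nndsvda", max_iter=200)
        model.fit(D[idx])
        W = model.components_                      # k x p word weights per topic
        W = W / W.sum(axis=1, keepdims=True)       # normalize to distributions
        topics.append(W.T)                         # collect p x k column-topics
    A = np.hstack(topics)                          # p x (k * n_iter) observations
    return np.corrcoef(A)                          # eq. (3) across all topics
```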
3.2 Knowledge Transfer by Informative Regularization
This section discusses how to use the semantic structure Σ_s in any specific learning task
defined on the input space X. For the prediction model, we mainly consider regularized
linear models with an l-2 norm penalty, e.g., support vector machines, ridge regression,
logistic regression with a Gaussian prior, etc. The model is represented by a p-dimensional
weight vector w and an intercept b. The prediction is computed as wT x + b for regression
¹ In this paper, we use α = 50% sampling without replacement. Other choices can be made.
or by setting a threshold θ (usually θ = 0) on w^T x + b for classification. To learn w and
b, we minimize a loss function L on the training examples plus a regularization term on w:

    argmin_{w,b} Σ_{i=1}^{n_l} L(y_i^l, w^T x_i^l + b) + λ w^T w.          (6)
Different models correspond to different loss functions [6], e.g., SVMs use hinge loss, logistic regression uses log-likelihood loss, and ridge regression uses squared error loss. The
regularization term λ w^T w = λ w^T I^{-1} w is well known to be equivalent to the Bayesian
approach that imposes a Gaussian prior with zero mean and an identity correlation matrix.
The correlation is often set to an identity matrix due to lack of knowledge about the input
space. If a covariance or correlation structure is known, e.g., the semantic structure of the
word space, the prior can be more informative [12]. Incorporating Σ_s into the Gaussian
prior leads to a new regularization term and the resulting model is:

    argmin_{w,b} Σ_{i=1}^{n_l} L(y_i^l, w^T x_i^l + b) + λ w^T Σ_s^{-1} w.   (7)
Extending the discussion on SVMs in [9], all regularized linear models in the form of
eq. (7) can be easily solved by three steps. First, transform the training examples by
    x̃_i^l = Σ_s^{1/2} x_i^l.                                               (8)

Second, learn the standard linear model in the transformed space:

    argmin_{w̃,b} Σ_{i=1}^{n_l} L(y_i^l, w̃^T x̃_i^l + b) + λ w̃^T w̃.          (9)

Finally, the optimal solution for (7) is obtained by:

    w = Σ_s^{1/2} w̃.                                                       (10)

This equivalence is derived from w^T x_i^l = w̃^T x̃_i^l and w^T Σ_s^{-1} w = w̃^T w̃. Semantic
correlation is transferable to any specific task and thus can be computed offline. As a
result, semi-supervised learning for any task simply requires the linear transformation in
eq. (8) before training on the labeled examples, which is very scalable.
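For concreteness, the three-step procedure of eqs. (8)-(10) instantiated with ridge regression (squared error loss) looks as follows; this sketch is our addition, and the intercept is omitted for brevity.

```python
import numpy as np
from scipy.linalg import sqrtm

def ir_ridge(X, y, Sigma_s, lam=1.0):
    """Solve eq. (7) for ridge regression via eqs. (8)-(10).
    X: n x p inputs, y: n targets, Sigma_s: p x p positive-definite matrix."""
    S_half = np.real(sqrtm(Sigma_s))                 # Sigma_s^{1/2}
    Xt = X @ S_half                                  # eq. (8), row-vector form
    p = Xt.shape[1]
    w_t = np.linalg.solve(Xt.T @ Xt + lam * np.eye(p), Xt.T @ y)   # eq. (9)
    return S_half @ w_t                              # eq. (10): w = S^{1/2} w~
```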
4 Experiments
We use the by-date version of the 20-NewsGroups data set2 , where 11314 training and 7532
testing documents are divided by date and denoted as Dtr and Dts here. Documents are
represented by bag-of-words vectors. The vocabulary is built to include the most frequent
200 words in each of the 20 newsgroups, while the 20 most frequent words over all 20
newsgroups are removed. This yields an input space X with p = 1443 features (words).
Documents come from 20 newsgroups, so we construct 190 binary classification tasks, one
for each pair of newsgroups. For each task, a few documents in the two newsgroups are
selected from Dtr as the labeled examples, denoted as Dl in section 3. The rest of the
documents in Dtr are used as the unlabeled data, denoted by Du . Note that Du is a mixture
from all the 20 newsgroups. In this sense, semi-supervised learning algorithms that assume
the unlabeled data come from the target task or the same generative distribution are unlikely
to work very well. The test data for each binary task are all the relevant documents in Dts ,
i.e., documents in Dts that belong to one of the two chosen newsgroups. For any task we
2
http://people.csail.mit.edu/jrennie/20Newsgroups/
always have Du ? Dl = Dtr , so Algorithm 1 is run only once on Dtr to learn the semantic
correlation structure ?s that is used by all 190 tasks.
The documents are well distributed over the 20 newsgroups and thus there are large numbers of training documents in Dtr for each newsgroup. To limit the number of labeled
examples for each binary prediction task, we use 5%, 10%, 20% of the relevant documents
in Dtr as the labeled examples Dl , and the rest of the relevant and all irrelevant documents in Dtr as the unlabeled data Du . We denote these tests as 5%-Test, 10%-Test, and
20%-Test. The result of each test is averaged over 10 random runs, with Dl randomly
selected from Dtr . The testing data for each task are fixed to be all relevant documents
in Dts , which is invariant for a task among different tests and random runs. Methods for
comparison are as follows.
(1) Comparison based on SVM. For each classification task, we compare: SVM directly
trained on labeled examples D_l (denoted SVM), SVM trained on D_l in the latent
topic space extracted by latent dirichlet allocation on D_l ∪ D_u [2] (denoted SVM_LDA),
SVM trained on D_l in principal component space extracted by PCA on D_l ∪ D_u (denoted
SVM_PCA), SVM trained on D_l via informative regularization with semantic correlation
Σ_s in the prior (denoted SVM_IR), and SVM trained on D_l via informative regularization with
data correlation in the prior (denoted SVM_IR(data)), where the data correlation Σ is
estimated from bag-of-words vectors of documents in D_l ∪ D_u.
(2) Comparison based on L-2 Regularized Logistic Regression. Analogous to the SVM
comparison with logistic regression (denoted LGR) as the base classifier.
(3) Comparison based on ridge regression. Ridge regression (denoted RR) is used as the
base classifier: examples are labeled as +1 and ?1, and prediction is made by wT x+b > 0.
(4) Comparison to semi-supervised SVM. Recently a fast semi-supervised SVM using
L-2 loss was proposed [13], which makes it possible to handle large-scale unlabeled documents.
We compare: L2-SVM directly trained on D_l (L2-SVM), semi-supervised L2-SVM
trained on D_l ∪ D_u (L2-S³VM), and L2-SVM trained on D_l via informative regularization
with semantic correlation (L2-SVM_IR). The semi-supervised SVM should not
work well since the unlabeled data is a mixture from all tasks. Therefore, we also test an
"oracle" semi-supervised SVM, using labeled examples together with unlabeled examples
coming only from the two relevant newsgroups (L2-S³VM_oracle).
Here are additional implementation details. The regularization parameter λ for each
model is determined by 5-fold cross-validation in the range 10^{-6} to 10^{6}. LibSVM 2.85
is used for SVM. For PCA, we tried 10, 20, 30, 50, 100, 200, 400 principal components
and report PCA using 200 principal components as the best result. For latent dirichlet
allocation, we use the implementation at http://chasen.org/~daiti-m/dist/lda/. We tried
k = 10, 20, 30, 50, 100, 200 latent topics with 30 topics performing best. For the proposed
method, Algorithm 1 uses latent dirichlet allocation with k = 30 topics per sampling, repeats
N = 100 iterations, and Σ_s is estimated from these 3000 latent topics. L2-S³VM
(code available as SVMlin [13]) has a second parameter λ_u for unlabeled examples, which
is set to 1 as in [13]. Unlabeled data for L2-S³VM is downsampled to 3000 documents
for each run to make training (and cross-validation) feasible.
Empirical results are shown in Tables 1- 4. For each semi-supervised learning algorithm,
we report two performance measures: the average classification error over all 190 tasks,
and the gain/loss ratio compared to the corresponding supervised learning method. The
former measures the effectiveness of using the unlabeled data, while the latter measures
the reliability of the knowledge transfer. From Tables 1-3, IR based methods with semantic
correlation significantly outperform standard supervised learning, LDA based methods,
and PCA based methods, and are also generally more effective than IR with data correlation. The
LDA based algorithms slightly improve the prediction performance when using SVM or logistic regression as the base classifier, while decreasing the performance when using ridge
Table 1: Comparison over 190 tasks, based on SVMs

                  5%-Test           10%-Test          20%-Test
SVM               14.22%            10.34%            7.88%
SVM_LDA(30)       9.76%  (179/11)   8.01%  (171/19)   6.90% (161/29)
SVM_PCA(200)      13.32% (123/67)   10.31% (104/86)   8.29% (89/101)
SVM_IR            7.58%  (190/0)    6.11%  (190/0)    5.13% (183/7)
SVM_IR(data)      9.40%  (185/5)    7.14%  (183/7)    5.70% (180/10)
Table 2: Comparison over 190 tasks, based on regularized logistic regression

                  5%-Test           10%-Test          20%-Test
LGR               11.70%            8.43%             6.67%
LGR_LDA(30)       8.21%  (171/19)   7.38% (156/34)    6.79% (134/56)
LGR_PCA(200)      11.43% (105/85)   8.95% (65/125)    7.28% (64/122)
LGR_IR            6.70%  (189/1)    5.78% (181/9)     5.19% (169/21)
LGR_IR(data)      8.46%  (172/18)   7.21% (157/33)    6.46% (132/58)
Table 3: Comparison over 190 tasks, based on ridge regression

                  5%-Test           10%-Test          20%-Test
RR                14.13%            10.73%            8.90%
RR_LDA(30)        14.08% (111/101)  11.98% (67/102)   11.34% (42/148)
RR_PCA(200)       15.50% (56/132)   12.80% (33/157)   11.53% (17/173)
RR_IR             10.55% (182/8)    8.88%  (161/29)   8.01% (134/56)
RR_IR(data)       10.68% (176/14)   8.94%  (157/33)   7.99% (139/51)
Table 4: Comparison to semi-supervised SVMs over 190 tasks, based on L2-SVM

                  5%-Test           10%-Test          20%-Test
L2-SVM            11.18%            8.41%             6.65%
L2-S³VM           14.14% (14/176)   11.64% (5/185)    10.04% (1/189)
L2-S³VM_oracle    8.22%  (189/1)    6.95%  (185/5)    6.00% (164/24)
L2-SVM_IR         6.87%  (188/2)    5.73%  (180/10)   4.98% (177/13)
regression. This is possibly because the loss function of ridge regression is not a good approximation to the 0/1 classification error, and therefore, ridge regression is more sensitive
to irrelevant latent features extracted from mixed unlabeled documents. The PCA based
methods are generally worse than standard supervised learning, which indicates they are
sensitive to the mixed unlabeled data. In Table 4, the L2-S³VM performs worse than
standard L2-SVM, showing that traditional semi-supervised learning cannot handle unlabeled
data outside the target task. We can also see that the L2-SVM_IR even outperforms the
oracle version of semi-supervised SVM (L2-S³VM_oracle) by achieving similar gain/loss
ratio but better average classification error. This is a very promising result since it shows
that information can be gained from other tasks even in excess of what can be gained from
a significant amount of unlabeled data on the task at hand. In conclusion, the empirical
results show that the proposed approach is an effective and reliable (also scalable) method
for semi-supervised learning, regardless of the source of unlabeled data, the specific task
to be enhanced, and the base prediction model used.
It is interesting to directly compare the semantic correlation Σ_s and the data correlation
Σ matrices learned from the data. We make three observations: 1) The average value
of entries is 0.0147 in the semantic correlation and 0.0341 in the data correlation. We
Table 5: Top 10 distinct word pairs in terms of semantic correlation vs. data correlation

gaza/lebanes       0.956/0.007      biker/yamaha       0.937/-0.004
motorcycl/yamaha   0.970/0.030      batter/clemen      0.932/-0.002
yanke/catcher      0.934/0.002      palestin/lebanes   0.946/0.181
cage/ama           0.921/-0.005     toyota/mileag      0.934/0.009
mileag/mustang     0.923/-0.002     brave/batter       0.950/0.025
have 1617834 entries with higher data correlation and 462972 entries with higher semantic
correlation. Thus overall word pairs tend to have higher values in the data correlation.
2) However, if we list the top 1000 pairs of words with the largest absolute difference
between the two correlations, they all have very high semantic correlation and low data
correlation. 3) We list the top 10 such word pairs and their semantic/data correlations in
Table 5. The words are indeed quite related. In conclusion, entries in ?s seem to have
a power-law distribution where a few pairs of words have very high correlation and the
rest have low correlation, which is consistent with our intuition about words. However,
the data correlation misses highly correlated words found by the semantic correlation even
though it generally assigns higher correlation to most word pairs. This is consistent with
the data correlation not being transferable among documents of different themes. When the
unlabeled documents are a mixture from different sources, the estimation of data correlation
is affected by the fact that the mixture of input documents is not consistent.
Acknowledgments
This work was supported by the Centers of Disease Control and Prevention (award R01-PH
000028) and by the National Science Foundation (grant IIS-0325581).
References
[1] R. K. Ando and T. Zhang. A framework for learning predictive structures from multiple tasks and unlabeled data. JMLR, 6:1817-1853, 2005.
[2] D. M. Blei, A. Y. Ng, and M. I. Jordan. Latent dirichlet allocation. JMLR, 3:993-1022, 2003.
[3] A. Blum and S. Chawla. Learning from labeled and unlabeled data using graph mincuts. In ICML, pages 19-26, 2001.
[4] O. Chapelle, B. Scholkopf, and A. Zien. Semi-supervised Learning. The MIT Press, 2006.
[5] B. Efron. Bootstrap methods: Another look at the jackknife. The Annals of Statistics, 7, 1979.
[6] T. Hastie, R. Tibshirani, and J. Friedman. The Elements of Statistical Learning: Data Mining, Inference and Prediction. Springer, New York, 2001.
[7] T. Hofmann. Probabilistic latent semantic analysis. In UAI, 1999.
[8] T. Joachims. Transductive inference for text classification using support vector machines. In ICML, pages 200-209, 1999.
[9] E. Krupka and N. Tishby. Incorporating prior knowledge on features into learning. In AISTATS, pages 227-234, 2007.
[10] K. Nigam, A. K. McCallum, S. Thrun, and T. Mitchell. Text classification from labeled and unlabeled documents using EM. Machine Learning, 39:103-134, 2000.
[11] R. Raina, A. Battle, H. Lee, B. Packer, and A. Y. Ng. Self-taught learning: Transfer learning from unlabeled data. In ICML, pages 759-766, 2007.
[12] R. Raina, A. Y. Ng, and D. Koller. Constructing informative priors using transfer learning. In ICML, pages 713-720, 2006.
[13] V. Sindhwani and S. Keerthi. Large scale semi-supervised linear SVMs. In SIGIR, 2006.
2,747 | 3,491 | Measures of Clustering Quality: A Working Set of
Axioms for Clustering
Margareta Ackerman and Shai Ben-David
School of Computer Science
University of Waterloo, Canada
Abstract
Aiming towards the development of a general clustering theory, we discuss abstract axiomatization for clustering. In this respect, we follow up on the work of
Kleinberg, ([1]) that showed an impossibility result for such axiomatization. We
argue that an impossibility result is not an inherent feature of clustering, but rather,
to a large extent, it is an artifact of the specific formalism used in [1].
As opposed to previous work focusing on clustering functions, we propose to
address clustering quality measures as the object to be axiomatized. We show that
principles like those formulated in Kleinberg's axioms can be readily expressed in
the latter framework without leading to inconsistency.
A clustering-quality measure (CQM) is a function that, given a data set and its partition into clusters, returns a non-negative real number representing how strong or
conclusive the clustering is. We analyze what clustering-quality measures should
look like and introduce a set of requirements (axioms) for such measures. Our
axioms capture the principles expressed by Kleinberg?s axioms while retaining
consistency.
We propose several natural clustering quality measures, all satisfying the proposed
axioms. In addition, we analyze the computational complexity of evaluating the
quality of a given clustering and show that, for the proposed CQMs, it can be
computed in polynomial time.
1 Introduction
In his highly influential paper, [1], Kleinberg advocates the development of a theory of clustering that
will be "independent of any particular algorithm, objective function, or generative data model." As a
step in that direction, Kleinberg sets up a set of "axioms" aimed to define what a clustering function
is. Kleinberg suggests three axioms, each sounding plausible, and shows that these seemingly natural
axioms lead to a contradiction - there exists no function that satisfies all three requirements.
Kleinberg's result is often interpreted as stating the impossibility of defining what clustering is, or
even of developing a general theory of clustering. We disagree with this view. In this paper we show
that the impossibility result is, to a large extent, due to the specific formalism used by Kleinberg
rather than being an inherent feature of clustering.
Rather than attempting to define what a clustering function is, and demonstrating a failed attempt,
as [1] does, we turn our attention to the closely related issue of evaluating the quality of a given
data clustering. In this paper we develop a formalism and a consistent axiomatization of that latter
notion.
As it turns out, the clustering-quality framework is richer and more flexible than that of clustering
functions. In particular, it allows the postulation of axioms that capture the features that Kleinberg's
axioms aim to express, without leading to a contradiction.
A clustering-quality measure is a function that maps pairs of the form (dataset, clustering) to
some ordered set (say, the set of non-negative real numbers), so that these values reflect how "good"
or "cogent" that clustering is.
Measures for the quality of a clusterings are of interest not only as a vehicle for axiomatizing clustering. The need to measure the quality of a given data clustering arises naturally in many clustering
issues. The aim of clustering is to uncover meaningful groups in data. However, not any arbitrary
partitioning of a given data set reflects such a structure. Upon obtaining a clustering, usually via
some algorithm, a user needs to determine whether this clustering is sufficiently meaningful to rely
upon for further data mining analysis or practical applications. Clustering-quality measures (CQMs)
aim to answer that need by quantifying how good is any specific clustering.
Clustering-quality measures may also be used to help in clustering model-selection by comparing
different clusterings over the same data set (e.g., comparing the results of a given clustering paradigm
over different choices of clustering parameters, such as the number of clusters).
When posed with the problem of finding a clustering-quality measure, a first attempt may be to
invoke the loss (or objective) function used by a clustering algorithm, such as k-means or k-median,
as a clustering-quality measure. However, such measures have some shortcomings for the purpose
at hand. Namely, these measures are usually not scale-invariant, and they cannot be used to compare
the quality of clusterings obtained by different algorithms aiming to minimize different clustering
costs (e.g., k-means with different values of k). See Section 6 for more details.
Clustering quality has been previously discussed in the applied statistics literature, where a variety
of techniques for evaluating "cluster validity" were proposed. Many of these methods, such as the
external criteria discussed in [2], are based on assuming some predetermined data generative model,
and as such do not answer our quest for a general theory of clustering. In this work, we are concerned
with quality measures regardless of any specific generative model, for examples, see the internal
criteria surveyed in [2].
We formulate a theoretical basis for clustering-quality evaluations. We propose a set of requirements ("axioms") for clustering-quality measures. We demonstrate the relevance and consistency of
these axioms by showing that several natural notions satisfy these requirements. In particular, we
introduce quality-measures that reflect the underlying intuition of center-based and linkage-based
clustering. These notions all satisfy our axioms, and, given a data clustering, their value on that
clustering can be computed in polynomial time.
Paper outline: we begin by presenting Kleinberg's axioms for clustering functions and discuss their
failure. In Section 4.1 we show how these axioms can be translated into axioms pertaining to clustering-quality measures, and prove that the resulting set of axioms is consistent. In Section 4, we discuss
desired properties of an axiomatization and propose an accordingly revised set of axioms. Next, in
Section 5 we present several clustering-quality measures, and claim that they all satisfy our axioms.
Finally, in Section 5.3, we show that the quality of a clustering can be computed in polynomial time
with respect to our proposed clustering-quality measures.
2 Definitions and Notation
Let X be some domain set (usually finite). A function d : X × X → ℝ is a distance function over
X if d(x_i, x_i) ≥ 0 for all x_i ∈ X; for any x_i, x_j ∈ X, d(x_i, x_j) > 0 if and only if x_i ≠ x_j; and
d(x_i, x_j) = d(x_j, x_i). Note that we do not require the triangle inequality.
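To make the definition concrete, here is a minimal Python sketch (our own illustration, not from the paper; the function name and the pairwise-matrix encoding are assumptions) that checks these conditions on a matrix of pairwise values:

    import numpy as np

    def is_distance_function(D, tol=1e-12):
        """Check the definition above on a pairwise matrix D: zero diagonal
        (forced by the first two conditions together), strictly positive
        off-diagonal entries, and symmetry. The triangle inequality is
        deliberately not checked."""
        D = np.asarray(D, dtype=float)
        n = D.shape[0]
        if D.ndim != 2 or D.shape[1] != n:
            return False
        if np.any(np.abs(np.diag(D)) > tol):        # d(x, x) = 0
            return False
        off = D[~np.eye(n, dtype=bool)]
        if off.size and off.min() <= 0:             # d(x, y) > 0 iff x != y
            return False
        return bool(np.allclose(D, D.T, atol=tol))  # d(x, y) = d(y, x)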
A k-clustering of X is a k-partition, C = {C_1, C_2, . . . , C_k}. That is, C_i ∩ C_j = ∅ for i ≠ j and
∪_{i=1}^{k} C_i = X. A clustering of X is a k-clustering of X for some k ≥ 1. A clustering is trivial if
each of its clusters contains just one point, or if it consists of just one cluster.
For x, y ∈ X and a clustering C of X, we write x ∼_C y whenever x and y are in the same cluster of
C, and x ≁_C y otherwise.
A clustering function for some domain set X is a function that takes a distance function d over X,
and outputs a clustering of X.
A clustering-quality measure (CQM) is a function that is given a clustering C over (X, d) (where
d is a distance function over X) and returns a non-negative real number, as well as satisfies some
additional requirements. In this work we explore the question of what these requirements should be.
3 Kleinberg's Axioms
Kleinberg, [1], proposes the following three axioms for clustering functions. These axioms are
intended to capture the meaning of clustering by determining which functions (from a domain set
endowed with a distance function) are worthy of being considered clustering functions and which
are not. Kleinberg shows that the set is inconsistent: there exists no function that satisfies all three
axioms.
The first two axioms require invariance of the clustering that f defines under some changes of the
input distance function.
Function Scale Invariance: Scale invariance requires that the output of a clustering function be
invariant to uniform scaling of the input.
A function f is scale-invariant if for every distance function d and positive λ, f(d) = f(λd) (where
λd is defined by setting, for every pair of domain points x, y, (λd)(x, y) = λ · d(x, y)).
Function Consistency: Consistency requires that if within-cluster distances are decreased, and
between-cluster distances are increased, then the output of a clustering function does not change.
Formally,
- Given a clustering C over (X, d), a distance function d' is a C-consistent variant of d if
  d'(x, y) ≤ d(x, y) for all x ∼_C y, and d'(x, y) ≥ d(x, y) for all x ≁_C y.
- A function f is consistent if f(d) = f(d') whenever d' is an f(d)-consistent variant of d.
Function Richness: Richness requires that by modifying the distance function, any partition of the
underlying data set can be obtained.
A function f is rich if for each partitioning, C, of X, there exists a distance function d over X so
that f (d) = C.
Theorem 1 (Kleinberg, [1]) There exists no clustering function that simultaneously satisfies scale
invariance, consistency and richness.
Discussion: The intuition behind these axioms is rather clear. Let us consider, for example, the
Consistency requirement. It seems reasonable that by pulling closer points that are in the same
cluster and pushing further apart points in different clusters, our confidence in the given clustering
will only rise. However, while this intuition can be readily formulated in terms of clustering quality
(namely, "changes such as these should not decrease the quality of a clustering"), the formulation through
clustering functions says more. It actually requires that such changes to the underlying distance
function should not create any new contenders for the best-clustering of the data.
For example, consider Figure 1, where we illustrate a good 6-clustering. On the right hand-side, we
show a consistent change of this 6-clustering. Notice that the resulting data has a 3-clustering that is
reasonably better than the original 6-clustering. While one may argue that the quality of the original
6-clustering has not decreased as a result of the distance changes, the quality of the 3-clustering has
improved beyond that of the 6-clustering. This illustrates a significant weakness of the consistency
axiom for clustering functions.
The implicit requirement that the original clustering remains the best clustering following a consistent change is at the heart of Kleinberg's impossibility result. As we shall see below, once we relax
that extra requirement the axioms are no longer unsatisfiable.
4 Axioms of Clustering-Quality Measures
In this section we change the primitive that is being defined by the axioms from clustering functions
to clustering-quality measures (CQMs). We reformulate the above three axioms in terms of CQMs
and show that this revised formulation is not only consistent, but is also satisfied by a number of
natural clustering quality measures. In addition, we extend the set of axioms by adding another
axiom (of clustering-quality measures) that is required to rule out some measures that should not be
counted as CQMs.

[Figure 1: A consistent change of a 6-clustering.]
4.1 Clustering-Quality Measure Analogues to Kleinberg's Axioms
The translation of the Scale Invariance axiom to the CQM terminology is straightforward:
Definition 1 (Scale Invariance) A quality measure m satisfies scale invariance if for every clustering C of (X, d) and every positive λ, m(C, X, d) = m(C, X, λd).
The translation of the Consistency axiom is the place where the resulting CQM formulation is indeed
weaker than the original axiom for functions. While it clearly captures the intuition that consistent
changes to d should not hurt the quality of a given partition, it allows the possibility that, as a result
of such a change, some partitions will improve more than others1 .
Definition 2 (Consistency) A quality measure m satisfies consistency if for every clustering C over
(X, d), whenever d' is a C-consistent variant of d, then m(C, X, d') ≥ m(C, X, d).
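For concreteness, the following sketch (our own illustration, assuming the clustering is encoded as one integer label per point) tests whether a perturbed distance matrix is a C-consistent variant of the original:

    import numpy as np

    def is_consistent_variant(D_old, D_new, labels):
        """True iff D_new is a C-consistent variant of D_old, where C is
        encoded as labels[i] = cluster index of point i: within-cluster
        distances may only decrease, between-cluster distances may only
        increase."""
        D_old, D_new = np.asarray(D_old), np.asarray(D_new)
        labels = np.asarray(labels)
        same = labels[:, None] == labels[None, :]   # the x ~_C y mask
        return bool(np.all(D_new[same] <= D_old[same]) and
                    np.all(D_new[~same] >= D_old[~same]))

Consistency of a measure m then amounts to m never decreasing under any such perturbation (for measures where larger values indicate better quality).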
Definition 3 (Richness) A quality measure m satisfies richness if for each non-trivial clustering C
of X, there exists a distance function d over X such that C = argmax_{C'} m(C', X, d).
Theorem 2 Consistency, scale invariance, and richness for clustering-quality measures form a consistent set of requirements.
Proof: To show that scale-invariance, consistency, and richness form a consistent set of axioms, we
present a clustering quality measure that satisfies the three axioms. This measure captures a quality
that is intuitive for center-based clusterings. In Section 5, we introduce more quality measures that
capture the goal of other types of clusterings. All of these CQMs satisfy the above three axioms.
For each point in the data set, consider the ratio of the distance from the point to its closest center to
the distance from the point to its second closest center. Intuitively, the smaller this ratio is, the better
the clustering (points are "more confident" about their cluster membership). We use the average of
this ratio as a quality measure.
Definition 4 (Relative Point Margin) The K-Relative Point Margin of x ∈ X is
K-RM_{X,d}(x) = d(x, c_x) / d(x, c_x'), where c_x ∈ K is the closest center to x, c_x' ∈ K is a
second closest center to x, and K ⊆ X.
(1) The following formalization assumes that larger values of m indicate better clustering quality. For some
quality measures, smaller values indicate better clustering quality, in which case we reverse the direction of
the inequalities for consistency and use argmin instead of argmax for richness.
A set K is a representative set of a clustering C if it consists of exactly one point from each cluster
of C.
Definition 5 (Representative Set) A set K is a representative set of a clustering
C = {C_1, C_2, . . . , C_k} if |K| = k and, for all i, K ∩ C_i ≠ ∅.
Definition 6 (Relative Margin) The Relative Margin of a clustering C over (X, d) is

    RM_{X,d}(C) = min_{K is a representative set of C}  avg_{x ∈ X\K} K-RM_{X,d}(x).
Smaller values of Relative Margin indicate better clustering quality.
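A direct brute-force implementation of Definitions 4-6 might look as follows (a sketch under the assumption that clusters are given as lists of point indices and distances as a matrix; enumerating all representative sets is what makes the exact computation O(n^{k+1}), as discussed in Section 5.3):

    import itertools
    import numpy as np

    def k_relative_point_margin(D, x, K):
        """K-RM_{X,d}(x): distance to the closest center in K divided by
        the distance to a second-closest center in K."""
        d1, d2 = sorted(D[x, k] for k in K)[:2]
        return d1 / d2

    def relative_margin(D, clusters):
        """RM_{X,d}(C): minimize, over representative sets K (one point
        per cluster), the average K-relative point margin over X \\ K.
        Smaller values indicate better clusterings. Brute force."""
        n = D.shape[0]
        best = np.inf
        for K in itertools.product(*clusters):   # all representative sets
            rest = [x for x in range(n) if x not in K]
            best = min(best, np.mean([k_relative_point_margin(D, x, K)
                                      for x in rest]))
        return best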
Lemma 1 Relative Margin is scale-invariant.
proof: Let C be a clustering of (X, d). Let d' be a distance function such that d'(x, y) = λd(x, y)
for all x, y ∈ X and some λ ∈ ℝ+. Then for any points x, y, z ∈ X, d(x, y)/d(x, z) = d'(x, y)/d'(x, z). Note also
that scaling does not change the centers selected by Relative Margin. Therefore, RM_{X,d'}(C) =
RM_{X,d}(C).
Lemma 2 Relative Margin is consistent.
proof: Let C be a clustering of (X, d). Let d' be a C-consistent variant of d. Then
for x ∼_C y, d'(x, y) ≤ d(x, y), and for x ≁_C y, d'(x, y) ≥ d(x, y). Therefore, RM_{X,d'}(C) ≤
RM_{X,d}(C).
Lemma 3 Relative Margin is rich.
proof: Given a non-trivial clustering C over a data set X, consider the distance function d where
d(x, y) = 1 for all x ∼_C y, and d(x, y) = 10 for all x ≁_C y. Then C = argmin_{C'} RM_{X,d}(C').
It follows that scale-invariance, consistency, and richness are consistent axioms.
4.2 Soundness and Completeness of Axioms
What should a set of "axioms for clustering" satisfy? Usually, when a set of axioms is proposed
for some semantic notion (or a class of objects, say clustering functions), the aim is to have both
soundness and completeness. Soundness means that every element of the described class satisfies
all axioms (so, in particular, soundness implies consistency of the axioms), and completeness means
that every property shared by all objects of the class is implied by the axioms. Intuitively, ignoring
logic subtleties, a set of axioms is complete for a class of objects if any element outside that class
fails at least one of these axioms.
In our context, there is a major difficulty: there exists no semantic definition of what clustering is.
We wish to use the axioms as a definition of clustering functions, but then what is the meaning of
soundness and completeness? We have to settle for less. While we do not have a clear definition of
what is clustering and what is not, we do have some examples of functions that should be considered
clustering functions, and we can come up with some examples of partitionings that are clearly not
worthy of being called "clustering". We replace soundness by the requirement that all of our axioms
are satisfied by all these examples of common clustering functions (relaxed soundness), and we want
that partitioning functions that are clearly not clusterings fail at least one of our axioms (relaxed
completeness).
In this respect, the axioms of [1] badly fail (the relaxed version of) soundness. For each of these
axioms there are natural clustering functions that fail to satisfy it (this is implied by Kleinberg's
demonstration that any pair of axioms is satisfied by a natural clustering, while the three together
never hold). We argue that our scale invariance, consistency, and richness, are sound for the class
of CQMs. However, they do not make a complete set of axioms, even in our relaxed sense. There
are functions that should not be considered "reasonable clustering quality measures" and yet they
satisfy these three axioms. One type of "non-clustering-functions" consists of functions that make cluster
membership decisions based on the identity of domain points. For example, the function that returns
the Relative Margin of a data set whenever some specific pair of data points belong to the same
cluster, and twice the Relative Margin of the data set otherwise. We overcome this problem by
introducing a new axiom.
4.3 Isomorphism Invariance
This axiom resembles the permutation invariance objective function axiom by Puzicha et al. [3],
modeling the requirement that clustering should be indifferent to the individual identity of clustered elements. This axiom of clustering-quality measures does not have a corresponding Kleinberg
axiom.
Definition 7 (Clustering Isomorphism) Two clusterings C and C' over the same domain, (X, d),
are isomorphic, denoted C ≈_d C', if there exists a distance-preserving isomorphism φ : X → X
such that for all x, y ∈ X, x ∼_C y if and only if φ(x) ∼_{C'} φ(y).
Definition 8 (Isomorphism Invariance) A quality measure m is isomorphism-invariant if for all
clusterings C, C' over (X, d) where C ≈_d C', m(C, X, d) = m(C', X, d).
Theorem 3 The set of axioms consisting of Isomorphism Invariance, Scale Invariance, Consistency,
and Richness (all in their CQM formulation) is a consistent set of axioms.
Proof: Just note that the Relative Margin quality measure satisfies all four axioms.
As mentioned in the above discussion, to have a satisfactory axiom system, for any notion, one needs
to require more than just consistency. To be worthy of being labeled "axioms", the requirements we
propose should be satisfied by any reasonable notion of CQM. Of course, since we cannot define
what CQMs are "reasonable", we cannot turn this into a formal statement. What we can do, however,
is demonstrate that a variety of natural CQMs do satisfy all our axioms. This is done in the next
section.
5 Examples of Clustering Quality Measures
In a survey of validity measures, Milligan [2] discusses examples of quality measures that satisfy
our axioms (namely, scale-invariance, consistency, richness, and perturbation invariance). We have
verified that the best-performing internal criteria examined in [2] satisfy all our axioms.
In this section, we introduce two novel CQMs: a measure that reflects the underlying intuition of
linkage-based clustering, and a measure for center-based clustering. In addition to satisfying the
axioms, given a clustering, these measures can be computed in polynomial time.
5.1 Weakest Link
In linkage-based clustering, whenever a pair of points share the same cluster they are connected via
a tight chain of points in that cluster. The weakest link quality measure focuses on the longest link
in such a chain.
Definition 9 (Weakest Link Between Points)

    C-WL_{X,d}(x, y) = min_{x_1, x_2, . . . , x_ℓ ∈ C_i}  max(d(x, x_1), d(x_1, x_2), . . . , d(x_ℓ, y)),

where C is a clustering over (X, d) and C_i is a cluster in C.
The weakest link of C is the maximal value of C-WL_{X,d}(x, y) over all pairs of points belonging to
the same cluster, divided by the shortest between-cluster distance.
Definition 10 (Weakest Link of C) The Weakest Link of a clustering C over (X, d) is

    WL(C) = max_{x ∼_C y} C-WL_{X,d}(x, y) / min_{x ≁_C y} d(x, y).

The range of values of the weakest link is (0, ∞).
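Since C-WL is a minimax ("bottleneck") path cost, it can be computed with a Floyd-Warshall-style relaxation restricted to each cluster. The following sketch (our own, assuming a non-trivial clustering encoded as one integer label per point) computes WL(C) in O(n³), matching the complexity discussion in Section 5.3:

    import numpy as np

    def weakest_link(D, labels):
        """WL(C): the largest within-cluster minimax chain cost over all
        same-cluster pairs, divided by the shortest between-cluster
        distance. Assumes a non-trivial clustering. O(n^3)."""
        labels = np.asarray(labels)
        n = D.shape[0]
        same = labels[:, None] == labels[None, :]
        wl = np.where(same, D.astype(float), np.inf)  # chains stay inside one cluster
        for k in range(n):                            # Floyd-Warshall on the max-edge cost
            wl = np.minimum(wl, np.maximum(wl[:, k:k+1], wl[k:k+1, :]))
        numer = wl[same & ~np.eye(n, dtype=bool)].max()
        denom = D[~same].min()
        return numer / denom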
5.2 Additive Margin
In Section 4.1, we introduced Relative Margin, a quality measure for center-based clustering. We
now introduce another quality measure for center-based clustering. Instead of looking at ratios,
Additive Margin evaluates differences.
Definition 11 (Additive Point Margin) The K-Additive Point Margin of x is K-AM_{X,d}(x) =
d(x, c_x') − d(x, c_x), where c_x ∈ K is the closest center to x, c_x' ∈ K is a second closest center to x, and K ⊆ X.
The Additive Margin of a clustering is the average Additive Point Margin, divided by the average
within-cluster distance. The normalization is necessary for scale invariance.
Definition 12 (Additive Margin) The Additive Margin of a center-based clustering C over (X, d)
is

    AM_{X,d}(C) = min_{K is a representative set of C}  [ (1/|X|) Σ_{x ∈ X} K-AM_{X,d}(x) ] / [ (1/|{ {x, y} ⊆ X : x ∼_C y }|) Σ_{x ∼_C y} d(x, y) ].
Unlike Relative Margin, Additive Margin gives higher values to better clusterings.
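A brute-force sketch of Definition 12 (our own illustration; clusters are lists of point indices, and the average within-cluster distance is taken over unordered same-cluster pairs):

    import itertools
    import numpy as np

    def additive_margin(D, clusters):
        """AM_{X,d}(C): average K-additive point margin over X, divided by
        the average within-cluster distance, minimized over representative
        sets K. Larger values indicate better clusterings."""
        n = D.shape[0]
        labels = np.empty(n, dtype=int)
        for c, members in enumerate(clusters):
            labels[list(members)] = c
        same = labels[:, None] == labels[None, :]
        iu, ju = np.triu_indices(n, k=1)              # unordered pairs {x, y}
        avg_within = D[iu, ju][same[iu, ju]].mean()   # mean over x ~_C y pairs
        best = np.inf
        for K in itertools.product(*clusters):
            closest2 = np.sort(D[:, list(K)], axis=1)[:, :2]
            best = min(best, (closest2[:, 1] - closest2[:, 0]).mean())
        return best / avg_within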
5.3 Computational Complexity
For a clustering-quality measure to be useful, it is important to be able to quickly compute the quality
of a clustering using that measure. The quality of a clustering using the measures presented in this
paper can be computed in polynomial time in terms of n (the number of points in the data set).
Using Relative or Additive Margin, it takes O(n^{k+1}) operations to compute the clustering quality
of a data set, which is exponential in k. If a set of centers is given, the Relative Margin can be
computed in O(nk) operations and the Additive Margin can be computed in O(n²) operations. The
weakest link of a clustering can be computed in O(n³) operations.
5.4 Variants of Quality Measures
Given a clustering-quality measure, we can construct new quality measures with different characteristics by applying the quality measure to subsets of clusters. It suffices to consider a quality
measure m that is defined for clusterings consisting of 2 clusters. Given such a measure, we can
create new quality measures. For example,
    m_min(C, X, d) = min_{S ⊆ C, |S| = 2} m(S, X, d),
measures the worst quality of a pair of clusters in C.
Alternatively, we can define m_max(C, X, d) and m_avg(C, X, d), which evaluate the best or average
quality of a pair of clusters in C. A nice feature of these variations is that if m satisfies the four
axioms of clustering-quality measures then so do m_min, m_max, and m_avg.
More generally, if m is defined for clusterings with an arbitrary number of clusters, we can define a
quality measure as a function of m over larger clusterings. For example, m_max-subset(C, X, d) =
max_{S ⊆ C, |S| ≥ 2} m(S, X, d). Many such variations, which apply existing clustering-quality measures
to subsets of clusters, satisfy the axioms of clustering-quality measures whenever the original quality measure satisfies the axioms.
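The construction can be written generically; the sketch below (our own; `two_cluster_measure` is a hypothetical placeholder) derives the three variants from any measure m defined on pairs of clusters:

    import itertools
    import numpy as np

    def make_variants(m):
        """Given a measure m(S, X, d) defined on 2-cluster clusterings,
        build the m_min, m_max and m_avg variants over all cluster pairs."""
        def variant(agg):
            def q(C, X, d):
                return agg([m(list(S), X, d)
                            for S in itertools.combinations(C, 2)])
            return q
        return variant(min), variant(max), variant(np.mean)

    # Hypothetical usage:
    # m_min, m_max, m_avg = make_variants(two_cluster_measure)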
6 Dependence on Number of Clusters
The clustering-quality measures discussed in this paper up to now are independent of the number
of clusters, which enables the comparison of clusterings with a different number of clusters. In this
section we discuss an alternative type of clustering quality evaluation that depends on the number of
clusters. Such quality measures arise naturally from common loss functions (or objective functions)
that drive clustering algorithms, such as k-means or k-median.
These common loss functions fail to satisfy two of our axioms, scale-invariance and richness. One
can easily overcome the dependence on scaling by normalization. As we will show, the resulting
normalized loss functions form a different type of clustering-quality measure from the measures
we previously discussed, due to their dependence on the number of clusters.
A natural remedy to the failure of scale invariance is to normalize a loss function by dividing it by
the variance of the data, or alternatively, by the loss of the 1-clustering of the data.
Definition 13 (L-normalization) The L-normalization of a clustering C over (X, d) is

    L-normalize(C, X, d) = L(C_all, X, d) / L(C, X, d),

where C_all denotes the 1-clustering of X.
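As an illustration with the k-means loss (our own sketch; points are rows of a NumPy array, and C_all puts all points in one cluster):

    import numpy as np

    def kmeans_loss(points, clusters):
        """Within-cluster sum of squared Euclidean distances to cluster means."""
        return sum(((points[list(c)] - points[list(c)].mean(axis=0)) ** 2).sum()
                   for c in clusters)

    def l_normalized_kmeans(points, clusters):
        """L-normalize(C, X, d) with L = k-means loss: L(C_all) / L(C),
        where C_all is the 1-clustering. The ratio is invariant to uniform
        scaling of the data."""
        c_all = [list(range(len(points)))]
        return kmeans_loss(points, c_all) / kmeans_loss(points, clusters)

Note that the normalized value grows as C's loss shrinks, so more refined clusterings score higher; this is exactly the refinement preference discussed below.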
Common loss functions, even after normalization, usually have a bias towards either more refined
or more coarse clusterings: they assign lower cost (that is, higher quality) to more refined
(respectively, coarser) clusterings. This prevents using them as a meaningful tool for comparing
the quality of clusterings with different number of clusters. We formalize this feature of common
clustering loss functions through the notion of refinement preference:
Definition 14 (Refinement and Coarsening) For a pair of clusterings C, C' of the same domain,
we say C' is a refinement of C (or, equivalently, that C is a coarsening of C') if every cluster C_i
of C is a union of clusters of C'.
Definition 15 (Refinement/Coarsening Preference) A measure m is refinement-preferring if, for
every clustering C of (X, d) that has a non-trivial refinement, there exists such a refinement C' of
C for which m(C', X, d) > m(C, X, d). Coarsening-preferring measures are defined analogously.
Note that both refinement-preferring and coarsening-preferring measures fail to satisfy the Richness
axiom.
It seems that there is a divide between two types of evaluations for clusterings: those that satisfy
richness, and those that satisfy either refinement or coarsening preference. To evaluate the quality of
a clustering using a refinement (and coarsening) preferring measure, it is essential to fix the number
of clusters. Since the correct number of clusters is often unknown, measures that are independent of
the number of clusters apply in a more general setting.
7 Conclusions
We have investigated the possibility of providing a general axiomatic basis for clustering. Our
starting point was the impossibility result of Kleinberg. We argue that a natural way to overcome
these negative conclusions is by changing the primitive used to formulate the axioms from clustering
functions to clustering-quality measures (CQMs). We demonstrate the merits of the latter framework
by providing a set of axioms for CQMs that captures the essence of all of Kleinberg's axioms while
maintaining consistency. We propose several CQMs that satisfy our proposed set of axioms. We
hope that this work, and our demonstration of a way to overcome the "impossibility result", will
stimulate further research towards a general theory of clustering.
References
[1] Jon Kleinberg. "An Impossibility Theorem for Clustering." Advances in Neural Information Processing Systems (NIPS) 15, 2002.
[2] Glen W. Milligan. "A Monte-Carlo study of 30 internal criterion measures for cluster-analysis." Psychometrika 46, 187-195, 1981.
[3] J. Puzicha, T. Hofmann, and J. Buhmann. "Theory of Proximity Based Clustering: Structure Detection by Optimization." Pattern Recognition, 33 (2000).
Translated Learning: Transfer Learning across Different Feature Spaces

Wenyuan Dai†, Yuqiang Chen†, Gui-Rong Xue†, Qiang Yang‡ and Yong Yu†

† Shanghai Jiao Tong University, Shanghai 200240, China
{dwyak,yuqiangchen,grxue,yyu}@apex.sjtu.edu.cn

‡ Hong Kong University of Science and Technology, Kowloon, Hong Kong
[email protected]
Abstract
This paper investigates a new machine learning strategy called translated learning. Unlike many previous learning tasks, we focus on how to use labeled data
from one feature space to enhance the classification of other entirely different
learning spaces. For example, we might wish to use labeled text data to help learn
a model for classifying image data, when the labeled images are difficult to obtain. An important
aspect of translated learning is to build a "bridge" to link one feature space (known as the "source
space") to another space (known as the "target space") through a translator in order to migrate the
knowledge from source to target. The translated learning solution uses a language model to link the class
labels to the features in the source spaces, which in turn are translated to the features in the target spaces. Finally, this chain of linkages is completed by tracing
back to the instances in the target spaces. We show that this path of linkage can
be modeled using a Markov chain and risk minimization. Through experiments
on the text-aided image classification and cross-language classification tasks, we
demonstrate that our translated learning framework can greatly outperform many
state-of-the-art baseline methods.
1 Introduction
Traditional machine learning relies on the availability of a large amount of labeled data to train a
model in the same feature space. However, labeled data are often scarce and expensive to obtain. In
order to save much labeling work, various machine learning strategies have been proposed, including
semi-supervised learning [13], transfer learning [3, 11, 10], self-taught learning [9], etc. One commonality among these strategies is that they all require the training data and test data to be in the same
feature space. For example, if the training data are documents, then the classifiers cannot accept test
data from a video space. However, in practice, we often face the problem where the labeled data are
scarce in its own feature space, whereas there are sufficient labeled data in other feature spaces. For
example, there may be few labeled images available, but there are often plenty of labeled text documents on the Web (e.g., through the Open Directory Project, or ODP, http://www.dmoz.org/).
Another example is cross-language classification where labeled documents in English are much
more than ones in some other languages such as Bangla, which has only 21 Web pages in the ODP.
Therefore, it would be great if we could learn the knowledge across different feature spaces and to
save a substantial labeling effort.
To address the transferring of knowledge across different feature spaces, researchers have proposed
multi-view learning [2, 8, 7] in which each instance has multiple views in different feature spaces.
Different from multi-view learning, in this paper, we focus on the situation where the training data
are in a source feature space, the test data are in a different target feature space, and there
is no correspondence between instances in these spaces.

[Figure 1: An intuitive illustration of different kinds of learning strategies, using classification of
images of elephants and rhinos as the example. Panels: (a) Supervised Learning; (b) Semi-supervised
Learning; (c) Transfer Learning; (d) Self-taught Learning; (e) Multi-view Learning; (f) Translated
Learning. The images in orange frames are labeled data, while the ones without frames are unlabeled
data.]

The source and target feature spaces can be very different, as in the case of text and images. To solve this novel learning problem, we develop
a novel framework named translated learning, where the training data and test data can be in totally
different feature spaces. A translator needs to be exploited to link the different feature spaces.
Clearly, the translated learning framework is more general and difficult than traditional learning
problems. Figure 1 presents an intuitive illustration of six different learning strategies, including
supervised learning, semi-supervised learning [13], transfer learning [10], self-taught learning [9],
multi-view learning [2], and finally, translated learning.
An intuitive idea for translated learning is to somehow translate all the training data into a target
feature space, where learning can be done within a single feature space. This idea has already been
demonstrated to be successful in several applications of cross-lingual text classification [1]. However, for
the more general translated learning problem, this idea is hard to realize, since machine translation between different feature spaces is very difficult to accomplish in many non-natural-language
cases, such as translating documents to images. Furthermore, while a text corpus can be exploited
for cross-language translation, for translated learning, the learning of the "feature-space translator"
from available resources is a key issue.
Our solution is to make the best use of available data that have both features of the source and target
domains in order to construct a translator. While these data may not be sufficient in building a good
classifier for the target domain, as we will demonstrate in our experimental study in the paper, by
leveraging the available labeled data in the source domain, we can indeed build effective translators.
An example is to translate between the text and image feature spaces using the social tagging data
from Web sites such as Flickr (http://www.flickr.com/).
The main contribution of our work is to combine the feature translation and the nearest neighbor
learning into a unified model by making use of a language model [5]. Intuitively, our model can be
represented using a Markov chain c → y → x, where y represents the features of the data instances
x. In translated learning, the training data x_s are represented by the features y_s in the source feature
space, while the test data x_t are represented by the features y_t in the target feature space. We model
the learning in the source space through a Markov chain c → y_s → x_s, which can be connected to
another Markov chain c → y_t → x_t in the target space. An important contribution of our work then
is to show how to connect these two paths, so that the new chain c → y_s → y_t → x_t can be used
to translate the knowledge from the source space to the target one, where the mapping y_s → y_t
acts as a feature-level translator. In our final solution, which we call TLRisk, we exploit the risk
minimization framework in [5] to model translated learning. Our framework can accept different
distance functions to measure the relevance between two models.
2 Translated Learning Framework
2.1 Problem Formulation
We first define the translated learning problem formally. Let X_s be the source instance space. In this
space, each instance x_s ∈ X_s is represented by a feature vector (y_s^(1), . . . , y_s^(n_s)), where y_s^(i) ∈ Y_s
and Y_s is the source feature space. Let X_t be the target instance space, in which each instance
x_t ∈ X_t is represented by a feature vector (y_t^(1), . . . , y_t^(n_t)), where y_t^(i) ∈ Y_t and Y_t is the target
feature space. We have a labeled training data set L_s = {(x_s^(i), c_s^(i))}_{i=1}^{n} in the source space, where
x_s^(i) ∈ X_s and c_s^(i) ∈ C = {1, . . . , |C|} is the true class label of x_s^(i). We also have another labeled
training data set L_t = {(x_t^(i), c_t^(i))}_{i=1}^{m} in the target space, where x_t^(i) ∈ X_t and c_t^(i) ∈ C. Usually, m
is assumed to be small, so that L_t is not enough to train a reliable prediction model. The unlabeled
test data set U is a set of k examples {x_u^(i)}_{i=1}^{k}, where x_u^(i) ∈ X_t. Note that x_s^(i) is in a different
feature space from x_t^(i) and x_u^(i). For example, x_s^(i) may be a text document, while x_t^(i) and x_u^(i) may
be visual images.
To link the two feature spaces, a feature translator p(y_t|y_s) ∝ φ(y_t, y_s) is constructed. However,
for ease of explanation, we first assume that the translator φ is given, and will discuss the derivation
of φ later in this section, based on co-occurrence data. We focus on our main objective in learning,
which is to estimate a hypothesis h_t : X_t → C that classifies the instances x_u^(i) ∈ U as accurately as
possible, by making use of the labeled training data L = L_s ∪ L_t and the translator φ.
2.2 Risk Minimization Framework
First, we formulate our objective in terms of how to minimize an expected risk function with respect
to the labeled training data L = L_s ∪ L_t and the translator φ by extending the risk minimization
framework in [5].
In this work, we use the risk function R(c, x_t) to measure the risk of classifying x_t into
category c. Therefore, to predict the label for an instance x_t, we need only find the class label c
which minimizes the risk function R(c, x_t), so that

    h_t(x_t) = arg min_{c ∈ C} R(c, x_t).    (1)
The risk function R(c, x_t) can be formulated as the expected loss when c and x_t are relevant; formally,

    R(c, x_t) ≡ L(r = 1 | c, x_t) = ∫_{Θ_C} ∫_{Θ_{X_t}} L(θ_C, θ_{X_t}, r = 1) p(θ_C|c) p(θ_{X_t}|x_t) dθ_{X_t} dθ_C.    (2)
Here, r = 1 represents the event of "relevant", which means (in Equation (2)) "c and x_t are relevant",
or "the label of x_t is c". θ_C and θ_{X_t} are the models with respect to the classes C and the target space
instances X_t, respectively. Θ_C and Θ_{X_t} are the two corresponding model spaces containing all the
possible models. Note that, in Equation (2), θ_C depends only on c and θ_{X_t} depends only on x_t. Thus,
we use p(θ_C|c) to replace p(θ_C|c, x_t), and p(θ_{X_t}|x_t) to replace p(θ_{X_t}|c, x_t). L(θ_C, θ_{X_t}, r = 1) is the
loss function with respect to the event of θ_C and θ_{X_t} being relevant. We next address the estimation
of the risk function in Equation (2).
2.3 Estimation
The risk function in Equation (2) is difficult to estimate, since the sizes of Θ_C and Θ_{X_t} can be
exponential in general. Therefore, we have to approximate the risk function for efficiency.
First of all, the loss function L(θ_C, θ_{X_t}, r = 1) can be formulated using a distance
function between the two models θ_C and θ_{X_t}, so that L(θ_C, θ_{X_t}, r = 1) = α Δ(θ_C, θ_{X_t}), where
Δ(θ_C, θ_{X_t}) is the distance (or dissimilarity) function, e.g., the Kullback-Leibler divergence. Replacing L(θ_C, θ_{X_t}, r = 1) with Δ(θ_C, θ_{X_t}), the risk function is reformulated as

    R(c, x_t) ∝ ∫_{Θ_C} ∫_{Θ_{X_t}} Δ(θ_C, θ_{X_t}) p(θ_C|c) p(θ_{X_t}|x_t) dθ_{X_t} dθ_C.    (3)
Since the sizes of Θ_C and Θ_{X_t} are exponential in general, we cannot calculate Equation (3) straightforwardly. In this paper, we approximate the risk function by its value at the posterior mode:

    R(c, x_t) ≈ Δ(θ̂_c, θ̂_{x_t}) p(θ̂_c|c) p(θ̂_{x_t}|x_t) ∝ Δ(θ̂_c, θ̂_{x_t}) p(θ̂_c|c),    (4)

where θ̂_c = arg max_{θ_C} p(θ_C|c) and θ̂_{x_t} = arg max_{θ_{X_t}} p(θ_{X_t}|x_t).
In Equation (4), p(θ̂_c|c) is the prior probability of θ̂_c with respect to the target class c. This prior can
be used to balance the influence of different classes in the class-imbalance case. When we assume
there is no prior difference among all the classes, the risk function can be rewritten as
Algorithm 1 Risk Minimization Algorithm for Translated Learning (TLRisk)
Input: labeled training data L in the source space, unlabeled test data U in the target space, a
translator φ linking the two feature spaces Y_s and Y_t, and a dissimilarity function Δ(·, ·).
Output: the prediction label h_t(x_t) for each x_t ∈ U.

Procedure TLRisk_train
1: for each c ∈ C do
2:     Estimate the model θ̂_c based on Equation (6).
3: end for

Procedure TLRisk_test
1: for each x_t ∈ U do
2:     Estimate the model θ̂_{x_t} based on Equation (7).
3:     Predict the label h_t(x_t) for x_t based on Equations (1) and (5).
4: end for
    R(c, x_t) ∝ Δ(θ̂_c, θ̂_{x_t}),    (5)

where Δ(θ̂_c, θ̂_{x_t}) denotes the dissimilarity between the two models θ̂_c and θ̂_{x_t}. To achieve this objective,
as in [5], we formulate these two models in the target feature space Y_t; specifically, if we use KL
divergence as the distance function, Δ(θ̂_c, θ̂_{x_t}) can be measured by KL( p(Y_t|θ̂_c) || p(Y_t|θ̂_{x_t}) ).
Our estimation is based on the Markov chain assumption, with the chains θ̂_c → c → y_s → y_t → x_t → θ̂_{x_t}
and θ̂_c → c → y_t → x_t → θ̂_{x_t}, so that

    p(y_t|θ̂_c) = ∫_{Y_s} Σ_{c' ∈ C} p(y_t|y_s) p(y_s|c') p(c'|θ̂_c) dy_s + λ Σ_{c' ∈ C} p(y_t|c') p(c'|θ̂_c),    (6)

where p(y_t|y_s) can be estimated using the translator φ; p(y_s|c') can be estimated based on the
statistical observations in the labeled text data set L_s in the source feature space Y_s; p(y_t|c') can be
estimated based on L_t in the target feature space Y_t; p(c'|θ̂_c) is calculated as p(c'|θ̂_c) = 1 if
c' = c, and p(c'|θ̂_c) = 0 otherwise; and λ is a trade-off parameter which controls the influence of the
target space labeled data L_t.
The other model, p(Y_t|θ̂_{x_t}), can be estimated by

    p(y_t|θ̂_{x_t}) = ∫_{X_t} p(y_t|x_t') p(x_t'|θ̂_{x_t}) dx_t',    (7)

where p(y_t|x_t') can be estimated using the feature extractor in the target feature space Y_t, and
p(x_t'|θ̂_{x_t}) is calculated as p(x_t'|θ̂_{x_t}) = 1 if x_t' = x_t, and p(x_t'|θ̂_{x_t}) = 0 otherwise.
Integrating Equations (1), (5), (6) and (7), our translated learning framework is summarized as
algorithm TLRisk, an abbreviation for Translated Learning via Risk Minimization, which is shown
in Algorithm 1.

Considering the computational cost of Algorithm 1: due to the Markov chain assumption, our algorithm TLRisk can be implemented using dynamic programming. Therefore, in the worst case,
the time complexity of TLRisk is O(|C||Y_t| + |Y_t||Y_s|) in training, and O(|C||Y_t|) for predicting
an instance. In practice, the data are quite sparse, and good feature mappings (or translators) should
also be sparse; otherwise they would contain many ambiguous cases. Therefore, TLRisk generally performs
much faster than in the worst case, and its computational cost is linear in the number of non-zero entries of the feature mappings.
2.4 Translator φ
We now explain in particular how to build the translator φ(y_t, y_s) ∝ p(y_t|y_s) to connect two different feature spaces. As mentioned before, to estimate the translator p(y_t|y_s), we need some co-occurrence data across the two feature spaces, source and target. Formally, we need co-occurrence
data in the form of p(y_t, y_s), p(y_t, x_s), p(x_t, y_s), or p(x_t, x_s). In cross-language problems, dictionaries can be considered as data in the form of p(y_t, y_s) (feature-level co-occurrence). On the Web,
social annotations on images (e.g., Flickr, where images are associated with keywords) and search-engine
results in response to queries are examples of correlational data in the forms of p(y_t, x_s) and p(x_t, y_s)
(feature-instance co-occurrence). Moreover, multi-view data (e.g., Web pages including both text
and pictures) are an example of data in the form of p(x_t, x_s) (instance-level co-occurrence). Where
there is a pool of such co-occurrence data available, we can build the translator φ for estimating the
Markov chains in the previous subsections.

Table 1: The description of each data set. Here, horse vs coin indicates that all the positive
instances are about horse while all the negative instances are about coin; "+" denotes positive
instances and "−" negative instances.

    DATA SET                     DOCUMENTS (+ / −)    IMAGES (+ / −)
    horse vs coin                1610 / 1345          270 / 123
    kayak vs bear                1045 /  885          102 / 101
    electric-guitar vs snake      335 /  326          122 / 112
    cake vs binoculars            265 /  320          104 / 216
    laptop vs sword               210 /  203          128 / 102
    bonsai vs comet               166 /  164          122 / 120
    dog vs canoe                 1084 / 1047          102 / 103
    greyhound vs cd               380 /  362           94 / 102
    stained-glass vs microwave    331 /  267           99 / 107
    rainbow vs sheet-music        261 /  256          102 /  84
    tomato vs llama               175 /  172          102 / 119
    frog vs saddle                150 /  148          115 / 110
In particular, to estimate the translator φ, the feature-instance co-occurrence data (p(y_t, x_s)
or p(x_t, y_s)) can first be used to estimate the probabilities of the feature-level co-occurrence p(y_t, y_s);
formally,

    p(y_t, y_s) = ∫_{X_s} p(y_t, x_s) p(y_s|x_s) dx_s    and    p(y_t, y_s) = ∫_{X_t} p(x_t, y_s) p(y_t|x_t) dx_t.

The instance-level co-occurrence data can also be converted to feature-level co-occurrence; formally,

    p(y_t, y_s) = ∫_{X_t} ∫_{X_s} p(x_t, x_s) p(y_s|x_s) p(y_t|x_t) dx_s dx_t.

Here, p(y_s|x_s) and p(y_t|x_t) are the two feature extractors in Y_s and Y_t. Using the feature-level
co-occurrence probability p(y_t, y_s), we can estimate the translator as

    p(y_t|y_s) = p(y_t, y_s) / ∫_{Y_t} p(y_t', y_s) dy_t'.
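A sketch of this last normalization step (our own; co-occurrence counts are assumed to have been aggregated into a |Y_t| × |Y_s| matrix beforehand):

    import numpy as np

    def build_translator(cooccur):
        """Estimate the translator p(y_t|y_s) by normalizing feature-level
        co-occurrence counts cooccur[t, s] ~ p(y_t, y_s) column by column.
        Source features with no observed mass are mapped uniformly."""
        cooccur = np.asarray(cooccur, dtype=float)
        col_mass = cooccur.sum(axis=0, keepdims=True)
        return np.divide(cooccur, col_mass,
                         out=np.full_like(cooccur, 1.0 / cooccur.shape[0]),
                         where=col_mass > 0)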
3 Evaluation: Text-aided Image Classification
In this section, we apply our framework TLRisk to a text-aided image classification problem, which
uses binary labeled text documents as auxiliary data to enhance the image classification. This problem is derived from the application where a user or a group of users may have expressed preferences
over some text documents, and we wish to translate these preferences to images for the same group
of users. We will show the effectiveness of TLRisk on text-aided image classification. Our objective is to demonstrate that even with a small amount of labeled image training data, we can still
build a high-quality translated learning solution for image classification by leveraging the text documents, even if the co-occurrence data themselves are not sufficient when directly used for training
a classification model in the target space.
3.1 Data Sets
The data sets of Caltech-256(1) and the Open Directory Project (ODP, http://www.dmoz.org/)
were used in our evaluation, as the image and text corpora. Our ODP collection was crawled during
August 2006, and involves 1,271,106 English Web pages. We generated 12 binary text-to-image
classification tasks from the above corpora. The description for each data set is presented in Table
1. The first column presents the name of each data set, e.g. horse vs coin indicates all the
positive instances are about horse while all the negative instances are about coin. We collected
the corresponding documents from ODP for each category. However, due to space limitation, we
omit the detailed ODP directory information with respect to each data set here. In the table, we
also listed the data sizes for each task, including documents and images. Generally, the number of
documents is much larger than the number of images.
For data preprocessing, the SIFT descriptor [6] was used to find and describe the interest points
in the images, and the extracted interest points were then clustered into 800 clusters to obtain a codebook. Based on the codebook, each image can be converted to a corresponding feature vector. For
text documents, we first extracted and stemmed all the tokens from the ODP Web pages, and then
information gain [12] was used to select the most important features for the subsequent learning process.
We collected the co-occurrence data from a commercial image search engine during April 2008.
The collected data are in the form of feature-instance co-occurrence p(y_s, x_t), so that we have to
convert them to feature-level co-occurrence p(y_s, y_t), as discussed in Section 2.4.
(1) http://www.vision.caltech.edu/Image_Datasets/Caltech256/
[Figure 2: The average error rates over 12 data sets for text-aided image classification with different
numbers of labeled images in L_t. The three panels, (a) Kullback-Leibler divergence, (b) cosine, and
(c) Pearson's correlation coefficient, plot the error rate of Image Only, Search+Image, TLRisk, and
Lowerbound against the number of labeled images per category (from 1 to 32).]
[Figure 3: The average error rates over 12 data sets for text-aided image classification with different
values of the trade-off parameter λ. The three panels, (a) Kullback-Leibler divergence, (b) cosine,
and (c) Pearson's correlation coefficient, plot the error rate averaged over the 12 data sets against λ
on a log scale (from 0.0625 to 16).]
3.2 Evaluation Methods
Few existing research works have addressed the text-aided image classification problem, so for the
baseline methods in our experiments, we first simply used the labeled data L_t as the training data in
the target space to train a classification model; we refer to this model as Image Only. The second
baseline is to use the category name (in this case, there are two names for binary classification
problems) to search for training images and then to train classifiers together with labeled images in
L_t; we refer to this model as Search+Image.
Our framework TLRisk was evaluated under three different dissimilarity functions:
(1) the Kullback-Leibler divergence (named KL):

    ∫_{Y_t} p(y_t|θ_C) log [ p(y_t|θ_C) / p(y_t|θ_{X_t}) ] dy_t;

(2) the negative of the cosine function (named NCOS):

    − ∫_{Y_t} p(y_t|θ_C) p(y_t|θ_{X_t}) dy_t / ( sqrt(∫_{Y_t} p²(y_t|θ_C) dy_t) · sqrt(∫_{Y_t} p²(y_t|θ_{X_t}) dy_t) );

(3) the negative of the Pearson's correlation coefficient (named NPCC):

    − cov(p(Y_t|θ_C), p(Y_t|θ_{X_t})) / sqrt( var(p(Y_t|θ_C)) · var(p(Y_t|θ_{X_t})) ).
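For discrete target-feature distributions stored as vectors, the three dissimilarities can be sketched as follows (our own illustration; the smoothing constant eps avoids log(0) and is not part of the paper's formulation):

    import numpy as np

    def kl(p, q, eps=1e-12):
        p, q = p + eps, q + eps
        return float((p * np.log(p / q)).sum())

    def ncos(p, q):
        return -float(p @ q / (np.linalg.norm(p) * np.linalg.norm(q)))

    def npcc(p, q):
        return -float(np.corrcoef(p, q)[0, 1])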
We also evaluated the lower bound of the error rate with respect to each data set. To estimate the
lower bound, we conducted a 5-fold cross-validation on the test data U. Note that this strategy, which
is referred to as Lowerbound, is unavailable in our problem setting, since it uses a large amount of
labeled data in the target space. In our experiments, this lower bound is used just for reference. We
also note that on some data sets, the performance of Lowerbound may be slightly worse than that
of TLRisk, because Lowerbound was trained based on images in Caltech-256, while TLRisk
was based on the co-occurrence data. These models used different supervisory knowledge.
3.3 Experimental Results
The experimental results were evaluated in terms of error rates, and are shown in Figure 2. On
one hand, from the figure, we can see that our framework TLRisk greatly outperforms the baseline
methods Image Only and Search+Image, no matter which dissimilarity function is applied.
On the other hand, compared with Lowerbound, TLRisk also shows comparable performance.
It indicates that our framework TLRisk can effectively learn knowledge across different feature
spaces in the case of text-to-image classification.
Moreover, when the number of target space labeled images decreases, the performance of Image
Only declines rapidly, while the performances of Search+Image and TLRisk stay very stable.
Table 2: The description of each cross-language classification data set.

    DATA SET  ENGLISH LOCATION                          SIZE   GERMAN LOCATION                                           SIZE
    1         Top: Sport: Ballsport                     2000   Top: World: Deutsch: Sport: Ballsport                      128
              Top: Computers: Internet                  2000   Top: World: Deutsch: Computer: Internet                    126
    2         Top: Arts: Architecture: Building Types   1259   Top: World: Deutsch: Kultur: Architektur: Gebäudetypen      71
              Top: Home: Cooking: Recipe Collections     475   Top: World: Deutsch: Zuhause: Kochen: Rezeptesammlungen     72
    3         Top: Science: Agriculture                 1886   Top: World: Deutsch: Wissenschaft: Agrarwissenschaften      71
              Top: Society: Crime                       1843   Top: World: Deutsch: Gesellschaft: Kriminalität             69
    4         Top: Sports: Skating: Roller Skating       926   Top: World: Deutsch: Sport: Rollsport                       70
              Top: Health: Public Health and Safety     2361   Top: World: Deutsch: Gesundheit: Public Health              71
    5         Top: Recreation: Outdoors: Hunting        2919   Top: World: Deutsch: Freizeit: Outdoor: Jagd                70
              Top: Society: Holidays                    2258   Top: World: Deutsch: Gesellschaft: Fest- und Feiertage      72
This indicates that TLRisk is not quite sensitive to the size of L_t; in other words, TLRisk
has good robustness. We also want to note that sometimes TLRisk performs slightly better than
Lowerbound. This is not a mistake, because these two methods use different supervisory knowledge: Lowerbound is based on images in the Caltech-256 corpus, while TLRisk is based on the co-occurrence data. In these experiments, Lowerbound is just for reference.
In TLRisk, a parameter to tune is the trade-off parameter λ (refer to Equation (6)). Figure 3 shows
the average error rate curves on all the 12 data sets, when λ gradually changes from 2^−5 to 2^5.
In this experiment, we fixed the number of target training images per category to one, and set the
threshold K (the number of images to collect for each text keyword when collecting the
co-occurrence data) to 40. From the figure, we can see that, on one hand, when λ is very large, which
means the classification model mainly builds on the target space training images L_t, the performance
is rather poor. On the other hand, when λ is small, so that the classification model relies more on
the auxiliary text training data L_s, the classification performance is relatively stable. Therefore, we
suggest setting the trade-off parameter λ to a small value; in these experiments, all λs are set
to 1, based on Figure 3.
4 Evaluation: Cross-language Classification
In this section, we apply our framework TLRisk to another scenario, the cross-language classification. We focused on English-to-German classification, where English documents are used as the
source data to help classify German documents, which are target data.
In these experiments, we collected the documents from corresponding categories from ODP English
pages and ODP German pages, and generated five cross-language classification tasks, as shown in
Table 2. For the co-occurrence data, we used the English-German dictionary from the Internet Dictionary Project(2) (IDP). The dictionary data are in the form of feature-level co-occurrence p(y_t, y_s).
We note that while most cross-language classification works rely on machine translation [1], our
assumption is that machine translation is unavailable and we rely on a dictionary only.
We evaluated TLRisk with the negative of cosine (named NCOS) as the dissimilarity function. Our
framework TLRisk was compared to classification using only very few German labeled documents
as a baseline, called German Labels Only. We also present the lower bound of error rates by
performing 5-fold cross-validation on the test data U, which we refer to as Lowerbound. The
performances of the evaluated methods are presented in Table 3. In this experiment, we have only
sixteen German labeled documents in each category. The error rates in Table 3 were evaluated
by averaging the results of 20 random repeats. From the table, we can see that TLRisk always
shows marked improvements compared with the baseline method German Labels Only, although there are still gaps between TLRisk and the ideal case Lowerbound. This indicates our
algorithm TLRisk is effective on the cross-language classification problem.
Table 3: The average error rate and variance on each data set, given by all the evaluation methods,
for English-to-German cross-language classification.

    DATA SET             1              2              3              4              5
    German Labels Only   0.246 ± 0.061  0.133 ± 0.037  0.301 ± 0.067  0.257 ± 0.053  0.277 ± 0.068
    TLRisk               0.191 ± 0.045  0.122 ± 0.043  0.253 ± 0.062  0.247 ± 0.059  0.183 ± 0.072
    Lowerbound           0.170 ± 0.000  0.116 ± 0.000  0.157 ± 0.000  0.176 ± 0.000  0.166 ± 0.000
We have empirically tuned the trade-off parameter λ. Similar to the results of the text-aided image
classification experiments, when λ is small, the performance of TLRisk is better and more stable. In
these experiments, we set λ to 2^−4. However, due to space limitations, we cannot present the curves
for tuning λ here.

(2) http://www.ilovelanguages.com/idp/index.html
5 Related Work
We review several prior works related to our work. To solve the label sparsity problem, researchers
proposed several learning strategies, e.g. semi-supervised learning [13] and transfer learning [3,
11, 10, 9, 4]. Transfer learning mainly focuses on training and testing processes being in different
scenarios, e.g. multi-task learning [3], learning with auxiliary data sources [11], learning from
irrelevant categories [10], and self-taught learning [9, 4]. The translated learning proposed in this
paper can be considered as an instance of general transfer learning; that is, transfer learning from
data in different feature spaces.
Multi-view learning addresses learning across different feature spaces. Co-training [2] established
the foundation of multi-view learning, in which the classifiers in two views learn from each other
to enhance the learning process. Nigam and Ghani [8] proposed co-EM, which applies the EM algorithm to
each view and interchanges probabilistic labels between the different views. Co-EMT [7] is an active-learning
multi-view algorithm, and has empirically shown more robustness. However, as
discussed before, multi-view learning requires that each instance contain two views, while in
translated learning, this requirement is relaxed. Translated learning can accept training data in one
view and test data in another view.
6
Conclusions
In this paper, we proposed a translated learning framework for classifying target data using data
from another feature space. We have shown that in translated learning, even though we have very
little labeled data in the target space, if we can find a bridge to link the two spaces through feature
translation, we can achieve good performance by leveraging the knowledge from the source data.
We formally formulated our translated learning framework using risk minimization, and presented
an approximation method for model estimation. In our experiments, we have demonstrated how this
can be done effectively through the co-occurrence data in TLRisk. The experimental results on
the text-aided image classification and the cross-language classification show that our algorithm can
greatly outperform the state-of-the-art baseline methods.
Acknowledgement We thank the anonymous reviewers for their greatly helpful comments.
Wenyuan Dai and Gui-Rong Xue are supported by grants from the National Natural Science Foundation of China (No. 60873211) and the MSRA-SJTU joint lab project "Transfer Learning and its
Application on the Web". Qiang Yang thanks the support of Hong Kong CERG Project 621307.
References
[1] N. Bel, C. Koster, and M. Villegas. Cross-lingual text categorization. In ECDL, 2003.
[2] A. Blum and T. Mitchell. Combining labeled and unlabeled data with co-training. In COLT, 1998.
[3] R. Caruana. Multitask learning. Machine Learning, 28(1):41-75, 1997.
[4] W. Dai, Q. Yang, G.-R. Xue, and Y. Yu. Self-taught clustering. In ICML, 2008.
[5] J. Lafferty and C. Zhai. Document language models, query models, and risk minimization for information retrieval. In SIGIR, 2001.
[6] D. Lowe. Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision, 60(2):91-110, 2004.
[7] I. Muslea, S. Minton, and C. Knoblock. Active + semi-supervised learning = robust multi-view learning. In ICML, 2002.
[8] K. Nigam and R. Ghani. Analyzing the effectiveness and applicability of co-training. In CIKM, 2000.
[9] R. Raina, A. Battle, H. Lee, B. Packer, and A. Ng. Self-taught learning: transfer learning from unlabeled data. In ICML, 2007.
[10] R. Raina, A. Ng, and D. Koller. Constructing informative priors using transfer learning. In ICML, 2006.
[11] P. Wu and T. Dietterich. Improving SVM accuracy by training on auxiliary data sources. In ICML, 2004.
[12] Y. Yang and J. Pedersen. A comparative study on feature selection in text categorization. In ICML, 1997.
[13] X. Zhu. Semi-supervised learning literature survey. Technical Report 1530, University of Wisconsin-Madison, 2007.
2,749 | 3,493 | Nonrigid Structure from Motion in Trajectory Space
Ijaz Akhter
LUMS School of Science and Engineering
Lahore, Pakistan
[email protected]
Yaser Sheikh
Carnegie Mellon University
Pittsburgh, PA, USA
[email protected]
Sohaib Khan
LUMS School of Science and Engineering
Lahore, Pakistan
[email protected]
Takeo Kanade
Carnegie Mellon University
Pittsburgh, PA, USA
[email protected]
Abstract
Existing approaches to nonrigid structure from motion assume that the instantaneous 3D shape of a deforming object is a linear combination of basis shapes,
which have to be estimated anew for each video sequence. In contrast, we propose that the evolving 3D structure be described by a linear combination of basis
trajectories. The principal advantage of this approach is that we do not need to
estimate any basis vectors during computation. We show that generic bases over
trajectories, such as the Discrete Cosine Transform (DCT) basis, can be used to
compactly describe most real motions. This results in a significant reduction in
unknowns, and corresponding stability in estimation. We report empirical performance, quantitatively using motion capture data, and qualitatively on several
video sequences exhibiting nonrigid motions including piece-wise rigid motion,
partially nonrigid motion (such as a facial expression), and highly nonrigid motion
(such as a person dancing).
1
Introduction
Nonrigid structure from motion is the process of recovering the time varying 3D coordinates of
points on a deforming object from their 2D locations in an image sequence. Factorization approaches, first proposed for recovering rigid structure by Tomasi and Kanade in [1], were extended
to handle nonrigidity in the seminal paper by Bregler et al. in [2]. The key idea in [2] is that observed shapes can be represented as a linear combination of a compact set of basis shapes. Each
instantaneous structure, such as the mouth of a smiling actor shown in Figure 1(a), is expressed
as a point in the linear space of shapes spanned by the shape basis. A number of approaches that
develop the use of shape basis have subsequently been proposed, including [3, 4, 5]. Since the space
of spatial deformations is highly object specific, the shape basis need to be estimated anew for each
video sequence. The shape basis of a mouth smiling, for instance, cannot be recycled to compactly
represent a person walking.
In this paper, we posit that representing nonrigid structure as a combination of basis shapes is one
of two ways of looking at the space-time structure induced by P points seen across F frames. Instead of a shape space representation, we propose looking across time, representing the time-varying
structure of a nonrigid object as a linear combination of a set of basis trajectories, as illustrated in
Figure 1(b). The principal advantage of taking this ?lateral? approach arises from the fact that compact representation in trajectory space is better motivated physically than compact representation in
shape space. To see this, consider a deformable object being acted upon by a force. The extent
of its deformation is limited by the force that can be applied. Hence, a tree swaying in the wind
or a person walking cannot arbitrarily and randomly deform; the trajectories of their points are a
function of the speed of the wind and the flexing of muscles respectively. Deformations are, there-
[Figure 1: panel (a) plots shapes as points in shape space (axes S1, S2, S3); panel (b) plots trajectories as points in trajectory space.]
Figure 1: 3D points on a smiling mouth: a comparison of shape and trajectory space. (a) In approaches that
represent the time varying structure in shape space, all 3D points observed at one time instant are projected onto
a single point in the shape space. S1, S2, ..., Sk each represent a shape basis vector. (b) In our approach, we
represent the time varying structure in trajectory space, where a 3D point's trajectory over time is projected to
a single point in the trajectory space. θ1, θ2, ..., θk each represent a trajectory basis vector. P points observed
across F frames are expressed as F projected points in shape space and P points in trajectory space.
fore, constrained by the physical limits of actuation to remain incremental, not random, across time.
Since this property is, to a large degree, ubiquitous, basis can be defined in trajectory that are object
independent.
We show that while the inherent representative power of both shape and trajectory projections of
structure data are equal (a duality exists), the significant reduction in number of unknowns that
results from knowing the basis apriori allows us to handle much more nonrigidity of deformation
than state of the art methods, like [4] and [5]. In fact, most previous results consider deformations
which have a large rigid component, such as talking-head videos or the motion of a swimming
shark. To the best of our knowledge, we are the first to show reasonable reconstructions of highly
nonrigid motions from a single video sequence without making object specific assumptions. For all
results, we use the same trajectory basis, the Discrete Cosine Transform (DCT) basis, underlining
the generic nature of the trajectory space representation. A useful byproduct of this approach is
that structure is automatically compressed for compact transmission without the need for post facto
compression or the overhead transmission of object specific basis.
2
Related work
If deformation of a 3D scene is unconstrained, the structure observed in each image would be independent of those in other images. In this case, recovering structure from motion is ill-posed,
equivalent to finding 3D structure from a single 2D image at each time instant. To make nonrigid
structure recovery tractable, some consistency in the deformation of structure has to be imposed.
One early measure of consistency that was applied assumes that the scene consists of multiple rigid
objects which are moving independently [6, 7, 8]. However, the first general solution to the problem
of nonrigid structure recovery was introduced by Bregler et al. in [2], approximating the structure at
each time instant as a linear combination of basis shapes. They recovered the structure, the shape basis and the camera rotations simultaneously, by exploiting orthonormality constraints of the rotation
matrices. Xiao et al. [4] showed that these orthonormality constraints alone lead to ambiguity in the
solution, and introduced additional constraints to remove ambiguity. In [9] Xiao et al. proposed a
rank deficient basis. Other extensions of the work by Bregler et al. include [10] which improved the
numerical stability of the estimation process and [3] which introduced a Gaussian prior on the shape
coefficients. Common to all of these approaches is that results are shown on objects which have a
significant number of points that move rigidly, such as faces. Some approaches, such as [11] make
explicit use of this fact to initialize rotation matrices, while others favor such sequences for stability
in estimation.
In contrast to this entire corpus of work, which approximate structure by a shape basis, we propose a
new representation of time varying structure, as a collection of trajectories. We not only demonstrate
that a compact trajectory space can be defined, but also that the basis of this trajectory space can
be pre-defined, removing a large number of unknowns from the estimation process altogether. The
duality of spatial and temporal representations has been hinted at earlier in literature. Shashua [12]
discusses the duality of the joint image space and the joint point space in the context of multiview
geometry. Zelnik-Manor and Irani [13] have exploited a similar duality for an alternate approach to
[Figure 2: a trajectory drawn as the sum a_{x1}θ_1 + a_{x2}θ_2 + · · · + a_{xk}θ_k of scaled basis trajectories.]
Figure 2: As described in Equation 3, each trajectory is represented as a linear combination of k predefined
basis trajectories. In this paper, we use DCT basis to compactly represent trajectories.
segmenting video sequences. Ours is the first paper to use this dual representation in the structure
from motion problem, and to note that a generic basis can be defined in trajectory space which
compactly represents most real trajectories.
3
Representing Nonrigid Structure
The structure at a time instant t can be represented by arranging the 3D locations of the P points in
a matrix S(t) ∈ R^{3×P},

S(t) = \begin{bmatrix} X_{t1} & \cdots & X_{tP} \\ Y_{t1} & \cdots & Y_{tP} \\ Z_{t1} & \cdots & Z_{tP} \end{bmatrix}.

The complete time varying structure can be represented by concatenating these instantaneous structures as S_{3F×P} = [S(1)^⊤ S(2)^⊤ · · · S(F)^⊤]^⊤. In [2], each instantaneous shape matrix S(t) is
approximated as a linear combination of basis shapes,

S(t) = \sum_j c_j(t) \, S^j,   (1)

where S^j ∈ R^{3×P} is a basis shape and c_j(t) is the coefficient of that basis shape. If the set of
observed structures can be compactly expressed in terms of k such basis shapes, S has a rank of at
most 3k. This rank constraint can be restated by rearrangement of S as the following rank-k matrix,

\tilde{S} = \begin{bmatrix} X_{11} & \cdots & X_{1P} & Y_{11} & \cdots & Y_{1P} & Z_{11} & \cdots & Z_{1P} \\ \vdots & & \vdots & \vdots & & \vdots & \vdots & & \vdots \\ X_{F1} & \cdots & X_{FP} & Y_{F1} & \cdots & Y_{FP} & Z_{F1} & \cdots & Z_{FP} \end{bmatrix}.   (2)
The row space of this matrix corresponds to the shape space. Since the row and column space of a
matrix are of equal dimension, it follows that the columns of \tilde{S} are also spanned by k vectors. We
call the column space of this matrix the trajectory space and note that it enjoys a dual relationship
with the shape space. Specifically, if the time varying shape of an object can be expressed by a
minimum of k shape basis, then there exist exactly k trajectory basis vectors that can represent the
same time varying shape.
To represent the time varying structure in terms of trajectory basis, we consider the structure
as a set of trajectories, T(i) = [T_x(i)^⊤ T_y(i)^⊤ T_z(i)^⊤]^⊤ (see Figure 1(b)), where T_x(i) =
[X_{1i}, ..., X_{Fi}]^⊤, T_y(i) = [Y_{1i}, ..., Y_{Fi}]^⊤, and T_z(i) = [Z_{1i}, ..., Z_{Fi}]^⊤ are the x, y, and z coordinates
of the ith trajectory. As illustrated in Figure 2, we describe each trajectory as a linear combination
of basis trajectories,

T_x(i) = \sum_{j=1}^{k} a_{xj}(i)\,\theta_j, \quad T_y(i) = \sum_{j=1}^{k} a_{yj}(i)\,\theta_j, \quad T_z(i) = \sum_{j=1}^{k} a_{zj}(i)\,\theta_j,   (3)

where θ_j ∈ R^F is a trajectory basis vector and a_{xj}(i), a_{yj}(i), and a_{zj}(i) are the coefficients corresponding to that basis vector. The time varying structure matrix can then be factorized into an inverse
projection matrix and a coefficient matrix as S_{3F×P} = Θ_{3F×3k} A_{3k×P}, where A = [A_x^⊤ A_y^⊤ A_z^⊤]^⊤
and

A_x = \begin{bmatrix} a_{x1}(1) & \cdots & a_{x1}(P) \\ \vdots & & \vdots \\ a_{xk}(1) & \cdots & a_{xk}(P) \end{bmatrix}, \quad \Theta = \begin{bmatrix} \theta_1^\top & & \\ & \theta_1^\top & \\ & & \theta_1^\top \\ \vdots & \vdots & \vdots \\ \theta_F^\top & & \\ & \theta_F^\top & \\ & & \theta_F^\top \end{bmatrix},   (4)

where θ_t^⊤ ∈ R^{1×k} denotes the t-th row of [θ_1 · · · θ_k], and A_y, A_z are defined analogously to A_x.
Here the θ_j form a truncated basis for the transformation from coefficient space to the original space. The
principal benefit of the trajectory space representation is that a basis can be pre-defined that can
compactly approximate most real trajectories. A number of bases such as the Hadamard Transform
basis, the Discrete Fourier Transform basis, and the Discrete Wavelet Transform basis can all compactly represent trajectories in an object independent way. In this paper, we use the Discrete Cosine
Transform basis set to generate Θ (shown in Figure 2) for all reconstruction results shown. The
efficacy of the DCT basis has been demonstrated for compressing motion capture data, [14], and has
been effective in our experiments as well.
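As an illustration, here is a short numpy sketch that builds a truncated orthonormal DCT basis and uses it to compress a single trajectory; the DCT-II construction is standard, but the variable names and the toy trajectory are ours:

import numpy as np

def dct_basis(F, k):
    # Columns are the first k orthonormal DCT-II basis vectors of length F,
    # so B.T @ B = I_k exactly.
    t = np.arange(F)
    B = np.cos(np.pi * (t[:, None] + 0.5) * np.arange(k)[None, :] / F)
    B[:, 0] *= np.sqrt(1.0 / F)
    B[:, 1:] *= np.sqrt(2.0 / F)
    return B

F, k = 100, 5
theta = dct_basis(F, k)                      # F x k trajectory basis
# A smooth toy trajectory (e.g., the x-coordinate of one point over F frames).
traj = np.sin(np.linspace(0, 2 * np.pi, F)) + 0.5 * np.linspace(0, 1, F)
a = theta.T @ traj                           # k coefficients (projection)
recon = theta @ a                            # rebuilt from k numbers, not F
print(np.linalg.norm(traj - recon) / np.linalg.norm(traj))  # shrinks as k grows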
4
Nonrigid Structure and Motion Factorization
The measured 2D trajectories are contained in a 2F × P measurement matrix W, containing the
locations of P image points across F frames,

W = \begin{bmatrix} u_{11} & \cdots & u_{1P} \\ v_{11} & \cdots & v_{1P} \\ \vdots & & \vdots \\ u_{F1} & \cdots & u_{FP} \\ v_{F1} & \cdots & v_{FP} \end{bmatrix}.
This measurement matrix can be decomposed as W = RS, where R is a 2F × 3F matrix,

R = \begin{bmatrix} R_1 & & \\ & \ddots & \\ & & R_F \end{bmatrix},

and R_t is a 2 × 3 orthographic projection matrix. In the previous section we showed that S = ΘA;
as a result, we can further factorize W as

W = RΘA = ΛA,   (5)

where Λ = RΘ. Since Θ is a 3F × 3k matrix, the rank of matrix W will be at most 3k. This is a
dual property to the rank constraint defined by [2]. We can use SVD to factorize W as
W = \hat{\Lambda}\hat{A}.

In general, the matrices \hat{\Lambda} and \hat{A} will not be equal to Λ and A respectively, because the above factorization is not unique: for any invertible 3k × 3k matrix Q, \hat{\Lambda}Q and Q^{-1}\hat{A} are also valid factorizations. Therefore, to recover metric structure we need to estimate the rectification matrix Q such that
the following equations hold true,

\Lambda = \hat{\Lambda}Q, \qquad A = Q^{-1}\hat{A}.   (6)
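A small numpy sketch of this rank-3k SVD factorization and its inherent ambiguity, on synthetic data; the even split of singular values between the two factors is one common convention, not something the paper prescribes:

import numpy as np

rng = np.random.default_rng(0)
F, P, k = 50, 40, 3
Lam_true = rng.standard_normal((2 * F, 3 * k))   # stands in for R @ Theta
A_true = rng.standard_normal((3 * k, P))
W = Lam_true @ A_true                            # synthetic rank-3k measurements

U, s, Vt = np.linalg.svd(W, full_matrices=False)
r = 3 * k
Lam_hat = U[:, :r] * np.sqrt(s[:r])              # 2F x 3k
A_hat = np.sqrt(s[:r])[:, None] * Vt[:r]         # 3k x P
print(np.allclose(Lam_hat @ A_hat, W))           # True: products agree
# Lam_hat differs from Lam_true by an unknown invertible 3k x 3k matrix Q,
# which is exactly the rectification ambiguity of equation (6).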
5
Metric Upgrade
The problem of recovering the rotation and structure is reduced to estimating the rectification matrix
Q. The elements of matrix Λ are

\Lambda = \begin{bmatrix} r_1^1 \theta_1^\top & r_2^1 \theta_1^\top & r_3^1 \theta_1^\top \\ r_4^1 \theta_1^\top & r_5^1 \theta_1^\top & r_6^1 \theta_1^\top \\ \vdots & \vdots & \vdots \\ r_1^F \theta_F^\top & r_2^F \theta_F^\top & r_3^F \theta_F^\top \\ r_4^F \theta_F^\top & r_5^F \theta_F^\top & r_6^F \theta_F^\top \end{bmatrix},

where r_1^t, ..., r_6^t denote the six entries of the projection matrix R_t and θ_t^⊤ is the t-th row of [θ_1 · · · θ_k].
Instead of estimating the whole matrix Q, to rectify \hat{\Lambda} and \hat{A} it is sufficient to estimate only three
columns of Q. Let us define Q_{|||} to be the first, (k+1)st and (2k+1)st columns of the matrix Q. From
Equation 6, if we just use Q_{|||} instead of Q, we get

\hat{\Lambda} Q_{|||} = \begin{bmatrix} \theta_{1,1} R_1 \\ \vdots \\ \theta_{F,1} R_F \end{bmatrix},   (7)

where θ_{t,1} denotes the t-th element of the first trajectory basis vector.
[Figure 3: three plots of the condition number of Θ^⊤Θ versus camera motion per frame (0.5 to 5 degrees), for k = 2, 3, 4, 5, 6 and F = 200, 400, 800.]
Figure 3: Effect of increasing camera motion on reconstruction stability. Reconstruction stability
is measured in terms of the condition number of the matrix Θ^⊤Θ with different values of k and different
values of F. Synthetic rotations were generated by revolving the camera around the z-axis, and
camera motion was measured in terms of the angle the camera moved per frame.
This equation shows that the unknowns in matrix Q_{|||} can be found by exploiting the fact that R_i
is a truncated rotation matrix (as was done in [1]). Specifically, if \hat{\Lambda}_{2i-1:2i} denotes the two rows of
\hat{\Lambda} at positions 2i − 1 and 2i, then we have

\hat{\Lambda}_{2i-1:2i} \, Q_{|||} Q_{|||}^\top \, \hat{\Lambda}_{2i-1:2i}^\top = \theta_{i,1}^2 \, I_{2 \times 2},   (8)

where I_{2×2} is an identity matrix, giving three independent constraints for each image i. Therefore,
for F frames we have 3F constraints and 9k unknowns in Q_{|||}. Hence at least 3k non-degenerate
images are required to estimate Q_{|||}. Once Q_{|||} has been computed, using a nonlinear minimization
routine (e.g. Levenberg-Marquardt), we can estimate the rotation matrices, and therefore R, using
Equation 7.
Once R is known, it can be multiplied with the (known) DCT basis matrix Θ_{3F×3k} to recover
the matrix Λ_{2F×3k} = R_{2F×3F} Θ_{3F×3k}. The coefficients can then be estimated by solving the
following overconstrained linear system of equations,

\Lambda_{2F \times 3k} \, A_{3k \times P} = W_{2F \times P}.   (9)
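Solving (9) is an ordinary least-squares problem once Λ is known; a sketch on synthetic, noiseless data (the matrix contents are random stand-ins):

import numpy as np

rng = np.random.default_rng(1)
F, P, k = 50, 40, 3
Lam = rng.standard_normal((2 * F, 3 * k))    # stands in for the known R @ Theta
A_true = rng.standard_normal((3 * k, P))
W = Lam @ A_true                             # noiseless observations

# Solve the overconstrained system Lam @ A = W for all P columns in one call.
A_est, *_ = np.linalg.lstsq(Lam, W, rcond=None)
print(np.allclose(A_est, A_true))            # True in the noiseless case
# The time varying structure then follows as S = Theta @ A_est.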
6
Results
The proposed algorithm has been validated quantitatively on motion capture data over different
actions and qualitatively on video data. We have tested the approach extensively on highly nonrigid
human motion like volleyball digs, handstands, karate moves and dancing. Figure 4 shows a few
sample reconstructions of different actors. As mentioned earlier, we choose DCT as the basis for
the trajectory space. In subsequent experiments, we compare our approach with [5] and [9] (we use
code kindly provided by the respective authors). The results, data and the code used to produce the
results are all shared at http://cvlab.lums.edu.pk/nrsfm.
In nonrigid structure from motion, the key relationship that determines successful reconstruction
is the one between the degree of deformation of the object, measured by the number of basis k
required to approximate it and the degree of camera motion. To test the relationship between k,
camera motion and reconstruction stability, we constructed Θ matrices using different values of k
and synthetic rotations around the z-axis, at various magnitudes of motion per frame. In Figure 3,
the reconstruction stability, measured by the condition number of Θ^⊤Θ, is shown as k is varied
between 2 and 6, for 200, 400, and 800 frames (at different angular velocities per frame). The plots
confirm intuition: the smaller the degree of object deformation and the larger the camera motion,
the more stable reconstruction tends to be.
For quantitative evaluation of reconstruction accuracy we used the drink, pickup, yoga, stretch,
and dance actions from the CMU Mocap database, and the shark dataset of [3]. Multiple rigid
body data was generated by simulation of points on rigidly moving cubes. We generated synthetic
camera rotations and projected 3D data using these rotations to get image observations. The camera
rotation for the Mocap datasets was 5 degrees per frame and 2 degrees per frame for the multi-body
[Figure 4: X-coordinate trajectories of a foot point, a hand point, and a head point for three sequences: Volleyball Dig, Hand-stand, and Slip and Fall.]
Figure 4: Simultaneous reconstruction accuracy for three actors. The X-coordinate trajectories for three
different points on the actors is shown. The approximation error introduced by DCT projection has a smoothing
impact on the reconstruction. Red lines indicate ground truth data and blue lines indicate reconstructed data.
[Figure 5: frames from the dance sequence; rows show Trajectory Basis, Torresani et al. [5], and Xiao et al. [9].]
Figure 5: The dance sequence from the CMU mocap database. The black dots are the ground truth points
while the gray circles are the reconstructions by the three methods respectively.
sequence. We did not rotate the camera for the dance and shark sequences, since the object itself was
rotating in these sequences. In obtaining the results discussed below, k was chosen to provide the
best reconstructions, the value varying between 2 and 13 depending on the length of the sequence
and the nonrigidity of motion. We normalize the structure, so that the average standard deviation
of the structure matrix S becomes equal to unity (to make comparison of error across datasets more
meaningful).
Table 1 shows a quantitative comparison of our method with the shape basis approach of Torresani
et al. [5] and Xiao and Kanade [9]. This table shows both the camera rotation estimation error and
structure reconstruction error. The estimated structure is valid up to a 3D rotation and translation
and the estimated rotations also have a 3D rotation ambiguity. We therefore align them for error
measurement. Procrustes analysis was used for aligning camera rotations and the 3D structure. The
error measure for camera rotations was the average Frobenius norm difference between the original
camera rotation and the estimated camera rotation. For structure evaluation we compute the per
frame mean squared error between original 3D points and the estimated 3D points.
Finally, to test the proposed approach on real data, we used a face sequence from the PIE dataset,
a sequence from the movie "The Matrix", a sequence capturing two rigidly moving cubes, and a
sequence of a toy dinosaur moving nonrigidly. For the last three sequences, the image points were
tracked in a semi-automatic manner, using the approach proposed in [15] with manual correction.
We show the resulting reconstructions in Figure 6, and compare against the reconstructions obtained
from Torresani et al. [5] and Xiao and Kanade [9].
Table 1: The quantitative comparison of the proposed algorithm with the techniques described in Xiao and Kanade
[9] and Torresani et al. [5]. E_rot is the average Frobenius norm difference between original rotations and aligned
estimated rotations, and E_Δ is the average distance between original 3D points and aligned reconstructed points.
DATASET    | Trajectory Bases     | Torresani's EM-Gaussian | Xiao's Shape Bases
           | E_rot      E_Δ       | E_rot     E_Δ           | E_rot     E_Δ
DRINK      | 5.8E-03    2.50E-02  | 0.2906    0.3393        | 0.3359    3.5186
PICKUP     | 1.55E-01   2.37E-01  | 0.4277    0.5822        | 0.4687    3.3721
YOGA       | 1.06E-01   1.62E-01  | 0.8089    0.8097        | 1.2014    7.4935
STRETCH    | 5.49E-02   1.09E-01  | 0.7594    1.1111        | 0.9489    4.2415
MULTIRIGID | 1.96E-08   4.88E-02  | 0.1718    2.5902        | 0.0806    11.7013
DANCE      | NA         2.96E-01  | NA        0.9839        | NA        2.9962
SHARK      | NA         3.12E-01  | NA        0.1086        | NA        0.4772

7
Conclusion
We describe an algorithm to reconstruct nonrigid structure of an object from 2D trajectories of
points across a video sequence. Unlike earlier approaches that require an object-specific shape basis
to be estimated for each new video sequence, we demonstrate that a generic trajectory basis can
be defined that can compactly represent the motion of a wide variety of real deformations. Results
are shown using the DCT basis to recover structures of piece-wise rigid motion, facial expressions,
actors dancing, walking, and doing yoga. Our experiments show that there is a relationship between
camera motion, degree of object deformation, and reconstruction stability. We observe that as the
motion of the camera increases with respect to the degree of deformation, the reconstruction stability
increases. Future directions of research include experimenting with different unitary transform bases
to verify that DCT basis are, in fact, the best generic basis to use, and developing a synergistic
approach to use both shape and trajectory bases concurrently.
8
Acknowledgements
This research was partially supported by a grant from the Higher Education Commission of Pakistan.
The authors would like to acknowledge Fernando De La Torre for useful discussions. We further
thank J. Xiao, L. Agapito, I. Matthews and L. Torresani for making their code or data available to
us. The motion capture data used in this project was obtained from http://mocap.cs.cmu.edu.
References
[1] C. Tomasi and T. Kanade. Shape and motion from image streams under orthography: A factorization method. IJCV, 9:137-154, 1992.
[2] C. Bregler, A. Hertzmann, and H. Biermann. Recovering non-rigid 3D shape from image streams. CVPR, 2:690-696, 2000.
[3] L. Torresani, A. Hertzmann, and C. Bregler. Learning non-rigid 3D shape from 2D motion. NIPS, 2005.
[4] J. Xiao, J. Chai, and T. Kanade. A closed form solution to non-rigid shape and motion recovery. IJCV, 67:233-246, 2006.
[5] L. Torresani, A. Hertzmann, and C. Bregler. Nonrigid structure-from-motion: Estimating shape and motion with hierarchical priors. PAMI, 30(5):878-892, May 2008.
[6] J. P. Costeira and T. Kanade. A multibody factorization method for independently moving objects. IJCV, 49:159-179, 1998.
[7] M. Han and T. Kanade. Reconstruction of a scene with multiple linearly moving objects. IJCV, 59:285-300, 2004.
[8] A. Gruber and Y. Weiss. Multibody factorization with uncertainty and missing data using the EM algorithm. CVPR, 1:707-714, 2004.
[9] J. Xiao and T. Kanade. Non-rigid shape and motion recovery: Degenerate deformations. CVPR, 1:668-675, 2004.
[Figure 6: four image sequences (Dinosaur, Matrix, PIE face, Cubes); for each, rows show reconstructions by Trajectory Basis, Torresani et al. [5], and Xiao et al. [9].]
Figure 6: Results on Dinosaur, Matrix, PIE face, and Cubes sequences. k was set to 12, 3, 2, and 2 respectively.
[10] M. Brand. Morphable 3D models from video. CVPR, 2:456, 2001.
[11] A. Del Bue, F. Smeraldi, and L. Agapito. Non-rigid structure from motion using ranklet-based tracking and non-linear optimization. IVC, pages 297-310, 2007.
[12] A. Shashua. Trilinear tensor: The fundamental construct of multiple-view geometry and its applications. AFPAC, 1997.
[13] L. Zelnik-Manor and M. Irani. Temporal factorization vs. spatial factorization. ECCV, 2004.
[14] O. Arikan. Compression of motion capture databases. ACM Trans. on Graphics, 2006.
[15] A. Datta, Y. Sheikh, and T. Kanade. Linear motion estimation for systems of articulated planes. CVPR, 2008.
2,750 | 3,494 | Gaussian-process factor analysis for low-dimensional
single-trial analysis of neural population activity
Byron M. Yu1,2,4 , John P. Cunningham1 , Gopal Santhanam1 ,
Stephen I. Ryu1,3 , Krishna V. Shenoy1,2
1
Department of Electrical Engineering, 2 Neurosciences Program,
3
Department of Neurosurgery, Stanford University, Stanford, CA 94305
{byronyu,jcunnin,gopals,seoulman,shenoy}@stanford.edu
4
Maneesh Sahani4
Gatsby Computational Neuroscience Unit, UCL
London, WC1N 3AR, UK
[email protected]
Abstract
We consider the problem of extracting smooth, low-dimensional neural trajectories that summarize the activity recorded simultaneously from tens to hundreds of
neurons on individual experimental trials. Current methods for extracting neural
trajectories involve a two-stage process: the data are first "denoised" by smoothing over time, then a static dimensionality reduction technique is applied. We first
describe extensions of the two-stage methods that allow the degree of smoothing
to be chosen in a principled way, and account for spiking variability that may vary
both across neurons and across time. We then present a novel method for extracting neural trajectories, Gaussian-process factor analysis (GPFA), which unifies
the smoothing and dimensionality reduction operations in a common probabilistic framework. We applied these methods to the activity of 61 neurons recorded
simultaneously in macaque premotor and motor cortices during reach planning
and execution. By adopting a goodness-of-fit metric that measures how well the
activity of each neuron can be predicted by all other recorded neurons, we found
that GPFA provided a better characterization of the population activity than the
two-stage methods.
1
Introduction
Neural responses are typically studied by averaging noisy spiking activity across multiple experimental trials to obtain firing rates that vary smoothly over time. However, particularly in cognitive
tasks (such as motor planning or decision making) where the neural responses are more a reflection
of internal processing rather than external stimulus drive, the timecourse of the neural responses may
differ on nominally identical trials. In such settings, it is critical that the neural data not be averaged
across trials, but instead be analyzed on a trial-by-trial basis [1, 2, 3, 4].
Single-trial analyses can leverage the simultaneous monitoring of large populations of neurons in
vivo, currently ranging from tens to hundreds in awake, behaving animals. The approach adopted
by recent studies is to consider each neuron being recorded as a noisy sensor reflecting the timeevolution of an underlying neural process [3, 5, 6, 7, 8, 9, 10]. The goal is to uncover this neural
process by extracting a smooth, low-dimensional neural trajectory from the noisy, high-dimensional
recorded activity on a single-trial basis. The neural trajectory provides a compact representation of
the high-dimensional recorded activity as it evolves over time, thereby facilitating data visualization
and studies of neural dynamics under different experimental conditions.
A common method to extract neural trajectories is to first estimate a smooth firing rate profile for
each neuron on a single trial (e.g., by convolving each spike train with a Gaussian kernel), then
apply a static dimensionality reduction technique (e.g., principal components analysis, PCA) [8, 11].
Smooth firing rate profiles may also be obtained by averaging across a small number of trials (if the
neural timecourses are believed to be similar on different trials) [6, 7, 9, 10], or by applying more
advanced statistical methods for estimating firing rate profiles from single spike trains [12, 13].
Numerous linear and non-linear dimensionality reduction techniques exist, but to our knowledge
only PCA [8, 9, 11] and locally linear embedding (LLE) [6, 7, 10, 14] have been applied in this
context to neural data.
While this two-stage method of performing smoothing then dimensionality reduction has provided
informative low-dimensional views of neural population activity, there are several aspects that can be
improved. (i) For kernel smoothing, the degree of smoothness is often chosen in an ad hoc way. We
would instead like to learn the appropriate degree of smoothness from the data. Because the operations of kernel smoothing, PCA, and LLE are all non-probabilistic, standard likelihood techniques
for model selection are not applicable. Even if a probabilistic dimensionality reduction algorithm is
used, the likelihoods would not be comparable because different smoothing kernels yield different
smoothed data. (ii) The same kernel width is typically used for all spike trains, which implicitly
assumes that the neural population activity evolves with a single timescale. We would instead like
to allow for the possibility that the system operates under multiple timescales. (iii) PCA and LLE
have no explicit noise model and, therefore, have difficulty distinguishing between spiking noise
(whose variance may vary both across neurons and across time) and changes in the underlying lowdimensional neural state. (iv) Because the smoothing and dimensionality reduction are performed
sequentially, there is no way for the dimensionality reduction algorithm to influence the degree or
form of smoothing used. This is relevant both to the identification of the low-dimensional space, as
well as to the extraction of single-trial neural trajectories.
We first briefly describe relatively straightforward extensions of the two-stage methods that can help
to address issues (i) and (iii) above. For (i), we adopt a goodness-of-fit metric that measures how
well the activity of each neuron can be predicted by the activity of all other recorded neurons, based
on data not used for model fitting. This metric can be used to compare different smoothing kernels
and allows for the degree of smoothness to be chosen in a principled way. In Section 6, we will use
this as a common metric by which different methods for extracting neural trajectories are compared.
For (iii), we can apply the square-root transform to stabilize the spiking noise variance and factor
analysis (FA) [15] to explicitly model possibly different independent noise variances for different
neurons. These extensions are detailed in Sections 2 and 3.
Next, we introduce Gaussian-process factor analysis (GPFA), which unifies the smoothing and dimensionality reduction operations in a common probabilistic framework. GPFA takes steps toward
addressing all of the issues (i)-(iv) described above, and is shown in Section 6 to provide a better characterization of the recorded population activity than the two-stage methods. Because GPFA
performs the smoothing and dimensionality reduction operations simultaneously rather than sequentially, the degree of smoothness and the relationship between the low-dimensional neural trajectory
and the high-dimensional recorded activity can be jointly optimized. Different dimensions in the
low-dimensional space (within which the neural state evolves) can have different timescales, whose
optimal values can be found automatically by fitting the GPFA model to the recorded activity. As in
FA, GPFA specifies an explicit noise model that allows different neurons to have different independent noise variances. The time series model involves Gaussian processes (GP), which only require
the specification of the correlation structure of the neural state over time.
A critical assumption when attempting to extract a low-dimensional neural trajectory is that the
recorded activity evolves within a low-dimensional manifold. Previous studies have typically assumed that the neural trajectories lie in a three-dimensional space for ease of visualization. In this
work, we will investigate whether this low-dimensional assumption is justified in the context of
motor preparation and execution and, if so, attempt to identify the appropriate dimensionality. Sections 2 and 3 detail GPFA and the goodness-of-fit metric, respectively. Section 4 relates GPFA to
dynamical systems approaches. After describing the experimental setup in Section 5, we apply the
developed methods to neural activity recorded in premotor and motor cortices during reach planning
and execution in Section 6.
2
Gaussian-process factor analysis
The motivation for GPFA can be traced back to the use of PCA for extracting informative lowdimensional views of high-dimensional neural data. Consider spike counts taken in non-overlapping
time bins. PCA (or its probabilistic form, PPCA [15]) attempts to find the directions in the highdimensional data with greatest variance. This is problematic for neural data for two reasons. First,
because neurons with higher mean counts are known to exhibit higher count variances, the directions
found by PCA tend to be dominated by the most active neurons. Second, PCA assumes that the
spiking noise variance is time independent; however, neurons are known to change their firing rates,
and therefore noise variances, over time. A possible solution is to replace the Gaussian likelihood
model of PPCA with a point-process [5] or Poisson [3] likelihood model. Here, we consider a
simpler approach that preserves computational tractability. The square-root transform is known to
both stabilize the variance of Poisson counts and allow Poisson counts to be more closely modeled
by a Gaussian distribution, especially at low Poisson means [16]. Thus, the two issues above can
be largely resolved by applying PCA/PPCA to square-rooted spike counts, rather than raw spike
counts. However, the spiking noise can deviate from a Poisson distribution [17], in which case the
noise variance is not entirely stabilized. As will be shown in Section 6, the square-rooted counts
can be better characterized by further replacing PCA/PPCA with FA [15], which allows different
neurons to have different independent noise variances.
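As a concrete illustration of this preprocessing, here is a sketch applying the square-root transform followed by FA, using scikit-learn's FactorAnalysis; the library choice and the synthetic data are ours, not part of the original analysis:

import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(4)
q, T, p = 30, 500, 3
# Synthetic Poisson counts whose log-rates live in a p-dimensional subspace.
latents = rng.standard_normal((T, p))
rates = np.exp(0.5 + 0.3 * latents @ rng.standard_normal((p, q)))
counts = rng.poisson(rates)

y = np.sqrt(counts)                          # variance-stabilizing transform
fa = FactorAnalysis(n_components=p).fit(y)
x = fa.transform(y)                          # T x p low-dimensional states
print(fa.noise_variance_[:5])                # per-neuron independent noise variances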
In this work, we extend FA for use with time series data. PCA, PPCA, and FA are all static dimensionality reduction techniques. In other words, none of them take into account time labels when
applied to time series data; the time series data are simply treated as a collection of data points.
GPFA is an extension of FA that can leverage the time label information to provide more powerful
dimensionality reduction. The GPFA model is simply a set of factor analyzers (one per timepoint,
each with identical parameters) that are linked together in the low-dimensional state space by a
Gaussian process (GP) [18] prior. Introducing the GP allows for the specification of a correlation
structure across the low-dimensional states at different timepoints. For example, if the system underlying the time series data is believed to evolve smoothly over time, we can specify that the system's
state should be more similar between nearby timepoints than between faraway timepoints. Extracting a smooth, low-dimensional neural trajectory can therefore be viewed as a compromise between
the low-dimensional projection of each data point found by FA and the desire to string them together
using a smooth function over time. The GPFA model can also be obtained by letting time indices
play the role of inputs in the semiparametric latent factor model [19].
The following is a mathematical description of GPFA. Let y_{:,t} ∈ R^{q×1} be the high-dimensional
vector of square-rooted spike counts recorded at timepoint t ∈ {1, . . . , T}, where q is the number of
neurons being recorded simultaneously. We seek to extract a corresponding low-dimensional latent
neural state x_{:,t} ∈ R^{p×1} at each timepoint, where p is the dimensionality of the state space (p < q).
For notational convenience, we group the neural states from all timepoints into a neural trajectory
denoted by the matrix X = [x_{:,1}, . . . , x_{:,T}] ∈ R^{p×T}. Similarly, the observations can be grouped
into a matrix Y = [y_{:,1}, . . . , y_{:,T}] ∈ R^{q×T}. We define a linear-Gaussian relationship between the
observations y_{:,t} and neural states x_{:,t}

y_{:,t} \mid x_{:,t} \sim \mathcal{N}(C x_{:,t} + d, \; R),   (1)

where C ∈ R^{q×p}, d ∈ R^{q×1}, and R ∈ R^{q×q} are model parameters to be learned. As in FA, we
constrain the covariance matrix R to be diagonal, where the diagonal elements are the independent
noise variances of each neuron. In general, different neurons can have different independent noise
variances. Although a Gaussian is not strictly a distribution on square-rooted counts, its use in (1)
preserves computational tractability.
The neural states x_{:,t} at different timepoints are related through Gaussian processes, which embody
the notion that the neural trajectories should be smooth. We define a separate GP for each dimension
of the state space indexed by i ∈ {1, . . . , p}

x_{i,:} \sim \mathcal{N}(0, K_i),   (2)
where x_{i,:} ∈ R^{1×T} is the ith row of X and K_i ∈ R^{T×T} is the covariance matrix for the ith GP
[20]. The form of the GP covariance can be chosen to provide different smoothing properties on the
neural trajectories. In this work, we chose the commonly-used squared exponential (SE) covariance
function

K_i(t_1, t_2) = \sigma^2_{f,i} \cdot \exp\!\left( -\frac{(t_1 - t_2)^2}{2 \tau_i^2} \right) + \sigma^2_{n,i} \cdot \delta_{t_1, t_2},   (3)

where K_i(t_1, t_2) denotes the (t_1, t_2)th entry of K_i and t_1, t_2 ∈ {1, . . . , T}. The SE covariance
is defined by its signal variance σ²_{f,i} ∈ R_+, characteristic timescale τ_i ∈ R_+, and noise variance
σ²_{n,i} ∈ R_+. Due to redundancy in the scale of X and C, we fix the scale of X and allow C to
be learned unconstrained, without loss of generality. By direct analogy to FA, we defined the prior
distribution of the neural state x_{:,t} at each timepoint t to be N(0, I) by setting σ²_{f,i} = 1 − σ²_{n,i},
where 0 < σ²_{n,i} ≤ 1. Furthermore, because we seek to extract smooth neural trajectories, we set σ²_{n,i}
to a small value (10⁻³). Thus, the timescale τ_i is the only (hyper)parameter of the SE covariance
that is learned. The SE is an example of a stationary covariance; other stationary and non-stationary
GP covariances [18] can be applied in a seamless way.
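A short numpy sketch of the SE covariance in (3) for one latent dimension, with σ²_{f,i} = 1 − σ²_{n,i} as above; the function and variable names are ours:

import numpy as np

def se_cov(T, tau, sigma2_n=1e-3):
    # SE covariance of equation (3) with sigma2_f = 1 - sigma2_n,
    # so the prior variance of each state element is exactly 1.
    t = np.arange(1, T + 1, dtype=float)
    d2 = (t[:, None] - t[None, :]) ** 2
    K = (1.0 - sigma2_n) * np.exp(-d2 / (2.0 * tau ** 2))
    K[np.diag_indices(T)] += sigma2_n
    return K

K = se_cov(T=100, tau=20.0)
print(K[0, 0], K[0, 1], K[0, 50])   # unit variance; correlation decays with lag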
The parameters of the GPFA model can be learned in a straightforward way using the expectationmaximization (EM) algorithm. In the E-step, the Gaussian posterior distribution P (X | Y ) can
be computed exactly because the x_{:,t} and y_{:,t} across all timepoints are jointly Gaussian, by definition. In the M-step, the parameter updates for C, d, and R can be expressed in closed form. The
characteristic timescales τ_i can be updated using any gradient optimization technique. Note that the
degree of smoothness (defined by the timescales) and the relationship between the low-dimensional
neural trajectory and the high-dimensional recorded activity (defined by C) are jointly optimized.
Furthermore, a different timescale is learned for each state dimension indexed by i. For the results
shown in Section 6, the parameters C, d, and R were initialized using FA, and the τ_i were initialized
to 100 ms. Although the learned timescales were initialization-dependent, their distributions were
similar for different initializations. In particular, most learned timescales were less than 150 ms, but
there were usually one or two larger timescales around 300 and 500 ms.
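To make the generative model concrete, the following sketch draws one trial from the GPFA prior of (1)-(3); the parameter values are arbitrary and the kernel helper is restated so the block is self-contained:

import numpy as np

def se_cov(T, tau, s2n=1e-3):
    t = np.arange(T, dtype=float)
    K = (1 - s2n) * np.exp(-(t[:, None] - t[None, :]) ** 2 / (2 * tau ** 2))
    return K + s2n * np.eye(T)

rng = np.random.default_rng(3)
q, p, T = 20, 3, 100
taus = [5.0, 20.0, 60.0]                 # one timescale per latent dimension

X = np.empty((p, T))
for i in range(p):                       # equation (2): one GP per dimension
    X[i] = rng.multivariate_normal(np.zeros(T), se_cov(T, taus[i]))

C = rng.standard_normal((q, p))
d = rng.standard_normal(q)
R = np.diag(rng.uniform(0.05, 0.2, q))   # independent noise per neuron
# Equation (1), applied at every timepoint.
Y = C @ X + d[:, None] + rng.multivariate_normal(np.zeros(q), R, size=T).T
print(Y.shape)                           # (q, T): one synthetic 'trial'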
Once the GPFA model is learned, we can apply a post-processing step to orthonormalize the columns
of C. Applying the singular value decomposition, Cx:,t can be rewritten as UC (DC VC x:,t ), where
the columns of U_C ∈ R^{q×p} are orthonormal and \tilde{x}_{:,t} = D_C V_C x_{:,t} ∈ R^{p×1} is referred to as the
orthonormalized neural state at timepoint t. While each dimension of x_{:,t} possesses a single characteristic timescale, each dimension of \tilde{x}_{:,t} represents a mixture of timescales defined by the columns
of V_C. An advantage of considering \tilde{x}_{:,t} rather than x_{:,t} is that the elements of \tilde{x}_{:,t} (and the corresponding columns of U_C) are ordered by the amount of data covariance explained. In contrast, the
elements of x:,t (and the corresponding columns of C) have no particular order. Especially when
the number of state dimensions p is large, the ordering facilitates the identification and visualization
of the dimensions of the orthonormalized neural trajectory that are most important for explaining
the recorded activity. Because the columns of UC are orthonormal, one can readily picture how the
low-dimensional trajectory relates to the high-dimensional space of recorded activity, in much the
same spirit as for PCA. This orthonormalization procedure is also applicable to PPCA and FA. In
fact, it is through this orthonormalization procedure that the principal directions found by PPCA are
equated to those found by PCA.
3
Leave-neuron-out prediction error
We would like to directly compare GPFA to the two-stage methods described in Section 1. Neither
the classic approach of comparing cross-validated likelihoods nor the Bayesian approach of comparing marginal likelihoods is applicable here, for the same reason that they cannot be used to select
the appropriate degree of smoothness in the two-stage methods. Namely, when the data are altered
by different pre-smoothing operations (or the lack thereof in the case of GPFA), the likelihoods
are no longer comparable. Instead, we adopted the goodness-of-fit metric mentioned in Section 1,
whereby a prediction error is computed based on trials not used for model fitting. The idea is to
leave out one neuron at a time and ask how well each method is able to predict the activity of that
neuron, given the activity of all other recorded neurons. For GPFA, the model prediction for neuron
j is \hat{y}_{j,:} = E[y_{j,:} \mid Y_{-j,:}], where y_{j,:} is the jth row of Y and Y_{-j,:} ∈ R^{(q−1)×T} represents all but
the jth row of Y. The model prediction can be computed analytically because all variables in Y
are jointly Gaussian, by definition. Model predictions using PPCA and FA are analogous, but each
timepoint is considered individually. The prediction error is defined as the sum-of-squared errors
between the model prediction and the observed square-rooted spike count across all neurons and
timepoints.
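For the static FA/PPCA case, where each timepoint is treated separately, this leave-neuron-out prediction reduces to standard Gaussian conditioning on the model's marginal covariance; a sketch follows (the full GPFA version conditions jointly across all timepoints, which is omitted here):

import numpy as np

rng = np.random.default_rng(2)
q, p = 10, 3
C = rng.standard_normal((q, p))
d = rng.standard_normal(q)
R = np.diag(rng.uniform(0.1, 0.5, q))
Sigma = C @ C.T + R                      # marginal covariance of y under FA

y = rng.multivariate_normal(d, Sigma)
j = 4
rest = np.arange(q) != j
# E[y_j | y_{-j}] by conditioning the joint Gaussian on the other neurons.
w = np.linalg.solve(Sigma[np.ix_(rest, rest)], y[rest] - d[rest])
y_j_pred = d[j] + Sigma[j, rest] @ w
print(y[j], y_j_pred)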
One way to compute the GPFA model prediction is via the low-dimensional state space. One can
first estimate the neural trajectory using all but the jth neuron, P(X | Y_{−j,:}), then map this estimate back out into the space of recorded activity for the jth neuron using (1) to obtain ŷ_{j,:}. Equivalently, one can convert P(X | Y_{−j,:}) into its orthonormalized form before mapping it out into the space of recorded activity using the jth row of U_C. Because the orthonormalized dimensions are ordered, we can evaluate the prediction error using only the top p̃ orthonormalized dimensions of x̃_{:,t}, where p̃ ∈ {1, . . . , p}. This reduced GPFA model can make use of a larger number p of timescales than its effective dimensionality p̃.
4 Linear and non-linear dynamical systems
Another way to extract neural trajectories is by defining a parametric dynamical model that describes
how the low-dimensional neural state evolves over time. A first-order linear auto-regressive (AR)
model [5] captures linear Markovian dynamics. Such a model can be expressed as a Gaussian
process, since the state variables are jointly Gaussian. This can be shown by defining a separate
first-order AR model for each state dimension indexed by i ∈ {1, . . . , p}:
x_{i,t+1} | x_{i,t} ∼ N(a_i x_{i,t}, σ_i²).   (4)
Given enough time (t → ∞) and |a_i| < 1, the model will settle into a stationary state that is equivalent to (2) with
K_i(t_1, t_2) = σ_i² / (1 − a_i²) · a_i^{|t_1 − t_2|},   (5)
as in [21]. Different covariance structures Ki can be obtained by going from a first-order to an
nth-order AR model. One drawback of this approach is that it is usually not easy to construct an
nth-order AR model with a specified covariance structure. In contrast, the GP approach described in
Section 2 requires only the specification of the covariance structure, thus allowing different smoothing properties to be applied in a seamless way. AR models are generally less computationally demanding than those based on GP, but this advantage shrinks as the order of the AR model grows.
Another difference is that (5) does not contain an independent noise term σ²_{n,i} · δ_{t_1,t_2} as in (3). The innovations noise σ_i² in (4) is involved in setting the smoothness of the time series, as shown in (5). Thus, (4) would need to be augmented to explicitly capture departures from the AR model.
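To make the AR-as-GP equivalence concrete, the stationary covariance (5) can be tabulated directly. A small sketch (ours), useful for comparing AR and SE covariance structures:

```python
import numpy as np

def ar1_covariance(a, sigma2, T):
    """K(t1, t2) = sigma2 / (1 - a**2) * a**|t1 - t2| from (5),
    valid for |a| < 1; returns the T x T covariance matrix."""
    lags = np.abs(np.subtract.outer(np.arange(T), np.arange(T)))
    return sigma2 / (1.0 - a**2) * a**lags
```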
One may also consider defining a non-linear dynamical model [3], which typically has a richer set of
dynamical behaviors than linear models. The identification of the model parameters provides insight
into the dynamical rules governing the time-evolution of the system under study. However, especially in exploratory data analyses, it may be unclear what form this model should take. Even if an
appropriate non-linear model can be identified, learning such a model can be unstable and slow due
to approximations required [3]. In contrast, learning the GPFA model is stable and approximation-free, as described in Section 2. The use of GPFA can be viewed as a practical way of going beyond
a first-order linear AR model without having to commit to a particular non-linear system, while
retaining computational tractability.
5 Behavioral task and neural recordings
The details of the neural recordings and behavioral task can be found elsewhere [22]. Briefly, a
rhesus macaque performed delayed center-out reaches to visual targets presented on a fronto-parallel
screen. On a given trial, the peripheral reach target was presented at one of 14 possible locations: two distances (60 and 100 mm) and seven directions (0°, 45°, 90°, 135°, 180°, 225°, 315°). Delay periods
were randomly chosen between 200 and 700 ms. Neural activity was recorded using a 96-electrode
array (Cyberkinetics, Foxborough, MA) in dorsal premotor and motor cortices. Only those units (61
single and multi-units, experiment G20040123) with robust delay period activity were included in
our analyses.
[Figure 1 plot: prediction error (×10^4) versus state dimensionality p, with two-stage kernel widths of 25, 50, and 100 ms labeled at right; see caption below.]
Figure 1: Prediction errors of two-stage methods (PPCA: red, FA: green), first-order AR model (blue), GPFA (dashed black), and reduced GPFA (solid black), computed using 4-fold cross-validation. Labels at right are standard deviations of Gaussian kernels (referred to as kernel widths) for the two-stage methods. For reduced GPFA, the horizontal axis corresponds to p̃ rather than p, where the prediction error is computed using only the top p̃ orthonormalized dimensions of a GPFA model fit with p = 15. Star indicates minimum of solid black curve. Analyses in this figure are based on 56 trials for the reach target at distance 60 mm and direction 135°.
6 Results
We considered neural data for one reach target at a time, ranging from 200 ms before reach target
onset to movement end. This period comprised the 200 ms pre-target time, the randomly chosen delay period (200–700 ms), the monkey's reaction time (mean ± s.d.: 293 ± 48 ms), and the duration of the monkey's reach (269 ± 40 ms). Spike counts were taken in non-overlapping 20 ms bins,
then square-rooted. For the two-stage methods, these square-rooted counts were smoothed over
time using a Gaussian kernel. We also considered smoothing spike trains directly, which yielded
qualitatively similar results for the two-stage methods.
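A sketch of this preprocessing (ours, assuming NumPy and SciPy; the Gaussian s.d. is specified in milliseconds and converted to bins):

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def preprocess(spike_times_ms, t_end_ms, bin_ms=20.0, kernel_sd_ms=None):
    """Bin spikes in non-overlapping 20 ms bins, square-root the counts,
    and optionally smooth over time with a Gaussian kernel (two-stage
    methods only; GPFA uses the unsmoothed square-rooted counts)."""
    edges = np.arange(0.0, t_end_ms + bin_ms, bin_ms)
    counts, _ = np.histogram(spike_times_ms, bins=edges)
    y = np.sqrt(counts.astype(float))
    if kernel_sd_ms is not None:
        y = gaussian_filter1d(y, sigma=kernel_sd_ms / bin_ms)
    return y
```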
Using the goodness-of-fit metric described in Section 3, we can find the appropriate degree of
smoothness for the two-stage methods. Fig. 1 shows the prediction error for PPCA (red) and FA
(green) for different kernel widths and state dimensionalities. There are two primary findings. First,
FA yielded lower prediction error than PPCA across a range of kernel widths and state dimensionalities. The reason is that FA allows different neurons to have different independent noise variances.
Second, for these data, the optimal smoothing kernel width (s.d. of Gaussian kernel) is approximately 40 ms for both FA and PPCA. This was found using a denser sweep of the kernel width than
shown in Fig. 1.
It is tempting to try to relate this optimal smoothing kernel width (40 ms) to the timescales τ_i learned by GPFA, since the SE covariance has the same shape as the Gaussian smoothing kernel. However, nearly all of the timescales learned by GPFA are greater than 40 ms. This apparent mismatch can be understood by considering the equivalent kernel of the SE covariance [23], which takes on a sinc-like shape whose main lobe is generally far narrower than a Gaussian kernel with the same width
parameter. It is therefore reasonable that the timescales learned by GPFA are larger than the optimal
smoothing kernel width.
The same goodness-of-fit metric can be used to compare the two-stage methods, parametric dynamical models, and GPFA. The parametric dynamical model considered in this work is a first-order AR
model described by (2) and (5), coupled with the linear-Gaussian observation model (1). Note that a
separate stationary, one-dimensional first-order AR model is defined for each of the p latent dimensions. As shown in Fig. 1, the first-order AR model (blue) yielded lower prediction error than the
two-stage methods (PPCA: red, FA: green). Furthermore, GPFA (dashed black) performed as well
or better than the two-stage methods and the first-order AR model, regardless of the state dimensionality or kernel width used. As described in Section 3, the prediction error can also be computed
for a reduced GPFA model (solid black) using only the top p? orthonormalized dimensions, in this
case based on a GPFA model fit with p = 15 state dimensions. By definition, the dashed and solid
black lines coincide at p? = 15. The solid black curve reaches its minimum at p? = 10 (referred to
as p? ). Thus, removing the lowest five orthonormalized dimensions decreased the GPFA prediction
error. Furthermore, this prediction error was lower than when fitting the GPFA model directly with
p = 10 (dashed black).
These latter findings can be understood by examining the orthonormalized neural trajectories extracted by GPFA shown in Fig. 2. The traces plotted are the orthonormalized form of E[X | Y ].
The panels are arranged in decreasing order of data covariance explained. The top orthonormalized
dimensions indicate fluctuations in the recorded population activity shortly after target onset (red
[Figure 2 panels: orthonormalized state dimensions x̃_{1,:} through x̃_{15,:} plotted against time, each on a vertical axis spanning −2 to 2, with a 400 ms scale bar; see caption below.]
Figure 2: Orthonormalized neural trajectories for GPFA with p = 15. Each panel corresponds to
one of the 15 dimensions of the orthonormalized neural state, which is plotted versus time. The
orthonormalized neural trajectory for one trial comprises one black trace from each panel. Dots
indicate time of reach target onset (red), go cue (green), and movement onset (blue). Due to differing
trial lengths, the traces on the left/right half of each panel are aligned on target/movement onset for
clarity. However, the GPFA model was fit using entire trials with no gaps. Note that the polarity
of these traces is arbitrary, as long as it is consistent with the polarity of U_C. Each trajectory corresponds to planning and executing a reach to the target at distance 60 mm and direction 135°. For clarity, only 10 trials with delay periods longer than 400 ms are plotted.
dots) and again after the go cue (green dots). Furthermore, the neural trajectories around the time
of the arm movement are well-aligned on movement onset. These observations are consistent with
previous analyses of the same dataset [22], as well as other studies of neural activity collected during
similar tasks in the same cortical areas. Whereas the top 10 orthonormalized dimensions (upper and
middle rows) show repeatable temporal structure across trials, the bottom five dimensions (lower row) appear to be largely capturing noise. These "noise dimensions" could be limiting GPFA's predictive power. This is confirmed by Fig. 1: when the bottom five orthonormalized dimensions were
It still remains to be explained why the GPFA prediction error using only the top 10 orthonormalized
dimensions is lower than that obtained by directly fitting a GPFA model with p = 10. Each panel
in Fig. 2 represents a mixture of 15 characteristic timescales. Thus, the top 10 orthonormalized
dimensions can make use of up to 15 timescales. However, a GPFA model fit with p = 10 can have
at most 10 timescales. By fitting a GPFA model with a large number of state dimensions p (each with its own timescale) and taking only the top p̃ = p* orthonormalized dimensions, we can obtain neural trajectories whose effective dimensionality is smaller than the number of timescales at play. Based on the solid black line in Fig. 1 and Fig. 2, we consider the effective dimensionality of the recorded population activity to be p* = 10. In other words, the linear subspace within which the recorded activity evolved during reach planning and execution for this particular target was 10-dimensional. Across the 14 reach targets, the effective dimensionality ranged from 8 to 12. All
major trends seen in Fig. 1 were preserved across all reach targets.
7 Conclusion
GPFA offers a flexible and intuitive framework for extracting neural trajectories, whose learning
algorithm is stable, approximation-free, and simple to implement. Because only the GP covariance
structure needs to be specified, GPFA is particularly attractive for exploratory data analyses, where
the rules governing the dynamics of the system under study are unknown. Based on the trajectories
obtained by GPFA, one can then attempt to define an appropriate dynamical model that describes
how the neural state evolves over time.
Compared with two-stage methods, the choice of GP covariance allows for more explicit specification of the smoothing properties of the low-dimensional trajectories. This is important when
investigating (possibly subtle) properties of the system dynamics. For example, one may wish to ask
whether the system exhibits second-order dynamics by examining the extracted trajectories. In this
case, it is critical that second-order effects not be built-in by the smoothness assumptions used to
extract the trajectories. With GPFA, it is possible to select a triangular GP covariance that assumes
smoothness in position, but not in velocity. In contrast, it is unclear how to choose the shape of the
smoothing kernel to achieve this in the two-stage methods.
In future work, we would like to couple the covariance structure of the one-dimensional GPs, which
would allow for a richer description of the multi-dimensional neural state x:,t evolving over time.
We also plan to apply non-stationary GP kernels, since the neural data collected during a behavioral
task are usually non-stationary. In addition, we would like to extend GPFA by allowing for the
discovery of non-linear manifolds and applying point-process likelihood models.
Acknowledgments
This work was supported by NIH-NINDS-CRCNS 5-R01-NS054283-03, NSF, NDSEGF, Gatsby,
SGF, CDRF, BWF, ONR, Sloan, and Whitaker. We would like to thank Dr. Mark Churchland,
Melissa Howard, Sandra Eisensee, and Drew Haven.
References
[1] K. L. Briggman, H. D. I. Abarbanel, and W. B. Kristan Jr. Science, 307(5711):896–901, Feb. 2005.
[2] K. L. Briggman, H. D. I. Abarbanel, and W. B. Kristan Jr. Curr Opin Neurobiol, 16(2):135–144, 2006.
[3] B. M. Yu, A. Afshar, G. Santhanam, S. I. Ryu, K. V. Shenoy, and M. Sahani. In Y. Weiss, B. Scholkopf, and J. Platt, eds., Adv Neural Info Processing Sys 18, pp. 1545–1552. MIT Press, 2006.
[4] M. M. Churchland, B. M. Yu, M. Sahani, and K. V. Shenoy. Curr Opin Neurobiol, 17(5):609–618, 2007.
[5] A. C. Smith and E. N. Brown. Neural Comput, 15(5):965–991, 2003.
[6] M. Stopfer, V. Jayaraman, and G. Laurent. Neuron, 39:991–1004, Sept. 2003.
[7] S. L. Brown, J. Joseph, and M. Stopfer. Nat Neurosci, 8(11):1568–1576, Nov. 2005.
[8] R. Levi, R. Varona, Y. I. Arshavsky, M. I. Rabinovich, and A. I. Selverston. J Neurosci, 25(42):9807–9815, Oct. 2005.
[9] O. Mazor and G. Laurent. Neuron, 48:661–673, Nov. 2005.
[10] B. M. Broome, V. Jayaraman, and G. Laurent. Neuron, 51:467–482, Aug. 2006.
[11] M. A. L. Nicolelis, L. A. Baccala, R. C. S. Lin, and J. K. Chapin. Science, 268(5215):1353–1358, 1995.
[12] I. DiMatteo, C. R. Genovese, and R. E. Kass. Biometrika, 88(4):1055–1071, 2001.
[13] J. P. Cunningham, B. M. Yu, K. V. Shenoy, and M. Sahani. In J. Platt, D. Koller, Y. Singer, and S. Roweis, eds., Adv Neural Info Processing Sys 20. MIT Press, 2008.
[14] S. T. Roweis and L. K. Saul. Science, 290(5500):2323–2326, Dec. 2000.
[15] S. Roweis and Z. Ghahramani. Neural Comput, 11(2):305–345, 1999.
[16] N. A. Thacker and P. A. Bromiley. The effects of a square root transform on a Poisson distributed quantity. Technical Report 2001-010, University of Manchester, 2001.
[17] D. J. Tolhurst, J. A. Movshon, and A. F. Dean. Vision Res, 23(8):775–785, 1983.
[18] C. E. Rasmussen and C. K. I. Williams. Gaussian processes for machine learning. MIT Press, 2006.
[19] Y. W. Teh, M. Seeger, and M. I. Jordan. In R. G. Cowell and Z. Ghahramani, eds., Proceedings of the Tenth International Workshop on Artificial Intelligence and Statistics (AISTATS). Society for Artificial Intelligence and Statistics, 2005.
[20] N. D. Lawrence and A. J. Moore. In Z. Ghahramani, ed., Proceedings of the 24th Annual International Conference on Machine Learning (ICML 2007), pp. 481–488. Omnipress, 2007.
[21] R. E. Turner and M. Sahani. Neural Comput, 19(4):1022–1038, 2007.
[22] M. M. Churchland, B. M. Yu, S. I. Ryu, G. Santhanam, and K. V. Shenoy. J Neurosci, 26(14):3697–3712, Apr. 2006.
[23] P. Sollich and C. K. I. Williams. In L. K. Saul, Y. Weiss, and L. Bottou, eds., Advances in Neural Information Processing Systems 17, pp. 1313–1320. MIT Press, 2005.
| 3494 |@word trial:21 sgf:1 middle:1 briefly:2 seek:2 rhesus:1 lobe:1 covariance:19 decomposition:1 thereby:1 solid:6 briggman:2 reduction:12 series:6 reaction:1 current:1 comparing:2 ka:1 readily:1 john:1 informative:2 shape:3 motor:5 opin:2 update:1 stationary:7 cue:2 half:1 intelligence:2 sys:2 ith:2 smith:1 regressive:1 characterization:2 provides:2 tolhurst:1 location:1 simpler:1 five:3 mathematical:1 timecourses:1 direct:1 scholkopf:1 yu1:1 fitting:6 behavioral:3 ndsegf:1 introduce:1 jayaraman:2 behavior:1 embody:1 planning:5 nor:1 multi:2 decreasing:1 automatically:1 considering:2 provided:2 estimating:1 underlying:3 chapin:1 panel:5 lowest:1 what:1 evolved:1 neurobiol:2 string:1 monkey:2 developed:1 selverston:1 differing:1 finding:2 temporal:1 exactly:1 biometrika:1 uk:2 platt:2 unit:3 appear:1 shenoy:5 t1:7 before:2 engineering:1 understood:2 laurent:3 firing:5 fluctuation:1 approximately:1 black:10 chose:1 initialization:2 studied:1 ease:1 range:1 averaged:1 practical:1 acknowledgment:1 yj:2 implement:1 orthonormalization:2 procedure:2 area:1 maneesh:2 evolving:1 projection:1 word:2 pre:2 melissa:1 convenience:1 cannot:1 selection:1 context:2 applying:4 influence:1 equivalent:2 map:1 dean:1 center:1 straightforward:2 regardless:1 go:2 duration:1 williams:2 insight:1 rule:2 array:1 orthonormal:2 population:8 embedding:1 notion:1 classic:1 exploratory:2 analogous:1 updated:1 limiting:1 target:13 play:2 gps:1 distinguishing:1 element:3 trend:1 velocity:1 particularly:2 observed:1 role:1 bottom:2 electrical:1 capture:2 adv:2 ordering:1 movement:5 removed:1 principled:2 rq:6 byronyu:1 mentioned:1 dynamic:5 compromise:1 predictive:1 churchland:3 basis:2 resolved:1 train:4 describe:2 london:1 effective:4 artificial:2 hyper:1 whose:5 premotor:3 stanford:3 larger:3 richer:2 denser:1 apparent:1 triangular:1 statistic:2 commit:1 timescale:6 transform:3 noisy:3 jointly:5 gp:13 hoc:1 advantage:2 ucl:2 lowdimensional:2 relevant:1 aligned:2 achieve:1 roweis:3 description:2 intuitive:1 manchester:1 electrode:1 r1:1 leave:2 executing:1 help:1 ac:1 expectationmaximization:1 aug:1 predicted:2 involves:1 indicate:2 differ:1 direction:6 closely:1 drawback:1 vc:3 char:1 settle:1 bin:2 require:1 sandra:1 fix:1 timepoint:6 extension:4 strictly:1 mm:3 around:2 considered:4 exp:1 lawrence:1 mapping:1 predict:1 major:1 vary:3 adopt:1 applicable:3 label:3 currently:1 individually:1 grouped:1 orthonormalized:19 neurosurgery:1 mit:4 sensor:1 gaussian:24 gopal:1 rather:5 validated:1 notational:1 likelihood:8 indicates:1 arshavsky:1 contrast:4 seeger:1 kristan:2 dependent:1 typically:4 entire:1 cunningham:1 koller:1 going:2 issue:3 flexible:1 denoted:1 retaining:1 plan:1 animal:1 smoothing:22 uc:6 marginal:1 once:1 construct:1 extraction:1 having:1 identical:2 represents:3 yu:4 icml:1 nearly:1 genovese:1 future:1 t2:7 stimulus:1 report:1 haven:1 randomly:2 simultaneously:4 preserve:2 individual:1 delayed:1 attempt:3 curr:2 possibility:1 investigate:1 analyzed:1 mixture:2 wc1n:1 indexed:3 iv:2 initialized:2 re:1 plotted:3 mazor:1 fronto:1 varona:1 column:6 markovian:1 ar:14 goodness:6 rabinovich:1 tractability:3 introducing:1 addressing:1 entry:1 deviation:1 hundred:2 comprised:1 delay:4 examining:2 thacker:1 seoulman:1 international:2 seamless:2 dimatteo:1 probabilistic:5 together:2 broome:1 squared:2 again:1 recorded:24 choose:1 possibly:2 dr:1 cognitive:1 external:1 convolving:1 abarbanel:2 account:2 star:1 bromiley:1 stabilize:2 explicitly:2 sloan:1 ad:1 onset:6 performed:3 root:3 view:2 closed:1 try:1 
linked:1 red:5 denoised:1 parallel:1 vivo:1 square:10 afshar:1 variance:16 largely:2 characteristic:3 yield:1 identify:1 identification:3 unifies:2 raw:1 bayesian:1 none:1 trajectory:32 monitoring:1 drive:1 confirmed:1 simultaneous:1 reach:14 ed:5 definition:3 pp:3 involved:1 thereof:1 static:3 couple:1 ppca:13 dataset:1 ask:2 knowledge:1 dimensionality:22 subtle:1 uncover:1 reflecting:1 back:2 higher:2 response:3 improved:1 specify:1 wei:2 arranged:1 shrink:1 generality:1 furthermore:5 governing:2 stage:17 correlation:2 horizontal:1 replacing:1 overlapping:2 lack:1 cunningham1:1 grows:1 effect:2 contain:1 ranged:1 brown:2 evolution:1 analytically:1 moore:1 gopals:1 i2:3 attractive:1 during:5 width:10 rooted:7 whereby:1 m:21 performs:1 reflection:1 omnipress:1 ranging:2 novel:1 nih:1 common:4 spiking:6 extend:2 ai:2 smoothness:10 unconstrained:1 similarly:1 analyzer:1 dot:3 specification:4 stable:2 cortex:3 behaving:1 longer:2 feb:1 posterior:1 own:1 recent:1 onr:1 krishna:1 minimum:2 greater:1 seen:1 period:5 tempting:1 dashed:4 signal:1 stephen:1 multiple:2 ii:1 relates:2 smooth:8 technical:1 characterized:1 believed:2 cross:2 long:1 offer:1 lin:1 post:1 prediction:20 vision:1 metric:8 poisson:6 kernel:22 adopting:1 sponding:1 dec:1 justified:1 whereas:1 semiparametric:1 preserved:1 addition:1 decreased:2 bwf:1 singular:1 posse:1 recording:2 tend:1 byron:1 facilitates:1 spirit:1 jordan:1 extracting:8 leverage:2 iii:3 enough:1 easy:1 fit:10 identified:1 cyberkinetics:1 idea:1 whether:2 pca:13 movshon:1 generally:2 detailed:1 involve:1 se:6 amount:1 ten:2 locally:1 reduced:4 specifies:1 exist:1 problematic:1 stabilized:1 nsf:1 neuroscience:2 per:1 blue:3 santhanam:2 group:1 redundancy:1 levi:1 traced:1 clarity:2 neither:1 tenth:1 sum:1 convert:1 powerful:1 shenoy1:1 reasonable:1 decision:1 ninds:1 comparable:2 entirely:1 ki:7 capturing:1 fold:1 yielded:3 activity:30 annual:1 constrain:1 awake:1 dominated:1 nearby:1 aspect:1 performing:1 attempting:1 relatively:1 department:2 peripheral:1 jr:2 across:14 describes:2 em:1 smaller:1 sollich:1 joseph:1 evolves:6 making:1 explained:3 taken:2 computationally:1 visualization:3 remains:1 describing:1 count:13 singer:1 letting:1 end:1 adopted:2 operation:5 rewritten:1 apply:5 appropriate:6 a2i:1 shortly:1 rp:3 assumes:3 denotes:1 top:8 whitaker:1 ghahramani:3 especially:3 society:1 r01:1 sweep:1 quantity:1 spike:10 fa:18 parametric:3 rt:1 primary:1 diagonal:2 unclear:2 exhibit:2 gradient:1 subspace:1 distance:3 separate:3 thank:1 seven:1 manifold:2 collected:2 unstable:1 toward:1 reason:3 g20040123:1 length:1 modeled:1 relationship:3 index:1 polarity:2 innovation:1 equivalently:1 setup:1 relate:1 info:2 trace:4 unknown:1 allowing:2 upper:1 teh:1 neuron:32 observation:4 howard:1 defining:3 variability:1 dc:2 orthonormalize:1 smoothed:2 arbitrary:1 namely:1 required:1 specified:2 optimized:2 timecourse:1 learned:11 ryu:2 macaque:2 address:1 able:1 beyond:1 dynamical:9 usually:3 mismatch:1 departure:1 summarize:1 program:1 built:1 green:5 greatest:1 critical:3 demanding:1 difficulty:1 treated:1 power:1 nicolelis:1 advanced:1 nth:2 arm:1 baccala:1 altered:1 turner:1 numerous:1 picture:1 axis:1 extract:6 auto:1 coupled:1 sept:1 sahani:4 deviate:1 prior:2 discovery:1 evolve:1 loss:1 analogy:1 versus:1 validation:1 degree:9 consistent:2 row:6 elsewhere:1 supported:1 free:1 rasmussen:1 jth:5 allow:5 lle:3 explaining:1 saul:2 taking:1 distributed:1 curve:2 dimension:25 cortical:1 equated:1 collection:1 commonly:1 twostage:1 qualitatively:1 coincide:1 far:1 
nov:2 compact:1 implicitly:1 stopfer:2 sequentially:2 active:1 investigating:1 assumed:1 xi:5 latent:3 why:1 learn:1 robust:1 ca:1 bottou:1 aistats:1 apr:1 timescales:16 main:1 neurosci:3 motivation:1 noise:19 profile:3 facilitating:1 augmented:1 fig:9 referred:3 crcns:1 screen:1 gatsby:3 slow:1 position:1 comprises:1 explicit:3 timepoints:7 exponential:1 wish:1 lie:1 comput:3 removing:1 repeatable:1 workshop:1 drew:1 execution:4 nat:1 gap:1 smoothly:2 cx:2 simply:2 faraway:1 visual:1 desire:1 expressed:2 ordered:2 nominally:1 cowell:1 gpfa:50 corresponds:3 extracted:2 ma:1 oct:1 goal:1 viewed:2 narrower:1 replace:1 change:2 included:1 operates:1 averaging:2 principal:2 experimental:4 select:2 highdimensional:1 internal:1 mark:1 latter:1 dorsal:1 preparation:1 evaluate:1 |
2,751 | 3,495 | Weighted Sums of Random Kitchen Sinks: Replacing
minimization with randomization in learning
Paper #858
Abstract
Randomized neural networks are immortalized in this AI Koan:
In the days when Sussman was a novice, Minsky once came to him as he sat
hacking at the PDP-6.
"What are you doing?" asked Minsky. "I am training a randomly wired neural net to play tic-tac-toe," Sussman replied. "Why is the net wired randomly?" asked Minsky. Sussman replied, "I do not want it to have any preconceptions of how to play."
Minsky then shut his eyes. "Why do you close your eyes?" Sussman asked his teacher. "So that the room will be empty," replied Minsky. At that moment, Sussman was enlightened.
We analyze shallow random networks with the help of concentration of measure
inequalities. Specifically, we consider architectures that compute a weighted sum
of their inputs after passing them through a bank of arbitrary randomized nonlinearities. We identify conditions under which these networks exhibit good classification performance, and bound their test error in terms of the size of the dataset
and the number of random nonlinearities.
1 Introduction
In the earliest days of artificial intelligence, the bottom-most layer of neural networks consisted of randomly connected "associator units" that computed random binary functions of their inputs [1]. These randomized shallow networks have largely been superseded by optimally, or nearly optimally, tuned shallow architectures such as weighted sums of positive definite kernels (as in Support Vector Machines), or weighted sums of weak classifiers (as in Adaboost). But recently,
architectures that randomly transform their inputs have been resurfacing in the machine learning
community [2, 3, 4, 5], largely motivated by the fact that randomization is computationally cheaper
than optimization. With the help of concentration of measure inequalities on function spaces, we
show that training a shallow architecture by randomly choosing the nonlinearities in the first layer
results in a classifier that is not much worse than one constructed by optimally tuning the nonlinearities. The main technical contributions of the paper are an approximation error bound (Lemma
1), and a synthesis of known techniques from learning theory to analyze random shallow networks.
Consider the problem of fitting a function f : X → R to a training data set of m input-output pairs {x_i, y_i}_{i=1...m}, drawn iid from some unknown distribution P(x, y), with x_i ∈ X and y_i = ±1. The fitting problem consists of finding an f that minimizes the empirical risk
R_emp[f] ≜ (1/m) Σ_{i=1}^{m} c(f(x_i), y_i).   (1)
The loss c(y, y′) penalizes the deviation between the prediction f(x) and the label y. Popular choices for c are the hinge loss, max(0, 1 − yy′), used in the Support Vector Machine [6], the exponential loss, e^{−yy′}, used in Adaboost [7, 8], and the quadratic loss, (y − y′)², used in matching pursuit [9] and regularized least squares classification [10].
Similarly to kernel machines and Adaboost, we will consider functions of the form f(x) = Σ_{i=1}^{∞} α(w_i)φ(x; w_i) or f(x) = ∫_Ω α(w)φ(x; w) dw, where feature functions φ : X × Ω → R, parameterized by some vector w ∈ Ω, are weighted by a function α : Ω → R. In kernel machines, the feature functions φ are the eigenfunctions of a positive definite kernel k, and in Adaboost they
are typically decision trees or stumps. Adaboost [8, 7] and matching pursuit [11, 9] find approximate
empirical risk minimizer over this class of functions by greedily minimizing over a finite number of
scalar weights α and parameter vectors w jointly:
minimize_{w_1,...,w_K ∈ Ω; α ∈ A}  R_emp[ Σ_{k=1}^{K} φ(x; w_k) α_k ].   (2)
But it is also possible to randomize over w and minimize over α. Rather than jointly optimizing over α and w, the following algorithm first draws the parameters of the nonlinearities randomly from a pre-specified distribution p. Then with w fixed, it fits the weights α optimally via a simple convex optimization:
Algorithm 1 The Weighted Sum of Random Kitchen Sinks fitting procedure.
Input: A dataset {x_i, y_i}_{i=1...m} of m points, a bounded feature function |φ(x; w)| ≤ 1, an integer K, a scalar C, and a probability distribution p(w) on the parameters of φ.
Output: A function f̂(x) = Σ_{k=1}^{K} φ(x; w_k) α_k.
Draw w_1, . . . , w_K iid from p.
Featurize the input: z_i ← [φ(x_i; w_1), . . . , φ(x_i; w_K)]ᵀ.
With w fixed, solve the empirical risk minimization problem
minimize_{α ∈ R^K} (1/m) Σ_{i=1}^{m} c(αᵀ z_i, y_i)   (3)
s.t. ‖α‖_∞ ≤ C/K.   (4)
In practice, we let C be large enough that the constraint (4) remains inactive. When c is the quadratic loss, the minimization (3) is simple linear least squares, and when c is the hinge loss, it amounts to fitting a linear SVM to a dataset of m K-dimensional feature vectors.
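For concreteness, here is a minimal sketch of Algorithm 1 with the quadratic loss, so that step (3) is linear least squares. It is ours, not the authors' implementation: `sample_w` and `phi` are placeholders for the user's p and φ, and the tiny ridge term is our addition for numerical stability, not part of the algorithm.

```python
import numpy as np

def random_kitchen_sinks(X, y, K, sample_w, phi, ridge=1e-8):
    """Fit a weighted sum of K randomly parameterized nonlinearities.
    sample_w() draws one parameter vector w from p; phi(X, w) returns
    the length-m vector of feature values on the rows of X."""
    ws = [sample_w() for _ in range(K)]
    Z = np.column_stack([phi(X, w) for w in ws])       # featurized inputs
    alpha = np.linalg.solve(Z.T @ Z + ridge * np.eye(K), Z.T @ y)

    def predict(X_new):
        Z_new = np.column_stack([phi(X_new, w) for w in ws])
        return np.sign(Z_new @ alpha)                   # label in {-1, +1}

    return predict
```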
Randomly setting the nonlinearities is appealing for several reasons. First, the fitting procedure
is simple: Algorithm 1 can be implemented in a few lines of MATLAB code even for complex
feature functions φ, whereas fitting nonlinearities with Adaboost requires much more care. This flexibility allows practitioners to experiment with a wide variety of nonlinear feature functions without
first having to devise fitting procedures for them. Second, the algorithm is fast: experiments show
between one and three orders of magnitude speedup over Adaboost. On the down side, one might
expect to have to tune the sampling distribution p for each dataset. But in practice, we find that to
obtain accuracies that are competitive with Adaboost, the same sampling distribution can be used
for all the datasets we considered if the coordinates of the data are first zero-meaned and rescaled to
unit variance.
Formally, we show that Algorithm 1 returns a function that has low true risk. The true risk of a
function f is
R[f] ≜ E_{(x,y)∼P} c(f(x), y),   (5)
and measures the expected loss of f on as-yet-unseen test points, assuming these test points are
generated from the same distribution that generated the training data. The following theorem states
that with very high probability, Algorithm 1 returns a function whose true risk is near the lowest true
risk attainable by functions in the class Fp defined below:
Theorem 1 (Main result). Let p be a distribution on Ω, and let φ satisfy sup_{x,w} |φ(x; w)| ≤ 1. Define the set
F_p ≜ { f(x) = ∫_Ω α(w)φ(x; w) dw : |α(w)| ≤ C p(w) }.   (6)
Suppose c(y, y′) = c(yy′), with c(yy′) L-Lipschitz. Then for any δ > 0, if the training data {x_i, y_i}_{i=1...m} are drawn iid from some distribution P, Algorithm 1 returns a function f̂ that satisfies
R[f̂] − min_{f ∈ F_p} R[f] ≤ O( (1/√m + 1/√K) LC √(log(1/δ)) )   (7)
with probability at least 1 − 2δ over the training dataset and the choice of the parameters w_1, . . . , w_K.
Note that the dependence on δ in the bound is logarithmic, so even small δ's do not cause the bound to blow up. The set F_p is a rich class of functions. It consists of functions whose weights α(w) decay more rapidly than the given sampling distribution p. For example, when φ(x; w) are sinusoids with frequency w, F_p is the set of all functions whose Fourier transforms decay faster than C p(w).
We prove the theorem in the next section, and demonstrate the algorithm on some sample datasets in
Section 4. The proof of the theorem provides explicit values for the constants in the big O notation.
2 Proof of the Main Theorem
Algorithm 1 returns a function that lies in the random set
F̂_w ≜ { f(x) = Σ_{k=1}^{K} α_k φ(x; w_k) : |α_k| ≤ C/K }.   (8)
The bound in the main theorem can be decomposed in a standard way into two bounds:
1. An approximation error bound that shows that the lowest true risk attainable by a function in F̂_w is not much larger than the lowest true risk attainable in F_p (Lemma 2).
2. An estimation error bound that shows that the true risk of every function in F̂_w is close to its empirical risk (Lemma 3).
The following Lemma is helpful in bounding the approximation error:
Lemma 1. Let μ be a measure on X, and f* a function in F_p. If w_1, . . . , w_K are drawn iid from p, then for any δ > 0, with probability at least 1 − δ over w_1, . . . , w_K, there exists a function f̂ ∈ F̂_w so that
√( ∫_X (f̂(x) − f*(x))² dμ(x) ) ≤ (C/√K) (1 + √(2 log(1/δ))).   (9)
The proof relies on Lemma 4 of the Appendix, which states that the average of bounded vectors in
a Hilbert space concentrates towards its expectation in the Hilbert norm exponentially fast.
Proof. Since f* ∈ F_p, we can write f*(x) = ∫_Ω α(w)φ(x; w) dw. Construct the functions f_k = β_k φ(·; w_k), k = 1 . . . K, with β_k ≜ α(w_k)/p(w_k), so that E f_k = f*. Let f̂(x) = Σ_{k=1}^{K} (β_k/K) φ(x; w_k) be the sample average of these functions. Then f̂ ∈ F̂_w because |β_k/K| ≤ C/K. Also, under the inner product ⟨f, g⟩ = ∫ f(x)g(x) dμ(x), we have ‖β_k φ(·; w_k)‖ ≤ C. The Lemma follows by applying Lemma 4 to f_1, . . . , f_K under this inner product.
Lemma 2 (Bound on the approximation error). Suppose c(y, y′) is L-Lipschitz in its first argument. Let f* be a fixed function in F_p. If w_1, . . . , w_K are drawn iid from p, then for any δ > 0, with probability at least 1 − δ over w_1, . . . , w_K, there exists a function f̂ ∈ F̂_w that satisfies
R[f̂] ≤ R[f*] + (LC/√K) (1 + √(2 log(1/δ))).   (10)
Proof. For any two functions f and g, the Lipschitz condition on c followed by the concavity of square root gives
R[f] − R[g] = E[c(f(x), y) − c(g(x), y)] ≤ E |c(f(x), y) − c(g(x), y)|   (11)
≤ L E |f(x) − g(x)| ≤ L √(E (f(x) − g(x))²).   (12)
The lemma then follows from Lemma 1.
Next, we rely on a standard result from statistical learning theory to show that for a given choice of w_1, . . . , w_K the empirical risk of every function in F̂_w is close to its true risk.
Lemma 3 (Bound on the estimation error). Suppose c(y, y′) = c(yy′), with c(yy′) L-Lipschitz. Let w_1, . . . , w_K be fixed. If {x_i, y_i}_{i=1...m} are drawn iid from a fixed distribution, for any δ > 0, with probability at least 1 − δ over the dataset, we have
∀f ∈ F̂_w:  |R[f] − R_emp[f]| ≤ (1/√m) (4LC + 2|c(0)| + LC √((1/2) log(1/δ))).   (13)
Proof sketch. By Hölder, the functions in F̂_w are bounded above by C. The Rademacher complexity of F̂_w can be shown to be bounded above by C/√m (see the Appendix). The theorem follows by results from [12] which are summarized in Theorem 2 of the Appendix.
Proof of Theorem 1. Let f* be a minimizer of R over F_p, f̂ a minimizer of R_emp over F̂_w (the output of the algorithm), and f̂* a minimizer of R over F̂_w. Then
R[f̂] − R[f*] = R[f̂] − R[f̂*] + R[f̂*] − R[f*]   (14)
≤ |R[f̂] − R[f̂*]| + R[f̂*] − R[f*].   (15)
The first term in the right side is an estimation error: By Lemma 3, with probability at least 1 − δ, |R[f̂*] − R_emp[f̂*]| ≤ ε_est and simultaneously, |R[f̂] − R_emp[f̂]| ≤ ε_est, where ε_est is the right side of the bound in Lemma 3. By the optimality of f̂, R_emp[f̂] ≤ R_emp[f̂*]. Combining these facts gives that with probability at least 1 − δ,
|R[f̂] − R[f̂*]| ≤ 2ε_est = (2/√m) (4LC + 2|c(0)| + LC √((1/2) log(1/δ))).
The second term in Equation (15) is the approximation error, and by Lemma 2, with probability at least 1 − δ, it is bounded above by ε_app = (LC/√K) (1 + √(2 log(1/δ))).
By the union bound, with probability at least 1 − 2δ, the right side of Equation (15) is bounded above by 2ε_est + ε_app.
3 Related Work
Greedy algorithms for fitting networks of the form (2) have been analyzed, for example, in [7, 11, 9].
Zhang analyzed greedy algorithms and a randomized algorithm similar to Algorithm 1 for fitting
sparse Gaussian processes to data, a more narrow setting than we consider here. He obtained bounds
on the expected error for this sparse approximation problem by viewing these methods as stochastic
gradient descent.
Approximation error bounds such as that of Maurey [11][Lemma 1], Girosi [13] and Gnecco and
Sanguineti [14] rely on random sampling to guarantee the existence of good parameters w1 , . . . , wk ,
but they require access to the representation of f* to actually produce these parameters. These approximation bounds cannot be used to guarantee the performance of Algorithm 1 because Algorithm 1 is oblivious of the data when it generates the parameters. Lemma 2 differs from these bounds in that it relies on f* only to generate the weights β_1, . . . , β_K, but it remains oblivious to f* when generating the parameters by sampling them from p instead. Furthermore, because F̂_w is smaller
than the classes considered by [11, 14], the approximation error rate in Lemma 1 matches those of
existing approximation error bounds.
[Figure 1 plots: % error and training+testing time (seconds) versus # weak learners (K), and time versus % error, for Adaboost and RKS on three datasets; see caption below.]
Figure 1: Comparisons between Random Kitchen Sinks and Adaboosted decision stumps on adult
(first row), activity (second row), and KDDCUP99 (third row). The first column plots test error
of each classifier as a function of K. The accuracy of Random Kitchen Sinks catches up to that of
Adaboost as K grows. The second column plots the total training and testing time as a function
of K. For a given K, Random Kitchen Sinks is between two and three orders of magnitude faster
than Adaboost. The third column combines the previous two columns. It plots testing+training time
required to achieve a desired error rate. For a given error rate, Random Kitchen Sinks is between
one and three orders of magnitude faster than Adaboost.
4 Experiments
Since others have already empirically demonstrated the benefits of random featurization [2, 3, 4, 5], we only present a few illustrations in this section.
We compared Random Kitchen Sinks with Adaboost on three classification problems: The adult dataset has roughly 32,000 training instances. Each categorical variable was replaced by a binary indicator variable over the categories, resulting in 123 dimensions per instance. The test set consists of 15,000 instances. KDDCUP99 is a network intrusion detection problem with roughly 5,000,000 127-dimensional training instances, subsampled to 50,000 instances. The test set consists of 150,000 instances. activity is a human activity recognition dataset with 200,000 223-dimensional instances, of which about 200 are irrelevant for classification. The test set consists of 50,000 instances. The datasets were preprocessed by zero-meaning and rescaling each dimension to unit variance. The
feature functions in these experiments were decision stumps φ(x; w) = sign(x_{w_d} − w_t), which simply determine whether the w_d-th dimension of x is smaller or greater than the threshold w_t. The sampling distribution p for Random Kitchen Sinks drew the threshold parameter w_t from a normal distribution and the coordinate w_d from a uniform distribution over the coordinates. For some experiments, we could afford to run Random Kitchen Sinks for larger K than Adaboost, and these runs
are included in the plots. We used the quadratic loss, but find no substantial differences in quality
under the hinge loss (though there is degradation in speed by a factor of 2-10). We used MATLAB
optimized versions of Adaboost and Random Kitchen Sinks, and report wall clock time in seconds.
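Under our reading of this setup, the stump featurization and its sampling distribution look as follows; a hypothetical sketch that plugs into the fitting routine given after Algorithm 1:

```python
import numpy as np

def make_stump_sampler(d):
    """p(w): coordinate uniform over the d dimensions, threshold from a
    standard normal (data assumed zero-meaned with unit variance)."""
    return lambda: (np.random.randint(d), np.random.randn())

def stump(X, w):
    """phi(x; w) = sign(x_{w_d} - w_t), evaluated on all rows of X."""
    w_d, w_t = w
    return np.sign(X[:, w_d] - w_t)
```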
Figure 1 compares the results on these datasets. Adaboost expends considerable effort in choosing
the decision stumps and achieves good test accuracy with a few of them. Random Kitchen Sinks
Figure 2: The L_∞ norm of α returned by RKS for 500 different runs of RKS with various settings of K on adult. ‖α‖_∞ decays with K, which justifies dropping the constraint (4) in practice.
requires more nonlinearities to achieve similar accuracies. But because it is faster than Adaboost, it
can produce classifiers that are just as accurate as Adaboost's with more nonlinearities in less total
time. In these experiments, Random Kitchen Sinks is almost as accurate as Adaboost but faster by
one to three orders of magnitude.
We defer the details of the following experiments to a technical report: As an alternative to Adaboost,
we have experimented with conjugate gradient-descent based fitting procedures for (2), and find
again that randomly generating the nonlinearities produces equally accurate classifiers using many
more nonlinearities but in much less time. We obtain similar results as those of Figure 1 with the
random features of [4], and random sigmoidal ridge functions φ(x; w) = σ(wᵀx).
To simplify the implementation of Random Kitchen Sinks, we ignore the constraint (4) in practice.
The scalar C controls the size of F̂_w and F_p, and to eliminate the constraint, we implicitly set C to a large value so that the constraint is never tight. However, for the results of this paper to hold, C cannot grow faster than K. Figure 2 shows that the L_∞ norm of the unconstrained optimum of (3) for the adult dataset does decay linearly with K, so that there exists a C that does not grow with
K for which the constraint is never tight, thereby justifying dropping the constraint.
5 Discussion and Conclusions
Various hardness of approximation lower bounds for fixed basis functions exist (see, for example
[11]). The guarantee in Lemma 1 avoids running afoul of these lower bounds because it does not
seek to approximate every function in Fp simultaneously, but rather only the true risk minimizer
with high probability.
It may be surprising that Theorem 1 holds even when the feature functions φ are nearly orthogonal. The result works because the importance sampling constraint |α(w)| ≤ C p(w) ensures that a feature function does not receive a large weight if it is unlikely to be sampled by p. When the feature functions are highly linearly dependent, better bounds can be obtained because any f(x) = ∫ α(w)φ(x; w) dw can be rewritten as f(x) = ∫ α′(w)φ(x; w) dw with |α′|/p ≤ |α|/p, improving the importance ratio C. This intuition can be formalized via the Rademacher complexity of φ, a result which we leave for future work.
One may wonder whether Algorithm 1 has good theoretical guarantees on F_p because F_p is too small a class of functions. Indeed, when φ are the Fourier bases, |α|/p ≤ C implies ∫ |α(w)| dw ≤ C, so every function in F_p has an absolutely integrable Fourier transform. Thus F_p is smaller than the set considered by Jones [9] for greedy matching pursuit, and for which he obtained an approximation rate of O(1/√K). The most reliable way to show that F_p is rich enough for practical applications is to conduct experiments with real data. The experiments show that F_p indeed contains good predictors.
The convergence rate for Adaboost [7] is exponentially fast in K, which at first appears to be much faster than 1/√K. However, the base of the exponent is the minimum weighted margin encountered
by the algorithm through all iterations, a quantity that is difficult to bound a priori. This makes a
direct comparison of the bounds difficult, though we have tried to provide empirical comparisons.
A  Exponentially Fast Concentration of Averages towards the Mean in a Hilbert Space
Lemma 4. Let X = {x_1, . . . , x_K} be iid random variables in a ball H of radius M centered around the origin in a Hilbert space. Denote their average by X̄ = (1/K) Σ_{k=1}^{K} x_k. Then for any δ > 0, with probability at least 1 − δ,
‖X̄ − E X̄‖ ≤ (M/√K) (1 + √(2 log(1/δ))).   (16)
Proof. We use McDiarmid's inequality to show that the scalar function f(X) = ‖X̄ − E_X X̄‖ is concentrated about its mean, which shrinks as O(1/√K).
The function f is stable under perturbation of its ith argument. Let X̃ = {x_1, . . . , x̃_i, . . . , x_K} be a copy of X with the ith element replaced by an arbitrary element of H, and let X̄′ denote its average. Applying the triangle inequality twice gives
|f(X) − f(X̃)| = | ‖X̄ − E X̄‖ − ‖X̄′ − E X̄‖ | ≤ ‖X̄ − X̄′‖ ≤ ‖x_i − x̃_i‖ / K ≤ 2M/K.   (17)
To bound the expectation of f, use the familiar identity about the variance of the average of iid random variables
E ‖X̄ − E X̄‖² = (1/K) (E ‖x‖² − ‖E x‖²),   (18)
in conjunction with Jensen's inequality and the fact that ‖x‖ ≤ M to get
E f(X) ≤ √(E f²(X)) = √( E ‖X̄ − E X̄‖² ) ≤ M/√K.   (19)
This bound for the expectation of f and McDiarmid's inequality give
Pr_X[ f(X) ≥ M/√K + ε ] ≤ Pr_X[ f(X) ≥ E f(X) + ε ] ≤ exp( −Kε² / (2M²) ).   (20)
To get the final result, set δ equal to the right hand side, solve for ε, and rearrange.
B  Generalization bounds that use Rademacher complexity
One measure of the size of a class F of functions is its Rademacher complexity:
R_m[F] ≜ E_{x_1,...,x_m, σ_1,...,σ_m} [ sup_{f ∈ F} (1/m) | Σ_{i=1}^{m} σ_i f(x_i) | ],
where the variables σ_1, . . . , σ_m are iid Bernoulli random variables that take on the value −1 or +1 with equal probability and are independent of x_1, . . . , x_m.
The Rademacher complexity of F̂_w can be bounded as follows. Define S ≜ {α ∈ R^K : ‖α‖_∞ ≤ C/K}:
R_m[F̂_w] = E_{σ,X} sup_{α ∈ S} (1/m) | Σ_{i=1}^{m} σ_i Σ_{k=1}^{K} α_k φ(x_i; w_k) | = E_{σ,X} sup_{α ∈ S} | Σ_{k=1}^{K} α_k (1/m) Σ_{i=1}^{m} σ_i φ(x_i; w_k) |   (21)
≤ (C/K) E_{σ,X} Σ_{k=1}^{K} | (1/m) Σ_{i=1}^{m} σ_i φ(x_i; w_k) | ≤ (C/K) Σ_{k=1}^{K} √( E ( (1/m) Σ_{i=1}^{m} σ_i φ(x_i; w_k) )² )   (22)
= (C/K) Σ_{k=1}^{K} √( (1/m²) Σ_{i=1}^{m} E σ_i² φ²(x_i; w_k) ) ≤ (C/K) Σ_{k=1}^{K} √(1/m) ≤ C/√m,   (23)
where the first inequality follows by Hölder, the second by the concavity of square root, the third by the fact that conditioned on X, E_σ[σ_i φ(x_i; w_k) σ_j φ(x_j; w_k)] = 0 when i ≠ j, and the fourth follows by the boundedness of φ.
The following theorem is a summary of the results from [12]:
Theorem 2. Let F be a class of bounded functions so that sup_x |f(x)| ≤ C for all f ∈ F, and suppose c(y, y′) = c(yy′), with c(yy′) L-Lipschitz. Then with probability at least 1 − δ with respect to training samples {x_i, y_i}_{i=1}^{m} drawn from a probability distribution P on X × {−1, +1}, every function in F satisfies
R[f] ≤ R_emp[f] + 4L R_m[F] + 2|c(0)|/√m + LC √((1/(2m)) log(1/δ)).   (24)
References
[1] H. D. Block. The perceptron: a model for brain functioning. Review of Modern Physics, 34:123–135, January 1962.
[2] Y. Amit and D. Geman. Shape quantization and recognition with randomized trees. Neural Computation, 9(7):1545–1588, 1997.
[3] F. Moosmann, B. Triggs, and F. Jurie. Randomized clustering forests for building fast and discriminative visual vocabularies. In Advances in Neural Information Processing Systems (NIPS), 2006.
[4] A. Rahimi and B. Recht. Random features for large-scale kernel machines. In Advances in Neural Information Processing Systems (NIPS), 2007.
[5] W. Maass and H. Markram. On the computational power of circuits of spiking neurons. Journal of Computer and System Sciences, 69:593–616, December 2004.
[6] E. Osuna, R. Freund, and F. Girosi. Training support vector machines: an application to face detection. In Computer Vision and Pattern Recognition (CVPR), 1997.
[7] R. E. Schapire. The boosting approach to machine learning: An overview. In D. D. Denison, M. H. Hansen, C. Holmes, B. Mallick, and B. Yu, editors, Nonlinear Estimation and Classification. Springer, 2003.
[8] J. Friedman, T. Hastie, and R. Tibshirani. Additive logistic regression: a statistical view of boosting. Technical report, Dept. of Statistics, Stanford University, 1998.
[9] L. K. Jones. A simple lemma on greedy approximation in Hilbert space and convergence rates for projection pursuit regression and neural network training. The Annals of Statistics, 20(1):608–613, March 1992.
[10] R. Rifkin, G. Yeo, and T. Poggio. Regularized least squares classification. Advances in Learning Theory: Methods, Model and Applications, NATO Science Series III: Computer and Systems Sciences, 190, 2003.
[11] A. R. Barron. Universal approximation bounds for superpositions of a sigmoidal function. IEEE Transactions on Information Theory, 39:930–945, May 1993.
[12] P. L. Bartlett and S. Mendelson. Rademacher and Gaussian complexities: Risk bounds and structural results. Journal of Machine Learning Research (JMLR), 3:463–482, 2002.
[13] F. Girosi. Approximation error bounds that use VC-bounds. In International Conference on Neural Networks, pages 295–302, 1995.
[14] G. Gnecco and M. Sanguineti. Approximation error bounds via Rademacher's complexity. Applied Mathematical Sciences, 2(4):153–176, 2008.
| 3495 |@word version:1 norm:3 triggs:1 seek:1 tried:1 attainable:3 thereby:1 boundedness:1 moment:1 contains:1 series:1 tuned:1 existing:1 wd:2 surprising:1 yet:1 additive:1 shape:1 girosi:3 plot:4 intelligence:1 greedy:4 denison:1 shut:1 xk:8 ith:2 provides:1 boosting:2 sigmoidal:2 mcdiarmid:2 zhang:1 mathematical:1 constructed:1 direct:1 consists:4 prove:1 fitting:10 combine:1 hardness:1 indeed:2 expected:2 roughly:2 brain:1 decomposed:1 bounded:8 notation:1 circuit:1 lowest:3 what:1 tic:1 minimizes:1 finding:1 guarantee:4 every:5 classifier:5 rm:2 control:1 unit:3 positive:2 sanguineti:2 lrm:1 might:1 sussman:5 twice:1 jurie:1 practical:1 testing:8 practice:3 union:1 definite:2 differs:1 block:1 procedure:4 universal:1 empirical:6 matching:3 projection:1 pre:1 get:2 cannot:2 close:3 risk:15 applying:2 demonstrated:1 convex:1 formalized:1 holmes:1 his:2 dw:4 coordinate:2 annals:1 play:2 suppose:4 origin:1 element:2 recognition:3 geman:1 bottom:1 ensures:1 connected:1 rescaled:1 substantial:1 intuition:1 complexity:7 asked:3 tight:2 learner:6 basis:1 sink:13 triangle:1 replied:3 various:2 fast:5 artificial:1 choosing:2 whose:3 larger:2 solve:2 cvpr:1 stanford:1 statistic:2 gi:1 unseen:1 transform:2 jointly:2 final:1 net:2 product:2 bernouli:1 combining:1 rapidly:1 rifkin:1 flexibility:1 achieve:2 convergence:2 empty:1 optimum:1 rademacher:7 wired:2 produce:3 generating:2 leave:1 help:2 implemented:1 implies:1 concentrate:1 radius:1 stochastic:1 vc:1 centered:1 human:1 viewing:1 featurization:1 require:1 f1:1 generalization:1 wall:1 randomization:2 hold:2 around:1 considered:3 normal:1 exp:1 achieves:1 estimation:4 label:1 hansen:1 superposition:1 him:1 weighted:6 minimization:3 immortalized:1 gaussian:2 rather:2 conjunction:1 earliest:1 intrusion:1 greedily:1 am:1 helpful:1 dependent:1 typically:1 eliminate:1 unlikely:1 classification:6 exponent:1 priori:1 equal:1 once:1 construct:1 having:1 never:2 sampling:7 jones:2 yu:1 nearly:2 hacking:1 future:1 others:1 report:3 simplify:1 few:3 oblivious:2 modern:1 randomly:8 simultaneously:2 cheaper:1 subsampled:1 kitchen:13 minsky:5 replaced:2 familiar:1 friedman:1 detection:2 highly:1 analyzed:2 rearrange:1 accurate:3 kddcup99:2 poggio:1 orthogonal:1 tree:2 conduct:1 penalizes:1 desired:1 theoretical:1 instance:8 column:4 deviation:1 uniform:1 predictor:1 wonder:1 too:1 optimally:4 teacher:1 supx:2 kxi:1 recht:1 international:1 randomized:6 physic:1 synthesis:1 w1:11 again:1 worse:1 return:4 rescaling:1 yeo:1 nonlinearities:11 stump:4 blow:1 summarized:1 wk:16 satisfy:1 root:2 view:1 doing:1 analyze:2 sup:3 competitive:1 hf:1 defer:1 contribution:1 minimize:3 square:5 accuracy:4 variance:3 largely:2 identify:1 weak:7 iid:9 app:2 frequency:1 toe:1 proof:8 fuctions:1 sampled:1 dataset:9 popular:1 remp:9 hilbert:5 actually:1 appears:1 day:2 adaboost:29 though:2 shrink:1 furthermore:1 just:1 clock:1 sketch:1 hand:1 replacing:1 nonlinear:2 logistic:1 quality:1 grows:1 building:1 consisted:1 true:9 functioning:1 sinusoid:1 maass:1 ridge:1 demonstrate:1 cp:2 meaning:1 recently:1 spiking:1 empirically:1 overview:1 exponentially:3 he:3 ai:1 tac:1 tuning:1 unconstrained:1 fk:3 similarly:1 access:1 stable:1 practioners:1 base:2 optimizing:1 irrelevant:1 inequality:7 binary:2 came:1 yi:8 devise:1 ofr:1 integrable:1 minimum:1 greater:1 care:1 determine:1 rahimi:1 technical:3 enlightened:1 faster:7 match:1 justifying:1 equally:1 prediction:1 regression:2 vision:1 expectation:3 rks:11 iteration:1 kernel:5 receive:1 whereas:1 want:1 grow:2 featurize:1 
eigenfunctions:1 december:1 integer:1 structural:1 near:1 iii:1 enough:2 variety:1 xj:1 fit:1 zi:2 architecture:4 hastie:1 inner:2 inactive:1 whether:2 motivated:1 bartlett:1 effort:1 returned:1 passing:1 cause:1 afford:1 matlab:2 tune:1 amount:1 transforms:1 concentrated:1 category:1 generate:1 schapire:1 exist:1 andr:1 sign:1 per:1 yy:8 tibshirani:1 write:1 dropping:2 threshold:2 drawn:6 preprocessed:1 sum:5 run:3 parameterized:1 you:2 fourth:1 almost:1 draw:2 decision:4 appendix:3 bound:30 layer:2 followed:1 quadratic:3 encountered:1 activity:3 constraint:8 your:1 generates:1 fourier:2 speed:1 argument:2 min:1 optimality:1 speedup:1 ball:1 march:1 conjugate:1 smaller:3 osuna:1 wi:1 appealing:1 shallow:5 pr:2 computationally:1 equation:2 remains:2 moosmann:1 pursuit:4 rewritten:1 barron:1 alternative:1 existence:1 running:1 clustering:1 hinge:3 amit:1 already:1 quantity:1 randomize:1 concentration:3 dependence:1 exhibit:1 gradient:2 w0:1 reason:1 assuming:1 code:1 illustration:1 ratio:1 minimizing:1 difficult:2 implementation:1 unknown:1 neuron:1 datasets:4 finite:1 descent:2 january:1 pdp:1 perturbation:1 arbitrary:2 community:1 pair:1 required:1 optimized:1 narrow:1 nip:2 adult:4 below:1 pattern:1 xm:2 fp:18 max:1 reliable:1 power:1 mallick:1 rely:2 regularized:2 indicator:1 older:2 eye:2 categorical:1 catch:1 review:1 freund:1 loss:9 expect:1 maurey:1 editor:1 bank:1 row:3 summary:1 copy:1 side:5 perceptron:1 wide:1 face:1 markram:1 sparse:2 benefit:1 dimension:3 vocabulary:1 avoids:1 rich:2 concavity:2 novice:1 transaction:1 approximate:2 ignore:1 implicitly:1 nato:1 sz:1 sat:1 xi:16 discriminative:1 why:2 associator:1 improving:1 forest:1 complex:1 pk:3 main:4 linearly:2 big:1 bounding:1 x1:4 xu:1 lc:8 explicit:1 exponential:1 lie:1 jmlr:1 third:3 rk:2 down:1 theorem:13 jensen:1 decay:4 svm:1 experimented:1 exists:3 mendelson:1 quantization:1 drew:1 importance:2 magnitude:4 justifies:1 conditioned:1 margin:1 kx:3 logarithmic:1 simply:1 visual:1 kxk:2 scalar:4 springer:1 expends:1 minimizer:5 satisfies:3 relies:2 identity:1 towards:2 room:1 lipschitz:5 considerable:1 fw:3 included:1 specifically:1 wt:3 lemma:20 degradation:1 total:2 est:5 formally:1 support:3 absolutely:1 dept:1 ex:1 |
2,752 | 3,496 | Influence of graph construction on graph-based
clustering measures
Markus Maier
Ulrike von Luxburg
Max Planck Institute for Biological Cybernetics, Tübingen, Germany
Matthias Hein
Saarland University, Saarbrücken, Germany
Abstract
Graph clustering methods such as spectral clustering are defined for general
weighted graphs. In machine learning, however, data often is not given in form
of a graph, but in terms of similarity (or distance) values between points. In this
case, first a neighborhood graph is constructed using the similarities between the
points and then a graph clustering algorithm is applied to this graph. In this paper we investigate the influence of the construction of the similarity graph on
the clustering results. We first study the convergence of graph clustering criteria such as the normalized cut (Ncut) as the sample size tends to infinity. We
find that the limit expressions are different for different types of graph, for example the r-neighborhood graph or the k-nearest neighbor graph. In plain words:
Ncut on a kNN graph does something systematically different than Ncut on an
r-neighborhood graph! This finding shows that graph clustering criteria cannot be
studied independently of the kind of graph they are applied to. We also provide
examples which show that these differences can be observed for toy and real data
already for rather small sample sizes.
1 Introduction
In many areas of machine learning such as clustering, dimensionality reduction, or semi-supervised
learning, neighborhood graphs are used to model local relationships between data points and to build
global structure from local information. The easiest and most popular neighborhood graphs are the
r-neighborhood graph, in which every point is connected to all other points within a distance of r,
and the k-nearest neighbor (kNN) graph, in which every point is connected to the k closest neighboring points. When applying graph based machine learning methods to given sets of data points,
there are several choices to be made: the type of the graph to construct (e.g., r-neighborhood graph
or kNN graph), and the connectivity parameter (r or k, respectively). However, the question how
these choices should be made has received only little attention in the literature. This is not so severe
in the domain of supervised learning, where parameters can be set using cross-validation. However,
it poses a serious problem in unsupervised learning. While different researchers use different heuristics and their "gut feeling" to set these parameters, neither have systematic empirical studies been conducted (for example: how sensitive are the results to the graph parameters?), nor do theoretical
results exist which lead to well-justified heuristics. Our goal in this paper is to address the theoretical
side of this question in the context of graph based clustering.
In this work, we consider clustering in a statistical setting: we assume that a finite set of data
points has been sampled from some underlying distribution. Ultimately, what we want to find is a
good clustering of the underlying data space. We assume that the quality of a clustering is defined
by some clustering objective function. In this paper we focus on the case of the normalized cut
objective function Ncut (Shi and Malik, 2000) and on the question if and how the results of graph
based clustering algorithms are affected by the graph type and the parameters that are chosen for the
construction of the neighborhood graph.
To this end, we first want to study the convergence of the clustering criterion (Ncut) on different
kinds of graphs (kNN graph and r-neighborhood graph), as the sample size tends to infinity. To our
own surprise, when studying this convergence it turned out that, depending on the type of graph,
the normalized cut converges to different limit values! That is, the (suitably normalized) values of
Ncut tend to a different limit functional, depending on whether we use the r-neighborhood graph or
the kNN graph on the finite sample. Intuitively, what happens is as follows: On any given graph,
the normalized cut is one unique, well-defined mathematical expression. But of course, given a
fixed partition of a sample of points, this Ncut value is different for different graphs constructed
on the sample (different graph constructions put different numbers of edges between points, which
leads to different Ncut values). It can now be shown that even after appropriate rescaling, such
differences remain visible in the limit for the sample size tending to infinity. For example, we will
see that depending on the type of graph, the limit criterion integrates over different powers of the
density. This can lead to the effect that the minimizer of Ncut on the kNN graph is different from
the minimizer of Ncut on the r-graph.
This means that ultimately, the question about the "best Ncut" clustering, given an infinite amount of data, has different answers, depending on which underlying graph we use! This observation opens Pandora's box on clustering criteria: the "meaning" of a clustering criterion does not only depend on the exact definition of the criterion itself, but also on how the graph on the finite sample is constructed. In the case of Ncut this means that Ncut is not just "one well-defined criterion", but
it corresponds to a whole bunch of criteria, which differ depending on the underlying graph. More
sloppy: Ncut on a kNN graph does something different than Ncut on an r-neighborhood graph!
The first part of our paper is devoted to the mathematical derivation of our results. We investigate
how and under which conditions the Ncut criterion converges on the different graphs, and what
the corresponding limit expressions are. The second part of our paper shows that these findings
are not only of theoretical interest, but that they also influence concrete algorithms such as spectral
clustering in practice. We give examples of well-clustered distributions (mixtures of Gaussians),
where the optimal limit cut on the kNN graph is different from the one on the r-neighborhood
graph. Moreover, these results can be reproduced with finite samples. That is, given a finite sample
from some well-clustered distribution, normalized spectral clustering on the kNN graph produces
systematically different results from spectral clustering on the r-neighborhood graph.
2 Definitions and assumptions
Given a graph G = (V, E) with weights w : E → ℝ and a partition of the nodes V into (C, V \ C) we define
$$\operatorname{cut}(C, V\setminus C) = \sum_{u\in C,\, v\in V\setminus C} w(u,v) + w(v,u), \qquad \operatorname{vol}(C) = \sum_{u\in C,\, v\in V} w(u,v),$$
$$\operatorname{Ncut}(C, V\setminus C) = \operatorname{cut}(C, V\setminus C)\left(\frac{1}{\operatorname{vol}(C)} + \frac{1}{\operatorname{vol}(V\setminus C)}\right).$$
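These quantities are straightforward to compute from a weighted adjacency matrix. The following is a minimal sketch (our own, not part of the paper); `W` and `mask` are placeholder names:

```python
# Minimal sketch (ours) of cut, vol and Ncut for a weighted adjacency matrix W;
# `mask` is a boolean vector selecting the nodes of C.
import numpy as np

def ncut(W, mask):
    cut = W[mask][:, ~mask].sum() + W[~mask][:, mask].sum()
    vol_C = W[mask].sum()        # vol(C): weights of all edges starting in C
    vol_Cc = W[~mask].sum()      # vol(V \ C)
    return cut * (1.0 / vol_C + 1.0 / vol_Cc)
```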
Given a finite set of points x₁, ..., xₙ we consider two main types of neighborhood graphs (a code sketch of both constructions follows the list):
• the r-neighborhood graph G_{n,r}: there is an edge from point x_i to point x_j if dist(x_i, x_j) ≤ r, for all 1 ≤ i, j ≤ n, i ≠ j.
• the directed k-nearest neighbor graph G_{n,k}: there is a directed edge from x_i to x_j if x_j is one of the k nearest neighbors of x_i, for 1 ≤ i, j ≤ n, i ≠ j.
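As an illustration (our own sketch, not from the paper; a brute-force O(n²) distance computation is assumed, which is fine for the sample sizes used here):

```python
# Brute-force sketch (ours) of the two graph constructions for points X (n x d).
import numpy as np

def r_graph(X, r):
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    A = D <= r
    np.fill_diagonal(A, False)         # no self-loops (i != j)
    return A

def knn_graph(X, k):
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    np.fill_diagonal(D, np.inf)        # exclude self from the neighbor search
    idx = np.argsort(D, axis=1)[:, :k]
    A = np.zeros(D.shape, dtype=bool)  # directed: row i -> its k nearest neighbors
    A[np.repeat(np.arange(len(X)), k), idx.ravel()] = True
    return A
```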
In the following we work on the space ℝ^d with Euclidean metric dist. We denote by η_d the volume of the d-dimensional unit ball in ℝ^d and by B(x, r) the ball with radius r centered at x. On the space ℝ^d we will study partitions which are induced by some hypersurface S. Given a surface S which separates the data points in two non-empty parts C⁺ and C⁻, we denote by cut_{n,r}(S) the number of edges in G_{n,r} that go from a sample point on one side of the surface to a sample point on the other side of the surface. The corresponding quantity for the directed k-nearest neighbor graph is denoted by cut_{n,k}(S). For a set A ⊆ ℝ^d the volume of {x₁, ..., xₙ} ∩ A in the graph G_{n,r} is denoted by vol_{n,r}(A), and correspondingly vol_{n,k}(A) in the graph G_{n,k}.
General assumptions in the whole paper: The data points x₁, ..., xₙ are drawn independently from some density p on ℝ^d. This density is bounded from below and above, that is 0 < p_min ≤ p(x) ≤ p_max. In particular, it has compact support C. We assume that the boundary ∂C of C is well-behaved, that means it is a set of Lebesgue measure 0 and we can find a constant α > 0 such that for r sufficiently small, vol(B(x, r) ∩ C) ≥ α vol(B(x, r)) for all x ∈ C. Furthermore we assume that p is twice differentiable in the interior of C and that the derivatives are bounded. The measure on ℝ^d induced by p will be denoted by μ, that means, for a measurable set A we set μ(A) = ∫_A p(x) dx. For the cut surface S, we assume that S ∩ ∂C is a set of measure 0 with respect to the (d−1)-dimensional measure on S. Moreover, S splits the space ℝ^d into two sets C⁺ and C⁻ with positive probability mass.
While the setting introduced above is very general, we make some substantial simplifications in this
paper. First, we consider all graphs as unweighted graphs (the proofs are already technical enough
in this setting). We have not yet had time to prove the corresponding theorems for weighted graphs,
but would expect that this might lead yet to other limit expressions. This will be a point for future
work. Moreover, in the case of the kNN-graph we consider the directed graph for simplicity. Some
statements can be carried over by simple arguments from the directed graph to the symmetric graph,
but not all of them. In general, we study the setting where one wants to find two clusters which
are induced by some hypersurface in ℝ^d. In this paper we only consider the case where S is a
hyperplane. Our results can be generalized to more general (smooth) surfaces, provided one makes
a few assumptions on the regularity of the surface S. The proofs are more technical, though.
3 Limits of quality measures
In this section we study the asymptotic behavior of the quantities introduced above for both the
unweighted directed kNN graph and the unweighted r-graph. Due to the lack of space we only
provide proof sketches; detailed proofs can be found in the supplement Maier et al. (2008).
Let (k_n)_{n∈ℕ} be an increasing sequence. Given a finite sample x₁, ..., xₙ from the underlying distribution, we will construct the graph G_{n,k_n} and study the convergence of Ncut_{n,k_n}(S), that is the Ncut value induced by S, evaluated on the graph G_{n,k_n}. Similarly, given a sequence (r_n)_{n∈ℕ} of radii, we consider the convergence of Ncut_{n,r_n} induced by S on the graph G_{n,r_n}. In the following, $\int_S ds$ denotes the (d−1)-dimensional surface integral along S. Here is our main result:
Theorem 1 (Limit values of Ncut on different graphs) Assume the general assumptions hold for the density p on ℝ^d and a fixed hyperplane S in ℝ^d. Consider the sequences (k_n)_{n∈ℕ} ⊆ ℕ and (r_n)_{n∈ℕ} ⊆ ℝ. For the kNN graph, assume that k_n/n → 0. In case d = 1, assume that k_n/√n → ∞; in case d ≥ 2, assume k_n/log n → ∞. Then we have for n → ∞
$$\sqrt[d]{\frac{n}{k_n}}\;\operatorname{Ncut}_{n,k_n}(S) \;\xrightarrow{a.s.}\; \frac{2\eta_{d-1}}{(d+1)\,\eta_d^{1+1/d}} \int_S p^{1-1/d}(s)\,ds\,\left(\Bigl(\int_{C^+} p(x)\,dx\Bigr)^{-1} + \Bigl(\int_{C^-} p(x)\,dx\Bigr)^{-1}\right).$$
For the r-neighborhood graph, assume r_n > 0, r_n → 0 and $n r_n^{d+1} \to \infty$ for n → ∞. Then
$$\frac{1}{r_n}\,\operatorname{Ncut}_{n,r_n}(S) \;\xrightarrow{a.s.}\; \frac{2\eta_{d-1}}{(d+1)\,\eta_d} \int_S p^{2}(s)\,ds\,\left(\Bigl(\int_{C^+} p^2(x)\,dx\Bigr)^{-1} + \Bigl(\int_{C^-} p^2(x)\,dx\Bigr)^{-1}\right).$$
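For d = 1 the surface integral reduces to evaluating the integrand at the cut point, so the two limit expressions can be checked numerically. The following sketch is our own (not from the paper); η₁ = 2 and, by convention, η₀ = 1, and the function names are placeholders:

```python
# Numerical sketch (ours) of the two limit functionals of Theorem 1 for d = 1.
import numpy as np

d = 1
eta_d, eta_dm1 = 2.0, 1.0   # eta_1 = 2, eta_0 = 1 by convention

def limit_ncut_knn(p, s, grid):
    px = p(grid)
    mass_minus = np.trapz(np.where(grid <= s, px, 0.0), grid)
    mass_plus = np.trapz(np.where(grid > s, px, 0.0), grid)
    c = 2 * eta_dm1 / ((d + 1) * eta_d ** (1 + 1.0 / d))
    return c * p(s) ** (1 - 1.0 / d) * (1 / mass_plus + 1 / mass_minus)

def limit_ncut_r(p, s, grid):
    px2 = p(grid) ** 2
    m_minus = np.trapz(np.where(grid <= s, px2, 0.0), grid)
    m_plus = np.trapz(np.where(grid > s, px2, 0.0), grid)
    c = 2 * eta_dm1 / ((d + 1) * eta_d)
    return c * p(s) ** 2 * (1 / m_plus + 1 / m_minus)
```

Scanning s over a grid produces curves of the kind shown in Figure 2 below.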
Proof (Sketch for the case of the kNN graph; the case of the r-graph is similar. Details see Maier et al., 2008.). Define the scaling factors $c_{cut}(n,k_n) = n^{-1+1/d} k_n^{-1-1/d}$ and $c_{vol}(n,k_n) = (n k_n)^{-1}$. Then $(n/k_n)^{1/d}\,\operatorname{Ncut}(S)$ can be decomposed into a cut term and a volume term:
$$c_{cut}(n,k_n)\operatorname{cut}_{n,k_n}(S)\cdot\Bigl[\bigl(c_{vol}(n,k_n)\operatorname{vol}_{n,k_n}(C^+)\bigr)^{-1} + \bigl(c_{vol}(n,k_n)\operatorname{vol}_{n,k_n}(C^-)\bigr)^{-1}\Bigr].$$
In Proposition 3 below we will see that the volume term satisfies
$$c_{vol}(n,k_n)\operatorname{vol}_{n,k_n}(C^+) \xrightarrow{a.s.} \int_{C^+} p(x)\,dx,$$
and the corresponding expression holds for C⁻. For the cut term we will prove below that
$$c_{cut}(n,k_n)\operatorname{cut}_{n,k_n}(S) \xrightarrow{a.s.} \frac{2\eta_{d-1}}{(d+1)\,\eta_d^{1+1/d}} \int_S p^{1-1/d}(s)\,ds. \tag{1}$$
This will be done using a standard decomposition into variance and bias term, which will be treated
in Propositions 1 and 2, respectively.
Proposition 1 (Limit values of E cut_{n,k_n} and E cut_{n,r_n}) Let the general assumptions hold, and let S be an arbitrary, but fixed hyperplane. For the kNN graph, if k_n/n → 0 and k_n/log n → ∞ for n → ∞, then
$$\frac{1}{n k_n}\sqrt[d]{\frac{n}{k_n}}\; E\operatorname{cut}_{n,k_n}(S) \;\to\; \frac{2\eta_{d-1}}{d+1}\,\eta_d^{-1-1/d} \int_S p^{1-1/d}(s)\,ds.$$
For the r-neighborhood graph, if r_n → 0, r_n > 0 for n → ∞, then
$$E\,\frac{\operatorname{cut}_{n,r_n}(S)}{n^2 r_n^{d+1}} \;\to\; \frac{2\eta_{d-1}}{d+1}\int_S p^2(s)\,ds.$$
Proof (Sketch, see Maier et al., 2008). We start with the case of the r-neighborhood graph. By N_i (i = 1, ..., n) denote the number of edges in the graph that start in point x_i and end in some point on the other side of the cut surface S. As all points are sampled i.i.d., we have
$$E\operatorname{cut}_{n,r_n}(S) = \sum_{i=1}^{n} E N_i = n\,E N_1.$$
Suppose the position of the first point is x. The idea to compute the expected number of edges originating in x is as follows. We consider a ball B(x, r_n) of radius r_n around x (where r_n is the current parameter of the r-neighborhood graph). The expected number of edges originating in x equals the expected number of points which lie in the intersection of this ball with the other side of the hyperplane. That is, setting
$$g(x, r_n) = \begin{cases} \mu\bigl(B(x, r_n) \cap C^+\bigr) & \text{if } x \in C^- \\ \mu\bigl(B(x, r_n) \cap C^-\bigr) & \text{if } x \in C^+ \end{cases}$$
we have $E(N_1 \mid X_1 = x) = (n-1)\,g(x, r_n)$, since the number of points in the intersection of B(x, r_n) with the other side of the hyperplane is binomially distributed with parameters n − 1 and g(x, r_n). Integrating this conditional expectation over all positions of the point x in ℝ^d gives
$$E\operatorname{cut}_{n,r_n}(S) = n(n-1)\int_{\mathbb{R}^d} g(x, r_n)\,p(x)\,dx.$$
The second important idea is that instead of integrating over ℝ^d, we first integrate over the hyperplane S and then, at each point s ∈ S, along the normal line through s, that is the line s + t·n⃗, t ∈ ℝ, where n⃗ denotes the normal vector of the hyperplane pointing towards C⁺. This leads to
$$n(n-1)\int_{\mathbb{R}^d} g(x, r_n)p(x)\,dx = n(n-1)\int_S \int_{-\infty}^{\infty} g(s + t\vec{n}, r_n)\,p(s + t\vec{n})\,dt\,ds.$$
This has two advantages. First, if x is far enough from S (that is, dist(x, s) > r_n for all s ∈ S), then g(x, r_n) = 0 and the corresponding terms in the integral vanish. Second, if x is close to s ∈ S and the radius r_n is small, then the density on the ball B(x, r_n) can be considered approximately homogeneous, that is p(y) ≈ p(s) for all y ∈ B(x, r_n). Thus,
$$\int_{-\infty}^{\infty} g(s + t\vec{n}, r_n)\,p(s + t\vec{n})\,dt = \int_{-r_n}^{r_n} g(s + t\vec{n}, r_n)\,p(s + t\vec{n})\,dt \approx 2\,p(s)^2 \int_0^{r_n} \operatorname{vol}\bigl(B(s + t\vec{n}, r_n) \cap C^-\bigr)\,dt.$$
?
It is not hard to see that vol B(s + t~n, rn ) ? C = rnd A(t/rn ), where A(t/rn ) denotes the volume
of the cap of the unit ball capped at distance t/rn . Solving the integrals leads to
Z rn
Z 1
?d?1
.
vol B(s + t~n, rn ) ? C ? dt = rnd+1
A(t)dt = rnd+1
d+1
0
0
Combining the steps above we obtain the result for the r-neighborhood graph.
In the case of the kNN graph, the proof follows a similar principle. We have to replace the radius r_n by the k-nearest neighbor radius, that is, the distance of a data point to its kth nearest neighbor. This leads to additional difficulties, as this radius is a random variable as well. By a technical lemma one can show that for large n, under the condition k_n/log n → ∞, we can replace the integration over the possible values of the kNN radius by its expectation. Then we observe that as k_n/n → 0, the expected kNN radius converges to 0, that is for large n we only have to integrate over balls of homogeneous density. In a region of homogeneous density p̄, the expected kNN radius is given as $(k/((n-1)\eta_d \bar p))^{1/d}$. Now similar arguments as above lead to the desired result.
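As a quick numerical illustration of Proposition 1 (our own sanity check, not from the paper): for the uniform density on [0,1], d = 1 and cut point s = 0.5, the r-graph statement predicts $E\operatorname{cut}_{n,r_n}(S)/(n^2 r_n^{d+1}) \to (2\eta_0/(d+1))\,p(s)^2 = 1$.

```python
# Monte Carlo sanity check (ours) for the r-graph part of Proposition 1.
import numpy as np

rng = np.random.default_rng(0)
n, r, s = 2000, 0.05, 0.5
X = rng.random(n)                                     # uniform density on [0, 1]
left = X <= s
D = np.abs(X[:, None] - X[None, :])
cross = (D <= r) & (left[:, None] != left[None, :])   # ordered crossing pairs
np.fill_diagonal(cross, False)
print(cross.sum() / (n ** 2 * r ** 2))                # should be close to 1
```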
Proposition 1 already shows one of the most important differences between the limits of the expected cut for the different graphs: for the r-graph we integrate over p², while we integrate over p^{1−1/d} for the kNN graph. This difference comes from the fact that the kNN radius is a random quantity, which is not the case for the deterministically chosen radius r_n in the r-graph.
Proposition 2 (Deviation of cut_{n,k_n} and cut_{n,r_n} from their means) Let the general assumptions hold. For the kNN graph, if the dimension d = 1 then assume k_n/√n → ∞; for d ≥ 2 assume k_n/log n → ∞. In both cases let k_n/n → 0. Then
$$\frac{1}{n k_n}\sqrt[d]{\frac{n}{k_n}}\,\operatorname{cut}_{n,k_n}(S) \;-\; E\,\frac{1}{n k_n}\sqrt[d]{\frac{n}{k_n}}\,\operatorname{cut}_{n,k_n}(S) \;\xrightarrow{a.s.}\; 0.$$
For the r-neighborhood graph, let r_n > 0, r_n → 0 such that $n r_n^{d+1} \to \infty$ for n → ∞. Then
$$\frac{1}{n^2 r_n^{d+1}}\operatorname{cut}_{n,r_n}(S) \;-\; E\,\frac{1}{n^2 r_n^{d+1}}\operatorname{cut}_{n,r_n}(S) \;\xrightarrow{a.s.}\; 0.$$
Proof (Sketch, details see Maier et al., 2008). Using McDiarmid's inequality (with a kissing
number argument to obtain the bounded differences condition) or a U-statistics argument leads to
exponential decay rates for the deviation probabilities (and thus to convergence in probability). The
almost sure convergence can then be obtained using the Borel-Cantelli lemma.
Proposition 3 (Limits of vol_{n,k_n} and vol_{n,r_n}) Let the general assumptions hold, and let H ⊆ ℝ^d be an arbitrary measurable subset. Then, as n → ∞, for the kNN graph we have
$$\frac{1}{n k_n}\operatorname{vol}_{n,k_n}(H) \;\xrightarrow{a.s.}\; \mu(H).$$
For the r-neighborhood graph, if $n r_n^d \to \infty$ we have
$$\frac{1}{n^2 r_n^d}\operatorname{vol}_{n,r_n}(H) \;\xrightarrow{a.s.}\; \eta_d \int_H p^2(x)\,dx.$$
Proof. In the graph G_{n,k_n} there are exactly k outgoing edges from each node. Thus the expected number of edges originating in H depends on the number of sample points in H only, which is binomially distributed with parameters n and μ(H). For the graph G_{n,r_n} we decompose the volume into the contributions of all the points, and for a single point we condition on its location. The number of outgoing edges, provided the point is at position x, is the number of other points in B(x, r_n), which is binomially distributed with parameters (n − 1) and μ(B(x, r_n)). If r_n is sufficiently small we can approximate μ(B(x, r_n)) by η_d r_n^d p(x) under our conditions on the density. Almost sure convergence is proved using McDiarmid's inequality or a U-statistics argument.
Other convergence results. In the literature, we only know of one other limit result for graph cuts,
proved by Narayanan et al. (2007). Here the authors study the case of a fully connected graph with
Gaussian weights $w_t(x_i, x_j) = (4\pi t)^{-d/2}\exp\bigl(-\operatorname{dist}(x_i, x_j)^2/4t\bigr)$. Denoting the corresponding cut value by $\operatorname{cut}_{n,t}$, the authors show that if $t_n \to 0$ such that $t_n > 1/n^{1/(2d+2)}$, then
$$\frac{\sqrt{\pi}}{n\sqrt{t_n}}\,\operatorname{cut}_{n,t_n} \;\longrightarrow\; \int_S p(s)\,ds \quad a.s.$$
Comparing this result to ours, we can see that it corroborates our finding: yet another graph leads to
yet another limit result (for cut, as the authors did not study the Ncut criterion).
4 Examples where different limits of Ncut lead to different optimal cuts
In Theorem 1 we have proved that the kNN graph leads to a different limit functional for Ncut(S)
than the r-neighborhood graph. Now we want to show that this difference is not only a mathematical
subtlety without practical relevance. We will see that if we select an optimal cut based on the limit
criterion for the kNN graph we can obtain a different result than if we use the limit criterion based
on the r-neighborhood graph. Moreover, this finding does not only apply to the limit cuts, but also
to cuts constructed on finite samples. This shows that on finite data sets, different constructions of
the graph can lead to systematic differences in the clustering results.
Consider Gaussian mixture distributions in one and two dimensions of the form $\sum_{i=1}^{3} \alpha_i\,\mathcal{N}([\mu_i, 0, \ldots, 0],\, \sigma_i I)$, which are set to 0 where they are below a threshold θ (and properly rescaled), with specific parameters as follows (a sampling sketch is given after the table):

dim | μ1   | μ2  | μ3  | σ1  | σ2  | σ3  | α1   | α2   | α3   | θ
1   | 0    | 0.5 | 1   | 0.4 | 0.1 | 0.1 | 0.66 | 0.17 | 0.17 | 0.1
2   | −1.1 | 0   | 1.3 | 0.2 | 0.4 | 0.1 | 0.4  | 0.55 | 0.05 | 0.01
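A sample from such a thresholded mixture can be drawn by rejection (our own sketch: accepting a mixture draw x only when the mixture density at x is at least θ realises exactly the "set to 0 below θ and rescaled" density):

```python
# Sketch (ours) of drawing from the thresholded 1-d mixture of the first row.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
mu, sigma = [0.0, 0.5, 1.0], [0.4, 0.1, 0.1]
alpha, theta = [0.66, 0.17, 0.17], 0.1

def density(x):
    return sum(a * norm.pdf(x, m, s) for a, m, s in zip(alpha, mu, sigma))

def sample(n):
    out = []
    while len(out) < n:
        i = rng.choice(3, p=alpha)
        x = rng.normal(mu[i], sigma[i])
        if density(x) >= theta:      # rejection step: threshold at theta
            out.append(x)
    return np.array(out)
```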
For density plots, see Figure 1. We first investigate the theoretic limit Ncut values, for hyperplanes
which cut perpendicular to the first dimension (which is the ?informative? dimension of the data).
For the chosen densities, the limit Ncut expressions from Theorem 1 can be computed analytically.
The plots in Figure 2 show the theoretic limits. In particular, the minimal Ncut value in the kNN
case is obtained at a different position than the minimal value in the r-neighborhood case.
This effect can also be observed in a finite sample setting. We sampled n = 2000 points from the
given distributions and constructed the (unweighted) kNN graph (we tried a range of parameters of
k and r, our results are stable with respect to this choice). Then we evaluated the empirical Ncut
values for all hyperplanes which cut perpendicular to the informative dimension, similar as in the
last paragraph. This experiment was repeated 100 times. Figure 2 shows the means of the Ncut
values of these hyperplanes, evaluated on the sample graphs. We can see that the empirical plots are
very similar to the limit plots produced above.
Moreover, we applied normalized spectral clustering (cf. von Luxburg, 2007) to the mixture data
sets. Instead of the directed kNN graph we used the undirected one, as standard spectral clustering is
not defined for directed graphs. We compare different clusterings by the minimal matching distance:
$$d_{MM}(\mathrm{Clust}_1, \mathrm{Clust}_2) = \min_{\pi}\; \frac{1}{n}\sum_{i=1}^{n} \mathbf{1}_{\{\mathrm{Clust}_1(x_i) \neq \pi(\mathrm{Clust}_2(x_i))\}}$$
where the minimum is taken over all permutations π of the labels. In the case of two clusters, this
distance corresponds to the 0-1-loss as used in classification: a minimal matching distance of 0.38,
say, means that 38% of the data points lie in different clusters. In our spectral clustering experiment,
we could observe that the clusterings obtained by spectral clustering are usually very close to the
theoretically optimal hyperplane splits predicted by theory (the minimal matching distances to the
optimal hyperplane splits were always in the order of 0.03 or smaller). As predicted by theory, both
kinds of graph give different cuts in the data. An illustration of this phenomenon for the case of
dimension 2 can be found in Figure 3. To give a quantitative evaluation of this phenomenon, we
computed the mean minimal matching distances between clusterings obtained by the same type of
graph over the different samples (denoted d_kNN and d_r), and the mean difference d_kNN↔r between
the clusterings obtained by different graph types:
Example | d_kNN            | d_r              | d_kNN↔r
1 dim   | 0.00039 ± 0.0005 | 0.0005 ± 0.00045 | 0.32 ± 0.012
2 dim   | 0.0029 ± 0.0013  | 0.0005 ± 0.0005  | 0.48 ± 0.045
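For the two-cluster case, the minimal matching distances reported here can be computed with a small helper like the following (our own; with two labels the only permutations are the identity and the swap):

```python
# Helper (ours) for the minimal matching distance between two 2-cluster labelings.
import numpy as np

def minimal_matching_distance(c1, c2):
    c1, c2 = np.asarray(c1), np.asarray(c2)
    direct = np.mean(c1 != c2)
    swapped = np.mean(c1 != 1 - c2)   # labels assumed to be in {0, 1}
    return min(direct, swapped)
```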
We can see that for the same graph, the clustering results are very stable (differences in the order of 10⁻³) whereas the differences between the kNN graph and the r-neighborhood graph are substantial
(0.32 and 0.48, respectively). This difference is exactly the one induced by assigning the middle
mode of the density to different clusters, which is the effect predicted by theory.
It is tempting to conjecture that these effects might be due to the fact that the number of Gaussians and the number of clusters we are looking for do not coincide. But this is not the case: for a sum of two Gaussians in one dimension with means 0.2 and 0.4, variances 0.05 and 0.03, weights 0.8 and 0.2, and a threshold of 0.1, the same effects can be observed.

[Figure 1 panels: "Density example 1" and "Density example 2 (informative dimension only)".]
Figure 1: Densities in the examples. In the two-dimensional case, we plot the informative dimension (marginal over the other dimensions) only. The dashed blue vertical line depicts the optimal limit cut of the r-graph, the solid red vertical line the optimal limit cut of the kNN graph.

[Figure 2 panels: "Ncut of hyperplanes, kNN graph, d=1, n=2000, k=30"; "Ncut of hyperplanes, kNN graph, d=2, n=2000, k=100"; "Ncut of hyperplanes, r-graph, d=1, n=2000, r=0.1"; "Ncut of hyperplanes, r-graph, d=2, n=2000, r=0.3"; each with empirical ("emp") and predicted ("pred") curves.]
Figure 2: Ncut values for hyperplanes: theoretical predictions (dashed) and empirical means (solid). The optimal cut is indicated by the dotted line. The top row shows the results for the kNN graph, the bottom row for the r-graph. In the left column the result for one dimension, in the right column for two dimensions.
Finally, we conducted an experiment similar to the last one on two real data sets (breast cancer and
heart from the Data Repository by G. Rätsch). Here we chose the parameters k = 20 and r = 3.2
for breast cancer and r = 4.3 for heart (among the parameters we tried, these were the parameters
where the results were most stable, that is where d_kNN and d_r were minimal). Then we ran spectral
clustering on different subsamples of the data sets (n = 200 for breast cancer, n = 170 for heart). To
evaluate whether our clusterings were useful at all, we computed the minimal matching distance
between the clusterings and the true class labels and obtained distances of 0.27 for the r-graph and
0.44 for the kNN graph on breast cancer and 0.17 and 0.19 for heart. These results are reasonable
(standard classifiers lead to classification errors of 0.27 and 0.17 on these data sets). Moreover, to
exclude other artifacts such as different cluster sizes obtained with the kNN or r-graph, we also
computed the expected random distances between clusterings, based on the actual cluster sizes we
obtained in the experiments. We obtained the following table:
Example      | d_kNN       | rand. d_kNN | d_r         | rand. d_r   | d_kNN↔r     | rand. d_kNN↔r
breast canc. | 0.13 ± 0.15 | 0.48 ± 0.01 | 0.40 ± 0.10 | 0.22 ± 0.01 | 0.40 ± 0.10 | 0.44 ± 0.01
heart        | 0.06 ± 0.02 | 0.47 ± 0.02 | 0.06 ± 0.02 | 0.44 ± 0.02 | 0.07 ± 0.03 | 0.47 ± 0.02
We can see that in the example of breast cancer, the distances d_kNN and d_r are much smaller than the distance d_kNN↔r. This shows that the clustering results differ considerably between the two kinds
of graph (and compared to the expected random effects, this difference does not look random at all).
For heart, on the other side, we do not observe significant differences between the two graphs.
This experiment shows that for some data sets a systematic difference between the clusterings based
on different graph types exists. But of course, such differences can occur for many reasons. The different limit results might just be one potential reason, and other reasons might exist. But whatever the reason is, it is interesting to observe these systematic differences between graph types in real data.

[Figure 3 panels: "r-graph, n=2000, r=0.3" (left) and "kNN graph, n=2000, k=100" (right).]
Figure 3: Results of spectral clustering in two dimensions, for r-graph (left) and kNN graph (right).
5 Discussion
In this paper we have investigated the influence of the graph construction on graph-based clustering
measures such as the normalized cut. We have seen that depending on the type of graph, the Ncut
criterion converges to different limit results. In our paper, we computed the exact limit expressions
for the r-neighborhood graph and the kNN graph; moreover, yet a different limit result for a complete
graph using Gaussian weights exists in the literature (Narayanan et al., 2007). The fact that all
these different graphs lead to different clustering criteria shows that these criteria cannot be studied
isolated from the graph they will be applied to.
From a theoretical side, there are several directions in which our work can be improved. Some
technical improvements concern using the symmetric instead of the directed kNN graph, and adding
weights to the edges. In the supplement (Maier et al., 2008) we also prove rates of convergence for
our results. It would be interesting to use these to determine an optimal choice of the connectivity
parameter k or r of the graphs (we have already proved such results in a completely different graph
clustering setting, cf. Maier et al., 2007). Another extension which does not look too difficult
is obtaining uniform convergence results. Here one just has to take care that one uses a suitably
restricted class of candidate surfaces S (note that uniform convergence results over the set of all
partitions of ℝ^d are impossible, cf. von Luxburg et al., 2008).
For practice, it will be important to study how the different limit results influence clustering results.
So far, we do not have much intuition about when the different limit expressions lead to different
optimal solutions, and when these solutions will show up in practice. The examples we provided
above already show that different graphs indeed can lead to systematically different clusterings in
practice. Gaining more understanding of this effect will be an important direction of research if one
wants to understand the nature of different graph clustering criteria.
References
Data Repository by G. Rätsch. http://ida.first.fraunhofer.de/projects/bench/benchmarks.htm.
M. Maier, M. Hein, and U. von Luxburg. Cluster identification in nearest-neighbor graphs. In M. Hutter, R. Servedio, and E. Takimoto, editors, Proceedings of the 18th Conference on Algorithmic Learning Theory, volume 4754 of Lecture Notes in Artificial Intelligence, pages 196–210. Springer, Berlin, 2007.
Markus Maier, Ulrike von Luxburg, and Matthias Hein. Influence of graph construction on graph-based quality
measures - technical supplement. http://www.kyb.mpg.de/bs/people/mmaier/nips08supplement.html, 2008.
Hariharan Narayanan, Mikhail Belkin, and Partha Niyogi. On the relation between low density separation,
spectral clustering and graph cuts. In NIPS 20, 2007.
J. Shi and J. Malik. Normalized cuts and image segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(8):888–905, 2000.
U. von Luxburg. A tutorial on spectral clustering. Statistics and Computing, 17(4):395–416, 2007.
U. von Luxburg, S. Bubeck, S. Jegelka, and M. Kaufmann. Consistent minimization of clustering objective
functions. In NIPS 21, 2008.
2,753 | 3,497 | Accelerating Bayesian Inference over Nonlinear Differential Equations with Gaussian Processes
Ben Calderhead
Dept. of Computing Sci.
University of Glasgow
[email protected]
Mark Girolami
Dept. of Computing Sci.
University of Glasgow
[email protected]
Neil D. Lawrence
School of Computer Sci.
University of Manchester
[email protected]
Abstract
Identification and comparison of nonlinear dynamical system models using noisy
and sparse experimental data is a vital task in many fields, however current methods are computationally expensive and prone to error due in part to the nonlinear
nature of the likelihood surfaces induced. We present an accelerated sampling
procedure which enables Bayesian inference of parameters in nonlinear ordinary
and delay differential equations via the novel use of Gaussian processes (GP). Our
method involves GP regression over time-series data, and the resulting derivative
and time delay estimates make parameter inference possible without solving the
dynamical system explicitly, resulting in dramatic savings of computational time.
We demonstrate the speed and statistical accuracy of our approach using examples
of both ordinary and delay differential equations, and provide a comprehensive
comparison with current state of the art methods.
1 Introduction
Mechanistic system modeling employing nonlinear ordinary or delay differential equations¹ (ODEs
or DDEs) is oftentimes hampered by incomplete knowledge of the system structure or the specific parameter values defining the observed dynamics [16]. Bayesian, and indeed non-Bayesian,
approaches for parameter estimation and model comparison [19] involve evaluating likelihood functions, which requires the explicit numerical solution of the differential equations describing the
model. The computational cost of obtaining the required numerical solutions of the ODEs or DDEs
can result in extremely slow running times. In this paper we present a method for performing
Bayesian inference over mechanistic models by the novel use of Gaussian processes (GP) to predict
the state variables of the model as well as their derivatives, thus avoiding the need to solve the system explicitly. This results in dramatically improved computational efficiency (up to four hundred
times faster in the case of DDEs). We note that state space models offer an alternative approach
for performing parameter inference over dynamical models particularly for on-line analysis of data,
see [2]. Related to the work we present, we also note that in [6] the use of GPs has been proposed
in obtaining the solution of fully parameterised linear operator equations such as ODEs. Likewise
in [12] GPs are employed as emulators of the posterior response to parameter values as a means of
improving the computational efficiency of a hybrid Monte Carlo sampler.
Our approach is different and builds significantly upon previous work which has investigated the use
of derivative estimates to directly approximate system parameters for models described by ODEs.
A spline-based approach was first suggested in [18] for smoothing experimental data and obtaining
derivative estimates, which could then be used to compute a measure of mismatch for derivative
values obtained from the system of equations. More recent developments of this method are described in [11]. All of these approaches, however, are plagued by similar problems. The methods
¹ The methodology in this paper can also be straightforwardly extended to partial differential equations.
are all critically dependent on additional regularisation parameters to determine the level of data
smoothing. They all exhibit the problem of providing sub-optimal point estimates; even [11] may
not converge to a reasonable solution depending on the initial values selected, as we demonstrate in
Section 5.1. Furthermore, it is not at all obvious how these methods can be extended for partially
observed systems, which are typical in, e.g. systems biology [10, 1, 8, 19]. Finally, these methods
only provide point estimates of the ?correct? parameters and are unable to cope with multiple solutions. (Although it should be noted that [11] does offer a local estimate of uncertainty based on
second derivatives, at additional computational cost.) It is therefore unclear how objective model
comparison could be implemented using these methods.
In contrast we provide a Bayesian solution, which is capable of sampling from multimodal distributions. We demonstrate its speed and statistical accuracy and provide comparisons with the current
best methods. It should also be noted that the papers mentioned above have focussed only on parameter estimation for fully observed systems of ODEs; we additionally show how parameter inference
over both fully and partially observed ODE systems as well as DDEs may be performed efficiently
using our state derivative approach.
2 Posterior Sampling by Explicit Integration of Differential Equations
A dynamical system may be described by a collection of N ordinary differential equations and model parameters θ which define a functional relationship between the process state, x(t), and its time derivative such that ẋ(t) = f(x, θ, t). Likewise delay differential equations can be used to describe certain dynamic systems, where now an explicit time-delay τ is employed. A sequence of process observations, y(t), are usually contaminated with some measurement error which is modeled as y(t) = x(t) + ε(t), where ε(t) defines an appropriate multivariate noise process, e.g. a zero-mean Gaussian with variance σ_n² for each of the N states. If observations are made at T distinct time points, the N × T matrices summarise the overall observed system as Y = X + E. In order to obtain values for X the system of ODEs must be solved, so that in the case of an initial value problem X(θ, x₀) denotes the solution of the system of equations at the specified time points for the parameters θ and initial conditions x₀. Figure 1(a) illustrates graphically the conditional dependencies of the overall statistical model, and from this the posterior density follows by employing appropriate priors such that
$$p(\theta, x_0, \sigma \mid Y) \;\propto\; \pi(\theta)\,\pi(x_0)\,\pi(\sigma)\,\prod_n \mathcal{N}_{Y_{n,\cdot}}\bigl(X(\theta, x_0)_{n,\cdot},\; I\sigma_n^2\bigr).$$
The desired marginal p(θ|Y) can be obtained from this joint posterior².
Various sampling schemes can be devised to sample from the joint posterior. However, regardless
of the sampling method, each proposal requires the specific solution of the system of differential
equations which, as will be demonstrated in the experimental sections, is the main computational
bottleneck in running an MCMC scheme for models based on differential equations. The computational complexity of numerically solving such a system cannot be easily quantified since it depends
on many factors such as the type of model and its stiffness, which in turn depends on the specific
parameter values used. A method to alleviate this bottleneck is the main contribution of this paper.
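To make the bottleneck concrete, the following hedged sketch (ours, with placeholder names; scipy's solve_ivp stands in for any ODE solver) shows a log-posterior of the above form, in which every evaluation triggers a full numerical solve:

```python
# Sketch (ours) of the Section 2 baseline: each log-posterior evaluation
# requires numerically solving the ODE system for the proposed theta.
import numpy as np
from scipy.integrate import solve_ivp

def log_posterior(theta, x0, sigma, Y, t, f, log_prior):
    sol = solve_ivp(lambda s, x: f(x, theta, s), (t[0], t[-1]), x0,
                    t_eval=t, rtol=1e-6, atol=1e-8)
    X = sol.y                                # N x T numerical solution X(theta, x0)
    loglik = 0.0
    for n in range(X.shape[0]):              # independent Gaussian noise per state
        resid = Y[n] - X[n]
        loglik += -0.5 * np.sum(resid ** 2) / sigma[n] ** 2 \
                  - len(t) * np.log(sigma[n])
    return loglik + log_prior(theta, x0, sigma)
```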
3
Auxiliary Gaussian Processes on State Variables
Let us assume independent³ Gaussian process priors on the state variables such that p(X_{n,·} | φ_n) = N(0, C_{φ_n}), where C_{φ_n} denotes the matrix of covariance function values with hyperparameters φ_n. With noise ε_n ∼ N(0, σ_n² I_T), the state posterior p(X_{n,·} | Y_{n,·}, φ_n, σ_n) follows as N(μ_n, Σ_n), where μ_n = C_{φ_n}(C_{φ_n} + σ_n²I)⁻¹ Y_{n,·} and Σ_n = σ_n² C_{φ_n}(C_{φ_n} + σ_n²I)⁻¹. Given priors π(φ_n) and π(σ_n), the corresponding posterior is p(φ_n, σ_n | Y_{n,·}) ∝ π(φ_n)π(σ_n) N_{Y_{n,·}}(0, σ_n²I + C_{φ_n}), and from this we can obtain the joint posterior, p(X, φ_{n=1···N}, σ_{n=1···N} | Y), over a non-parametric
GP model of the state-variables. Note that a non-Gaussian noise model may alternatively be
implemented using warped GPs [14]. The conditional distribution for the state-derivatives is
$$p(\dot{X}_{n,\cdot} \mid X_{n,\cdot}, \varphi_n, \sigma_n) = \mathcal{N}(m_n, K_n),$$
where the mean and covariance are given by
$$m_n = {}'C_{\varphi_n}(C_{\varphi_n} + \sigma_n^2 I)^{-1} X_{n,\cdot} \qquad\text{and}\qquad K_n = C''_{\varphi_n} - {}'C_{\varphi_n}(C_{\varphi_n} + \sigma_n^2 I)^{-1} C'_{\varphi_n},$$
where $C''_{\varphi_n}$ denotes the auto-covariance for each state-derivative, with $C'_{\varphi_n}$ and ${}'C_{\varphi_n}$ denoting the cross-covariances between the state and its derivative [13, 15]. The main advantage of using the Gaussian process model now becomes apparent. The GP specifies a jointly Gaussian distribution over the function and its derivatives ([13], pg. 191). This allows us to evaluate a posterior over parameters θ consistent with the differential equation based on the smoothed state and state derivative estimates, see Figure 1(b). Assuming Normal errors between the state-derivatives Ẋ_{n,·} and the functional f_n(X, θ, t) evaluated at the GP generated state-values X corresponding to time points t = t₁ ··· t_T, then p(Ẋ_{n,·} | X, θ, γ_n) = N(f_n(X, θ, t), Iγ_n), with γ_n a state-specific error variance. Both statistical models p(Ẋ_{n,·} | X_{n,·}, φ_n, σ_n) and p(Ẋ_{n,·} | X, θ, γ_n) can be linked in the form of a Product of Experts [7] to define the overall density p(Ẋ_{n,·} | X, θ, φ_n, σ_n, γ_n) ∝ N(m_n, K_n) N(f_n(X, θ, t), Iγ_n) [see e.g. 20]. Introducing priors π(θ) and π(γ) = ∏_n π(γ_n),
$$p(\theta, \gamma \mid X, \varphi, \sigma) = \int p(\dot X, \theta, \gamma \mid X, \varphi, \sigma)\, d\dot X \;\propto\; \pi(\theta)\,\pi(\gamma) \prod_n \int \mathcal{N}(m_n, K_n)\,\mathcal{N}(f_n(X, \theta, t), I\gamma_n)\, d\dot X_{n,\cdot}$$
$$\propto\; \frac{\pi(\theta)\,\pi(\gamma)}{\prod_n Z(\gamma_n)} \exp\left\{ -\frac{1}{2}\sum_n (f_n - m_n)^\top (K_n + I\gamma_n)^{-1} (f_n - m_n) \right\},$$
where $f_n \equiv f_n(X, \theta, t)$ and $Z(\gamma_n) = |2\pi(K_n + I\gamma_n)|^{1/2}$ is a normalizing constant. Since the gradients appear only linearly and their conditional distribution given X is Gaussian, they can be marginalized exactly. In other words, given observations Y, we can sample from the conditional distribution for X and marginalize the augmented derivative space. The differential equation need now never be explicitly solved; its implicit solution is integrated into the sampling scheme.

² This distribution is implicitly conditioned on the numerical solver and associated error tolerances.
³ The dependencies between state variables can be modeled by defining the overall state vector as x = vec(X) and using a GP prior of the form x ∼ N(0, Σ ⊗ C), where ⊗ denotes the Kronecker matrix product and Σ is an N × N positive semi-definite matrix specifying inter-state similarities, with C the T × T matrix defining intra-state similarities [13].

[Figure 1 comprises three panels (a)–(c).]
Figure 1: (a) Graphical model representing explicit solution of an ODE system, (b) Graphical model representing approach developed in this paper with dashed lines showing how the two models are combined in product form, (c) Likelihood surface for a simple oscillator model.
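For a single state with a squared exponential covariance, the quantities μ_n, m_n and K_n can be computed as in the following sketch (our own; the kernel parameterisation k(t, t') = h² exp(−λ(t − t')²) and the use of the posterior mean in place of a sampled state are our simplifying assumptions):

```python
# Sketch (ours) of the GP state and state-derivative quantities of Section 3
# for one state; derivative cross-covariances follow by differentiating the kernel.
import numpy as np

def se_kernel(t1, t2, h, lam):
    dt = t1[:, None] - t2[None, :]
    return h ** 2 * np.exp(-lam * dt ** 2)          # k(t,t') = h^2 exp(-lam (t-t')^2)

def gp_state_and_derivative(t, y, h, lam, sigma):
    C = se_kernel(t, t, h, lam)
    dt = t[:, None] - t[None, :]
    Cp = C * (-2 * lam * dt)                        # 'C: cov(dx/dt, x)
    Cpp = C * (2 * lam - 4 * lam ** 2 * dt ** 2)    # C'': cov(dx/dt, dx/dt)
    A = np.linalg.inv(C + sigma ** 2 * np.eye(len(t)))
    mu = C @ A @ y                                  # posterior state mean mu_n
    m = Cp @ A @ mu                                 # derivative mean m_n (here at mu)
    K = Cpp - Cp @ A @ Cp.T                         # derivative covariance K_n
    return mu, m, K
```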
4 Sampling Schemes for Fully and Partially Observed Systems
The introduction of the auxiliary model and its associated variables has enabled us to recast the
differential equation as another component of the inference process. The relationship between the
auxiliary model and the physical process that we are modeling is shown in Figure 1(b), where the
dotted lines represent a transfer of information between the models. This information transfer takes
place through sampling candidate solutions for the system in the GP model. Inference is performed
by combining these approximate solutions with the system dynamics from the differential equations.
It now remains to define an overall sampling scheme for the structural parameters. For brevity, we
omit normalizing constants and assume that the system is defined in terms of ODEs. However,
our scheme is easily extended for delay differential equations (DDEs) where now predictions at
each time point t and the associated delay (t − τ) are required — we present results for a DDE
system in Section 5.2. We can now consider the complete sampling scheme by also inferring the
hyperparameters and corresponding predictions of the state variables and derivatives using the GP
framework described in Section 3. We can obtain samples θ from the desired marginal posterior p(θ|Y)⁴ by sampling from the joint posterior p(θ, γ, X, φ, σ | Y) as follows:
$$\varphi_n, \sigma_n \mid Y_{n,\cdot} \;\sim\; p(\varphi_n, \sigma_n \mid Y_{n,\cdot}) \;\propto\; \pi(\varphi_n)\,\pi(\sigma_n)\,\mathcal{N}_{Y_{n,\cdot}}(0,\; \sigma_n^2 I + C_{\varphi_n}) \tag{1}$$
$$X_{n,\cdot} \mid Y_{n,\cdot}, \varphi_n, \sigma_n \;\sim\; p(X_{n,\cdot} \mid Y_{n,\cdot}, \varphi_n, \sigma_n) = \mathcal{N}_{X_{n,\cdot}}(\mu_n, \Sigma_n) \tag{2}$$
$$\theta, \gamma \mid X, \varphi, \sigma \;\sim\; p(\theta, \gamma \mid X, \varphi, \sigma) \;\propto\; \pi(\theta)\,\pi(\gamma)\exp\Bigl\{-\tfrac{1}{2}\textstyle\sum_n \varepsilon_n^\top (K_n + I\gamma_n)^{-1}\varepsilon_n\Bigr\} \tag{3}$$
where ε_n ≡ f_n − m_n. This requires two Metropolis sampling schemes: one for inferring the parameters of the GP, φ and σ, and another for the parameters of the structural system, θ and γ. However,
as a consequence of the system induced dynamics, the corresponding likelihood surface defined by p(Y | θ, x₀, σ) can present formidable challenges to standard sampling methods. As an example, Figure 1(c) illustrates the induced likelihood surface of a simple dynamic oscillator similar to that presented in the experimental section. Recent advances in MCMC methodology suggest solutions to this problem in the form of population-based MCMC methods [8], which we therefore implement to sample the structural parameters of our model. Population MCMC enables samples to be drawn from a target density p(θ) by defining a product of annealed densities indexed by a temperature parameter β, such that $p(\theta \mid \beta) = \prod_i p(\theta \mid \beta_i)$, and the desired target density p(θ) is defined for one value of β_i. It is convenient to fix a geometric path between the prior and posterior, which we do in our implementation, although other sequences are possible [3]. A time homogeneous Markov transition kernel which has p(θ) as its stationary distribution can then be constructed from both local Metropolis proposal moves and global temperature switching moves between the tempered chains of the population [8], allowing freer movement within the parameter space.
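A minimal sketch (ours) of a single local Metropolis move targeting the density of Equation (3) at one temperature; m and K are the GP quantities above, f(X, θ, t) stacks the ODE right-hand sides, and all names and the random-walk proposal are our placeholders:

```python
# Sketch (ours) of one Metropolis step on theta under Equation (3).
import numpy as np

def log_density_eq3(theta, gamma, X, m, K, f, t, log_prior):
    logp = log_prior(theta, gamma)
    F = f(X, theta, t)                         # N x T matrix of ODE right-hand sides
    for n in range(len(F)):
        eps = F[n] - m[n]                      # eps_n = f_n - m_n
        S = K[n] + gamma[n] * np.eye(len(eps))
        logp -= 0.5 * eps @ np.linalg.solve(S, eps)
    return logp

def metropolis_step(theta, gamma, X, m, K, f, t, log_prior, step, rng):
    prop = theta + step * rng.standard_normal(theta.shape)
    log_a = (log_density_eq3(prop, gamma, X, m, K, f, t, log_prior)
             - log_density_eq3(theta, gamma, X, m, K, f, t, log_prior))
    return prop if np.log(rng.random()) < log_a else theta
```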
The computational scaling for each component of the sampler is now considered. Sampling of the GP covariance function parameters by a Metropolis step requires computation of a matrix determinant and its inverse, so for all N states in the system a dominant scaling of O(NT³) will be obtained. This poses little problem for many applications in systems biology since T is often fairly small (T ≈ 10 to 100). For larger values of T, sparse approximations can offer much improved computational scaling of order O(NM²T), where M is the number of time points selected [9]. Sampling from a multivariate Normal whose covariance matrix and corresponding decompositions have already been computed therefore incurs no dominating additional computational overhead. The final Metropolis step (Equation 3) requires each of the K_n matrices to be constructed and the associated determinants and inverses computed, thus incurring a total O(NT³) scaling per sample.
An approximate scheme can be constructed by first obtaining the maximum a posteriori values for the GP hyperparameters and posterior mean state values, φ̂, σ̂, X̂_n, and then employing these in Equation 3. This will provide samples from $p(\theta, \gamma \mid \hat X, \hat\varphi, \hat\sigma, Y)$, which may be a useful surrogate for the full joint posterior, incurring lower computational cost as all matrix operations will have been pre-computed, as will be demonstrated later in the paper.
We can also construct a sampling scheme for the important special case where some states are unobserved. We partition X into X_o and X_u. Let o index the observed states; then we may infer all the unknown variables as follows
$$p(\theta, \gamma, X_u \mid X_o, \varphi, \sigma) \;\propto\; \pi(\theta)\,\pi(\gamma)\,\pi(X_u)\exp\left\{-\frac{1}{2}\sum_{n\in o} (\varepsilon_n^{o,u})^\top (K_n + I\gamma_n)^{-1} (\varepsilon_n^{o,u})\right\}$$
where $\varepsilon_n^{o,u} \equiv f_n(X_o, X_u, \theta, t) - m_n$ and π(X_u) is an appropriately chosen prior. The values of unobserved species are obtained by propagating their sampled initial values using the corresponding discrete versions of the differential equations and the smoothed estimates of observed species. The p53 transcriptional network example we include requires inference over unobserved protein species, see Section 5.3.
⁴ Note that this is implicitly conditioned on the class of covariance function chosen.
5 Experimental Examples
We now demonstrate our GP-based method using a standard squared exponential covariance function on a variety of examples involving both ordinary and delay differential equations, and compare
the accuracy and speed with other state-of-the-art methods.
5.1 Example 1 - Nonlinear Ordinary Differential Equations
We first consider the FitzHugh-Nagumo model [11], which was originally developed to model the behaviour of spike potentials in the giant axon of squid neurons and is defined as
$$\dot{V} = c\left(V - \frac{V^3}{3} + R\right), \qquad \dot{R} = -\frac{1}{c}\left(V - a + bR\right).$$
Although consisting of only 2 equations and 3 parameters, this dynamical system exhibits a highly nonlinear likelihood surface [11], which is induced by the sharp changes in the properties of the limit cycle as the values of the parameters vary. Such a feature is common to many nonlinear systems and so this model provides an excellent test for our GP-based parameter inference method.
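A sketch of the data-generation step described next (our own; the time span, initial conditions and solver tolerances are our choices, not stated in the paper):

```python
# Sketch (ours): solve FitzHugh-Nagumo and add noise with standard deviation
# 0.1 times each state's std, as described in the text.
import numpy as np
from scipy.integrate import solve_ivp

def fitzhugh_nagumo(t, x, a=0.2, b=0.2, c=3.0):
    V, R = x
    return [c * (V - V ** 3 / 3 + R), -(V - a + b * R) / c]

rng = np.random.default_rng(1)
t = np.linspace(0.0, 20.0, 40)
sol = solve_ivp(fitzhugh_nagumo, (t[0], t[-1]), [-1.0, 1.0], t_eval=t, rtol=1e-8)
X = sol.y
Y = X + 0.1 * X.std(axis=1, keepdims=True) * rng.standard_normal(X.shape)
```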
Data is generated from the model, with parameters a = 0.2, b = 0.2, c = 3, at {40, 80, 120} time points with additive Gaussian noise, N(0, v) for v = 0.1 × σ_n, where σ_n is the standard deviation for the nth species. The parameters were then inferred from these data sets using the full Bayesian
sampling scheme and the approximate sampling scheme (Section 4), both employing population
MCMC. Additionally, we inferred the parameters using 2 alternative methods, the profiled estimation method of Ramsay et al. [11] and a Population MCMC based sampling scheme, in which the
ODEs were solved explicitly (Section 2), to complete the comparative study. All the algorithms
were coded in Matlab, and the population MCMC algorithms were run with 30 temperatures, and
used a suitably diffuse Γ(2, 1) prior distribution for all parameters, forming the base distribution for the sampler. Two of these population MCMC samplers were run in parallel, and the R̂ statistic [5] was used to monitor convergence of all chains at all temperatures. The required numerical approximations to the ODE were calculated using the Sundials ODE solver, which has been demonstrated to
be considerably (up to 100 times) faster than the standard ODE45/ODE15s solvers commonly used
in Matlab. In our experiments the chains generally converged after around 5000 iterations, and 2000
samples were then drawn to form the posterior distributions. Ramsay's method [11] was implemented using the Matlab code which accompanies their paper. The optimal algorithm settings were
used, tuned for the FitzHugh-Nagumo model (see [11] for details) which they also investigated. Each
experiment was repeated 100 times, and Table 1 shows summary statistics for each of the inferred
parameters. All of the three sampling methods based on population MCMC produced low variance
samples from posteriors positioned close to the true parameter values. Most noticeable from the
results in Figure 2 is the dramatic speed advantage the GP based methods have over the more direct
approach, whereby the differential equations are solved explicitly; the GP methods introduced in
this paper offer up to a 10-fold increase in speed, even for this relatively simple system of ODEs.
We found the performance of the profiled estimation method [11] to be very sensitive to the initial
parameter values. In practice parameter values are unknown, indeed little may be known even about
the range of possible values they may take. Thus it seems sensible to choose initial values from a
wide prior distribution so as to explore as many regions of parameter space as possible. Employing
FitzHugh-Nagumo ODE Model
Samples | Method            | a               | b               | c
40      | GP MAP            | 0.1930 ± 0.0242 | 0.2070 ± 0.0453 | 2.9737 ± 0.0802
40      | GP Fully Bayesian | 0.1983 ± 0.0231 | 0.2097 ± 0.0481 | 3.0133 ± 0.0632
40      | Explicit ODE      | 0.2015 ± 0.0107 | 0.2106 ± 0.0385 | 3.0153 ± 0.0247
80      | GP MAP            | 0.1950 ± 0.0206 | 0.2114 ± 0.0386 | 2.9801 ± 0.0689
80      | GP Fully Bayesian | 0.2068 ± 0.0194 | 0.1947 ± 0.0413 | 3.0139 ± 0.0585
80      | Explicit ODE      | 0.2029 ± 0.0121 | 0.1837 ± 0.0304 | 3.0099 ± 0.0158
120     | GP MAP            | 0.1918 ± 0.0145 | 0.2088 ± 0.0317 | 3.0137 ± 0.0489
120     | GP Fully Bayesian | 0.1971 ± 0.0162 | 0.2081 ± 0.0330 | 3.0069 ± 0.0593
120     | Explicit ODE      | 0.2071 ± 0.0112 | 0.2123 ± 0.0286 | 3.0112 ± 0.0139
Table 1: Summary statistics for each of the inferred parameters of the FitzHugh-Nagumo model. Each experiment was repeated 100 times and the mean parameter values are shown. We observe that all three population-based MCMC methods converge close to the true parameter values, a = 0.2, b = 0.2 and c = 3.
Figure 2: Summary statistics of the overall time taken for the algorithms to run to completion. Solid bars show
mean time for 100 runs; superimposed boxplots display median results with upper and lower quartiles.
profiled estimation using initial parameter values drawn from a wide gamma prior, however, yielded highly biased results, with the algorithm often converging to local maxima far from the true parameter values. The parameter estimates become more biased as the variance of the prior is increased, i.e. as the starting points move further from the true parameter values. E.g. consider parameter a; for 40 data points, for initial values a, b, c ∼ N({0.2, 0.2, 3}, 0.2), the range of estimated values for â was [Min, Median, Max] = [0.173, 0.203, 0.235]. For initial values a, b, c ∼ Γ(1, 0.5), â had a range [Min, Median, Max] = [−0.329, 0.205, 9.3 × 10⁹], and for a wider prior a, b, c ∼ Γ(2, 1), â had range [Min, Median, Max] = [−1.4 × 10¹⁰, 0.195, 2.2 × 10⁹]. Lack of robustness therefore seems to be a significant problem with this profiled estimation method. The speed of the profiled estimation method was also extremely variable, and this was observed to be very dependent on the initial parameter values, e.g. for initial values a, b, c ∼ N({0.2, 0.2, 3}, 0.2), the times recorded were [Min, Mean, Max] = [193, 308, 475]. Using a different prior for initial values such that a, b, c ∼ Γ(1, 0.5), the times were [Min, Mean, Max] = [200, 913, 3265], and similarly for a wider prior a, b, c ∼ Γ(2, 1), [Min, Mean, Max] = [132, 4171, 37411]. Experiments performed with noise v = {0.05, 0.2} × σ_n produced similar and consistent results, however they are omitted due to lack of space.
5.2 Example 2 - Nonlinear Delay Differential Equations
This example model describes the oscillatory behaviour of the concentration of mRNA and its corresponding protein level in a genetic regulatory network, introduced by Monk [10]. The translocation of mRNA from the nucleus to the cytosol is explicitly described by a delay differential equation:
$$\frac{d\mu}{dt} = \frac{1}{1 + (p(t-\tau)/p_0)^n} - \mu_m\,\mu, \qquad \frac{dp}{dt} = \mu - \mu_p\,p$$
where μ_m and μ_p are decay rates, p₀ is the repression threshold, n is a Hill coefficient and τ is the
time delay. The application of our method to DDEs is of particular interest since numerical solutions
to DDEs are generally much more computationally expensive to obtain than ODEs. Thus inference
of such models using MCMC methods and explicitly solving the system at each iteration becomes
less feasible as the complexity of the system of DDEs increases.
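To illustrate why explicit DDE solution is costly, here is a deliberately naive fixed-step Euler integrator for the Monk model (our own sketch; the constant history, step size and Hill coefficient n = 5 are placeholder assumptions):

```python
# Minimal fixed-step Euler integrator (ours) for the Monk DDE.
import numpy as np

def simulate_monk(mu_m=0.03, mu_p=0.03, p0=100.0, n=5, tau=25.0,
                  t_max=500.0, dt=0.01, mu_init=0.5, p_init=10.0):
    steps = int(t_max / dt)
    lag = int(tau / dt)
    mu = np.full(steps + 1, mu_init)
    p = np.full(steps + 1, p_init)
    for i in range(steps):
        p_delay = p[i - lag] if i >= lag else p_init   # constant history
        dmu = 1.0 / (1.0 + (p_delay / p0) ** n) - mu_m * mu[i]
        dp = mu[i] - mu_p * p[i]
        mu[i + 1] = mu[i] + dt * dmu
        p[i + 1] = p[i] + dt * dp
    return mu, p
```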
We consider data generated from the above model, with parameters μ_m = 0.03, μ_p = 0.03, p₀ = 100, τ = 25, at {40, 80, 120} time points with added random noise drawn from a Gaussian distribution, N(0, v) for v = 0.1 × σ_n, where σ_n is the standard deviation of the time-series data for the nth species. The parameters were then inferred from these data sets using our GP-based
population MCMC methods. Figure 3 shows a time comparison for 10 iterations of the GP sampling
algorithms and compares it to explicitly solving the DDEs using the Matlab solver DDE23 (which
is generally faster than the Sundials solver for DDEs). The GP methods are around 400 times faster
for 40 data points. Using the GP methods, samples from the full posterior can be obtained in less
than an hour. Solving the DDEs explicitly, the population MCMC algorithm would take in excess of
two weeks computation time, assuming the chains take a similar number of iterations to converge.
Monk DDE Model
Samples | Method        | p₀            | μ_m × 10⁻³ | μ_p × 10⁻³ | τ
40      | GP MAP        | 100.21 ± 2.08 | 29.7 ± 1.6 | 30.1 ± 0.3 | 25.65 ± 1.04
40      | GP Full Bayes | 99.75 ± 1.50  | 29.8 ± 1.2 | 30.1 ± 0.2 | 25.33 ± 0.85
80      | GP MAP        | 99.48 ± 1.29  | 29.5 ± 0.9 | 30.1 ± 0.1 | 24.81 ± 0.59
80      | GP Full Bayes | 100.26 ± 1.03 | 30.1 ± 0.6 | 30.1 ± 0.1 | 24.87 ± 0.44
120     | GP MAP        | 99.91 ± 1.02  | 30.0 ± 0.5 | 30.0 ± 0.1 | 24.97 ± 0.38
120     | GP Full Bayes | 100.23 ± 0.92 | 30.0 ± 0.4 | 30.0 ± 0.1 | 25.03 ± 0.25
Table 2: Summary statistics for each of the inferred parameters of the Monk model. Each experiment was repeated 100 times, and we observe that both GP population-based MCMC methods converge close to the true parameter values, p₀ = 100, μ_m = 30 × 10⁻³ and μ_p = 30 × 10⁻³. The time-delay parameter, τ = 25, is also successfully inferred.
Figure 3: Summary statistics of the time taken for the algorithms to complete 10 iterations using the DDE model.
5.3 Example 3 - The p53 Gene Regulatory Network with Unobserved Species
Our third example considers a linear and a nonlinear model describing the regulation of 5 target
genes by the tumour repressor transcription factor protein p53. We consider the following differential equations, which relate the expression level xj(t) of the jth gene at time t to the concentration of
the transcription factor protein f(t) which regulates it,

$$\dot{x}_j = B_j + S_j\, g(f(t)) - D_j x_j(t),$$

where Bj is the basal rate of gene j, Sj is the sensitivity of gene j to the transcription factor and Dj is the decay
rate of the mRNA. Letting g(f(t)) = f(t) gives us the linear model originally investigated in [1],
and letting g(f(t)) = exp(f(t)) gives us the nonlinear model investigated in [4]. The transcription
factor f(t) is unobserved and must be inferred along with the other structural parameters Bj, Sj
and Dj using the sampling scheme detailed in Section 4.1. In this experiment, the prior on the unobserved species was f(t) ∼ Γ(2, 1) with a log-Normal proposal.

Figure 4: The predicted output of the p53 gene using data from Barenco et al. [1] and the accelerated GP
inference method for (a) the linear model and (b) the nonlinear response model. Note that the asymmetric error
bars in (b) are due to exp(y) being plotted, as opposed to just y in (a). Our results are compared to the results
obtained by Barenco et al. [1] (shown as crosses) and are comparable to those obtained by Lawrence et al. [4].

We test our method using the
leukemia data set studied in [1], which comprises 3 measurements at each of 7 time points for each
of the 5 genes. Figure 4 shows the inferred missing species and the results are in good accordance
with recent biological studies. For this example, our GP sampling algorithms ran to completion in
under an hour on a 2.2GHz Centrino laptop, with no difference in speed between using the linear
and nonlinear models; indeed the equations describing this biological system could be made more
complex with little additional computational cost.
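The gene-regulation equations above are simple enough to write down directly; the fragment below (ours, with placeholder parameter values) shows the right-hand side for both response models.

```python
# Right-hand side x'_j = B_j + S_j g(f(t)) - D_j x_j(t) for the two models.
import numpy as np

def gene_rhs(x, t, B, S, D, f, nonlinear=False):
    """Linear model: g(f) = f; nonlinear model: g(f) = exp(f)."""
    g = np.exp(f(t)) if nonlinear else f(t)
    return B + S * g - D * x

# Illustration with 5 target genes and an arbitrary latent profile f:
B, S, D = np.ones(5), np.linspace(0.5, 1.5, 5), np.full(5, 0.3)
f = lambda t: np.sin(0.1 * t)
print(gene_rhs(np.zeros(5), 0.0, B, S, D, f, nonlinear=True))
```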
6 Conclusions
Explicit solution of differential equations is a major bottleneck for the application of inferential
methodology in a number of application areas, e.g. systems biology, nonlinear dynamic systems.
We have addressed this problem and placed it within a Bayesian framework which tackles the main
shortcomings of previous solutions to the problem of system identification for nonlinear differential
equations. Our methodology allows the possibility of model comparison via the use of Bayes factors,
which may be straightforwardly calculated from the samples obtained from the population MCMC
algorithm. Possible extensions to this method include more efficient sampling exploiting control
variable methods [17], embedding characteristics of a dynamical system in the design of covariance
functions and application of our method to models involving partial differential equations.
Acknowledgments
Ben Calderhead is supported by Microsoft Research through its European PhD Scholarship Programme. Mark Girolami is supported by an EPSRC Advanced Research Fellowship EP/EO52029
and BBSRC Research Grant BB/G006997/1.
References
[1] Barenco, M., Tomescu, D., Brewer, D., Callard, D., Stark, J. and Hubank, M. (2006) Ranked prediction of
p53 targets using hidden variable dynamic modeling, Genome Biology, 7 (3):R25.
[2] Doucet, A., de Freitas, N. and Gordon, N. (2001) Sequential Monte Carlo Methods in Practice, Springer.
[3] Friel, N. and Pettitt, A. N. (2008) Marginal Likelihood Estimation via Power Posteriors. Journal of the
Royal Statistical Society: Series B, 70 (3), 589-607.
[4] Gao, P., Honkela, A., Rattray, M. and Lawrence, N.D. (2008) Gaussian Process Modelling of Latent
Chemical Species: Applications to Inferring Transcription Factor Activities, Bioinformatics, 24, i70-i75.
[5] Gelman, A., Carlin, J.B., Stern, H.S. and Rubin, D.B. (2004) Bayesian Data Analysis, Chapman & Hall.
[6] Graepel, T., (2003) Solving noisy linear operator equations by Gaussian processes: application to ordinary
and partial differential equations, Proc. ICML 2003.
[7] Mayraz, G. and Hinton, G. (2001) Recognizing Hand-Written Digits Using Hierarchical Products of
Experts, Proc. NIPS 13.
[8] Jasra, A., Stephens, D.A. and Holmes, C.C., (2007) On population-based simulation for static inference,
Statistics and Computing, 17, 263-279.
[9] Lawrence, N.D., Seeger, M. and Herbrich, R. (2003) Fast sparse Gaussian process methods: the informative
vector machine, Proc. NIPS 15.
[10] Monk, N. (2003) Oscillatory Expression of Hes1, p53, and NF-kB Driven by Transcriptional Time Delays.
Current Biology, 13 (16), 1409-1413.
[11] Ramsay, J., Hooker, G., Campbell, D. and Cao, J. (2007) Parameter Estimation for Differential Equations:
A Generalized Smoothing Approach. Journal of the Royal Statistical Society: Series B, 69 (5), 741-796.
[12] Rasmussen, C, E., (2003) Gaussian processes to speed up hybrid Monte Carlo for expensive Bayesian
integrals, Bayesian Statistics, 7, 651-659.
[13] Rasmussen, C.E. and Williams, C.K.I. (2006) Gaussian Processes for Machine Learning, The MIT Press.
[14] Snelson, E., Rasmussen, C.E. and Ghahramani, Z. (2004), Warped Gaussian processes, Proc. NIPS 16.
[15] Solak, E., Murray-Smith, R., Leithead, W.E., Leith, D.J. and Rasmussen, C.E. (2003) Derivative
observations in Gaussian Process models of dynamic systems, Proc. NIPS 15.
[16] Tarantola, A. (2005) Inverse Problem Theory and Methods for Model Parameter Estimation, SIAM.
[17] Titsias, M. and Lawrence, N. (2008) Efficient Sampling for Gaussian Process Inference using Control
Variables, Proc. NIPS 22.
[18] Varah, J.M. (1982) A spline least squares method for numerical parameter estimation in differential
equations. SIAM J. Scient. Comput., 3, 28-46.
[19] Vyshemirsky, V. and Girolami, M. (2008) Bayesian ranking of biochemical system models,
Bioinformatics, 24, 833-839.
[20] Williams, C.K.I., Agakov, F.V., Felderof, S.N. (2002), Products of Gaussians, Proc. NIPS 14.
2,754 | 3,498 | Predicting the Geometry of Metal Binding Sites from
Protein Sequence
Paolo Frasconi
Università degli Studi di Firenze
Via di S. Marta 3, 50139 Firenze, Italy
[email protected]
Andrea Passerini
Università degli Studi di Trento
Via Sommarive, 14, 38100 Povo, Italy
[email protected]
Abstract
Metal binding is important for the structural and functional characterization of
proteins. Previous prediction efforts have only focused on bonding state, i.e. deciding which protein residues act as metal ligands in some binding site. Identifying the geometry of metal-binding sites, i.e. deciding which residues are jointly
involved in the coordination of a metal ion is a new prediction problem that has
never been attempted before from protein sequence alone. In this paper, we formulate it in the framework of learning with structured outputs. Our solution relies on
the fact that, from a graph theoretical perspective, metal binding has the algebraic
properties of a matroid, enabling the application of greedy algorithms for learning
structured outputs. On a data set of 199 non-redundant metalloproteins, we obtained precision/recall levels of 75%/46% correct ligand-ion assignments, which
improve to 88%/88% in the setting where the metal binding state is known.
1 Introduction
Metal ions play important roles in protein function and structure and metalloproteins are involved
in a number of diseases for which medicine is still seeking effective treatment, including cancer,
Parkinson, dementia, and AIDS [10]. A metal binding site typically consists of an ion bound to one
or more protein residues (called ligands). In some cases, the ion is embedded in a prosthetic group
(e.g. in the case of heme). Among the 20 amino acids, the four most common ligands are cysteine
(C), histidine (H), aspartic acid (D), and glutamic acid (E). Highly conserved residues are more likely
to be involved in the coordination of a metal ion, although in the case of cysteines, conservation is
also often associated with the presence of a disulfide bridge (a covalent bond between the sulfur
atoms of two cysteines) [8]. Predicting metal binding from sequence alone can be very useful in
genomic annotation for characterizing the function and the structure of non determined proteins,
but also during the experimental determination of new metalloproteins. Current high-throughput
experimental technologies only annotate whole proteins as metal binding [13], but cannot determine
the involved ligands. Most of the research for understanding metal binding has focused on finding
sequence patterns that characterize binding sites [8]. Machine learning techniques have been applied
only more recently.
The easiest task to formulate in this context is bonding state prediction, which is a binary classification problem: either a residue is involved in the coordination of a metal ion or is free (in the case of
cysteines, a third class can also be introduced for disulfide bridges). This prediction task has been
addressed in a number of recent works in the case of cysteines only [6], in the case of transition
metals (for C and H residues) [12] and for in the special but important case of zinc proteins (for
C,H,D, and E residues) [11, 14]. Hovever, classification of individual residues does not provide
sufficient information about a binding site. Many proteins bind to several ions in their holo form
and a complete characterization requires us to identify the site geometry, i.e. the tuple of residues
coordinating each individual ion. This problem has been only studied assuming knowledge of the
protein 3D structure (e.g. [5, 1]), limiting its applicability to structurally determined proteins or their
close homologs, but not from sequence alone. Abstracting away the biology, this is a structured
output prediction problem where the input consists of a string of protein residues and the output is a
labeling of each residue with the corresponding ion identifier (specific details are given in the next
section).
The supervised learning problem with structured outputs has recently received a considerable
amount of attention (see [2] for an overview). The common idea behind most methods consists
of learning a function F (x, y) on input-output pairs (x, y) and, during prediction, searching the
argument y that maximises F when paired with the query input x. The main difficulty is that the
search space on which y can take values usually has exponential size (in the length of the query).
Different structured output learners deal with this issue by exploiting specific domain properties
for the application at hand. Some researchers have proposed probabilistic modeling and efficient
dynamic programming algorithms (e.g. [16]). Others have proposed large margin approaches combined with clever algorithmic ideas for reducing the number of constraints (e.g. [15] in the case of
graph matching). Another solution is to construct the structured output in a suitable Hilbert space of
features and seek the corresponding pre-image for obtaining the desired discrete structure [17]. Yet
another is to rely on a state-space search procedure and learn from examples good moves leading to
the desired goal [4].
In this paper we develop a large margin solution that does not require a generative model for producing outputs. We borrow ideas from [15] and [4] but specifically take advantage of the fact that, from
a graph theoretical perspective, the metal binding problem has the algebraic structure of a matroid,
enabling the application of greedy algorithms.
2 A formalization of the metal binding sites prediction problem
A protein sequence s is a string in the alphabet of the 20 amino acids. Since only some of the 20
amino acids that exist in nature can act as ligands, we begin by extracting from s the subsequence
x obtained by deleting characters corresponding to amino acids that never (or very rarely) act as
ligands. By using T = {C, H, D, E} as the set of candidate ligands, we cover 92% of the ligands of structurally known proteins.
just considering cysteines and histidines, i.e. T = {C, H}. We also introduce the set I of symbols
associated with metal ion identifiers. I includes the special nil symbol. The goal is to predict the coordination relation between amino acids in x and metal ion identifiers in I. Amino acids that are
not metal-bound are linked to nil. Ideally, it would also be interesting to predict the chemical element
not metal-bound are linked to nil. Ideally, it would be also interesting to predict the chemical element
of the bound metal ion. However, previous studies suggest that distinguishing the chemical element
from sequence alone is a difficult task [12]. Hence, ion identifiers will have no chemical element attribute attached. In practice, we fix a maximum number m of possible ions (m = 4 in the subsequent
experiments, covering 93% of structurally known proteins) and let I = {nil , ?1 , . . . , ?m }.
The number of admissible binding geometries for a given protein chain having n candidate ligands
is the multinomial coefficient $\frac{n!}{k_1!\, k_2! \cdots k_m!\, (n - k_1 - \cdots - k_m)!}$, m being the number of ions and ki the
number of ligands for ion λi. In practice, each ion is coordinated by a variable number of ligands
(typically ranging from 1 to 4, but occasionally more), and each protein chain binds a variable
number of ions (typically ranging from 1 to 4). The number of candidate ligands n grows linearly
with the protein chain. For example, in the case of PDB chain 1H0Hb (see Figure 1), there are
n = 52 candidate ligands and m = 3 ions coordinated by 4 residues each, yielding a set of 7 × 10^15
admissible conformations.
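As a quick check of this count, the multinomial coefficient can be evaluated directly; the snippet below (ours) reproduces the order of magnitude quoted for 1H0Hb.

```python
# Count of admissible binding geometries: n!/(k1! k2! ... km! (n-k1-...-km)!).
from math import factorial

n, ks = 52, [4, 4, 4]          # 52 candidate ligands, 3 ions with 4 ligands each
count = factorial(n)
for k in ks + [n - sum(ks)]:
    count //= factorial(k)
print(count)                   # about 7e15, matching the figure in the text
```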
It is convenient to formulate the problem in a graph theoretical setting. In this view, the string
x should be regarded as a set of vertices labeled with the corresponding amino acid in T . The
semantics of x will be clear from the context, and for simplicity we will avoid additional notation.
Definition 2.1 (MBG property). Let x and I be two sets of vertices (associated with candidate
ligands and metal ion identifiers, respectively). We say that a bipartite edge set y ⊆ x × I satisfies
the metal binding geometry (MBG) property if the degree of each vertex of x in the graph (x ∪ I, y)
is at most 1.
For a given x, let Yx denote the set of y that satisfy the MBG property. Let Fx : Yx → R+ be a
function that assigns a positive score to each bipartite edge set in Yx. The MBG problem consists of
finding arg max_{y∈Yx} Fx(y).
[Figure 1 here: the candidate-ligand string of 1H0Hb, a sequence over {C, H, D, E}, with each residue linked by an edge either to one of the ion identifiers λ1, λ2, λ3 or to the nil symbol.]
Figure 1: Metal binding structure of PDB entry 1H0Hb. For readability, only a few connections
from free residues to the nil symbol are shown.
Note that the MBG problem is not a matching problem (such as those studied in [15]) since more
than one edge can be incident to vertices belonging to I. As discussed above, we are not interested
in distinguishing metal ions based on the element type. Hence, any two label-isomorphic bipartite
graphs (obtained by exchanging two non-nil metal ion vertices) should be regarded as equivalent.
Outputs y should therefore be regarded as equivalence classes of structures (in the 1H0Hb example
above, there are 7 × 10^15/3! equivalence classes, each corresponding to a permutation of λ1, λ2, λ3).
For simplicity, we will slightly abuse notation and avoid this distinction in the following.
We can also look at the MBG problem by analogy with language parsing using formal grammars. In this view, the binding geometry consists of a very shallow "parse tree" for string x, as
exemplified in Figure 1. A difficulty that is immediately apparent is that the underlying grammar
needs to be context sensitive in order to capture the crossing-dependencies between bound amino
acids. In real data, when representing metal bonding state in this way, crossing edges are very
common. This view highlights a difficulty that would be encountered by attempting to solve the
structured output problem with a generative model as in [16].
3 A greedy algorithm for constructing structured outputs
The core idea of the solution used in this paper is to avoid a generative model as a component of
the structured output learner and cast the construction of an output structure into a maximum weight
problem that can be solved by a greedy algorithm.
Definition 3.1 (Matroid). A matroid (see e.g. [9]) is an algebraic structure M = (S, Y) where S is
a finite set and Y a family of subsets of S such that: i) ∅ ∈ Y; ii) all proper subsets of a set y in Y
are in Y; iii) if y and y′ are in Y and |y| < |y′| then there exists e ∈ y′ \ y such that y ∪ {e} ∈ Y.
Elements of Y are called independent sets. If y is an independent set, then ext(y) = {e ∈ S :
y ∪ {e} ∈ Y} is called the extension set of y. A maximal independent set (one having an empty extension set) is called a base. In a weighted matroid, a local weight function v : S → R+ assigns
a positive number v(e) to each element e ∈ S. The weight function allows us to compare two
structures in the following sense. A set y = {e1, . . . , en} is lexicographically greater than a set y′
if its monotonically decreasing sequence of weights (v(e1), . . . , v(en)) is lexicographically greater
than the corresponding sequence for y′. The following classic result (see e.g. [9]) is the underlying
support for many greedy algorithms:
Theorem 3.2 (Rado 1957; Edmonds 1971). For any nonnegative weighting over S, a lexicographically maximum base in Y maximizes the global objective function $F(y) = \sum_{e \in y} v(e)$.
Weighted matroids can be seen as a kind of discrete counterpart of concave functions: thanks to the
above theorem, if M is a weighted matroid, then the following greedy algorithm is guaranteed to
find the optimal structure, i.e. arg max_{y∈Y} F(y):
GREEDYCONSTRUCT(M, F)
    y ← ∅
    while ext(y) ≠ ∅
        do y ← y ∪ {arg max_{e ∈ ext(y)} F(y ∪ {e})}
    return y
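As an illustration, here is a minimal Python rendering (ours) of GREEDYCONSTRUCT; it only assumes access to an extension-set oracle ext and an objective F, both of which must handle partial structures, including the empty one.

```python
# Grow an independent set by always adding the best-scoring feasible extension.
def greedy_construct(ext, F):
    """ext(y): extension set of a partial structure y; F(y): its score."""
    y = frozenset()
    while True:
        candidates = ext(y)
        if not candidates:       # empty extension set: y is a base
            return y
        best = max(candidates, key=lambda e: F(y | {e}))
        y = y | {best}
```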
This theory shows that if the structured output space being searched satisfies the property of a matroid, learning structured outputs may be cast into the problem of learning the objective function
F for the greedy algorithm. When following this strategy, however, we may perceive the additive
form of F as a strong limitation as it would prescribe to predict v(e) independently for each part
e ∈ S, while the whole point of structured output learning is to end up with a collective decision
about which parts should be present in the output structure. But interestingly, the additive form of
the objective function as in Theorem 3.2 is not a necessary condition for the greedy optimality of
matroids. In fact, Helman et al. [7] show that the classic theory can be generalized to so-called
consistent objective functions, i.e. functions that satisfy the following additional constraints:
$$F(y \cup \{e\}) - F(y \cup \{e'\}) \;\geq\; F(y' \cup \{e\}) - F(y' \cup \{e'\}) \qquad (1)$$
for any y ⊆ y′ ⊆ S and e, e′ ∈ S \ y′.
Theorem 3.3 (Helman et al. 1993). If F is a consistent objective function then, for each matroid on
S, all greedy bases are optimal.
Note that the sufficient condition of Theorem 3.3 is also necessary for a slightly more general class
of algebraic structures that include matroids, called matroid embeddings [7]. We now show that the
MBG problem is a suitable candidate for a greedy algorithmic solution.
Theorem 3.4. If each y ∈ Yx satisfies the MBG property, then Mx = (Sx, Yx) is a matroid.
Proof. Suppose y′ ∈ Yx and y ⊆ y′. Removing an edge from y′ cannot increase the degree of any
vertex in the bipartite graph, so y ∈ Yx. Also, suppose y ∈ Yx, y′ ∈ Yx, and |y| < |y′|. Then there
must be at least one vertex t in x having no incident edges in y and such that (λ, t) ∈ y′ for some
λ ∈ I. Therefore y ∪ {(λ, t)} also satisfies the MBG property and belongs to Yx, showing that Mx
is a matroid.
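For the MBG matroid, the extension-set oracle of the greedy algorithm is particularly simple, as the proof suggests: an edge can be added exactly when its residue endpoint is still unmatched (degree at most 1 on the x side). A small sketch (ours; the names are illustrative):

```python
# Extension set of a partial MBG structure y over residues x and identifiers I.
def mbg_ext(y, residues, ions):
    used = {t for (_, t) in y}           # residues already assigned
    return {(ion, t) for t in residues if t not in used for ion in ions}

# Combined with greedy_construct above, the output structure associated
# with x (formalized in Eq. (2) below) is obtained as
#   greedy_construct(lambda y: mbg_ext(y, residues, ions), F)
# for residues = range(n) and ions = ['nil', 'l1', ..., 'lm'].
```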
We can finally formulate the greedy algorithm for constructing the structured output in the MBG
problem. Given the input x, we begin by forming the associated MBG matroid Mx and a corresponding objective function Fx : Yx → R+ (in the next section we will show how to learn the
objective function from data). The output structure associated with x is then computed as
f(x) = arg max_{y∈Yx} Fx(y) = GREEDYCONSTRUCT(Mx, Fx).    (2)
The following result immediately follows from Definition 2.1 and Theorem 3.3:
Corollary 3.5. Let (x, y) be an MBG instance. If Fx is a consistent objective function and
Fx(y′ ∪ {e}) > Fx(y′ ∪ {e′}) for each y′ ⊆ y, e ∈ ext(y′) ∩ y and e′ ∈ ext(y′) \ y, then
GREEDYCONSTRUCT((Sx, Yx), Fx) returns y.
4 Learning the greedy objective function
A data set for the MBG problem consists of pairs D = {(xi, yi)}, where xi is a string in T* and yi
a bipartite graph. Corollary 3.5 directly suggests the kind of constraints that the objective function
needs to satisfy in order to minimize the empirical error of the structured-output problem. For any
input string x and (partial) output structure y ∈ Y, let Fx(y) = w^T φx(y), w being a weight vector
and φx(y) a feature vector for (x, y). The corresponding max-margin formulation is

$$\min\; \tfrac{1}{2}\|w\|^2 \qquad (3)$$

subject to:

$$w^T\big(\phi_{x_i}(y' \cup \{e\}) - \phi_{x_i}(y' \cup \{e'\})\big) \geq 1 \qquad (4)$$
$$w^T\big(\phi_{x_i}(y'' \cup \{e\}) - \phi_{x_i}(y'' \cup \{e'\})\big) \geq 1 \qquad (5)$$

∀i = 1, . . . , |D|, ∀y′ ⊆ yi, ∀e ∈ ext(y′) ∩ yi, ∀e′ ∈ ext(y′) \ yi, ∀y″ : y′ ⊆ y″ ⊆ Sx.
Intuitively, the first set of constraints (Eq. 4) ensures that "correct" extensions (i.e. edges that actually
belong to the target output structure yi) receive a higher weight than "wrong" extensions (i.e. edges
that do not belong to the target output structure). The purpose of the second set of constraints (Eq. 5)
is to force the learned objective function to obey the consistency property of Eq. (1), which in turn
ensures the correctness of the greedy algorithm thanks to Theorem 3.3. As usual, a regularized
variant with soft constraints can be formulated by introducing positive slack variables and adding
their 1-norm times a regularization coefficient to Eq. (3). The number of resulting constraints in the
above formulation grows exponentially with the number of edges in each example, hence naively
solving problem (3-5) is practically infeasible. However, we can seek an approximate solution by
leveraging the efficiency of the greedy algorithm also during learning. For this purpose, we will use
an online active learner that samples constraints chosen by the execution of the greedy construction
algorithm.
For each epoch, the algorithm maintains the current highest scoring partial correct output y′_i ⊆ yi
for each example, initialized with the empty MBG structure, where the score is computed by the
current objective function F. While there are "unprocessed" examples in D, the algorithm picks
a random one and its current best MBG structure y′. If there are no more correct extensions of
y′, then y′ = yi and the example is removed from D. Otherwise, the algorithm evaluates each
correct extension of y′, updates the current best MBG structure, and invokes the online learner by
calling FORCE-CONSTRAINT, which adds a constraint derived from a random incorrect extension
(see Eq. 4). It also performs a predefined number L of lookaheads by picking a random superset
y″ of y′ which is included in the target yi, evaluating it and updating the best MBG structure if needed,
and adding a corresponding consistency constraint (see Eq. 5). The epoch terminates when all
examples are processed. In practice, we found that a single epoch over the data set is sufficient for
convergence. Pseudocode for one epoch is given below.
GREEDYEPOCH(D, L)
    for i ← 1, . . . , |D|
        do y′_i ← ∅
    while D ≠ ∅
        do pick a random example (xi, yi) ∈ D
           y′ ← y′_i, y′_i ← ∅
           if ext(y′) ∩ yi = ∅
               then D ← D \ (xi, yi)
               else for each e ∈ ext(y′) ∩ yi
                        do pick randomly e′ ∈ ext(y′) \ yi
                           if F(y′_i) < F(y′ ∪ {e}) then y′_i ← y′ ∪ {e}
                           FORCE-CONSTRAINT(Fxi(y′ ∪ {e}) − Fxi(y′ ∪ {e′}) ≥ 1)
                           for l ← 1, . . . , L
                               do randomly choose y″ : y′ ⊆ y″ ⊆ yi and e, e′ ∈ Sx \ y″
                                  FORCE-CONSTRAINT(Fxi(y″ ∪ {e}) − Fxi(y″ ∪ {e′}) ≥ 1)
                                  if F(y′_i) < F(y″ ∪ {e}) then y′_i ← y″ ∪ {e}
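The following Python skeleton (ours) mirrors the control flow above. The callbacks are hypothetical stand-ins: force_constraint(better, worse) hands the margin constraint F(better) − F(worse) ≥ 1 to the online learner, and sample_superset implements the lookahead sampling of Eq. (5); F must accept partial structures, including the empty one. It is a sketch of the sampling logic, not the implementation used in the experiments.

```python
import random

def greedy_epoch(D, L, ext, F, force_constraint, sample_superset):
    best = {i: frozenset() for i in range(len(D))}   # y_i' for each example
    live = set(range(len(D)))
    while live:
        i = random.choice(tuple(live))
        _, yi = D[i]                                 # yi: target edge set
        y = best[i]
        correct = ext(y) & yi                        # correct extensions of y
        if not correct:
            live.discard(i)                          # y equals yi: done
            continue
        best[i] = frozenset()
        for e in correct:
            wrong = tuple(ext(y) - yi)
            if wrong:                                # constraint of Eq. (4)
                force_constraint(y | {e}, y | {random.choice(wrong)})
            if F(best[i]) < F(y | {e}):
                best[i] = y | {e}
            for _ in range(L):                       # lookaheads, Eq. (5)
                y2, e2, e2_bad = sample_superset(y, yi)
                force_constraint(y2 | {e2}, y2 | {e2_bad})
                if F(best[i]) < F(y2 | {e2}):
                    best[i] = y2 | {e2}
```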
There are several suitable online learners implementing the interface required by the above procedure. Possible candidates include perceptron-like or ALMA-like update rules like those proposed
in [4] for structured output learning (in our case the update would depend on the difference between
feature vectors of correctly and incorrectly extended structures in the inner loop of GREEDYEPOCH).
An alternative online learner is the LaSVM algorithm [3] equipped with obvious modifications for
handling constraints between pairs of examples. LaSVM is an SMO-like solver for the dual version
of problem (3-5) that optimizes one or two coordinates at a time, alternating process (on newly
acquired examples, generated in our case by the FORCE-CONSTRAINT procedure) and reprocess
(on previously seen support vectors or patterns) steps. The ability to work efficiently in the dual
is the most appealing feature of LaSVM in the present context and advantageous with respect to
perceptron-like approaches. Our unsuccessful preliminary experiments with simple feature vectors
confirmed the necessity of flexible design choices for developing rich feature spaces. Kernel methods are clearly more attractive in this case. We will therefore rewrite the objective function F using
a kernel k(z, z′) = ⟨φx(y), φx′(y′)⟩ between two structured instances z = (x, y) and z′ = (x′, y′),
so that Fx(y) = F(z) = Σi αi k(z, zi).
Let λi(z) denote the set of edges incident on ion λi ∈ I \ nil and n(z) the number of non-nil ion
identifiers that have at least one incident edge. Below is a top-down definition of the kernel used in
the subsequent experiments.

$$k(z, z') = k_{glob}(z, z') \sum_{i=1}^{n(z)} \sum_{j=1}^{n(z')} \frac{k_{mbs}(\lambda_i(z), \lambda_j(z'))}{n(z)\, n(z')} \qquad (6)$$

$$k_{glob}(z, z') = \delta(n(z), n(z'))\, \frac{2 \min\{|x|, |x'|\}}{|x| + |x'|} \qquad (7)$$

$$k_{mbs}(\lambda_i(z), \lambda_j(z')) = \delta(|\lambda_i(z)|, |\lambda_j(z')|) \sum_{\ell=1}^{|\lambda_i(z)|} k_{res}(x_i(\ell), x'_j(\ell)) \qquad (8)$$

where δ(a, b) = 1 iff a = b, xi(ℓ) denotes the ℓ-th residue in λi(z), taken in increasing order
of sequential position in the protein, and kres(xi(ℓ), x′j(ℓ)) is simply the dot product between the
feature vectors describing residues xi(ℓ) and x′j(ℓ).
kmbs measures the similarity between individual sites (two sites are orthogonal if they have a different
number of ligands, a choice that is supported by protein functional considerations). kglob ensures
that two structures are orthogonal unless they have the same number of sites, and down-weights their
similarity when their numbers of candidate ligands differ.
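Under the (assumed) representation of a structure z as a pair (residue feature matrix, list of sites), the kernel of Eqs. (6)-(8) can be computed directly; the sketch below (ours) is a literal transcription.

```python
# Site-matching kernel of Eqs. (6)-(8). Each site is a list of residue
# indices in increasing sequence order; kres is the plain dot product.
import numpy as np

def k_mbs(site_a, site_b, Xa, Xb):
    if len(site_a) != len(site_b):        # delta(|site_a|, |site_b|), Eq. (8)
        return 0.0
    return sum(float(np.dot(Xa[i], Xb[j])) for i, j in zip(site_a, site_b))

def k_struct(za, zb):
    (Xa, sites_a), (Xb, sites_b) = za, zb
    if len(sites_a) != len(sites_b):      # delta(n(z), n(z')), Eq. (7)
        return 0.0
    k_glob = 2.0 * min(len(Xa), len(Xb)) / (len(Xa) + len(Xb))
    total = sum(k_mbs(sa, sb, Xa, Xb)     # double sum of Eq. (6)
                for sa in sites_a for sb in sites_b)
    return k_glob * total / (len(sites_a) * len(sites_b))
```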
5 Experiments
We tested the method on a dataset of non-redundant proteins previously used in [12]
for metal bonding state prediction (http://www.dsi.unifi.it/~passe/datasets/mbs06/dataset.tgz).
Proteins that do not bind metal ions (used in [12] as negative examples)
are of no interest in the present case and were removed, resulting in a set of 199 metalloproteins
binding transition metals. Following [12], we used T = {C, H} as the set of candidate ligands.
Protein sequences were enriched with evolutionary information derived from multiple alignments.
Profiles were obtained by running one iteration of PSI-BLAST on the non-redundant (nr) NCBI
dataset, with an e-value cutoff of 0.005. Each candidate ligand xi(ℓ) was described by a feature
vector of 221 real numbers. The first 220 attributes consist of multiple alignment profiles in the
window of 11 amino acids centered around xi(ℓ) (the window was formed from the original protein
sequence, not the substring xi of candidate ligands). The last attribute is the normalized sequence
separation between xi(ℓ) and xi(ℓ − 1), using the N-terminus of the chain for ℓ = 1.
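A sketch (ours) of how such a 221-dimensional descriptor could be assembled; the zero-padding at chain ends and the normalization of the separation by chain length are our assumptions, since the text does not spell them out.

```python
# 11-residue window of 20-dim profile columns (220 values) + 1 separation.
import numpy as np

def ligand_features(profile, pos, prev_pos):
    """profile: (chain_length, 20) PSI-BLAST profile; pos: residue index."""
    length = profile.shape[0]
    window = [profile[k] if 0 <= k < length else np.zeros(20)
              for k in range(pos - 5, pos + 6)]        # 11 positions
    sep = (pos - prev_pos) / float(length)             # normalized separation
    return np.concatenate(window + [np.array([sep])])  # 220 + 1 = 221 values
```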
A modified version of LaSVM (http://leon.bottou.org/projects/lasvm) was run
with constraints produced by the GREEDYEPOCH procedure of Section 4, using a fixed regularization parameter C = 1, and L ∈ {0, 5, 10}. All experiments were repeated 30 times, randomly
splitting the data into a training and test set in a ratio of 80/20. Two prediction tasks were considered, from unknown and from known metal bonding state (a similar distinction is also customary
for the related task of disulfide bonds prediction, see e.g. [15]). In the latter case, the input x only
contains actual ligands and no nil symbol is needed.
Several measures of performance are reported in Table 1. PE and RE are the precision and recall
for the correct assignment between a residue and the metal ion identifier (ratio of correctly predicted coordinations to the number of predicted/actual coordinations); correct links to the nil ion
(that would optimistically bias the results) are ignored in these measures. AG is the geometry accuracy, i.e. the fraction of chains that are entirely correctly predicted. PS and RS are the metal
binding site precision and recall, respectively (ratio of correctly predicted sites to the number of predicted/actual sites). Finally, PB and RB are precision and recall for metal bonding state prediction
(as in binary classification, "bonded" being the positive class). Table 2 reports the breakdown of
these performance measures for proteins binding different numbers of metal ions (for L = 10).
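For concreteness, here is a minimal sketch (ours) of the edge-level measures PE and RE. Note that since label-isomorphic outputs are equivalent (Section 2), a faithful scorer would first match predicted ion labels to true ones; this simplified version compares fixed labels.

```python
# Edge-level precision/recall over ligand-ion assignments, ignoring nil links.
def edge_precision_recall(pred, true, nil='nil'):
    pred = {(ion, t) for (ion, t) in pred if ion != nil}
    true = {(ion, t) for (ion, t) in true if ion != nil}
    correct = len(pred & true)
    PE = correct / len(pred) if pred else 1.0
    RE = correct / len(true) if true else 1.0
    return PE, RE
```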
Results show that enforcing consistency constraints tends to improve recall, especially for the bonding state prediction, i.e. helps the predictor to assign a residue to a metal ion identifier rather than to
nil. However, it only marginally improves precision and recall at the site level. Correct prediction of
whole sites is very challenging and correct prediction of whole chains even more difficult (given the
enormous number of alternatives to be compared). Hence, it is not surprising that some of these performance indicators are low. By comparison, absolute figures are not high even for the much easier
task of disulfide bonds prediction [15]. Correct edge assignment, however, appears satisfactory and
reasonably good when the bonding state is given. The complete experimental environment can be
obtained from http://www.disi.unitn.it/~passerini/nips08.tgz.
Table 1: Experimental results.

ab-initio
L     PE      RE      AG      PS      RS      PB      RB
0     75±5    46±5    12±4    18±6    14±6    81±5    51±6
5     66±5    52±4    14±6    20±7    17±6    79±4    64±6
10    63±5    52±5    13±6    20±7    15±6    78±4    68±5

metal bonding state given
L     PE      RE      AG      PS      RS
0     87±2    87±2    64±6    65±6    65±6
5     87±3    87±3    65±7    66±7    66±7
10    88±3    88±3    67±7    67±7    67±7
Table 2: Breakdown by number of sites per chain. BS = (K)nown/(U)nknown bonding state.

# sites = 1 (132 chains)
BS    PE      RE      PS      RS      AG
U     62±6    57±6    25±9    21±8    19±8
K     97±2    97±2    92±6    92±6    92±6

# sites = 2 (48 chains)
BS    PE      RE      PS      RS      AG
U     67±9    46±8    14±12   6±8     3±6
K     73±5    73±5    21±10   21±10   20±11

# sites = 3 (11 chains)
BS    PE      RE      PS      RS      AG
U     65±16   33±13   1±5     1±5     0
K     61±12   61±12   8±11    9±13    0

# sites = 4 (8 chains)
BS    PE      RE      PS      RS      AG
U     44±31   24±20   3±11    2±6     0
K     37±25   37±25   1±2     1±2     0
6 Related works
As mentioned in the Introduction, methods for structured outputs usually learn a function F on input-output pairs (x, y) and construct the predicted output as f(x) = arg max_y F(x, y). Our approach
follows the same general principle.
There is a notable analogy between the constrained optimization problem (3-5) and the set of constraints derived in [15] for the related problem of disulfide connectivity. As in [15], our method
is based on a large-margin approach for solving a structured output prediction problem. The underlying formal problems are however very different and require different algorithmic solutions.
Disulfide connectivity is a (perfect) matching problem since each cysteine is bound to exactly one
other cysteine (assuming known bonding state, yielding a perfect matching) or can be bound to another cysteine or free (unknown bonding state, yielding a non-perfect matching). The original set of
constraints in [15] only focuses on complete structures (non-extensible sets, or bases, in our terminology). It also has exponential size, but the matching structure of the problem in that case allows the
authors to derive a certificate formulation that reduces it to polynomial size. The MBG problem is
not a matching problem but has the structure of a matroid and our formulation allows us to control
the number of effectively enforced constraints by taking advantage of a greedy algorithm.
The idea of an online learning procedure that receives examples generated by an algorithm which
constructs the output structure was inspired by the Learning as Search Optimization (LaSO) approach [4]. LaSO aims to solve a much broader class of structured output problems where good
output structures can be generated by AI-style search algorithms such as beam search or A*. The
generation of a fresh set of siblings in LaSO when the search is stuck with a frontier of wrong candidates (essentially a backtrack) is costly compared to our greedy selection procedure and (at least
in principle) unnecessary when working on matroids.
Another general way to deal with the exponential growth of the search space is to introduce a generative model so that arg max_y F(x, y) can be computed efficiently, e.g. by developing an appropriate
dynamic programming algorithm. Stochastic grammars and related conditional models have been
extensively used for this purpose [2]. These approaches work well if the generative model matches
or approximates well the domain at hand. Unfortunately, as discussed in Section 2, the specific application problem we study in this paper cannot even be modeled by a context-free grammar. While
we do not claim that it is impossible to devise a suitable generative model for this task (and indeed
this is an interesting direction of research), we can argue that handling context-sensitiveness is hard.
It is of course possible to approximate context sensitive dependencies using a simplified model. Indeed, an alternative view of the MBG problem is supervised sequence labeling, where the output
string consists of symbols in I. A (higher-order) hidden Markov model or chain-structured conditional random field could be used as the underlying generative model for structured output learning.
Unfortunately, these approaches are unlikely to be very accurate since models that are structured as
linear chains of dependencies cannot easily capture long-ranged interactions such as those occurring
in the example. In our preliminary experiments, SVMHMM [16] systematically assigned all bonded
residues to the same ion, and thus never correctly predicted the geometry except in trivial cases.
7 Conclusions
We have reported the first successful solution to the challenging problem of predicting protein
metal binding geometry from sequence alone. The result fills in an important gap in structural and
functional bioinformatics. Learning with structured outputs is a fairly difficult task and in spite of
the fact that several methodologies have been proposed, no single general approach can effectively
solve every possible application problem. The solution proposed in this paper draws on several
previous ideas and specifically leverages the existence of a matroid for the metal binding problem.
Other problems that formally exhibit a greedy structure might benefit of similar solutions.
Acknowledgments
We thank Thomas Gärtner for very fruitful discussions.
References
[1] M. Babor, S. Gerzon, B. Raveh, V. Sobolev, and M. Edelman. Prediction of transition metal-binding sites
from apo protein structures. Proteins, 70(1):208-217, 2008.
[2] G. Bakir, T. Hofmann, B. Schölkopf, A. Smola, B. Taskar, and S. Vishwanathan, editors. Predicting
Structured Data. The MIT Press, 2007.
[3] A. Bordes, S. Ertekin, J. Weston, and L. Bottou. Fast kernel classifiers with online and active learning.
Journal of Machine Learning Research, 6:1579-1619, 2005.
[4] H. Daumé III and D. Marcu. Learning as search optimization: Approximate large margin methods for
structured prediction. In Proc. of the 22nd Int. Conf. on Machine Learning (ICML'05), 2005.
[5] J. C. Ebert and R. B. Altman. Robust recognition of zinc binding sites in proteins. Protein Sci,
17(1):54-65, 2008.
[6] F. Ferrè and P. Clote. DiANNA 1.1: an extension of the DiANNA web server for ternary cysteine classification. Nucleic Acids Res, 34:W182-W185, 2006.
[7] P. Helman, B. M. E. Moret, and H. D. Shapiro. An exact characterization of greedy structures. SIAM J.
Disc. Math., 6(2):274-283, 1993.
[8] N. Hulo, A. Bairoch, V. Bulliard, L. Cerutti, B. A. Cuche, E. de Castro, C. Lachaize, P. S. Langendijk-Genevaux, and C. J. A. Sigrist. The 20 years of PROSITE. Nucleic Acids Res, 36:D245-9, 2008.
[9] E. L. Lawler. Combinatorial Optimization: Networks and Matroids. Holt, Rinehart and Winston, 1976.
[10] A. Messerschmidt, R. Huber, K. Wieghardt, and T. Poulos, editors. Handbook of Metalloproteins. John
Wiley & Sons, 2004.
[11] A. Passerini, C. Andreini, S. Menchetti, A. Rosato, and P. Frasconi. Predicting zinc binding at the proteome level. BMC Bioinformatics, 8:39, 2007.
[12] A. Passerini, M. Punta, A. Ceroni, B. Rost, and P. Frasconi. Identifying cysteines and histidines in
transition-metal-binding sites using support vector machines and neural networks. Proteins,
65(2):305-316, 2006.
[13] W. Shi, C. Zhan, A. Ignatov, B. A. Manjasetty, N. Marinkovic, M. Sullivan, R. Huang, and M. R. Chance.
Metalloproteomics: high-throughput structural and functional annotation of proteins in structural genomics. Structure, 13(10):1473-1486, 2005.
[14] N. Shu, T. Zhou, and S. Hovmoller. Prediction of zinc-binding sites in proteins from sequence. Bioinformatics, 24(6):775-782, 2008.
[15] B. Taskar, V. Chatalbashev, D. Koller, and C. Guestrin. Learning structured prediction models: a large
margin approach. Proc. of the 22nd Int. Conf. on Machine Learning (ICML'05), pages 896-903, 2005.
[16] I. Tsochantaridis, T. Joachims, T. Hofmann, and Y. Altun. Large Margin Methods for Structured and
Interdependent Output Variables. The Journal of Machine Learning Research, 6:1453-1484, 2005.
[17] J. Weston, O. Chapelle, A. Elisseeff, B. Schölkopf, and V. Vapnik. Kernel dependency estimation. Advances in Neural Information Processing Systems, 15:873-880, 2003.
2,755 | 3,499 | Clustered Multi-Task Learning:
a Convex Formulation
Laurent Jacob
Mines ParisTech - CBIO
INSERM U900, Institut Curie
35, rue Saint Honoré, 77300 Fontainebleau, France
[email protected]
Francis Bach
INRIA - Willow Project
Ecole Normale Supérieure,
45, rue d'Ulm, 75230 Paris, France
[email protected]
Jean-Philippe Vert
Mines ParisTech - CBIO
INSERM U900, Institut Curie
35, rue Saint Honoré, 77300 Fontainebleau, France
[email protected]
Abstract
In multi-task learning several related tasks are considered simultaneously, with
the hope that by an appropriate sharing of information across tasks, each task may
benefit from the others. In the context of learning linear functions for supervised
classification or regression, this can be achieved by including a priori information about the weight vectors associated with the tasks, and how they are expected
to be related to each other. In this paper, we assume that tasks are clustered into
groups, which are unknown beforehand, and that tasks within a group have similar
weight vectors. We design a new spectral norm that encodes this a priori assumption, without the prior knowledge of the partition of tasks into groups, resulting
in a new convex optimization formulation for multi-task learning. We show in
simulations on synthetic examples and on the IEDB MHC-I binding dataset that
our approach outperforms well-known convex methods for multi-task learning, as
well as related non-convex methods dedicated to the same problem.
1 Introduction
Regularization has emerged as a dominant theme in machine learning and statistics, providing an
intuitive and principled tool for learning from high-dimensional data. In particular, regularization
by squared Euclidean norms or squared Hilbert norms has been thoroughly studied in various settings, leading to efficient practical algorithms based on linear algebra, and to very good theoretical
understanding (see, e.g., [1, 2]). In recent years, regularization by non-Hilbert norms, such as ℓp
norms with p ≠ 2, has also generated considerable interest for the inference of linear functions in
supervised classification or regression. Indeed, such norms can sometimes both make the problem
statistically and numerically better-behaved, and impose various prior knowledge on the problem.
For example, the ℓ1-norm (the sum of absolute values) forces some of the components to be equal
to zero and is widely used to estimate sparse functions [3], while various combinations of ℓp norms
can be defined to impose various sparsity patterns.
While most recent work has focused on studying the properties of simple well-known norms, we
take the opposite approach in this paper. That is, assuming a given prior knowledge, how can we
design a norm that will enforce it?
More precisely, we consider the problem of multi-task learning, which has recently emerged as a
very promising research direction for various applications [4]. In multi-task learning several related inference tasks are considered simultaneously, with the hope that by an appropriate sharing
1
of information across tasks, each one may benefit from the others. When linear functions are estimated, each task is associated with a weight vector, and a common strategy to design multi-task
learning algorithm is to translate some prior hypothesis about how the tasks are related to each other
into constraints on the different weight vectors. For example, such constraints are typically that the
weight vectors of the different tasks belong (a) to a Euclidean ball centered at the origin [5], which
implies no sharing of information between tasks apart from the size of the different vectors, i.e., the
amount of regularization, (b) to a ball of unknown center [5], which enforces a similarity between
the different weight vectors, or (c) to an unknown low-dimensional subspace [6, 7].
In this paper, we consider a different prior hypothesis that we believe could be more relevant in some
applications: the hypothesis that the different tasks are in fact clustered into different groups, and that
the weight vectors of tasks within a group are similar to each other. A key difference with [5], where
a similar hypothesis is studied, is that we don't assume that the groups are known a priori, and in a
sense our goal is both to identify the clusters and to use them for multi-task learning. An important
situation that motivates this hypothesis is the case where most of the tasks are indeed related to each
other, but a few ?outlier? tasks are very different, in which case it may be better to impose similarity
or low-dimensional constraints only to a subset of the tasks (thus forming a cluster) rather than to
all tasks. Another situation of interest is when one can expect a natural organization of the tasks
into clusters, such as when one wants to model the preferences of customers and believes that there
are a few general types of customers with similar preferences within each type, although one does
not know beforehand which customers belong to which types. Besides an improved performance if
the hypothesis turns out to be correct, we also expect this approach to be able to identify the cluster
structure among the tasks as a by-product of the inference step, e.g., to identify outliers or groups of
customers, which can be of interest for further understanding of the structure of the problem.
In order to translate this hypothesis into a working algorithm, we follow the general strategy mentioned above which is to design a norm or a penalty over the set of weights which can be used as
regularization in classical inference algorithms. We construct such a penalty by first assuming that
the partition of the tasks into clusters is known, similarly to [5]. We then attempt to optimize the
objective function of the inference algorithm over the set of partitions, a strategy that has proved
useful in other contexts such as multiple kernel learning [8]. This optimization problem over the
set of partitions being computationally challenging, we propose a convex relaxation of the problem
which results in an efficient algorithm.
2 Multi-task learning with clustered tasks
We consider $m$ related inference tasks that attempt to learn linear functions over $\mathcal{X} = \mathbb{R}^d$ from a training set of input/output pairs $(x_i, y_i)_{i=1,\dots,n}$, where $x_i \in \mathcal{X}$ and $y_i \in \mathcal{Y}$. In the case of binary classification we usually take $\mathcal{Y} = \{-1, +1\}$, while in the case of regression we take $\mathcal{Y} = \mathbb{R}$. Each training example $(x_i, y_i)$ is associated to a particular task $t \in [1, m]$, and we denote by $I(t) \subset [1, n]$ the set of indices of training examples associated to the task $t$. Our goal is to infer $m$ linear functions $f_t(x) = w_t^\top x$, for $t = 1, \dots, m$, associated to the different tasks. We denote by $W = (w_1 \dots w_m)$ the $d \times m$ matrix whose columns are the successive vectors we want to estimate.
We fix a loss function $l : \mathbb{R} \times \mathcal{Y} \to \mathbb{R}$ that quantifies by $l(f(x), y)$ the cost of predicting $f(x)$ for the input $x$ when the correct output is $y$. Typical loss functions include the square error in regression, $l(u, y) = \frac{1}{2}(u - y)^2$, or the hinge loss in binary classification, $l(u, y) = \max(0, 1 - uy)$ with $y \in \{-1, 1\}$. The empirical risk of a set of linear classifiers given in the matrix $W$ is then defined as the average loss over the training set:
$$\ell(W) = \frac{1}{n} \sum_{t=1}^{m} \sum_{i \in I(t)} l(w_t^\top x_i, y_i). \quad (1)$$
In the sequel, we will often use the $m \times 1$ vector $\mathbf{1}$ composed of ones, the $m \times m$ projection matrix $U = \mathbf{1}\mathbf{1}^\top/m$ whose entries are all equal to $1/m$, as well as the projection matrix $\Pi = I - U$.
In order to learn simultaneously the m tasks, we follow the now well-established approach which
looks for a set of weight vectors W that minimizes the empirical risk regularized by a penalty
functional, i.e., we consider the problem:
$$\min_{W \in \mathbb{R}^{d \times m}} \ell(W) + \lambda \Omega(W), \quad (2)$$
where $\Omega(W)$ can be designed from prior knowledge to constrain some sharing of information between tasks. For example, [5] suggests to penalize both the norms of the $w_i$'s and their variance, i.e., to consider a function of the form:
$$\Omega_{\text{variance}}(W) = \|\bar{w}\|^2 + \frac{\beta}{m} \sum_{i=1}^{m} \|w_i - \bar{w}\|^2, \quad (3)$$
where $\bar{w} = \left(\sum_{i=1}^{m} w_i\right)/m$ is the mean weight vector. This penalty enforces a clustering of the $w_i$'s towards their mean when $\beta$ increases. Alternatively, [7] propose to penalize the trace norm of $W$:
$$\Omega_{\text{trace}}(W) = \sum_{i=1}^{\min(d,m)} \sigma_i(W), \quad (4)$$
where $\sigma_1(W), \dots, \sigma_{\min(d,m)}(W)$ are the successive singular values of $W$. This enforces a low-rank solution in $W$, i.e., constrains the different $w_i$'s to live in a low-dimensional subspace.
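For concreteness, the two penalties above are straightforward to compute. The following NumPy sketch is ours, not the paper's; the weight matrix W and the trade-off parameter beta are illustrative.

import numpy as np

def omega_variance(W, beta):
    # Eq. (3): squared norm of the mean weight vector plus
    # (beta/m) times the sum of squared deviations from the mean.
    m = W.shape[1]
    w_bar = W.mean(axis=1)
    dev = W - w_bar[:, None]
    return w_bar @ w_bar + (beta / m) * np.sum(dev ** 2)

def omega_trace(W):
    # Eq. (4): sum of the singular values of W (the trace norm).
    return np.linalg.svd(W, compute_uv=False).sum()

W = np.random.randn(30, 4)   # d = 30 features, m = 4 tasks
print(omega_variance(W, beta=2.0), omega_trace(W))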
Here we would like to define a penalty function $\Omega(W)$ that encodes as prior knowledge that tasks are clustered into $r < m$ groups. To do so, let us first assume that we know beforehand the clusters, i.e., we have a partition of the set of tasks into $r$ groups. In that case we can follow an approach proposed by [5], which for clarity we rephrase with our notations and slightly generalize now. For a
given cluster $c \in [1, r]$, let us denote $\mathcal{J}(c) \subset [1, m]$ the set of tasks in $c$, $m_c = |\mathcal{J}(c)|$ the number of tasks in the cluster $c$, and $E$ the $m \times r$ binary matrix which describes the cluster assignment for the $m$ tasks, i.e., $E_{ij} = 1$ if task $i$ is in cluster $j$, $0$ otherwise. Let us further denote by $\bar{w}_c = \left(\sum_{i \in \mathcal{J}(c)} w_i\right)/m_c$ the average weight vector for the tasks in $c$, and recall that $\bar{w} = \left(\sum_{i=1}^{m} w_i\right)/m$ denotes the average weight vector over all tasks. Finally it will be convenient to introduce the matrix $M = E(E^\top E)^{-1}E^\top$. $M$ can also be written $I - L$, where $L$ is the normalized Laplacian of the graph $G$ whose nodes are the tasks, connected by an edge if and only if they are in the same cluster.
Then we can define three semi-norms of interest on $W$ that quantify different orthogonal aspects:
• A global penalty, which measures on average how large the weight vectors are:
$$\Omega_{\text{mean}}(W) = m\|\bar{w}\|^2 = \operatorname{tr} W U W^\top.$$
• A measure of between-cluster variance, which quantifies how close to each other the different clusters are:
$$\Omega_{\text{between}}(W) = \sum_{c=1}^{r} m_c \|\bar{w}_c - \bar{w}\|^2 = \operatorname{tr} W (M - U) W^\top.$$
• A measure of within-cluster variance, which quantifies the compactness of the clusters:
$$\Omega_{\text{within}}(W) = \sum_{c=1}^{r} \sum_{i \in \mathcal{J}(c)} \|w_i - \bar{w}_c\|^2 = \operatorname{tr} W (I - M) W^\top.$$
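These trace identities can be checked numerically. The sketch below is ours (the partition is a hypothetical example); it builds M = E(E^T E)^{-1} E^T from a cluster assignment and verifies that the trace forms match the explicit between- and within-cluster sums.

import numpy as np

m, d, r = 6, 10, 2
assign = np.array([0, 0, 0, 1, 1, 1])    # hypothetical partition of 6 tasks into 2 clusters
E = np.eye(r)[assign]                     # m x r binary assignment matrix
M = E @ np.linalg.inv(E.T @ E) @ E.T      # M = E (E^T E)^{-1} E^T
U = np.full((m, m), 1.0 / m)

W = np.random.randn(d, m)
w_bar = W.mean(axis=1)
between = sum((assign == c).sum()
              * np.sum((W[:, assign == c].mean(axis=1) - w_bar) ** 2) for c in range(r))
within = sum(np.sum((W[:, assign == c] - W[:, assign == c].mean(axis=1)[:, None]) ** 2)
             for c in range(r))

assert np.isclose(between, np.trace(W @ (M - U) @ W.T))        # Omega_between
assert np.isclose(within, np.trace(W @ (np.eye(m) - M) @ W.T))  # Omega_within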
We note that both $\Omega_{\text{between}}(W)$ and $\Omega_{\text{within}}(W)$ depend on the particular choice of clusters $E$, or equivalently of $M$. We now propose to consider the following general penalty function:
$$\Omega(W) = \varepsilon_M \Omega_{\text{mean}}(W) + \varepsilon_B \Omega_{\text{between}}(W) + \varepsilon_W \Omega_{\text{within}}(W), \quad (5)$$
where $\varepsilon_M$, $\varepsilon_B$ and $\varepsilon_W$ are non-negative parameters that can balance the importance of the components of the penalty. Plugging this quadratic penalty into (2) leads to the general problem:
$$\min_{W \in \mathbb{R}^{d \times m}} \ell(W) + \lambda \operatorname{tr} W \Sigma(M)^{-1} W^\top, \quad (6)$$
where
$$\Sigma(M)^{-1} = \varepsilon_M U + \varepsilon_B (M - U) + \varepsilon_W (I - M). \quad (7)$$
Here we use the notation $\Sigma(M)$ to insist on the fact that this quadratic penalty depends on the cluster structure through the matrix $M$. Observing that the matrices $U$, $M - U$ and $I - M$ are orthogonal projections onto orthogonal supplementary subspaces, we easily get from (7):
$$\Sigma(M) = \varepsilon_M^{-1} U + \varepsilon_B^{-1}(M - U) + \varepsilon_W^{-1}(I - M) = \varepsilon_W^{-1} I + (\varepsilon_M^{-1} - \varepsilon_B^{-1})U + (\varepsilon_B^{-1} - \varepsilon_W^{-1})M. \quad (8)$$
By choosing particular values for $\varepsilon_M$, $\varepsilon_B$ and $\varepsilon_W$ we can recover several situations. In particular:
• For $\varepsilon_W = \varepsilon_B = \varepsilon_M = \varepsilon$, we simply recover the Frobenius norm of $W$, which does not put any constraint on the relationship between the different tasks:
$$\Omega(W) = \varepsilon \operatorname{tr} W W^\top = \varepsilon \sum_{i=1}^{m} \|w_i\|^2.$$
• For $\varepsilon_W = \varepsilon_B > \varepsilon_M$, we recover the penalty of [5] without clusters:
$$\Omega(W) = \operatorname{tr} W \left(\varepsilon_M U + \varepsilon_B (I - U)\right) W^\top = \varepsilon_M m\|\bar{w}\|^2 + \varepsilon_B \sum_{i=1}^{m} \|w_i - \bar{w}\|^2.$$
In that case, a global similarity between tasks is enforced, in addition to the general constraint on their mean. The structure in clusters plays no role since the sum of the between- and within-cluster variance is independent of the particular choice of clusters.
• For $\varepsilon_W > \varepsilon_B = \varepsilon_M$ we recover the penalty of [5] with clusters:
$$\Omega(W) = \operatorname{tr} W \left(\varepsilon_M M + \varepsilon_W (I - M)\right) W^\top = \varepsilon_M \sum_{c=1}^{r} m_c \|\bar{w}_c\|^2 + \varepsilon_W \sum_{c=1}^{r} \sum_{i \in \mathcal{J}(c)} \|w_i - \bar{w}_c\|^2.$$
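As a quick numerical sanity check of Eq. (7) and its closed-form inverse (8), here is our own sketch with a hypothetical partition and arbitrary ε values:

import numpy as np

m, r = 6, 2
assign = np.array([0, 0, 0, 1, 1, 1])
E = np.eye(r)[assign]
M = E @ np.linalg.inv(E.T @ E) @ E.T
U = np.full((m, m), 1.0 / m)
I = np.eye(m)

eps_M, eps_B, eps_W = 0.1, 1.0, 5.0   # eps_W > eps_B > eps_M favors compact clusters
Sigma_inv = eps_M * U + eps_B * (M - U) + eps_W * (I - M)   # Eq. (7)
Sigma = U / eps_M + (M - U) / eps_B + (I - M) / eps_W       # Eq. (8)
assert np.allclose(Sigma_inv @ Sigma, I)                    # (8) indeed inverts (7)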
In order to enforce a cluster hypothesis on the tasks, we therefore see that a natural choice is to
take $\varepsilon_W > \varepsilon_B > \varepsilon_M$ in (5). This would have the effect of penalizing more the within-cluster
variance than the between-cluster variance, hence promoting compact clusters. Of course, a major
limitation at this point is that we assumed the cluster structure known a priori (through the matrix
E, or equivalently M ). In many cases of interest, we would like instead to learn the cluster structure
itself from the data. We propose to learn the cluster structure in our framework by optimizing our
objective function (6) both in W and M , i.e., to consider the problem:
$$\min_{W \in \mathbb{R}^{d \times m},\, M \in \mathcal{M}_r} \ell(W) + \lambda \operatorname{tr} W \Sigma(M)^{-1} W^\top, \quad (9)$$
where $\mathcal{M}_r$ denotes the set of matrices $M = E(E^\top E)^{-1}E^\top$ defined by a clustering of the $m$ tasks into $r$ clusters and $\Sigma(M)$ is defined in (8). Denoting by $\mathcal{S}_r = \{\Sigma(M) : M \in \mathcal{M}_r\}$ the corresponding set of positive semidefinite matrices, we can equivalently rewrite the problem as:
$$\min_{W \in \mathbb{R}^{d \times m},\, \Sigma \in \mathcal{S}_r} \ell(W) + \lambda \operatorname{tr} W \Sigma^{-1} W^\top. \quad (10)$$
The objective function in (10) is jointly convex in $W \in \mathbb{R}^{d \times m}$ and $\Sigma \in S_+^m$, the set of $m \times m$ positive semidefinite matrices; however the (finite) set $\mathcal{S}_r$ is not convex, making this problem intractable. We are now going to propose a convex relaxation of (10) by optimizing over a convex set of positive semidefinite matrices that contains $\mathcal{S}_r$.
3 Convex relaxation
In order to formulate a convex relaxation of (10), we observe that in the penalty term (5) the cluster structure only contributes to the second and third terms $\Omega_{\text{between}}(W)$ and $\Omega_{\text{within}}(W)$, and that these penalties only depend on the centered version of $W$. In terms of matrices, only the last two terms of $\Sigma(M)^{-1}$ in (7) depend on $M$, i.e., on the clustering, and these terms can be re-written as:
$$\varepsilon_B(M - U) + \varepsilon_W(I - M) = \Pi\left(\varepsilon_B M + \varepsilon_W(I - M)\right)\Pi. \quad (11)$$
Indeed, it is easy to check that $M - U = M\Pi = \Pi M \Pi$, and that $I - M = I - U - (M - U) = \Pi - \Pi M \Pi = \Pi(I - M)\Pi$. Intuitively, multiplying by $\Pi$ on the right (resp. on the left) centers the rows (resp. the columns) of a matrix, and both $M - U$ and $I - M$ are row- and column-centered. To simplify notations, let us introduce $\widetilde{M} = \Pi M \Pi$. Plugging (11) in (7) and (9), we get the penalty
$$\operatorname{tr} W \Sigma(M)^{-1} W^\top = \varepsilon_M \operatorname{tr} W U W^\top + \operatorname{tr} (W\Pi)\left(\varepsilon_B \widetilde{M} + \varepsilon_W(I - \widetilde{M})\right)(W\Pi)^\top, \quad (12)$$
in which, again, only the second part needs to be optimized with respect to the clustering $M$. Denoting $\Sigma_c(\widetilde{M})^{-1} = \varepsilon_B \widetilde{M} + \varepsilon_W(I - \widetilde{M})$, one can express $\Sigma_c(\widetilde{M})$, using the fact that $\widetilde{M}$ is a projection:
$$\Sigma_c(\widetilde{M}) = (\varepsilon_B^{-1} - \varepsilon_W^{-1})\widetilde{M} + \varepsilon_W^{-1} I. \quad (13)$$
$\Sigma_c$ is characterized by $\widetilde{M} = \Pi M \Pi$, which is discrete by construction, hence the non-convexity of $\mathcal{S}_r$. We have the natural constraints $M \succeq 0$ (i.e., $\widetilde{M} \succeq -U$), $0 \preceq M \preceq I$ (i.e., $0 \preceq \widetilde{M} \preceq \Pi$) and $\operatorname{tr} M = r$ (i.e., $\operatorname{tr} \widetilde{M} = r - 1$). A possible convex relaxation of the discrete set of matrices $\widetilde{M}$ is therefore $\{\widetilde{M} : 0 \preceq \widetilde{M} \preceq I,\ \operatorname{tr} \widetilde{M} = r - 1\}$. This gives an equivalent convex set $\mathcal{S}_c$ for $\Sigma_c$, namely:
$$\mathcal{S}_c = \left\{ \Sigma_c \in S_+^m : \alpha I \preceq \Sigma_c \preceq \beta I,\ \operatorname{tr} \Sigma_c = \gamma \right\}, \quad (14)$$
with $\alpha = \varepsilon_W^{-1}$, $\beta = \varepsilon_B^{-1}$ and $\gamma = (m - r + 1)\varepsilon_W^{-1} + (r - 1)\varepsilon_B^{-1}$. Incorporating the first part of the penalty (12) into the empirical risk term by defining $\ell_c(W) = \ell(W) + \lambda \varepsilon_M \operatorname{tr} W U W^\top$, we are now ready to state our relaxation of (10):
$$\min_{W \in \mathbb{R}^{d \times m},\, \Sigma_c \in \mathcal{S}_c} \ell_c(W) + \lambda \operatorname{tr} (W\Pi)\Sigma_c^{-1}(W\Pi)^\top. \quad (15)$$
3.1 Reinterpretation in terms of norms
We denote $\|W\|_c^2 = \min_{\Sigma_c \in \mathcal{S}_c} \operatorname{tr} W \Sigma_c^{-1} W^\top$ the cluster norm (CN). For any convex set $\mathcal{S}_c$, we obtain a norm on $W$ (that we apply here to its centered version). By putting some different constraints
on the set $\mathcal{S}_c$, we obtain different norms on $W$, and in fact all previous multi-task formulations may be cast in this way, i.e., by choosing a specific set of positive matrices $\mathcal{S}_c$ (e.g., a trace constraint for the trace norm, and simply a singleton for the Frobenius norm). Thus, designing norms for multitask learning is equivalent to designing a set of positive matrices. In this paper, we have investigated a specific set adapted for clustered tasks, but other sets could be designed for other situations.
Note that we have selected a simple spectral convex set $\mathcal{S}_c$ in order to make the optimization simpler in Section 3.3, but we could also add some additional constraints that encode the point-wise positivity of the matrix $M$. Finally, when $r = 1$ (one cluster) and $r = m$ (one cluster per task), we get back the formulation of [5].
3.2 Reinterpretation as a convex relaxation of K-means
In this section we show that the semi-norm $\|W\Pi\|_c^2$ that we have designed earlier can be interpreted as a convex relaxation of K-means on the tasks [9]. Indeed, given $W \in \mathbb{R}^{d \times m}$, K-means aims to decompose it in the form $W = \mu E^\top$ where $\mu \in \mathbb{R}^{d \times r}$ are cluster centers and $E$ represents a partition. Given $E$, $\mu$ is found by minimizing $\min_\mu \|W^\top - E\mu^\top\|_F^2$. Thus, a natural strategy outlined by [9] is to alternate between optimizing $\mu$, the partition $E$ and the weight vectors $W$. We now show that our convex norm is obtained when minimizing in closed form with respect to $\mu$ and relaxing.
By translation invariance, this is equivalent to minimizing $\min_\mu \|\Pi W^\top - \Pi E \mu^\top\|_F^2$. If we add a penalization on $\mu$ of the form $\lambda \operatorname{tr} E^\top E \mu^\top \mu$, then a short calculation shows that the minimum with respect to $\mu$ (i.e., after optimization of the cluster centers) is equal to
$$\operatorname{tr} \Pi W^\top W \Pi \left(\Pi E(E^\top E)^{-1} E^\top \Pi / \lambda + I\right)^{-1} = \operatorname{tr} \Pi W^\top W \Pi \left(\Pi M \Pi / \lambda + I\right)^{-1}.$$
By comparing with Eq. (13), we see that our formulation is indeed a convex relaxation of K-means.
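The non-convex counterpart of this relaxation is plain k-means on the task vectors. For intuition, here is a minimal sketch (ours) of one assignment/center update on the columns of W, in the spirit of the alternating strategy of [9]:

import numpy as np

def kmeans_step(W, centers):
    # Assign each task (column of W) to its nearest center, then recompute centers.
    d2 = ((W[:, :, None] - centers[:, None, :]) ** 2).sum(axis=0)   # m x r squared distances
    assign = d2.argmin(axis=1)
    new_centers = np.stack(
        [W[:, assign == c].mean(axis=1) if (assign == c).any() else centers[:, c]
         for c in range(centers.shape[1])], axis=1)
    return assign, new_centers

W = np.random.randn(5, 8)          # 8 tasks in R^5
centers = W[:, :2].copy()          # initialize 2 centers from two of the tasks
assign, centers = kmeans_step(W, centers)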
3.3 Primal optimization
Let us now show in more details how (15) can be solved efficiently. Whereas a dual formulation could be easily derived following [8], a direct approach is to rewrite (15) as
$$\min_{W \in \mathbb{R}^{d \times m}} \left( \ell_c(W) + \lambda \min_{\Sigma_c \in \mathcal{S}_c} \operatorname{tr} (W\Pi)\Sigma_c^{-1}(W\Pi)^\top \right), \quad (16)$$
which, if $\ell_c$ is differentiable, can be directly optimized by gradient-based methods on $W$, since $\|W\Pi\|_c^2 = \min_{\Sigma_c \in \mathcal{S}_c} \operatorname{tr} (W\Pi)\Sigma_c^{-1}(W\Pi)^\top$ is a quadratic semi-norm of $W\Pi$. This regularization term $\operatorname{tr} (W\Pi)\Sigma_c^{-1}(W\Pi)^\top$ can be computed efficiently using a semi-closed form. Indeed, since $\mathcal{S}_c$ as defined in (14) is a spectral set (i.e., it depends only on the eigenvalues of covariance matrices), we obtain a function of the singular values of $W\Pi$ (or equivalently the eigenvalues of $W\Pi W^\top$):
$$\min_{\Sigma_c \in \mathcal{S}_c} \operatorname{tr} (W\Pi)\Sigma_c^{-1}(W\Pi)^\top = \min_{\lambda \in \mathbb{R}^m,\ \alpha \le \lambda_i \le \beta,\ \lambda^\top \mathbf{1} = \gamma}\ \min_{V \in \mathcal{O}^m} \operatorname{tr} (W\Pi)V \operatorname{diag}(\lambda)^{-1} V^\top (W\Pi)^\top,$$
where $\mathcal{O}^m$ is the set of orthogonal matrices in $\mathbb{R}^{m \times m}$. The optimal $V$ is the matrix of the eigenvectors of $W\Pi W^\top$, and we obtain the value of the objective function at the optimum:
$$\min_{\Sigma \in \mathcal{S}_c} \operatorname{tr} (W\Pi)\Sigma^{-1}(W\Pi)^\top = \min_{\lambda \in \mathbb{R}^m,\ \alpha \le \lambda_i \le \beta,\ \lambda^\top \mathbf{1} = \gamma} \sum_{i=1}^{m} \frac{\sigma_i^2}{\lambda_i},$$
where $\sigma$ and $\lambda$ are the vectors containing the singular values of $W\Pi$ and the eigenvalues of $\Sigma$, respectively. Now, we simply need to be able to compute this function of the singular values.
The only coupling in this formulation comes from the trace constraint. The Lagrangian corresponding to this constraint is:
$$L(\lambda, \nu) = \sum_{i=1}^{m} \frac{\sigma_i^2}{\lambda_i} + \nu \left( \sum_{i=1}^{m} \lambda_i - \gamma \right). \quad (17)$$
For $\nu \le 0$, this is a decreasing function of $\lambda_i$, so the minimum on $\lambda_i \in [\alpha, \beta]$ is reached for $\lambda_i = \beta$. The dual function is then a linear non-decreasing function of $\nu$ (since $\alpha \le \gamma/m \le \beta$ from the definition of $\alpha, \beta, \gamma$ in (14)), which reaches its maximum value (on $\nu \le 0$) at $\nu = 0$. Let us therefore now consider the dual for $\nu \ge 0$. (17) is then a convex function of $\lambda_i$. Canceling its derivative with respect to $\lambda_i$ gives that the minimum in $\lambda_i \in \mathbb{R}$ is reached for $\lambda_i = \sigma_i/\sqrt{\nu}$. Now this may not be in the constraint set $[\alpha, \beta]$, so if $\sigma_i < \alpha\sqrt{\nu}$ then the minimum in $\lambda_i \in [\alpha, \beta]$ of (17) is reached for $\lambda_i = \alpha$, and if $\sigma_i > \beta\sqrt{\nu}$ it is reached for $\lambda_i = \beta$. Otherwise, it is reached for $\lambda_i = \sigma_i/\sqrt{\nu}$. Reporting this in (17), the dual problem is therefore
$$\max_{\nu \ge 0}\ \sum_{i:\ \alpha\sqrt{\nu} \le \sigma_i \le \beta\sqrt{\nu}} 2\sigma_i\sqrt{\nu}\ +\ \sum_{i:\ \sigma_i < \alpha\sqrt{\nu}} \left( \frac{\sigma_i^2}{\alpha} + \nu\alpha \right)\ +\ \sum_{i:\ \beta\sqrt{\nu} < \sigma_i} \left( \frac{\sigma_i^2}{\beta} + \nu\beta \right)\ -\ \nu\gamma. \quad (18)$$
Since a closed form for this expression is known for each fixed value of $\nu$, one can obtain $\|W\Pi\|_c^2$ (and the eigenvalues of $\Sigma^*$) by Algorithm 1. The cancellation condition in Algorithm 1 is that the
Algorithm 1 Computing $\|A\|_c^2$
Require: $A, \alpha, \beta, \gamma$.
Ensure: $\|A\|_c^2$, $\lambda^*$.
    Compute the singular values $\sigma_i$ of $A$.
    Order the $\sigma_i^2/\beta^2$ and $\sigma_i^2/\alpha^2$ in a vector $I$ (with an additional 0 at the beginning).
    for all intervals $(a, b)$ of $I$ do
        if $\partial L(\lambda^*, \nu)/\partial\nu$ is canceled on $\nu \in (a, b)$ then
            Replace $\nu^*$ in the dual function $L(\lambda^*, \nu)$ to get $\|A\|_c^2$, compute $\lambda^*$ on $(a, b)$.
            return $\|A\|_c^2$, $\lambda^*$.
        end if
    end for
value canceling the derivative belongs to $(a, b)$, i.e.,
$$\nu = \left( \frac{\sum_{i:\ \alpha\sqrt{\nu} \le \sigma_i \le \beta\sqrt{\nu}} \sigma_i}{\gamma - (\alpha n^- + \beta n^+)} \right)^2 \in (a, b),$$
where $n^-$ and $n^+$ are the numbers of $\sigma_i < \alpha\sqrt{\nu}$ and $\sigma_i > \beta\sqrt{\nu}$, respectively. Denoting $\|A\|_c^2 = F(A, \Sigma^*(A))$, the full gradient $\nabla_A F = \partial_A F + \partial_\Sigma F\, \partial_A \Sigma^*$ cannot be computed because of the non-differentiable constraints on $\Sigma^*$ for $F$. We followed an alternative direction, using only the $\partial_A F$ part.
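As an alternative to the interval search of Algorithm 1, the same box-constrained problem can be solved by bisection: the stationarity conditions above give $\lambda_i(\nu) = \operatorname{clip}(\sigma_i/\sqrt{\nu}, \alpha, \beta)$, whose sum is non-increasing in $\nu$, so the trace constraint pins down $\nu$. The sketch below is ours, not the paper's procedure.

import numpy as np

def cluster_norm_sq(A, alpha, beta, gamma, iters=200):
    # min over alpha <= lam_i <= beta, sum(lam) = gamma of sum_i sigma_i^2 / lam_i,
    # where sigma_i are the singular values of A, padded with zeros up to m = A.shape[1].
    m = A.shape[1]
    sigma = np.zeros(m)
    sv = np.linalg.svd(A, compute_uv=False)
    sigma[:len(sv)] = sv
    lam = lambda s: np.clip(sigma / s, alpha, beta)   # s stands for sqrt(nu)
    lo, hi = 1e-12, sigma.max() / alpha + 1.0         # bracket: sum(lam) decreasing in s
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if lam(mid).sum() > gamma:
            lo = mid
        else:
            hi = mid
    return float(np.sum(sigma ** 2 / lam(hi)))

# The parameterization of (alpha, beta, gamma) from Eq. (14):
m, r, eps_B, eps_W = 6, 2, 1.0, 5.0
alpha, beta = 1 / eps_W, 1 / eps_B
gamma = (m - r + 1) * alpha + (r - 1) * beta
W = np.random.randn(10, m)
Pi = np.eye(m) - np.full((m, m), 1 / m)
print(cluster_norm_sq(W @ Pi, alpha, beta, gamma))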
4 Experiments
4.1 Artificial data
We generated synthetic data consisting of two clusters of two tasks. The tasks are vectors of $\mathbb{R}^d$, $d = 30$. For each cluster, a center $\bar{w}_c$ was generated in $\mathbb{R}^{d-2}$, so that the two clusters are orthogonal. More precisely, each $\bar{w}_c$ had $(d-2)/2$ random features randomly drawn from $\mathcal{N}(0, \sigma_r^2)$, $\sigma_r^2 = 900$, and $(d-2)/2$ zero features. Then, each task $t$ was computed as $w_t + \bar{w}_{c(t)}$, where $c(t)$ was the cluster of $t$. $w_t$ had the same zero features as its cluster center, and the other features were drawn from $\mathcal{N}(0, \sigma_c^2)$, $\sigma_c^2 = 16$. The last two features were non-zero for all the tasks and drawn from $\mathcal{N}(0, \sigma_c^2)$. For each task, 2000 points were generated and a normal noise of variance $\sigma_n^2 = 150$ was added.
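A sketch of this generator (ours; the inputs are drawn i.i.d. standard normal, which is our assumption since the paper does not specify the input distribution):

import numpy as np

def make_tasks(d=30, var_r=900.0, var_c=16.0, var_n=150.0, n=2000, seed=0):
    # Two orthogonal clusters of two tasks each, following the description above.
    rng = np.random.default_rng(seed)
    half = (d - 2) // 2
    supports = [np.arange(half), np.arange(half, d - 2)]  # disjoint supports -> orthogonal centers
    W, X, Y = [], [], []
    for c in (0, 1):
        center = np.zeros(d)
        center[supports[c]] = rng.normal(0.0, np.sqrt(var_r), half)
        for _ in range(2):                                # two tasks per cluster
            w = center.copy()
            w[supports[c]] += rng.normal(0.0, np.sqrt(var_c), half)
            w[-2:] = rng.normal(0.0, np.sqrt(var_c), 2)   # last two features non-zero for all tasks
            x = rng.normal(size=(n, d))
            y = x @ w + rng.normal(0.0, np.sqrt(var_n), n)
            W.append(w); X.append(x); Y.append(y)
    return np.stack(W, axis=1), X, Y                      # W is the d x 4 weight matrix

W, X, Y = make_tasks()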
In a first experiment, we compared our cluster norm $\|\cdot\|_c^2$ with the single-task learning given by the Frobenius norm, and with the trace norm, which corresponds to the assumption that the tasks live in a low-dimensional space. The multi-task kernel approach being a special case of CN, its performance
will always be between the performance of the single task and the performance of CN.
In a second setting, we compare CN to alternative methods that differ in the way they learn $\Sigma$:
• The True metric approach, that simply plugs the actual clustering in $E$ and optimizes $W$ using this fixed metric. This necessitates to know the true clustering a priori, and can be thought of as a golden standard.
• The k-means approach, that alternates between optimizing the tasks in $W$ given the metric $\Sigma$ and re-learning $\Sigma$ by clustering the tasks $w_i$ [9]. The clustering is done by a k-means run 3 times. This is a non-convex approach, and different initializations of k-means may result in different local minima.
We also tried one run of CN followed by a run of True metric using the learned $\Sigma$ reprojected in $\mathcal{S}_r$ by rounding, i.e., by performing k-means on the eigenvectors of the learned $\Sigma$ (Reprojected approach), and a run of k-means starting from the relaxed solution (CNinit approach).
Only the first method requires knowing the true clustering a priori; all the other methods can be run without any knowledge of the clustering structure of the tasks.
Each method was run with different numbers of training points. The training points were equally
separated between the two clusters and for each cluster, 5/6th of the points were used for the first
task and 1/6th for the second, in order to simulate a natural setting where some tasks have fewer data.
We used the 2000 points of each task to build 3 training folds, and the remaining points were used
for testing. We used the mean RMSE across the tasks as a criterion, and a quadratic loss for $\ell(W)$.
The results of the first experiment are shown on Figure 1 (left). As expected, both multi-task approaches perform better than the approach that learns each task independently. CN penalization on
the other hand always gives better testing error than the trace norm penalization, with a stronger advantage when very few training points are available. When more training points become available,
all the methods give more and more similar performances. In particular, with large samples, it is not
useful anymore to use a multi-task approach.
Figure 1: RMSE versus number of training points (log scale) for the tested methods. Left: Frob, Trace, CN; right: CN, KM, True, Repr.
Figure 2: Recovered $\Sigma$ with CN (upper line) and k-means (lower line) for 28, 50 and 100 points.
Figure 1 (right) shows the results of the second experiment. Using the true metric always gives the
best results. For 28 training points, no method recovers the correct clustering structure, as displayed
on Figure 2, although CN performs slightly better than the k-means approach since the metric it
learns is more diffuse. For 50 training points, CN performs much better than the k-means approach,
which completely fails to recover the clustering structure, as illustrated by the $\Sigma$ learned for 28 and
50 training points on Figure 2. In the latter setting, CN partially recovers the clusters. When more
training points become available, the k-means approach perfectly recovers the clustering structure
and outperforms the relaxed approach. The reprojected approach, on the other hand, always performs as well as the best of the two other methods. The CNinit approach results are not displayed since they are the same as for the reprojected method.
4.2 MHC-I binding data
We also applied our method to the IEDB MHC-I peptide binding benchmark proposed in [10]. This
database contains binding affinities of various peptides, i.e., short amino-acid sequences, with different MHC-I molecules. This binding process is central in the immune system, and predicting it is
crucial, for example to design vaccines. The affinities are thresholded to give a prediction problem.
Each MHC-I molecule is considered as a task, and the goal is to predict whether a peptide binds a
molecule. We used an orthogonal coding of the amino acids to represent the peptides and balanced
Table 1: Prediction error for the 10 molecules with less than 200 training peptides in IEDB.

Method             Test error
Pooling            26.53% ± 2.0
Frobenius norm     11.62% ± 1.4
Multi-task kernel  10.10% ± 1.4
Trace norm          9.20% ± 1.3
Cluster norm        8.71% ± 1.5
the data by keeping only one negative example for each positive point, resulting in 15236 points
involving 35 different molecules. We chose a logistic loss for $\ell(W)$.
Multi-task learning approaches have already proved useful for this problem, see for example [11,
12]. Besides, it is well known in the vaccine design community that some molecules can be grouped
into empirically defined supertypes known to have similar binding behaviors.
[12] showed in particular that the multi-task approaches were very useful for molecules with few
known binders. Following this observation, we consider the mean error on the 10 molecules with
less than 200 known ligands, and report the results in Table 1. We did not select the parameters by
internal cross validation, but chose them among a small set of values in order to avoid overfitting.
More accurate results could arise from such a cross validation, in particular concerning the number
of clusters (here we limited the choice to 2 or 10 clusters).
The pooling approach simply considers one global prediction problem by pooling together the data
available for all molecules. The results illustrate that it is better to consider individual models than
one unique pooled model. On the other hand, all the multitask approaches improve the accuracy, the
cluster norm giving the best performance. The learned $\Sigma$, however, did not recover the known supertypes, although it may contain some relevant information on the binding behavior of the molecules.
5 Conclusion
We have presented a convex approach to clustered multi-task learning, based on the design of a
dedicated norm. Promising results were presented on synthetic examples and on the IEDB dataset.
We are currently investigating more refined convex relaxations and the natural extension to nonlinear multi-task learning, as well as the inclusion of specific features on the tasks, which has been shown to improve performance in other settings [6].
References
[1] G. Wahba. Spline Models for Observational Data, volume 59 of CBMS-NSF Regional Conference Series in Applied Mathematics. SIAM, Philadelphia, 1990.
[2] F. Girosi, M. Jones, and T. Poggio. Regularization theory and neural networks architectures. Neural Comput., 7(2):219–269, 1995.
[3] R. Tibshirani. Regression shrinkage and selection via the lasso. J. Royal. Stat. Soc. B., 58:267–288, 1996.
[4] B. Bakker and T. Heskes. Task clustering and gating for Bayesian multitask learning. J. Mach. Learn. Res., 4:83–99, 2003.
[5] T. Evgeniou, C. Micchelli, and M. Pontil. Learning multiple tasks with kernel methods. J. Mach. Learn. Res., 6:615–637, 2005.
[6] J. Abernethy, F. Bach, T. Evgeniou, and J.-P. Vert. Low-rank matrix factorization with attributes. Technical Report cs/0611124, arXiv, 2006.
[7] A. Argyriou, T. Evgeniou, and M. Pontil. Multi-task feature learning. In B. Schölkopf, J. Platt, and T. Hoffman, editors, Adv. NIPS 19, pages 41–48, Cambridge, MA, 2007. MIT Press.
[8] G. R. G. Lanckriet, N. Cristianini, P. Bartlett, L. El Ghaoui, and M. I. Jordan. Learning the kernel matrix with semidefinite programming. J. Mach. Learn. Res., 5:27–72, 2004.
[9] M. Deodhar and J. Ghosh. A framework for simultaneous co-clustering and learning from complex data. In KDD '07, pages 250–259, New York, NY, USA, 2007. ACM.
[10] B. Peters, H.-H. Bui, S. Frankild, M. Nielson, C. Lundegaard, E. Kostem, D. Basch, K. Lamberth, M. Harndahl, W. Fleri, S. S. Wilson, J. Sidney, O. Lund, S. Buus, and A. Sette. A community resource benchmarking predictions of peptide binding to MHC-I molecules. PLoS Comput Biol, 2(6):e65, 2006.
[11] D. Heckerman, D. Kadie, and J. Listgarten. Leveraging information across HLA alleles/supertypes improves epitope prediction. J. Comput. Biol., 14(6):736–746, 2007.
[12] L. Jacob and J.-P. Vert. Efficient peptide-MHC-I binding prediction for alleles with few known binders. Bioinformatics, 24(3):358–366, Feb 2008.
2,756 | 35 |
INTRODUCTION TO A SYSTEM FOR IMPLEMENTING NEURAL NET
CONNECTIONS ON SIMD ARCHITECTURES
Sherryl Tomboulian
Institute for Computer Applications in Science and Engineering
NASA Langley Research Center, Hampton VA 23665
ABSTRACT
Neural networks have attracted much interest recently, and using parallel
architectures to simulate neural networks is a natural and necessary application. The SIMD model of parallel computation is chosen, because systems of
this type can be built with large numbers of processing elements. However,
such systems are not naturally suited to generalized communication. A method
is proposed that allows an implementation of neural network connections on
massively parallel SIMD architectures. The key to this system is an algorithm
that allows the formation of arbitrary connections between the "neurons". A
feature is the ability to add new connections quickly. It also has error recovery ability and is robust over a variety of network topologies. Simulations of
the general connection system, and its implementation on the Connection Machine, indicate that the time and space requirements are proportional to the
product of the average number of connections per neuron and the diameter of
the interconnection network.
INTRODUCTION
Neural Networks hold great promise for biological research, artificial intelligence, and even as general computational devices. However, to study systems
in a realistic manner, it is highly desirable to be able to simulate a network
with tens of thousands or hundreds of thousands of neurons. This suggests the
use of parallel hardware. The most natural method of exploiting parallelism
would have each processor simulating a single neuron.
Consider the requirements of such a system. There should be a very large
number of processing elements which can work in parallel. The computation
that occurs at these elements is simple and based on local data. The processing
elements must be able to have connections to other elements. All connections
in the system must be able to be traversed in parallel. Connections must be
added and deleted dynamically.
Given current technology, the only type of parallel model that can be constructed with tens of thousands or hundreds of thousands of processors is an
SIMD architecture. In exchange for being able to build a system with so many
processors, there are some inherent limitations. SIMD stands for single instruction multiple data1, which means that all processors can work in parallel, but
they must do exactly the same thing at the same time. This machine model
is sufficient for the computation required within a neuron, however in such a
system it is difficult to implement arbitrary connections between neurons. The
Connection Machine2 provides such a model, but uses a device called the router
This work was supported by the National Aeronautics and Space Administration under
NASA Contract No. NAS1-18010-7 while the author was in residence at ICASE.
© American Institute of Physics 1988
to deliver messages. The router is a complex piece of hardware that uses significant chip area, and without the additional hardware for the router, a machine
could be built with significantly more processors. Since one of the objectives is
to maximize the number of "neurons" it is desirable to eliminate the extra cost
of a hardware router and instead use a software method.
Existing software algorithms for forming connections on SIMD machines
are not sufficient for the requirements of neural networks. They restrict the form of graph (neural network) that can be embedded to permutations3,4 or sorts5,6 combined with7; the methods are network specific, and adding a new connection is highly time consuming.
The software routing method presented here is a unique algorithm which allows arbitrary neural networks to be embedded in machines with a wide variety
of network topologies. The advantages of such an approach are numerous: A
new connection can be added dynamically in the same amount of time that it
takes to perform a parallel traversal of all connections. The method has error
recovery ability in case of network failures. This method has relationships with
natural neural models. When a new connection is to be formed, the two neurons
being connected are activated, and then the system forms the connection without any knowledge of the "address" of the neuron-processors and without any
instruction as to the method of forming the connecting path. The connections
are entirely distributed; a processor only knows that connections pass through
it - it doesn't know a connection's origin or final destination.
Some neural network applications have been implemented on massively parallel architectures, but they have run into restrictions due to communication.
An implementation on the Connection Machine8 discovered that it was more
desirable to cluster processors in groups, and have each processor in a group
represent one connection, rather than having one processor per neuron, because
the router is designed to deliver one message at a time from each processor. This
approach is contrary with the more natural paradigm of having one processor
represent a neuron. The MPP9, a massively parallel architecture with processors arranged in a mesh, has been used to implement neural nets10, but because
of a lack of generalized communication software, the method for edge connections is a regular communication pattern with all neurons within a specified
distance. This is not an unreasonable approach, since within the brain neurons
are usually locally connected, but there is also a need for longer connections
between groups of neurons. The algorithms presented here can be used on
both machines to facilitate arbitrary connections with an irregular number of
connections at each processor.
MACHINE MODEL
As mentioned previously, since we desire to build a system with a large
number of processing elements, the only technology currently available for building such large systems is the SIMD architecture model. In the SIMD model
there is a single control unit and a very large number of slave processors that
can execute the same instruction stream simultaneously. It is possible to disable
some processors so that only some execute an instruction, but it is not possible
to have two processors performing different instructions at the same time. The
processors have exclusively local memory which is small (only a few thousand
bits), and they have no facilities for local indirect addressing. In this scheme
an instruction involves both a particular operation code and the local memory
address. All processors must do this same thing to the same areas of their local
memory at the same time.
The basic model of computation is bit-serial - each instruction operates on
a bit at a time. To perform multiple bit operations, such as integer addition,
requires several instructions. This model is chosen because it requires less
hardware logic, and so would allow a machine to be built with a larger number
of processors than could otherwise be achieved with a standard word-oriented
approach. Of course, the algorithms presented here will also work for machines
with more complex instruction abilities; the machine model described satisfies
the minimal requirements.
An important requirement for connection formation is that the processors
are connected in some topology. For instance, the processors might be connected in a grid so that each processor has a North, South, East, and West
neighbor. The methods presented here work for a wide variety of network
topologies. The requirements are: (1) there must be some path between any
two processors; (2) every neighbor link must be bi-directional, i.e. if A is a
neighbor of B, then B must be a neighbor of A; (3) the neighbor relations
between processors must have a consistent invertible labeling. A more precise definition of the labeling requirements can be found in 11. It suffices that
most networks12, including grid, hypercube, cube connected cycles13, shuffle exchange14, and mesh of trees15 are admissible under the scheme. Additional
requirements are that the processors be able to read from or write to their
neighbors' memories, and that at least one of the processors acts as a serial
port between the processors and the controller.
COMPUTATIONAL REQUIREMENTS
The machine model described here is sufficient for the computational requirements of a neuron. Adopt the paradigm that each processor represents one
neuron. While several different models of neural networks exist with slightly
different features, they are all fairly well characterized by computing a sum or
product of the neighbors values, and if a certain threshold is exceeded, then
the processor neuron will fire, i.e. activate other neurons. The machine model
described here is more efficient at boolean computation, such as described by
McCulloch and Pitts16 , since it is bit serial. Neural net models using integers
and floating point arithmetic 17,18 will also work but will be somewhat slower
since the time for computation is proportional to the number of bits of the
operands.
The only computational difficulty lies in the fact that the system is SIMD,
which means that the processes are synchronous. For some neural net models
this is sufficient18; however, others require asynchronous behavior17. This can
easily be achieved simply by turning the processors on and off based on a specified probability distribution. (For a survey of some different neural networks
see 19).
CONNECTION ASSUMPTIONS
Many models of neural networks assume fully connected systems. This
model is considered unrealistic, and the method presented here will work better
for models that contain more sparsely connected systems. While the method
will work for dense connections, the time and space required is proportional to
the number of edges, and becomes prohibitively expensive.
Other than the sparse assumptions, there are no restrictions to the topological form of the network being simulated. For example, multiple layered
systems, slightly irregular structures, and completely random connections are
all handled easily. The system does function better if there is locality in the
neural network. These assumptions seem to fit the biological model of neurons.
THE CONNECTION FORMATION METHOD
A fundamental part of a neural network implementation is the realization of
the connections between neurons. This is done using a software scheme first presented in 11,20. The original method was intended for realizing directed graphs
in SIMD architectures. Since a neural network is a graph with the neurons
being vertices and the connections being arcs, the method maps perfectly to
this system. Henceforth the terms neuron and vertex and the terms arc and
connection will be used interchangeably.
The software system presented here for implementing the connections has
several parts. Each processor will be assigned exactly one neuron. (Of course
some processors may be "free" or unallocated, but even "free" processor participate in the routing process.) Each connection will be realized as a path
in the topology of processors. A labeling of these paths in time and space is
introduced which allows efficient routing algorithms and a set-up strategy is
introduced that allows new connections to be added quickly.
The standard computer science approach to forming the connection would
be to store the addresses of the processors to which a given neuron is connected.
Then, using a routing algorithm, messages could be passed to the processors
with the specified destination. However, the SIMD architecture does not lend
itself to standard message passing schemes because processors cannot do indirect addressing, so buffering of values is difficult and costly.
Instead, a scheme is introduced which is closer to the natural neuron-synapse
structures. Instead of having an address for each connection, the connection
is actually represented as a fixed path between the processors, using time as a
virtual dimension. The path a connection takes through the network of processors is statically encoded in the local memories of the neurons that it passes
through. To achieve this, the following data structures will be resident at each
processor.
ALLOCATED ------ boolean flag indicating whether this
                 processor is assigned a vertex (neuron)
                 in the graph
VERTEX LABEL --- label of graph vertex (neuron)
HAS_NEIGHBOR[1 .. neighbor_limit] --- flags indicating
                 the existence of neighbors
SLOTS[1 .. T] OF --- arc path information
    START ---------- new arc starts here
    DIRECTION ------ direction to send,
                     {1 .. neighbor_limit, FREE}
    END ------------ arc ends here
    ARC LABEL ------ label of arc
The ALLOCATED and VERTEX LABEL field indicates that the processor
has been assigned a vertex in the graph (neuron). The HAS NEIGHBOR field
is used to indicate whether a physical wire exists in the particular direction; it
allows irregular network topologies and boundary conditions to be supported.
The SLOTS data structure is the key to realizing the connections. It is used
to instruct the processor where to send a message and to insure that paths are
constructed in such a way that no collisions will occur.
SLOTS is an array with T elements. The value T is called the time quantum.
Traversing all the edges of the embedded graph in parallel will take a certain
amount of time since messages must be passed along through a sequence of
neighboring processors. Forming these parallel connections will be considered
an uninterruptable operation which will take T steps. The SLOTS array is used
to tell the processors what they should do on each relative time position within
the time quantum.
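A direct transcription of this per-processor state into Python (a sketch; field names follow the listing above, and the values of T and the neighbor limit are illustrative):

from dataclasses import dataclass, field
from typing import List, Optional

T = 8                # time quantum: number of slots per traversal
NEIGHBOR_LIMIT = 4   # e.g., North, South, East, West on a grid

@dataclass
class Slot:
    start: bool = False              # a new arc starts here at this time step
    direction: Optional[int] = None  # neighbor index to forward to, or None (FREE)
    end: bool = False                # the arc terminates here
    arc_label: Optional[int] = None

@dataclass
class Processor:
    allocated: bool = False          # is a neuron assigned to this processor?
    vertex_label: Optional[int] = None
    has_neighbor: List[bool] = field(default_factory=lambda: [True] * NEIGHBOR_LIMIT)
    slots: List[Slot] = field(default_factory=lambda: [Slot() for _ in range(T)])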
One of the characteristics of this algorithm is that a fixed path is chosen to
represent the connection between two processors, and once chosen it is never
changed. For example, consider the grid below.
  |   |   |   |   |
--A---B---C---D---E--
  |   |   |   |   |
--F---G---H---I---J--
  |   |   |   |   |

Fig. 1. Grid Example
If there is an arc between A and H, there are several possible paths: East-East-South, East-South-East, and South-East-East. Only one of these paths
will be chosen between A and H, and that same path will always be used.
Besides being invariant in space, paths are also invariant in time. As stated
above, traversal is done within a time quantum T. Paths do not have to start
on time 1, but can be scheduled to start at some relative offset within the
time quantum. Once the starting time for the path has been fixed, it is never
changed. Another requirement is that a message can not be buffered, it must
proceed along the specified directions without interruption. For example, if
the path is of length 3 and it starts at time 1, then it will arrive at time
4. Alternatively, if it starts at time 2 it will arrive at time 5. Further, it is
necessary to place the paths so that no collisions occur; that is, no two paths
can be at the same processor at the same instant in time. Essentially time
adds an extra dimension to the topology of the network, and within this spacetime network all data paths must be non-conflicting. The rules for constructing
paths that fulfill these requirements are listed below.
• At most one connection can enter a processor at a given time, and at most one connection can leave a processor at a given time. It is possible to have both one coming and one going at the same time. Note that this does not mean that a processor can have only one connection; it means that it can have only one connection during any one of the T time steps. It can have as many as T connections going through it.
• Any path between two processors (u,v) representing a connection must consist of steps at contiguous times. For example, if the path from processor u to processor v is u,f,g,h,v, then if the arc from u-f is assigned time 1, f-g must have time 2, g-h time 3, and h-v time 4. Likewise if u-f occurs at time 5, then arc h-v will occur at time 8.
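Both rules can be checked mechanically. The sketch below (ours) validates a set of space-time paths, each given as a list of (processor, time) steps; it enforces contiguous times and the no-collision property used throughout this section.

def paths_are_valid(paths):
    # A path is [(proc, t0), (proc', t0+1), ...]: consecutive times, and no two
    # paths may occupy the same processor at the same instant in time.
    occupied = set()
    for path in paths:
        times = [t for _, t in path]
        if any(b != a + 1 for a, b in zip(times, times[1:])):
            return False                 # non-contiguous time steps
        for cell in path:
            if cell in occupied:
                return False             # collision at (processor, time)
            occupied.add(cell)
    return True

# A->H placed as East-East-South starting at time 1 on the Fig. 1 grid:
print(paths_are_valid([[("A", 1), ("B", 2), ("C", 3), ("H", 4)]]))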
When these rules are used when forming paths, the SLOTS structure can
be used to mark the paths. Each path goes through neighboring processors at
successive time steps. For each of these time steps the DIRECTION field of
the SLOTS structure is marked, telling the processor which direction it should
pass a message if it receives it on that time. SLOTS serves both to instruct the
processors how to send messages, and to indicate that a processor is busy at a
certain time slot so that when new paths are constructed it can be guaranteed
that they won't conflict with current paths.
Consider the following example. Suppose we are given the directed graph
with vertices A,B,C,D and edges A - > C, B - > C,B - > D, and D - >
A. This is to be done where A,B,C, and D have been assigned to successive
elements of a linear array. (A linear array is not a good network for this
scheme, but is a convenient source of examples.)
Fig. 2. Graph Example. Logical connections: A->C, B->C, B->D, D->A.
A, B, C, D are successive members in a linear array:

1---2---3---4
A---B---C---D
First, A->C can be completed with the map East-East, so
Slots[A][1].direction = E, Slots[B][2].direction = E,
Slots[C][2].end = 1.
B->C can be done with the map East; it can start at time 1,
since Slots[B][1].direction and Slots[C][1].end are free.
B->D goes through C then to D; its map is East-East. B is
occupied at times 1 and 2. It is free at time 3,
so Slots[B][3].direction = E, Slots[C][4].direction = E,
Slots[D][4].end = 1.
D->A must go through C,B,A, using map West-West-West.
D is free on time 1, C is free on time 2, but B is occupied
on time 3. D is free on time 2, but C is occupied on time 3.
It can start from D at time 3: Slots[D][3].direction = W,
Slots[C][4].direction = W, Slots[B][5].direction = W,
Slots[A][5].end = 1.
Every processor acts as a conduit for its neighbors' messages. No processor
knows where any message is going to or coming from, but each processor knows
what it must do to establish the local connections.
The use of contiguous time slots is vital to the correct operation of the
system. If all edge-paths are established according to the above rules, there is
a simple method for making the connections. The paths have been restricted
so that there will be no collisions, and paths' directions use consecutive time
slots. Hence if all arcs at time i send a message to their neighbors, then each
processor is guaranteed no more than 1 message coming to it. The end of a
path is specified by setting a separate bit that is tested after each message
is received. A separate start bit indicates when a path starts. The start bit
is needed because the SLOTS array just tells the processors where to send a
message, regardless of how that message arrived. The start array indicates
when a message originates, as opposed to arriving from a neighbor.
The following algorithm is basic to the routing system.
for i = time 1 to T
    FORALL processors
        /* if an arc starts or is passing through at this time */
        if SLOT[i].START = 1 or active = 1
            for j = 1 to neighbor_limit
                if SLOT[i].direction = j
                    write message bit to in-box of neighbor j;
            set active = 0;
    FORALL processors that just received a message
        if end[i]
            move in-box to message-destination;
        else
            move in-box to out-box;
            set active bit = 1;
This code follows the method mentioned above. The time slots are looped
through and the messages are passed in the appropriate directions as specified
in the SLOTS array. Two bits, in-box and out-box, are used for message passing
so that an out-going message won't be overwritten by an in-coming message
before it gets transferred. The inner loop for j = 1 to neighbor_limit checks
each of the possible neighbor directions and sends the message to the correct
neighbor. For instance, in a grid the neighbor limit is 4, for North, South, East,
and West neighbors. The time complexity of data movement is O(T × neighbor_limit).
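A sequential Python rendering of this SIMD loop (ours). Each processor is a dict carrying its 'slots' table and, for path sources, the bit to emit under 'message' (our convention); neighbors[p][j] names the processor reached from p in direction j.

def traverse(processors, neighbors, T):
    # Simulate one time quantum of the routing loop above.
    outbox = {p: None for p in processors}
    delivered = {p: [] for p in processors}
    for t in range(T):
        inbox = {p: None for p in processors}
        for p, state in processors.items():           # FORALL processors
            slot = state['slots'][t]
            if slot['start'] or outbox[p] is not None:
                bit = state['message'] if slot['start'] else outbox[p]
                inbox[neighbors[p][slot['direction']]] = bit
                outbox[p] = None
        for p, bit in inbox.items():                  # FORALL processors that received a message
            if bit is None:
                continue
            if processors[p]['slots'][t]['end']:
                delivered[p].append(bit)              # message reached its destination
            else:
                outbox[p] = bit                       # forward on the next time step
    return delivered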
SETTING UP CONNECTIONS
One of the goals in developing this system was to have a method for adding
new connections quickly. Paths are added so that they don't conflict with any
previously constructed path. Once a path is placed it will not be re-routed
by the basic placement algorithm; it will always start at the same spot at the
same time. The basic idea of the method for placing a connection is to start
from the source processor and in parallel examine all possible paths outward
from it that do not conflict with pre-established paths and which adhere to the
sequential time constraint. As the trial paths are flooding the system, they
are recorded in temporary storage. At the end of this deluge of trial paths all
possible paths will have been examined. If the destination processor has been
reached, then a path exists under the current time-space restrictions. Using
the stored information a path can be backtraced and recorded in the SLOTS
structure. This is similar to the Lee-Moore routing algorithm21,22 for finding a
path in a system, but with the sequential time restriction.
For example, suppose that the connection (u,v) is to be added. First it is
assumed that processors for u and v have already been determined, otherwise
(as a simplification) assume a random allocation from a pool of free processors. A parallel breadth-first search will be performed starting from the source
processor. During the propagation phase a processor which receives a message
checks its SLOTS array to see if they are busy on that time step, if not it will
propagate to its neighbors on the next time step. For instance, suppose a trial
path starts at time 1 and moves to a neighboring processor, but that neighbor is
already busy at time 1 (as can be seen by examining the DIRECTION-SLOT.)
Since a path that would go through this neighbor at this time is not legal, the
trial path would commit suicide, that is, it stops propagating itself. If the processor slot for time 2 was free, the trial path would attempt to propagate to all
of its neighbors at time 3.
Using this technique paths can be constructed with essentially no knowledge of the relative locations of the "neurons" being connected or the underlying topology. Variations on the outlined method, such as choosing the shortest
path, can improve the choice of paths with very little overhead. If the entire network were known ahead of time, an off-line method could be used to construct
the paths more efficiently; work on off-line methods is underway. However, the
simple elegance of this basic method holds great appeal for systems that change
slowly over time in unpredictable ways.
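A sketch of this trial-path flood (ours): a breadth-first search over (processor, time) states that skips busy slots and backtracks through recorded parents to recover one feasible path. It finds some legal path, not necessarily the shortest or earliest one.

from collections import deque

def find_path(src, dst, neighbors, busy, T):
    # busy[p][t] is True when processor p already carries a path at time t;
    # neighbors[p] lists the processors adjacent to p.
    starts = [(src, t) for t in range(T) if not busy[src][t]]
    parent = {s: None for s in starts}
    queue = deque(starts)
    while queue:
        p, t = queue.popleft()
        if p == dst:
            path, node = [], (p, t)
            while node is not None:       # backtrack through the recorded trial paths
                path.append(node)
                node = parent[node]
            return path[::-1]             # the path to record in the SLOTS structure
        if t + 1 < T:
            for q in neighbors[p]:
                state = (q, t + 1)
                if state not in parent and not busy[q][t + 1]:
                    parent[state] = (p, t)
                    queue.append(state)
    return None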
PERFORMANCE
Adding an edge (assuming one can be added), deleting any set of edges, or
traversing all the edges in parallel, all have time complexity O(T × neighbor_limit). If it is assumed that neighbor_limit is a small constant then the complexity is O(T). Since T is related both to the time and space needed, it is
a crucial factor in determining the value of the algorithms presented. Some
analytic bounds on T were presented in11, but it is difficult to get a tight bound
on T for general interconnection networks and dynamically changing graphs. A
simulator was constructed to examine the behavior of the algorithms. Besides
the simulated data, the algorithms mentioned were actually implemented for
the Connection Machine. The data produced by the simulator is consistent
with that produced by the real machine. The major result is that the size of T
appears proportional to the average degree of the graph times the diameter of
the interconnection network20.
FURTHER RESEARCH
This paper has been largely concerned with a system that can realize the
connections in a neural network when the two neurons to be joined have been
activated. The tests conducted have been concerned with the validity of the
method for implementing connections, rather than with a full simulation of a
neural network. Clearly this is the next step.
A natural extension of this method is a system which can form its own
connections based solely on the activity of certain neurons, without having
to explicitly activate the source and destination neurons. This is an exciting
avenue, and further results should be forthcoming.
Another area of research involves the formation of branching paths. The
current method takes an arc in the neural network and realizes it as a unique
path in space-time. A variation that has similarities to dendritic structure
would allow a path coming from a neuron to branch and go to several target
neurons. This extension would allow for a much more economical embedding
system. Simulations are currently underway.
CONCLUSIONS
A method has been outlined which allows the implementation of neural nets
connections on a class of parallel architectures which can be constructed with
very large numbers of processing elements. To economize on hardware so as to
maximize the number of processing element buildable, it was assumed that the
processors only have local connections; no hardware is provided for communication. Some simple algorithms have been presented which allow neural nets
with arbitrary connections to be embedded in SIMD architectures having a variety of topologies. The time for performing a parallel traversal and for adding
a new connection appears to be proportional to the diameter of the topology
times the average number of arcs in the graph being embedded. In a system
where the topology has diameter O(log N), and where the degree of the graph
being embedded is bounded by a constant, the time is apparently O(log N).
This makes it competitive with existing methods for SIMD routing, with the
advantages that there are no apriori requirements for the form of the data, and
the topological requirements are extremely general. Also, with our approach
new arcs can be added without reconfiguring the entire system. The simplicity
of the implementation and the flexibility of the method suggest that it could be
an important tool for using SIMD architectures for neural network simulation.
BIBLIOGRAPHY
1. M.J. Flynn, "Some computer organizations and their effectiveness", IEEE
Trans. Comput., vol. C-21, no. 9, pp. 948-960.
2. W. Hillis, "The Connection Machine", MIT Press, Cambridge, Mass, 1985.
3. D. Nassimi, S. Sahni, "Parallel Algorithms to Set-up the Benes Permutation
Network", Proc. Workshop on Interconnection Networks for Parallel and Distributed Processing, April 1980.
4. D. Nassimi, S. Sahni, "Benes Network and Parallel Permutation Algorithms",
IEEE Transactions on Computers, Vol C-30, No 5, May 1981.
5. D. Nassimi, S. Sahni, "Parallel Permutation and Sorting Algorithms and a
New Generalized Connection Network", JACM, Vol. 29, No. 3, July 1982, pp.
642-667.
6. K.E. Batcher, "Sorting Networks and their Applications", The Proceedings
of AFIPS 1968 SJCC, 1968, pp. 307-314.
7. C. Thompson, "Generalized connection networks for parallel processor intercommunication", IEEE Trans. Computers, vol. C-27, Dec. 1978, pp. 1119-1125.
8. Nathan H. Brown, Jr., "Neural Network Implementation Approaches for the
Connection Machine", presented at the 1987 conference on Neural Information
Processing Systems - Natural and Synthetic.
9. K.E. Batcher, "Design of a massively parallel processor", IEEE Trans on
Computers, Sept 1980, pp. 836-840.
10. H.M. Hastings, S. Waner, "Neural Nets on the MPP", Frontiers of Massively
Parallel Scientific Computation, NASA Conference Publication 2478, NASA
Goddard Space Flight Center, Greenbelt Maryland, 1986.
11. S. Tomboulian, "A System for Routing Arbitrary Communication Graphs
on SIMD Architectures", Doctoral Dissertation, Dept of Computer Science,
Duke University, Durham NC.
12. T. Feng, "A Survey of Interconnection Networks", Computer, Dec 1981,
pp.12-27.
13. F. Preparata and J. Vuillemin, "The Cube Connected Cycles: a Versatile
Network for Parallel Computation", Comm. ACM, Vol 24, No 5 May 1981, pp.
300-309.
14. H. Stone, "Parallel processing with the perfect shuffle", IEEE Trans. Computers, vol. C-20, Feb. 1971, pp. 153-161.
15. T. Leighton, "Parallel Computation Using Meshes of Trees", Proc. International Workshop on Graph Theory Concepts in Computer Science, 1983.
16. W.S. McCulloch and W. Pitts, "A Logical Calculus of the Ideas Immanent
in Nervous Activity," Bulletin of Mathematical Biophysics, Vol. 5, 1943, pp. 115-133.
17. J.J. Hopfield, "Neural networks and physical systems with emergent collective computational abilities", Proc. Natl. Acad. Sci., Vol. 79, April 1982, pp.
2554-2558.
18. T. Kohonen, "Self-Organization and Associative Memory", Springer-Verlag,
Berlin, 1984.
19. R.P. Lippmann, "An Introduction to Computing with Neural Nets", IEEE
ASSP Magazine, April 1987, pp. 4-22.
20. S. Tomboulian, "A System for Routing Directed Graphs on SIMD Architectures", ICASE Report No. 87-14, NASA Langley Research Center, Hampton,
VA.
21. C.Y. Lee, "An algorithm for path connections and its applications", IRE
Trans. Electron. Comput., Vol. EC-10, Sept. 1961, pp. 346-365.
22. E.F. Moore, "Shortest path through a maze", Annals of the Computation
Laboratory, vol. 30, Cambridge, MA: Harvard Univ. Press, 1959, pp. 285-292.
2,757 | 350 | Simple Spin Models
for the Development of Ocular Dominance
Columns and Iso-Orientation Patches
J.D. Cowan & A.E. Friedman
Department of Mathematics. Committee on
Neurobiology. and Brain Research Institute.
The University of Chicago. 5734 S. Univ. Ave .?
Chicago. Illinois 60637
Abstract
Simple classical spin models well-known to physicists as the ANNNI
and Heisenberg XY Models. in which long-range interactions occur in
a pattern given by the Mexican Hat operator. can generate many of the
structural properties characteristic of the ocular dominance columns
and iso-orientation patches seen in cat and primate visual cortex.
1 INTRODUCTION
In recent years numerous models for the formation of ocular dominance columns
(Malsburg, 1979; Swindale, 1980; Miller, Keller, & Stryker, 1989) and of iso-orientation
patches (Malsburg, 1973; Swindale, 1982 & Linsker, 1986) have been published. Here we
show that simple spin models can reproduce many of the observed features. Our work is
similar to, but independent of, a recent study employing spin models (Tanaka, 1990).
1.1 OCULAR DOMINANCE COLUMNS
We use a one-dimensional classical spin Hamiltonian on a two-dimensional lattice with
long-range interactions. Let σ_i be a spin vector restricted to the orientations î and ĵ in
the lattice space, and let the spin Hamiltonian be:
H_{OD} = -\sum_i \sum_{j \neq i} w_{ij} \, \sigma_i \cdot \sigma_j        (1)

where w_{ij} is the well-known "Mexican Hat" distribution of weights:

w_{ij} = a_+ \exp(-|i-j|^2/\sigma_+^2) - a_- \exp(-|i-j|^2/\sigma_-^2)        (2)

H_{OD} = -\sum_i \sum_{j \neq i} w^s_{ij} + \sum_i \sum_{j \neq i} w^o_{ij}        (3)

where the superscripts s and o label weights between fibers from the same and opposite eyes, respectively (the signs follow from \sigma_i \cdot \sigma_j = +1 for same-eye and -1 for opposite-eye pairs).
Figure 1. Pattern of Ocular Dominance which
results from simulated annealing of the energy
function HOD. Light and dark shadings correspond
respectively to the two eyes.
Let s denote retinal fibers from the same eye and o fibers from the opposite eye. Then
H_OD represents the "energy" of interactions between fibers from the two eyes. It is
relatively easy to find a configuration of spins which minimizes H_OD by simulated
annealing (Kirkpatrick, Gelatt & Vecchi 1983). The result is shown in figure 1. It will
be seen that the resulting pattern of right and left eye spins σ_R and σ_L is disordered, but
at a constant wavelength determined in large part by the space constants σ_+ and σ_-.
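As an illustration of the annealing step, the sketch below minimizes an H_OD-style energy for a small lattice of binary spins using the Metropolis rule. It is a minimal sketch under stated assumptions: the lattice size, interaction cutoff, cooling schedule, and kernel parameters are placeholders, not the values used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
L = 32                                    # lattice side (placeholder)
spins = rng.choice([-1, 1], size=(L, L))  # +1 / -1 encode the two eyes

# Mexican-Hat weight as a function of squared lattice distance (Eq. 2)
def w(d2, a_pos=1.0, a_neg=0.9, s_pos=2.0, s_neg=4.0):
    return a_pos * np.exp(-d2 / s_pos**2) - a_neg * np.exp(-d2 / s_neg**2)

# local field acting on site (x, y) within a cutoff radius R
def local_field(spins, x, y, R=8):
    h = 0.0
    for dx in range(-R, R + 1):
        for dy in range(-R, R + 1):
            if dx == 0 and dy == 0:
                continue
            h += w(dx * dx + dy * dy) * spins[(x + dx) % L, (y + dy) % L]
    return h

# Metropolis annealing: flipping one spin changes the energy by
# dE = 2 * sigma * h (up to a convention-dependent factor of 2)
for T in np.geomspace(2.0, 0.05, 60):          # cooling schedule
    for _ in range(L * L):
        x, y = rng.integers(L, size=2)
        dE = 2.0 * spins[x, y] * local_field(spins, x, y)
        if dE < 0 or rng.random() < np.exp(-dE / T):
            spins[x, y] *= -1
```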
Breaking the symmetry of the initial conditions (or letting the lattice grow
systematically) results in ordered patterns.
If H_OD is considered to be the energy function of a network of spins exhibiting gradient
dynamics (Hirsch & Smale, 1974), then one can write equations for the evolution of spin
patterns in the form:

\frac{d\sigma_i^\alpha}{dt} = -\frac{\partial}{\partial \sigma_i^\alpha} H_{OD} = \sum_{j \neq i} w^s_{ij} \sigma_j^\alpha + \sum_{j \neq i} w^o_{ij} \sigma_j^\beta = \sum_{j \neq i} w_{ij} \sigma_j^\alpha - \sum_{j \neq i} w_{ij} \sigma_j^\beta        (4)

where \alpha = R or L and \beta = L or R, respectively. Equation (4) will be recognized as that
proposed by Swindale in 1979.
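For completeness, the gradient dynamics of Eq. (4) can also be integrated directly; the sketch below uses a forward-Euler step and FFT-based convolution on a periodic lattice. The kernel parameters, step size, and the clipping used to keep the densities bounded are illustrative assumptions rather than details from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
L, R = 32, 8
sR = rng.random((L, L))          # right-eye component
sL = rng.random((L, L))          # left-eye component

# precompute a Mexican-Hat kernel over a (2R+1)x(2R+1) window (Eq. 2)
y, x = np.mgrid[-R:R + 1, -R:R + 1]
d2 = (x**2 + y**2).astype(float)
K = np.exp(-d2 / 2.0**2) - 0.9 * np.exp(-d2 / 4.0**2)
K[R, R] = 0.0                    # exclude the j = i term

def convolve_periodic(s, K):
    """Sum_j w_ij * s_j on a periodic lattice via circular FFT convolution."""
    Kfull = np.zeros_like(s)
    Kfull[:2 * R + 1, :2 * R + 1] = K
    Kfull = np.roll(Kfull, (-R, -R), axis=(0, 1))   # center kernel at (0, 0)
    return np.real(np.fft.ifft2(np.fft.fft2(s) * np.fft.fft2(Kfull)))

dt = 0.05
for _ in range(200):             # forward-Euler integration of Eq. (4)
    fR = convolve_periodic(sR, K) - convolve_periodic(sL, K)
    fL = convolve_periodic(sL, K) - convolve_periodic(sR, K)
    sR = np.clip(sR + dt * fR, 0.0, 1.0)   # keep the densities bounded
    sL = np.clip(sL + dt * fL, 0.0, 1.0)
```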
1.2 ISO-ORIENTATION PATCHES
Now let σ_i represent a vector in the plane of the lattice which runs continuously from
î to ĵ, without reference to eye class. It follows that

\sigma_i = |\sigma_i| \, (\hat{i} \cos\theta_i + \hat{j} \sin\theta_i)        (5)

where θ_i is the orientation of the ith spin vector. The appropriate classical spin
Hamiltonian is:

H_{IO} = -\sum_i \sum_{j \neq i} w_{ij} \, \sigma_i \cdot \sigma_j = -\sum_i \sum_{j \neq i} w_{ij} \, |\sigma_i| |\sigma_j| \cos(\theta_i - \theta_j).        (6)
Physicists will recognize H_OD as a form of the Ising lattice Hamiltonian with long-range
alternating next-nearest-neighbor interactions, a type of ANNNI model (Binder, 1986),
and H_IO as a similar form of the Heisenberg XY Model for antiferromagnetic materials
(Binder, 1986).
Again one can find a spin configuration that minimizes H_IO by simulated annealing. The
result is shown in figure 2, in which six differing orientations are depicted, corresponding
to 30° increments (note that θ + π is equivalent to θ). It will be seen that there are long
stretches of continuously changing spin vector orientations, with intercalated
discontinuities and both clockwise and counter-clockwise singular regions around which
the orientations rotate. A one-dimensional slice shows some of these features, and is
shown in figure 3.
Figure 2. Pattern of orientation patches obtained by
simulated annealing of the energy function H_IO. Six
differing orientations varying from 0° to 180° are
represented by the different shadings.
[Plot: orientation θ_i (0° to 180°) versus cell number (0 to 50)]
Figure 3. Details of a one-dimensional slice through
the orientation map. Long stretches of smoothly
changing orientations are evident.
The length of σ_i is also correlated with these details. Figure 4 shows that |σ_i| is large in
smoothly changing regions and smallest in the neighborhood of a singularity. In fact this
model reproduces most of the details of iso-orientation patches found by Blasdel and
Salama (1986).
[Plot: |σ_i| (0 to 10) versus cell number (0 to 50)]
Figure 4. Variation of |σ_i| along the same one-dimensional
slice through the orientation map shown in figure 3.
The amplitude drops only near singular regions.
For example, the change in orientation per unit length, |∇θ_i|, is shown in figure 5. It
will be seen that the lattice is "tiled", just as in the data from visual cortex, with the
maxima of |∇θ_i| located at singularities.
Figure 5. Plot of |∇θ_i| corresponding to the
orientation map of figure 2. Regions of maximum
rate of change of θ_i are shown as shaded. These
correspond with the singular regions of figure 2.
Once again, if H_IO is taken to be the energy of a gradient dynamical system, there results
the equation:

\frac{d\sigma_i}{dt} = -\frac{\partial}{\partial \sigma_i} H_{IO} = \sum_{j \neq i} w_{ij} \sigma_j        (7)
which is exactly that equation introduced by Swindale in 1981 as a model for the
structure of iso-orientation patches. There is an obvious relationship between such
equations, and recent similar treatments (Durbin & Mitchison 1990; Schulten, K. 1990
(preprint); Cherjnavsky & Moody, 1990).
2 CONCLUSIONS
Simple classical spin models well-known to physicists as the ANNNI and Heisenberg
XY Models, in which long-range interactions occur in a pattern given by the Mexican
Hat operator, can generate many of the structural properties characteristic of the ocular
dominance columns and iso-orientation patches seen in cat and primate visual cortex.
Acknowledgements
This work is based on lectures given at the Institute for Theoretical Physics (Santa
Barbara) Workshop on Neural Networks and Spin Glasses, in 1986. We thank the
Institute and The University of Chicago Brain Research Foundation for partial support of
this work.
References
Malsburg, Ch.v.d. (1979), Biol. Cybern., 32, 49-62.
Swindale, N.V. (1980), Proc. Roy. Soc. Lond. B, 208, 243-264.
Miller, K.D., Keller, J.B. & Stryker, M.P. (1989), Science, 245, 605-611.
Malsburg, Ch.v.d. (1973), Biol. Cybern., 14, 85-100.
Swindale, N.V. (1982), Proc. Roy. Soc. Lond. B, 215, 211-230.
Linsker, R. (1986), PNAS, 83, 7508-7512; 8390-8394; 8779-8783.
Tanaka, S. (1990), Neural Networks, 3, 6, 625-640.
Kirkpatrick, S., Gelatt, C.D. Jr. & Vecchi, M.P. (1983), Science, 220, 671-680.
Hirsch, M.W. & Smale, S. (1974), Differential Equations, Dynamical Systems,
and Linear Algebra (Academic Press, NY).
Binder, K. (1986), Monte Carlo Methods in Statistical Physics, (Springer, NY.).
Blasdel, G.G. & Salama, G. (1986), Nature, 321, 579-587.
Durbin, R. & Mitchison, G. (1990), Nature, 343, 6259, 644-647.
Schulten, K. (1990) (preprint).
Cherjnavsky, A. & Moody, J. (1990), Neural Computation, 2, 3, 334-354.
2,758 | 3,500 | ICA based on a Smooth Estimation of the Differential
Entropy
Lev Faivishevsky
School of Engineering, Bar-Ilan University
[email protected]
Jacob Goldberger
School of Engineering, Bar-Ilan University
[email protected]
Abstract
In this paper we introduce the MeanNN approach for estimation of main information theoretic measures such as differential entropy, mutual information and
divergence. As opposed to other nonparametric approaches the MeanNN results
in smooth differentiable functions of the data samples with clear geometrical interpretation. Then we apply the proposed estimators to the ICA problem and obtain
a smooth expression for the mutual information that can be analytically optimized
by gradient descent methods. The improved performance of the proposed ICA
algorithm is demonstrated on several test examples in comparison with state-of-the-art techniques.
1
Introduction
Independent component analysis (ICA) is the problem of recovering a latent random vector from
observations of unknown linear functions of that vector. Assume data S ∈ R^d is generated via d
independent sources. We observe X = AS where A is an unknown square matrix called the mixing
matrix. We are given a repeated-observation dataset {x_1, ..., x_n} and our goal is to recover the linear
transformation A and the sources s_1, ..., s_n that generated our data x_i = As_i.
Given the minimal statement of the problem, it has been shown [6] that one can recover the original sources up to a scaling and a permutation provided that at most one of the underlying sources is
Gaussian and the rest are non-Gaussian. Upon pre-whitening the observed data, the problem reduces
to a search over rotation matrices in order to recover the source and mixing matrix in the sense described above [10]. We will assume henceforth that such pre-processing has been done. Specifying
distributions for the components of X, one obtains a parametric model that can be estimated via
maximum likelihood [3, 4]. Working with W = A^{-1} as the parametrization, one readily obtains
a gradient or fixed-point algorithm that yields an estimate \hat{W} and provides estimates of the latent
components via \hat{S} = \hat{W} X [10].
In practical applications the distributions of the d components of X are unknown. Therefore it is
preferable to consider the ICA model as a semiparametric model in which the distributions of the
components of X are left unspecified. The problem is then, obviously, to find a suitable contrast
function, i.e. a target function to be minimized in order to estimate the ICA model. The earliest
ICA algorithms were based on contrast functions defined in terms of expectations of a single fixed
nonlinear function, chosen in an ad-hoc manner [5]. More sophisticated algorithms have been obtained
by careful choice of a single fixed nonlinear function, such that the expectations of this function
yield a robust approximation to the mutual information [9].
Maximizing the likelihood in the semiparametric ICA model is essentially equivalent to minimizing
the mutual information between the components of the estimate \hat{S} = \hat{W} X [4]. The usage of the
mutual information as a contrast function to be minimized in estimating the ICA model is well
motivated, quite apart from the link to maximum likelihood [6].
1
Estimating MI from a given finite sample set is difficult. Several modern approaches rely on k-nearest neighbor estimates of entropy and mutual information [12, 16]. Recently the Vasicek estimator [17] for the differential entropy of 1D random variables, based on k-nearest neighbor statistics, was applied to ICA [8, 13]. In addition, ICA was studied by another recently introduced MI
estimator [16]. However, the derivative of estimators that are based on order statistics can hardly
be computed, and therefore the optimization of such numerical criteria cannot be based on gradient
techniques. Also, the resulting numerical criteria tend to have a non-smooth dependency on sample
values. The optimization therefore has to involve computation of the contrast function on a whole grid
of searched parameters.
In addition, such estimators do not utilize optimally the whole amount of data included in the samples of random vectors. Therefore they require significant artificial enlargement of data sets by a
technique called data augmentation [13] that replaces each data point in the sample with an R-tuple (R is
usually 30) of points given by a statistical procedure with ad-hoc parameters. An alternative is the
Fourier filtering of the estimated values of the evaluated MI estimators [16].
In the present paper we propose new smooth estimators for the differential entropy, the mutual information and the divergence. The estimators are obtained by a novel approach averaging k-nearest
neighbor statistics for all possible values of the order statistic k. The estimators are smooth, and their
derivatives may be easily calculated analytically, thus enabling fast gradient optimization techniques.
They fully utilize the amount of data contained in a random variable sample. The estimators provide a novel geometrical interpretation for the entropy. When applied to the ICA problem, the proposed
estimator leads to the most precise results for many distributions known at present.
The rest of the paper is organized as follows: Section 2 reviews the kNN approach for the entropy
and divergence estimation, Section 3 introduces the mean estimator for the differential entropy,
the mutual information and the divergence. Section 4 describes the application of the proposed
estimators to the ICA problem and Section 5 describes conducted numerical experiments.
2
kNN Estimators for the Differential Entropy
We review the nearest neighbor technique for the Shannon entropy estimation. The differential
entropy of X is defined as:
H(X) = -\int f(x) \log f(x) \, dx        (1)
We describe the derivation of the Shannon differential entropy estimate of [11, 18]. Our aim is
to estimate H(X) from a random sample (x_1, ..., x_n) of n random realizations of a d-dimensional
random variable X with unknown density function f(x). The entropy is the average of -log f(x).
If one had unbiased estimators for log f(x_i), one would arrive at an unbiased estimator for the
entropy. We will estimate log f(x_i) by considering the probability density function P_i^k(\epsilon) for the
distance between x_i and its k-th nearest neighbor (the probability is computed over the positions
of all other n-1 points, with x_i kept fixed). The probability P_i^k(\epsilon) d\epsilon is equal to the chance that
there is one point within distance r \in [\epsilon, \epsilon + d\epsilon] from x_i, that there are k-1 other points at smaller
distances, and that the remaining n-k-1 points have larger distances from x_i. Denote the mass of
the \epsilon-ball centered at x_i by p_i(\epsilon), i.e. p_i(\epsilon) = \int_{\|x - x_i\| < \epsilon} f(x) \, dx. Applying the trinomial formula
we obtain:

P_i^k(\epsilon) = \frac{(n-1)!}{1!\,(k-1)!\,(n-k-1)!} \, \frac{dp_i(\epsilon)}{d\epsilon} \, p_i^{k-1} (1 - p_i)^{n-k-1}        (2)

It can be easily verified that indeed \int P_i^k(\epsilon) \, d\epsilon = 1. Hence, the expected value of the function
log p_i(\epsilon) according to the distribution P_i^k(\epsilon) is:

E_{P_i^k(\epsilon)}[\log p_i(\epsilon)] = \int_0^\infty P_i^k(\epsilon) \log p_i(\epsilon) \, d\epsilon = k \binom{n-1}{k} \int_0^1 p^{k-1} (1-p)^{n-k-1} \log p \, dp = \psi(k) - \psi(n)        (3)

where \psi(x) is the digamma function (the logarithmic derivative of the gamma function). To verify
the last equality, differentiate the identity \int_0^1 x^{a-1}(1-x)^{b-1} dx = \Gamma(a)\Gamma(b)/\Gamma(a+b) with respect to
the parameter a and recall that \Gamma'(x) = \Gamma(x)\psi(x). The expectation is taken over the positions of
all other n-1 points, with x_i kept fixed. Assuming that f(x) is almost constant in the entire \epsilon-ball
around x_i, we obtain:

p_i(\epsilon) \approx c_d \epsilon^d f(x_i)        (4)

where d is the dimension of x and c_d is the volume of the d-dimensional unit ball (c_d = \pi^{d/2}/\Gamma(1 + d/2) for the Euclidean norm).
? log f (xi ) ? ?(n) ? ?(k) + log(cd ) + dE(log(?))
(5)
which finally leads to the unbiased kNN estimator for the differential entropy [11]:
n
Hk (X) = ?(n) ? ?(k) + log(cd ) +
dX
log ?i
n i=1
(6)
where ?i is the distance from xi to its k-th nearest neighbor. An alternative proof of the asymptotic
unbiasedness and consistency of the kNN estimator is found at [15].
A similar approach can be used to obtain a kNN estimator for the Kullback-Leibler divergence [19].
The estimator works as follows. Let {x1 , ..., xn } and {y1 , ..., ym } be i.i.d. d-dimensional samples
drawn independently from the densities p and q respectively. By definition the divergence is given
by:
Z
p(x)
(7)
D(pkq) = p(x) log
q(x)
The distance of xi to its nearest neighbor in {xj }j6=i is defined as
?n (i) = min d(xi , xj )
j6=i
(8)
We also define the distance of xi to its nearest neighbor in {yj }
?n (i) =
min d(xi , yj )
j=1,...,m
(9)
Then the estimator of [19] is given by
n
X
?m (i)
m
? n,m = d
log
+ log
D
n i=1
?n (i)
n?1
(10)
The authors established asymptotic unbiasedness and mean-square consistency of the estimator (10).
The same proofs could be applied to obtain k-nearest neighbor version of the estimator:
n
v k (i)
dX
m
k
? n,m
log m
+ log
D
=
n i=1
?kn (i)
n?1
(11)
Being non-parametric, the kNN estimators (6, 11) rely on the order statistics. This makes the analytical calculation of the gradient hardly possible. Also it leads to a certain lack of smoothness of the
estimator value as a function of the sample coordinates. One should also mention that finding the
k-nearest neighbor is a computationally intensive problem. It becomes necessary to use involved
approximate nearest neighbor techniques for large data sets.
3
The MeanNN Entropy Estimator
We propose a novel approach for the entropy estimation as a function of sample coordinates. It is
based on the fact that the kNN estimator (6) is valid for every k. Therefore the differential entropy
can also be extracted from a mean of several estimators corresponding to different values of k. Next
we consider all the possible values of the order statistic k from 1 to n-1:

H_{mean} = \frac{1}{n-1} \sum_{k=1}^{n-1} H_k = \log(c_d) + \psi(n) + \frac{1}{n-1} \sum_{k=1}^{n-1} \left( -\psi(k) + \frac{d}{n} \sum_{i=1}^{n} \log \epsilon_{i,k} \right)        (12)

where \epsilon_{i,k} is the distance from x_i to its k-th nearest neighbor. Consider the double-summation last
term in Eq. (12). Exchanging the order of summation, the last sum adds, for each sample point x_i,
the sum of the logs of its distances to all its nearest neighbors in the sample. This is of course
equivalent to the sum of the logs of its distances to all other points in the sample set. Hence the
mean estimator (12) for the differential entropy can be written as:

H_{mean} = \text{const} + \frac{d}{n(n-1)} \sum_{i \neq j} \log \|x_i - x_j\|        (13)
where the constant depends only on the sample size and dimensionality. We dub this estimator the
MeanNN estimator for differential entropy. It follows that the differential entropy (approximation)
has a clear geometric meaning. It is proportional to the log of the product of distances between each
two points in a random i.i.d. sample. This is an intuitive observation, since a higher entropy would
lead to a larger scattering of the samples; thus pairwise distances would grow, resulting in a larger
product of all distances. Moreover, the MeanNN estimator (13) is a smooth function of the sample
coordinates. Its gradient can be easily found. The asymptotic unbiasedness and consistency of the
estimator follow from the same properties of the kNN estimator (6). Obviously, the same method
gives the mean estimator for the mutual information by usage of the well-known equality connecting the
mutual information and the marginal and joint entropies:

I_{mean}(X; Y) = H_{mean}(X) + H_{mean}(Y) - H_{mean}(X, Y)        (14)
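Eqs. (13) and (14) translate into a few lines of code, sketched below. The additive constants of Eq. (13) are omitted, which shifts the estimates by a data-independent amount and is harmless when the estimator is used inside an optimization; the sketch also assumes distinct sample points, since a zero pairwise distance would make the logarithm diverge.

```python
import numpy as np

def mean_nn_entropy(x):
    """MeanNN differential entropy, Eq. (13), up to an additive constant."""
    x = np.asarray(x, dtype=float)
    if x.ndim == 1:
        x = x[:, None]                   # treat a 1-D sample as (n, 1)
    n, d = x.shape
    diff = x[:, None, :] - x[None, :, :]
    dist = np.sqrt((diff ** 2).sum(-1))
    iu = np.triu_indices(n, k=1)
    # the sum over i != j equals twice the sum over the upper triangle
    return d / (n * (n - 1)) * 2.0 * np.log(dist[iu]).sum()

def mean_nn_mi(x, y):
    """MeanNN mutual information, Eq. (14), up to an additive constant."""
    xy = np.column_stack([x, y])
    return mean_nn_entropy(x) + mean_nn_entropy(y) - mean_nn_entropy(xy)
```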
We demonstrate the MeanNN estimator for the entropy in the case of an exponentially distributed random
variable, f(x, \lambda) = \frac{1}{\lambda} e^{-x/\lambda}, x > 0, \lambda > 0. In this case the entropy may be analytically calculated
as H = \log \lambda + 1. We compared the performance of the MeanNN estimator with the k-nearest neighbor
estimator (6) for various values of k. Results are given in Table 1. One may see that the mean
square error of the MeanNN estimator is the same or worse than that of the traditional kNN estimators. But
the standard deviation of the estimator values is best for the MeanNN estimator. Further we will
apply MeanNN for the optimization of a certain criterion based on the entropy. In such cases the most
important characteristic of an estimator is its monotonic dependency on the estimated value, and
the prediction of the exact value of the entropy is less important. Therefore one may conclude that
MeanNN is better suited for the optimization of entropy-based numerical criteria.
                                          1NN      4NN      10NN     MeanNN
Mean square error of entropy estimation   0.0290   0.0136   0.0117   0.0248
STD of estimator values                   0.1698   0.1166   0.1079   0.1029
Table 1: Performance of MeanNN entropy estimator in comparison with kNN entropy estimators.
100 samples of the random variable, 10 different values of the \lambda parameter, 100 repetitions.
To obtain the estimator for the divergence we apply the same mean approach to estimator (11), setting
m = n-1:

\hat{D}^{mean}_{n,n-1} = \frac{1}{n-1} \sum_{k=1}^{n-1} \frac{d}{n} \sum_{i=1}^{n} \log \frac{\nu^k(i)}{\rho^k(i)} = \frac{d}{n(n-1)} \sum_{i,j} \log d(x_i, y_j) - \frac{d}{n(n-1)} \sum_{i \neq j} \log d(x_i, x_j)        (15)
The mean estimator for the divergence has a clear geometric interpretation. If the product of all
distances inside one sample is small in comparison with the product of pairwise distances between
the samples, then one concludes that the divergence is large, and vice versa.
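Similarly, Eq. (15) can be implemented directly; the sketch below assumes distinct points (zero distances would make the logarithms diverge) and that the second sample contains n-1 points, as the derivation requires.

```python
import numpy as np

def mean_nn_divergence(x, y):
    """MeanNN divergence estimate, Eq. (15); expects len(y) == len(x) - 1."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    if x.ndim == 1:
        x = x[:, None]
    if y.ndim == 1:
        y = y[:, None]
    n, d = x.shape
    # cross distances d(x_i, y_j) over all i, j
    cross = np.sqrt(((x[:, None, :] - y[None, :, :]) ** 2).sum(-1))
    # within-sample distances d(x_i, x_j), i != j (upper triangle doubled)
    within = np.sqrt(((x[:, None, :] - x[None, :, :]) ** 2).sum(-1))
    iu = np.triu_indices(n, k=1)
    s_within = 2.0 * np.log(within[iu]).sum()
    return d / (n * (n - 1)) * (np.log(cross).sum() - s_within)
```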
4
The MeanNN ICA Algorithm
As many approaches do, we will use a contrast function
J(Y) = \int q(y_1, ..., y_d) \log \frac{q(y_1, ..., y_d)}{\prod_{i=1}^{d} q(y_i)} \, dy = D\!\left( q(y_1, ..., y_d) \,\Big\|\, \prod_{i=1}^{d} q(y_i) \right) = \sum_{i=1}^{d} H(Y_i) - H(Y_1, ..., Y_d)        (16)
Considering Y as a linear function of X, Y = W X, it is easily verified [3, 7, 10] that

J(Y) = \sum_{t=1}^{d} H(Y_t) - H(X_1, ..., X_d) - \log(|W|)        (17)
In particular, the change in the entropy of the joint distribution under linear transformation is simply
the logarithm of the Jacobian of the transformation. As we will assume the X's to be pre-whitened,
W will be restricted to rotation matrices, therefore \log(|W|) = 0 and the minimization of J(Y)
reduces to finding

\hat{W} = \arg\min_W \; H(Y_1) + ... + H(Y_d)        (18)
Denoting the rows of the matrix W by W = (w_1, ..., w_d)^\top, we can explicitly write the minimization
expression as a function of W:

\hat{W} = \arg\min_W \; \sum_{t=1}^{d} H(w_t^\top X)        (19)
Then we can plug the MeanNN entropy estimator into Eq. (19) to obtain (after omitting irrelevant
constants) an explicit contrast function to minimize:
\hat{W} = \arg\min_W S(W) = \arg\min_W \; \sum_{t=1}^{d} \sum_{i \neq j} \log\!\left( (w_t^\top (x_i - x_j))^2 \right)        (20)
The gradient of the contrast function S(W ) with respect to a rotation matrix W may be found with
the assistance of the so-called Givens rotations (see e.g. [14]). In this parametrization a rotation
matrix W \in R^{d \times d} is represented by a product of d(d-1)/2 plane rotations:

W = \prod_{s=1}^{d-1} \prod_{t=s+1}^{d} G_{st}        (21)
where G_{st} is a rotation matrix corresponding to a rotation in the st plane by an angle \theta_{st}. It is
the identity matrix except that its elements (s,s), (s,t), (t,s), (t,t) form a two-dimensional (2-D)
rotation matrix:

\begin{pmatrix} G_{st}(s,s) & G_{st}(s,t) \\ G_{st}(t,s) & G_{st}(t,t) \end{pmatrix} = \begin{pmatrix} \cos(\theta_{st}) & \sin(\theta_{st}) \\ -\sin(\theta_{st}) & \cos(\theta_{st}) \end{pmatrix}        (22)
The gradient of a single rotation matrix G_{st} with respect to \theta_{st} is a zero matrix except for the elements
(s,s), (s,t), (t,s), (t,t), for which

\frac{\partial}{\partial \theta_{st}} \begin{pmatrix} G_{st}(s,s) & G_{st}(s,t) \\ G_{st}(t,s) & G_{st}(t,t) \end{pmatrix} = \begin{pmatrix} -\sin(\theta_{st}) & \cos(\theta_{st}) \\ -\cos(\theta_{st}) & -\sin(\theta_{st}) \end{pmatrix}        (23)
It can be easily verified that the gradient of the contrast function (20) is given by

\frac{\partial S}{\partial \theta_{st}} = \sum_{q,r=1}^{d} \frac{\partial S}{\partial w_{qr}} \frac{\partial w_{qr}}{\partial \theta_{st}} = 2 \sum_{q,r=1}^{d} \sum_{i \neq j} \frac{x_{ir} - x_{jr}}{w_q^\top (x_i - x_j)} \left[ \prod_{u=1}^{d-1} \prod_{v=u+1}^{d} \tilde{G}_{uv} \right]_{qr}        (24)

where \tilde{G}_{uv} = \frac{\partial}{\partial \theta_{uv}} G_{uv} if both u = s and v = t, and \tilde{G}_{uv} = G_{uv} otherwise.
The contrast function S(W) and its gradient \partial S / \partial \theta_{st} may in theory suffer from discontinuities if a
row w_t is perpendicular to a vector x_i - x_j. To overcome this numerical difficulty we utilize a
smoothed version of the contrast function, S(W, \epsilon), and give the expression for its gradient:

S(W, \epsilon) = \sum_{t=1}^{d} \sum_{i \neq j} \log\!\left( (w_t^\top (x_i - x_j))^2 + \epsilon \right)        (25)

\frac{\partial S}{\partial \theta_{st}} = \sum_{q,r=1}^{d} \frac{\partial S}{\partial w_{qr}} \frac{\partial w_{qr}}{\partial \theta_{st}} = 2 \sum_{q,r=1}^{d} \sum_{i \neq j} \frac{w_q^\top (x_i - x_j) \, (x_{ir} - x_{jr})}{(w_q^\top (x_i - x_j))^2 + \epsilon} \left[ \prod_{u=1}^{d-1} \prod_{v=u+1}^{d} \tilde{G}_{uv} \right]_{qr}        (26)
For the optimization of the contrast function we apply the conjugate gradient method. The algorithm
is summarized in Figure 1.
Input: Data vectors x_1, x_2, ..., x_n \in R^d, assumed whitened
Output: Mixing matrix W
Method:
• Initialize the d(d-1)/2 rotation angles \theta_{st}
• Apply conjugate gradient optimization to the contrast function S(W(\theta)) (25) to find the optimal angles
• Reconstruct the rotation matrix W from the found angles by Givens rotations (21)
Figure 1: The MeanNN ICA algorithm
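In two dimensions the Givens parametrization reduces to a single angle, which makes the method easy to illustrate. The sketch below evaluates the smoothed contrast of Eq. (25) and recovers the mixing rotation with a bounded scalar minimization; the scipy optimizer, sample size, source distribution, and epsilon are illustrative stand-ins for the conjugate gradient procedure of Figure 1.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def smoothed_contrast(theta, x, eps):
    """S(W(theta), eps) of Eq. (25) for d = 2 (a single Givens angle)."""
    c, s = np.cos(theta), np.sin(theta)
    W = np.array([[c, s], [-s, c]])
    y = x @ W.T                                # candidate sources, shape (n, 2)
    total = 0.0
    iu = np.triu_indices(len(y), 1)            # the factor 2 does not move the minimum
    for t in range(2):
        diff = y[:, t][:, None] - y[:, t][None, :]
        total += np.log(diff[iu] ** 2 + eps).sum()
    return total

# toy problem: two uniform sources mixed by a known rotation
rng = np.random.default_rng(0)
sources = rng.uniform(-1, 1, size=(500, 2))
phi = 0.4
A = np.array([[np.cos(phi), -np.sin(phi)],
              [np.sin(phi),  np.cos(phi)]])
x = sources @ A.T

eps = 1.0 / len(x)
res = minimize_scalar(smoothed_contrast, bounds=(0.0, np.pi / 2),
                      args=(x, eps), method="bounded")
print("recovered angle:", res.x, "true angle:", phi)
```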
5
Experiments
First we study the set of 9 problems proposed by [2]. Each problem corresponds to a 1D probability
distribution q(x). One thousand pairs of random numbers x and y are mixed as x' = x cos \theta + y sin \theta,
y' = -x sin \theta + y cos \theta, with a random angle \theta common to all pairs (i.e. A is a pure rotation).
We applied the conjugate gradient method for the optimization of the contrast function (25) with
\epsilon = 1/n = 0.001 in order to recover this rotation matrix. This was repeated 100 times with different
angles \theta and with different random sets of pairs (x, y). To assess the quality of the estimator \hat{A}
(or, equivalently, of the back transformation \hat{W} = \hat{A}^{-1}), we use the Amari performance index P_{err}
from [1].
P_{err} = \frac{1}{2d} \sum_{i,j=1}^{d} \left( \frac{|p_{ij}|}{\max_k |p_{ik}|} + \frac{|p_{ij}|}{\max_k |p_{kj}|} \right) - 1        (27)
where p_{ij} = (\hat{A}^{-1} A)_{ij}. We compared our method with three state-of-the-art approaches: MILCA
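For reference, Eq. (27) translates into a short function; this is a straightforward sketch, not the authors' implementation.

```python
import numpy as np

def amari_index(A_hat, A):
    """Amari performance index of Eq. (27); 0 means perfect recovery."""
    P = np.abs(np.linalg.inv(A_hat) @ A)
    d = P.shape[0]
    rows = (P / P.max(axis=1, keepdims=True)).sum()   # |p_ij| / max_k |p_ik|
    cols = (P / P.max(axis=0, keepdims=True)).sum()   # |p_ij| / max_k |p_kj|
    return (rows + cols) / (2 * d) - 1.0
```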
[16], RADICAL [13] and KernelICA [2]. We used the official code provided by the authors¹. For the
first two techniques, which utilize different information theoretic measures assessed by order statistics,
it is highly recommended to use dataset augmentation. This is a computationally intensive technique
for dataset enlargement that replaces each data set point with a fixed number (usually 30) of new
data points randomly generated in the small neighborhood of the original point. The proposed
method gives smooth results without any additional augmentation due to its smooth nature (see Eq.
(13)).
pdf    MILCA   MILCA Aug   RADICAL   RADICAL Aug   KernelICA   MeanNN ICA
a      3.3     2.5         3.6       2.8           3.3         2.4
b      3.4     3.0         3.6       3.3           3.0         2.6
c      7.5     4.4         7.6       5.4           4.9         4.2
d      1.8     1.7         1.4       1.6           1.4         1.4
e      1.7     1.6         1.5       1.7           1.5         1.4
f      1.4     1.3         1.6       1.4           1.4         1.4
g      1.4     1.3         1.6       1.4           1.4         1.4
h      1.7     2.0         1.6       1.7           1.4         1.5
i      1.9     2.1         1.8       1.8           1.5         1.8
Table 2: Amari performance (multiplied by 100) for two-component ICA. The distributions are: (a)
Student with 3 degrees of freedom; (b) double exponential; (c) Student with 5 degrees of freedom;
(d) exponential; (e) mixture of two double exponentials; (f) symmetric mixtures of two Gaussians;
(g) nonsymmetric mixtures of two Gaussians; (h) symmetric mixtures of four Gaussians; (i) nonsymmetric mixtures of four Gaussians.
In the explored cases the proposed method achieves state-of-the-art performance. This
is well explained by the inherent smoothness of the MeanNN estimator; see Figure 2. Here we present
¹ http://www.klab.caltech.edu/~kraskov/MILCA/, http://www.di.ens.fr/~fbach/kernel-ica/index.htm, https://www.cs.umass.edu/~elm/ICA/
the comparison of different contrast functions based on different order-statistics estimators for a grid
of possible rotation angles for the mixture of two exponentially distributed random variables (case
e). The contrast function corresponding to the order statistic k = 10 generally coincides with the
MILCA approach. Also, the contrast function corresponding to the order statistic k = 30 \approx \sqrt{n}
generally coincides with the RADICAL method. One may see that the MeanNN ICA contrast function
leads to a much more robust prediction of the rotation angle. One should mention that the gradient-based
optimization makes it possible to obtain the global optimum with high precision, as opposed to the MILCA
and RADICAL schemes, which utilize subspace grid optimization.
Application of the gradient-based optimization scheme also leads to a computational advantage.
The number of needed function evaluations was limited to 20, as opposed to 150 evaluations for the grid
optimization schemes MILCA and RADICAL.
[Plot: contrast function S(W(\theta)) versus rotation angle \theta for the MeanNN, 10NN and 30NN estimators]
Figure 2: Convergence analysis for a mixture of two exponentially distributed random variables.
Contrast function dependence on a rotation angle for different entropy estimators. 1000 samples,
0.01 radian grid.
We also studied the application of MeanNN ICA to multidimensional problems. For that purpose
we chose at random D (generally) different distributions, then we mixed them by a random rotation
and ran the compared ICA algorithms to recover the rotation matrix. The results are presented in
Table 3. MeanNN ICA achieved the best performance.
dims   MILCA   MILCA Aug   RADICAL   RADICAL Aug   KernelICA   MeanNN ICA
2      3.0     3.3         3.1       3.0           2.9         2.5
4      2.7     2.7         2.8       2.3           2.6         2.2
Table 3: Amari index (multiplied by 100) for multidimensional ICA. 1000 samples, 10 repetitions
6
Conclusion
We proposed a novel approach for the estimation of the main information theoretic measures such as differential entropy, mutual information and divergence. The estimators are smooth differentiable
functions with a clear geometrical meaning. Next, this novel estimation technique was applied to the
ICA problem. Compared to state-of-the-art ICA methods, the proposed method demonstrated superior results in the conducted tests.
The studied state-of-the-art approaches can be divided into two groups. The first group is based on exact
entropy estimation, which usually leads to high performance, as demonstrated by MILCA and RADICAL. The drawback of such estimators is the lack of a gradient and therefore numerical difficulties
in optimization. The second group applies criteria different from entropy, which benefit from easy calculation of the gradient (KernelICA). However, such methods may suffer from deteriorated performance.
MeanNN ICA combines the advantages of these two kinds of estimators. It represents a contrast
function based on an accurate entropy estimation, and its gradient is given analytically; therefore it
may be readily optimized.
Finally we mention that the proposed estimation method may further be applied to various problems
in the field of machine learning and beyond.
References
[1] S. Amari, A. Cichoki, and H.H.Yang. A new learning algorithm for blind signal separation. Advances in
Neural Information Processing Systems, 8, 1996.
[2] F. Bach and M. Jordan. Kernel independent component analysis. Journal of Machine Learning Research,
3, 2002.
[3] A. J. Bell and T. J. Sejnowski. An information-maximization approach to blind separation and blind
deconvolution. Neural Computation, 7, 1995.
[4] J.-F. Cardoso. Multidimensional independent component analysis. Proceedings of the International Conference on Acoustics, Speech, and Signal Processing (ICASSP'98), 1998.
[5] C.Jutten and J.Herault. Blind separation of sources, part 1: An adaptive algorithm based on neuromimetic
architecture. Signal Processing, 1991.
[6] P. Comon. Independent component analysis, a new concept? Signal Processing, 36(3), 1994.
[7] Thomas M. Cover and Joy A. Thomas. Elements of Information Theory. Wiley-Interscience, August
1991.
[8] D.T.Pham and P.Garat. Blind separation of mixtures of independent signals through a quasi-maximum
likelihood approach. IEEE transactions on Signal Processing 45(7), 1997.
[9] A. Hyvarinen and E.Oja. A fast fixed point algorithm for independent component analysis. Neural
computation, 9(7), 1997.
[10] A. Hyvarinen, J. Karhunen, and E. Oja. Independent component analysis. 2001.
[11] L. Kozachenko and N. Leonenko. On statistical estimation of entropy of a random vector. Problems of
Information Transmission, 23(2), 1987.
[12] A. Kraskov, H. Stögbauer, and P. Grassberger. Estimating mutual information. Physical Review E,
69:066138, 2004.
[13] E. Miller and J. Fisher. ICA using spacing estimates of entropy. Proc. Fourth International Symposium on
Independent Component Analysis and Blind Signal Separation, Nara, Japan, Apr. 2003, pp. 1047-1052.
[14] J. Peltonen and S. Kaski. Discriminative components of data. IEEE Transactions on Neural Networks,
16(1), 2005.
[15] H. Singh, N. Misra, V. Hnizdo, A. Fedorowicz, and Eugene Demchuk. Nearest neighbor estimates of
entropy. American Journal of Mathematical and Management Sciences, 2003.
[16] H. Stögbauer, A. Kraskov, S. Astakhov, and P. Grassberger. Least-dependent-component analysis based
on mutual information. Phys. Rev. E, 70(6):066123, Dec 2004.
[17] O. Vasicek. A test for normality based on sample entropy. J. Royal Stat. Soc. B, 38(1):54-59, 1976.
[18] J. D. Victor. Binless strategies for estimation of information from neural data. Physical Review, 2002.
[19] Q. Wang, S. R. Kulkarni, and S. Verdu. A nearest-neighbor approach to estimating divergence between
continuous random vectors. IEEE Int. Symp. Information Theory, Seattle, WA, 2006.
2,759 | 3,501 | Fitted Q-iteration by Advantage Weighted Regression
Gerhard Neumann
Institute for Theoretical Computer Science
Graz University of Technology
A-8010 Graz, Austria
[email protected]
Jan Peters
Max Planck Institute for Biological Cybernetics
D-72076 Tübingen, Germany
[email protected]
Abstract
Recently, fitted Q-iteration (FQI) based methods have become more popular due
to their increased sample efficiency, a more stable learning process and the higher
quality of the resulting policy. However, these methods remain hard to use for continuous action spaces which frequently occur in real-world tasks, e.g., in robotics
and other technical applications. The greedy action selection commonly used for
the policy improvement step is particularly problematic as it is expensive for continuous actions, can cause an unstable learning process, introduces an optimization
bias and results in highly non-smooth policies unsuitable for real-world systems.
In this paper, we show that by using a soft-greedy action selection the policy
improvement step used in FQI can be simplified to an inexpensive advantage-weighted regression. With this result, we are able to derive a new, computationally
efficient FQI algorithm which can even deal with high dimensional action spaces.
1
Introduction
Reinforcement Learning [1] addresses the problem of how autonomous agents can improve their
behavior using their experience. At each time step t the agent observes its current state s_t \in X
and chooses an appropriate action a_t \in A. Subsequently, the agent gets feedback on the quality
of the action, i.e., the reward r_t = r(s_t, a_t), and observes the next state s_{t+1}. The goal of the
agent is to maximize the accumulated reward expected in the future. In this paper, we focus on
learning policies for continuous, multi-dimensional control problems. Thus the state space X and
action space A are continuous and multi-dimensional, meaning that discretizations start to become
prohibitively expensive.
While discrete-state/action reinforcement learning is a widely studied problem with rigorous convergence proofs, the same does not hold true for continuous states and actions. For continuous state
spaces, few convergence guarantees exist and pathological cases of bad performance can be generated easily [2]. Moreover, many methods cannot be transferred straightforwardly to continuous
actions.
Current approaches often circumvent continuous action spaces by focusing on problems where the
actor can rely on a discrete set of actions, e.g., when learning a policy for driving to a goal in
minimum time, an actor only needs three actions: the maximum acceleration when starting, zero
acceleration at maximum velocity and maximum throttle down when the goal is sufficiently close
for a point landing. While this approach (called bang-bang in traditional control) works for the
large class of minimum time control problems, it is also a limited approach as cost functions relevant to the real-world incorporate much more complex constraints, e.g., cost-functions in biological
systems often punish the jerkiness of the movement [3], the amount of used metabolic energy [4]
or the variance at the end-point [5]. For physical technical systems, the incorporation of further
optimization criteria is of essential importance; just as a minimum time policy is prone to damage
the car on the long-run, a similar policy would be highly dangerous for a robot and its environment
and the resulting energy-consumption would reduce its autonomy. More complex, action-dependent
immediate reward functions require that much larger sets of actions are being employed.
We consider the use of continuous actions for fitted Q-iteration (FQI) based algorithms. FQI is a
batch mode reinforcement learning (BMRL) algorithm. The algorithm maintains an estimate of the
state-action value function Q(s, a) and uses the greedy operator max_a Q(s, a) on the action space
for improving the policy. While this works well for discrete action spaces, the greedy operation
is hard to perform for high-dimensional continuous actions. For this reason, the application of
fitted Q-iteration based methods is often restricted to low-dimensional action spaces which can be
efficiently discretized. In this paper, we show that the use of a stochastic soft-max policy instead of
a greedy policy allows us to reduce the policy improvement step used in FQI to a simple advantage-weighted regression. The greedy operation max_a Q(s, a) over the actions is replaced by a less
harmful greedy operation over the parameter space of the value function. This result allows us to
derive a new, computationally efficient algorithm which is based on Locally-Advantage-WEighted
Regression (LAWER).
We test our algorithm on three different benchmark tasks, i.e., the pendulum swing-up [6], the
acrobot swing-up [1] and a dynamic version of the puddle-world [7] with 2 and 3 dimensions. We
show that in spite of the soft-greedy action selection, our algorithm is able to produce high-quality
policies.
2
Fitted Q-Iteration
In fitted Q-iteration [8, 6, 9] (FQI), we assume that all the experience of the agent up to the current time is given in the form H = {⟨s_i, a_i, r_i, s'_i⟩}_{1≤i≤N}. The task of the learning algorithm is to estimate an optimal control policy from this historical data. FQI approximates the state-action value function Q(s, a) by iteratively using supervised regression techniques. New target values for the regression are generated by

$$\hat{Q}_{k+1}(i) = r_i + \gamma V_k(s'_i) = r_i + \gamma \max_{a'} Q_k(s'_i, a'). \qquad (1)$$
The regression problem for finding the function Q_{k+1} is defined by the list of data-point pairs D_k and the regression procedure Regress:

$$D_k(Q_k) = \Big\{ \big[ (s_i, a_i),\, \hat{Q}_{k+1}(i) \big] \Big\}_{1 \le i \le N}, \qquad Q_{k+1} = \mathrm{Regress}(D_k(Q_k)) \qquad (2)$$
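To make Equations (1) and (2) concrete, here is a minimal sketch of one FQI iteration in Python. The regressor (scikit-learn's extra-trees, as in tree-based FQI) and the finite candidate action set used to approximate the max are our own illustrative assumptions, not part of the paper.

```python
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor  # one possible Regress(); an assumption

def fqi_iteration(H, q_model, actions, gamma=0.99):
    """One fitted Q-iteration: build targets (Eq. 1), then regress (Eq. 2).

    H: list of transitions (s, a, r, s_next); `actions`: finite candidate set
    used to approximate max_a' Q_k(s', a')."""
    X, y = [], []
    for (s, a, r, s_next) in H:
        if q_model is None:
            v_next = 0.0  # Q_0 = 0
        else:
            sa_next = np.array([np.concatenate([s_next, ap]) for ap in actions])
            v_next = q_model.predict(sa_next).max()  # greedy operator
        X.append(np.concatenate([s, a]))
        y.append(r + gamma * v_next)  # target Q_hat_{k+1}(i)
    model = ExtraTreesRegressor(n_estimators=20)
    model.fit(np.array(X), np.array(y))  # Q_{k+1} = Regress(D_k(Q_k))
    return model
```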
FQI can be viewed as approximate value iteration with state-action value functions [9]. Previous
experiments show that function approximators such as neural networks [6], radial basis function
networks [8], CMAC [10] and regression trees [8] can be employed in this context. In [9], performance bounds for the value function approximation are given for a wide range of function approximators. The performance bounds also hold true for continuous action spaces, but only in the case
of an actor-critic variant of FQI. Unfortunately, to our knowledge, no experiments with this variant
exist in the literature. Additionally, it is not clear how to apply this actor-critic variant efficiently for
nonparametric function approximators.
FQI has proven to outperform classical online RL methods in many applications [8]. Nevertheless, FQI relies on the greedy action selection in Equation (1). Thus, the algorithm frequently requires a discrete set of actions, and generalization to continuous actions is not straightforward. Using the greedy operator for continuous action spaces is a hard problem by itself, as expensive optimization methods are needed for high-dimensional actions. Moreover, the values returned by the greedy operator often result in an optimization bias causing an unstable learning process, including oscillations and divergence [11]. For a comparison with our algorithm, we use the Cross-Entropy (CE) optimization method [12] to find the maximum Q-values. In our implementation, we maintain a Gaussian distribution for the belief of the optimal action. We sample n_CE actions from this distribution. Then, the best e_CE < n_CE actions (with the highest Q-values) are used to update the parameters of this distribution. The whole process is repeated for k_CE iterations, starting with a uniformly distributed set of sample actions.
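As a rough sketch of this CE search (the interface and initialization details are our assumptions; the paper specifies only n_CE, e_CE and k_CE):

```python
import numpy as np

def ce_max_q(q_fun, s, dim_a, n_ce=25, e_frac=0.3, k_ce=3, a_low=-1.0, a_high=1.0):
    """Approximate max_a Q(s, a) with the cross-entropy method: refine a
    Gaussian belief over the optimal action from the elite samples."""
    samples = np.random.uniform(a_low, a_high, size=(n_ce, dim_a))
    n_elite = max(1, int(e_frac * n_ce))
    mu = samples.mean(axis=0)
    for _ in range(k_ce):
        q_vals = np.array([q_fun(s, a) for a in samples])
        elite = samples[np.argsort(q_vals)[-n_elite:]]   # best e_CE actions
        mu, sigma = elite.mean(axis=0), elite.std(axis=0) + 1e-6
        samples = np.random.normal(mu, sigma, size=(n_ce, dim_a))
    return mu, float(q_fun(s, mu))
```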
FQI is inherently an offline method: given historical data, the algorithm estimates the optimal policy. However, FQI can also be used for online learning. After the FQI algorithm is finished, new episodes can be collected with the currently best inferred policy and the FQI algorithm is restarted.
3
Fitted Q-Iteration by Advantage Weighted Regression
A different method for policy updates in continuous action spaces is reinforcement learning by reward-weighted regression [13]. As shown by the authors, the action selection problem in the immediate-reward RL setting with continuous actions can be formulated as an expectation-maximization (EM) based algorithm and, subsequently, reduced to a reward-weighted regression. The weighted regression can be applied with ease to high-dimensional action spaces; no greedy operation in the action space is needed. While we do not directly follow the work in [13], we follow the general idea.
3.1
Weighted regression for value estimation
In this section we consider the task of estimating the value function V of a stochastic policy π(·|s) when the state-action value function Q is already given. The value function can be calculated by $V(s) = \int_a \pi(a|s)\, Q(s, a)\, da$. Yet, the integral over the action space is hard to perform for continuous actions. However, we will show how we can approximate the value function without evaluating this integral. Consider the quadratic error function
$$\mathrm{Error}(\hat{V}) = \int_s \mu(s) \left( \int_a \pi(a|s)\, Q(s, a)\, da - \hat{V}(s) \right)^{2} ds \qquad (3)$$

$$= \int_s \mu(s) \left( \int_a \pi(a|s) \big( Q(s, a) - \hat{V}(s) \big)\, da \right)^{2} ds, \qquad (4)$$
which is used to find an approximation V̂ of the value function. μ(s) denotes the state distribution when following policy π(·|s). Since the squared function is convex, we can use Jensen's inequality for probability density functions to derive an upper bound of Equation (4):
$$\mathrm{Error}(\hat{V}) \le \int_s \mu(s) \int_a \pi(a|s) \big( Q(s, a) - \hat{V}(s) \big)^2\, da\, ds = \mathrm{Error}_B(\hat{V}). \qquad (5)$$
The solution V̂* minimizing the upper bound Error_B(V̂) is the same as for the original error function Error(V̂).
Proof. To see this, we expand the square and replace the term $\int_a \pi(a|s)\, Q(s, a)\, da$ by the value function V(s). This is done for the error function Error(V̂) and for the upper bound Error_B(V̂):
$$\mathrm{Error}(\hat{V}) = \int_s \mu(s) \big( V(s) - \hat{V}(s) \big)^2 ds = \int_s \mu(s) \left( V(s)^2 - 2 V(s) \hat{V}(s) + \hat{V}(s)^2 \right) ds \qquad (6)$$

$$\mathrm{Error}_B(\hat{V}) = \int_s \mu(s) \int_a \pi(a|s) \left( Q(s, a)^2 - 2 Q(s, a) \hat{V}(s) + \hat{V}(s)^2 \right) da\, ds \qquad (7)$$

$$= \int_s \mu(s) \left( \int_a \pi(a|s)\, Q(s, a)^2\, da - 2 V(s) \hat{V}(s) + \hat{V}(s)^2 \right) ds \qquad (8)$$

Both error functions are the same except for an additive constant which does not depend on V̂.
In contrast to the original error function, the upper bound Error_B can be approximated straightforwardly by samples {(s_i, a_i), Q(s_i, a_i)}_{1≤i≤N} gained by following some behavior policy π_b(·|s):

$$\mathrm{Error}_B(\hat{V}) \approx \sum_{i=1}^{N} \frac{\mu(s_i)\, \pi(a_i|s_i)}{\mu_b(s_i)\, \pi_b(a_i|s_i)} \big( Q(s_i, a_i) - \hat{V}(s_i) \big)^2, \qquad (9)$$
where μ_b(s) denotes the state distribution when following the behavior policy π_b. The term 1/(μ_b(s_i) π_b(a_i|s_i)) ensures that we do not give more weight to states and actions preferred by π_b; this is a well-known technique from importance sampling. In order to keep our algorithm tractable, the factors π_b(a_i|s_i), μ_b(s_i) and μ(s_i) will all be set to 1/N. The minimization of Equation (9) defines a weighted regression problem which is given by the dataset D_V, the weighting U and the weighted regression procedure WeightedRegress:
$$D_V = \Big\{ \big[ (s_i, a_i),\, Q(s_i, a_i) \big] \Big\}_{1 \le i \le N}, \quad U = \Big\{ \big[ \pi(a_i|s_i) \big] \Big\}_{1 \le i \le N}, \quad \hat{V} = \mathrm{WeightedRegress}(D_V, U) \qquad (10)$$
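A minimal sketch of this weighted regression for a linear-in-features value model (the feature map [1, s] and the small ridge term are our assumptions):

```python
import numpy as np

def weighted_regress(states, q_values, weights, ridge=1e-6):
    """Minimize the weighted squared error of Eq. (9) for V_hat(s) = [1, s] @ beta."""
    Phi = np.hstack([np.ones((len(states), 1)), states])  # N x (1 + dim_s)
    W = weights[:, None]
    A = Phi.T @ (W * Phi) + ridge * np.eye(Phi.shape[1])
    beta = np.linalg.solve(A, Phi.T @ (weights * q_values))
    return lambda s: np.concatenate([[1.0], np.atleast_1d(s)]) @ beta
```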
Algorithm 1 FQI with Advantage Weighted Regression
Input: H = {⟨s_i, a_i, r_i, s'_i⟩}_{1≤i≤N}, τ and L (number of iterations)
Initialize V̂_0(s) = 0.
for k = 0 to L − 1 do
    D_k(V̂_k) = {[(s_i, a_i), r_i + γ V̂_k(s'_i)]}_{1≤i≤N}
    Q_{k+1} = Regress(D_k(V̂_k))
    A(i) = Q_{k+1}(s_i, a_i) − V̂_k(s_i)
    Estimate m_A(s_i) and σ_A(s_i) for 1 ≤ i ≤ N
    U = {[exp(τ (A(i) − m_A(s_i)) / σ_A(s_i))]}_{1≤i≤N}
    V̂_{k+1} = WeightedRegress(D_k(V̂_k), U)
end for
This result shows that in order to approximate the value function V(s), we do not need to carry out the expensive integration over the action space for each state s_i: it is sufficient to know the Q-values at a finite set of state-action pairs.
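Algorithm 1 translates almost line for line into code; in the sketch below, `regress`, `weighted_regress` and `local_stats` stand in for any suitable learners (their interfaces are our assumptions).

```python
import numpy as np

def fqi_awr(H, regress, weighted_regress, local_stats, gamma=0.99, tau=1.0, L=50):
    """FQI with Advantage Weighted Regression (Algorithm 1), schematically."""
    S = np.array([t[0] for t in H]); A_mat = np.array([t[1] for t in H])
    R = np.array([t[2] for t in H]); S_next = np.array([t[3] for t in H])
    V = lambda s_batch: np.zeros(len(s_batch))          # V_hat_0 = 0
    Q = None
    for _ in range(L):
        targets = R + gamma * V(S_next)                 # D_k(V_hat_k)
        Q = regress(S, A_mat, targets)                  # Q_{k+1}
        adv = Q(S, A_mat) - V(S)                        # A(i)
        m_A, sig_A = local_stats(S, adv)                # as in Eq. (12) below
        U = np.exp(tau * (adv - m_A) / (sig_A + 1e-8))  # advantage weights
        V = weighted_regress(S, targets, U)             # V_hat_{k+1}
    return Q, V
```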
3.2
Soft-greedy policy improvement
We use a soft-max policy [1] in the policy improvement step of the FQI algorithm. Our soft-max policy π_1(a|s) is based on the advantage function A(s, a) = Q(s, a) − V(s). We additionally assume knowledge of the mean m_A(s) and the standard deviation σ_A(s) of the advantage function at state s. These quantities can be estimated locally or approximated by additional regressions. The policy π_1(a|s) is defined as
$$\pi_1(a|s) = \frac{\exp(\tau \bar{A}(s, a))}{\int_a \exp(\tau \bar{A}(s, a))\, da}, \qquad \bar{A}(s, a) = \frac{A(s, a) - m_A(s)}{\sigma_A(s)}. \qquad (11)$$
τ controls the greediness of the policy. If we assume that the advantages A(s, a) are distributed as N(A(s, a) | m_A(s), σ_A²(s)), all normalized advantage values Ā(s, a) have the same distribution. Thus, the denominator of π_1 is constant for all states and we can use the term exp(τ Ā(s, a)) ∝ π_1(a|s) directly as the weighting for the regression defined in Equation (10). The resulting approximated value function V̂(s) is used to replace the greedy operator V(s'_i) = max_{a'} Q(s'_i, a') in the FQI algorithm. The FQI by Advantage Weighted Regression (AWR) algorithm is given in Algorithm 1. As we can see, the Q-function Q_k is queried only once for each step in the history H. Furthermore, only already-seen state-action pairs (s_i, a_i) are used for this query.
After the FQI algorithm is finished, we still need to determine a policy for subsequent data collection. The policy can be obtained in the same way as for reward-weighted regression [13], only the advantage is used instead of the reward for the weighting; thus, we are optimizing the long-term costs instead of the immediate ones.
4
Locally-Advantage-WEighted Regression (LAWER)
Based on the FQI by AWR algorithm, we propose a new, computationally efficient fitted Q-iteration algorithm which uses Locally Weighted Regression (LWR, [14]) as function approximator. Similar to kernel-based methods, our algorithm needs to be able to calculate the similarity w_i(s) between a state s_i in the dataset H and a state s. To simplify the notation, we will denote w_i(s_j) as w_ij for all s_j ∈ H. w_i(s) is calculated by a Gaussian kernel w_i(s) = exp(−(s_i − s)^T D (s_i − s)). The diagonal matrix D determines the bandwidth of the kernel. Additionally, our algorithm needs a similarity measure w^a_ij between two actions a_i and a_j. Again, w^a_ij can be calculated by a Gaussian kernel w^a_ij = exp(−(a_i − a_j)^T D^a (a_i − a_j)).
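For concreteness, the two kernels as code (diagonal bandwidth matrices, as used in the experiments):

```python
import numpy as np

def state_similarity(s_i, s, D_diag):
    """w_i(s) = exp(-(s_i - s)^T D (s_i - s)) for diagonal D."""
    d = s_i - s
    return np.exp(-np.sum(D_diag * d * d))

def action_similarity(a_i, a_j, Da_diag):
    """w^a_ij = exp(-(a_i - a_j)^T D^a (a_i - a_j)) for diagonal D^a."""
    d = a_i - a_j
    return np.exp(-np.sum(Da_diag * d * d))
```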
Using the state similarity w_ij, we can estimate the mean and the standard deviation of the advantage function for each state s_i:

$$m_A(s_i) = \frac{\sum_j w_{ij}\, A(j)}{\sum_j w_{ij}}, \qquad \sigma_A^2(s_i) = \frac{\sum_j w_{ij} \big( A(j) - m_A(s_j) \big)^2}{\sum_j w_{ij}}. \qquad (12)$$
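Equation (12) vectorizes directly; here W[i, j] holds w_ij and A[j] the advantage of sample j (the small numerical floor is our addition):

```python
import numpy as np

def local_advantage_stats(W, A):
    """Kernel-weighted mean and standard deviation of the advantages, Eq. (12)."""
    norm = W.sum(axis=1) + 1e-12
    m = (W @ A) / norm          # m_A(s_i)
    dev2 = (A - m) ** 2         # (A(j) - m_A(s_j))^2, one entry per sample j
    var = (W @ dev2) / norm     # sigma_A^2(s_i)
    return m, np.sqrt(var)
```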
4.1
Approximating the value functions
For the approximation of the Q-function, we use Locally Weighted Regression [14]. The Q-function is therefore given by:

$$Q_{k+1}(s, a) = \phi_{sa} \big( S_A^T W S_A \big)^{-1} S_A^T W \bar{Q}_{k+1} \qquad (13)$$

where φ_sa = [1, s^T, a^T], S_A = [φ_sa(1)^T, φ_sa(2)^T, ..., φ_sa(N)^T]^T is the state-action matrix, W = diag(w_i(s) w^a_i(a)) is the local weighting matrix consisting of state and action similarities, and Q̄_{k+1} = [Q̂_{k+1}(1), Q̂_{k+1}(2), ..., Q̂_{k+1}(N)]^T is the vector of the Q-values (see Equation (1)).
For approximating the V-function we can multiplicatively combine the advantage-based weighting u_i = exp(τ Ā(s_i, a_i)) and the state similarity weights w_i(s). The value V_{k+1}(s) is given by¹:

$$V_{k+1}(s) = \phi_s \big( S^T U S \big)^{-1} S^T U \bar{Q}_{k+1}, \qquad (14)$$

where φ_s = [1, s^T], S = [φ_{s_1}^T, φ_{s_2}^T, ..., φ_{s_N}^T]^T is the state matrix and U = diag(w_i(s) u_i) is the weight matrix. We bound the estimate of V̂_{k+1}(s) by max_{i | w_i(s) > 0.001} Q_{k+1}(i) in order to prevent the local regression from adding a positive bias which might cause divergence of the value iteration.
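A sketch of the locally weighted solve of Eqs. (13)-(14) for the value function; the ridge term implements the footnote, and the clipping implements the bound just described (state and advantage weights are passed separately so the bound can use w_i(s) alone):

```python
import numpy as np

def lwr_value(s, S, w, u, Q_bar, ridge=1e-6):
    """V(s) = phi_s (S^T U S)^{-1} S^T U Q_bar with U = diag(w_i(s) u_i), Eq. (14)."""
    Phi = np.hstack([np.ones((len(S), 1)), S])   # rows phi_{s_i} = [1, s_i]
    U = (w * u)[:, None]
    A = Phi.T @ (U * Phi) + ridge * np.eye(Phi.shape[1])
    beta = np.linalg.solve(A, Phi.T @ (w * u * Q_bar))
    v = np.concatenate([[1.0], np.atleast_1d(s)]) @ beta
    mask = w > 1e-3                              # i with w_i(s) > 0.001
    if mask.any():
        v = min(v, Q_bar[mask].max())            # anti-divergence bound
    return v
```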
A problem with nonparametric value function approximators is that their computational complexity grows strongly with an increasing number of data points. A simple solution to avoid this problem is to introduce a local forgetting mechanism: whenever parts of the state space are oversampled, old examples in this area are removed from the dataset.
4.2
Approximating the policy
Similar to reward-weighted regression [13], we use a stochastic policy π(a|s) = N(a | μ(s), diag(σ²(s))) with Gaussian exploration as approximation of the optimal policy. The mean μ(s) and the variance σ²(s) are given by

$$\mu(s) = \phi_s \big( S^T U S \big)^{-1} S^T U A, \qquad \sigma^2(s) = \frac{\lambda_0\, \sigma_{\mathrm{init}}^2 + \sum_i w_i(s)\, u_i \big( a_i - \mu(s_i) \big)^2}{\lambda_0 + \sum_i w_i(s)\, u_i}, \qquad (15)$$
where A = [a_1, a_2, ..., a_N]^T denotes the action matrix. The variance σ² automatically adapts the exploration of the policy to the uncertainty of the optimal action. With σ²_init and λ_0 we can set the initial exploration of the policy. σ_init is always set to the bandwidth of the action space. λ_0 sets the weight of the initial variance in comparison to the variance coming from the data; λ_0 is set to 3 for all experiments.
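A sketch of Eq. (15); sampling from the resulting Gaussian gives the exploratory action (the ridge term is our addition):

```python
import numpy as np

def gaussian_policy(s, S, A_mat, w, u, sigma_init, lam0=3.0, ridge=1e-6):
    """Return mu(s) and sigma(s) of the Gaussian policy N(a | mu(s), diag(sigma^2(s)))."""
    Phi = np.hstack([np.ones((len(S), 1)), S])
    W = (w * u)[:, None]
    M = Phi.T @ (W * Phi) + ridge * np.eye(Phi.shape[1])
    beta = np.linalg.solve(M, Phi.T @ (W * A_mat))        # (1+dim_s) x dim_a
    mu = np.concatenate([[1.0], np.atleast_1d(s)]) @ beta
    mu_all = Phi @ beta                                   # mu(s_i) at the samples
    num = lam0 * sigma_init ** 2 + (w * u) @ ((A_mat - mu_all) ** 2)
    sigma = np.sqrt(num / (lam0 + (w * u).sum()))
    return mu, sigma

# sampling an exploratory action:
# a = np.random.normal(mu, sigma)
```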
5
Evaluations
We evaluated the LAWER algorithm on three benchmark tasks: the pendulum swing-up task, the acrobot swing-up task, and a dynamic version of the puddle-world (i.e., augmenting the puddle-world by velocities, inertia, etc.) with 2 and 3 dimensions. We compare our algorithm to tree-based FQI [8] (CE-Tree), neural FQI [6] (CE-Net) and LWR-based FQI (CE-LWR), which all use the Cross-Entropy (CE) optimization to find the maximum Q-values. For the CE optimization we used n_CE = 10 samples for 1-dimensional, n_CE = 25 samples for 2-dimensional and n_CE = 64 for 3-dimensional control variables. e_CE was always set to 0.3 n_CE and we used k_CE = 3 iterations. To enforce exploration when collecting new data, Gaussian noise ε ~ N(0, 1.0) was added to the CE-based policy. For the tree-based algorithm, an ensemble of M = 20 trees was used, K was set to the number of state and action variables and n_min was set to 2 (see [8]). For the CE-Net algorithm we used a neural network with 2 hidden layers and 10 neurons per layer and trained the network with the algorithm proposed in [6] for 600 epochs. For all experiments, a discount factor of γ = 0.99 was used. The immediate reward function was quadratic in the distance to the goal position x_G and in the applied torque/force: r = −c_1 (x − x_G)² − c_2 a². For evaluating the learning process, the exploration-free (i.e., σ(s) = 0, ε = 0) performance of the policy was evaluated after each data-collection/FQI cycle. This was done by determining the accumulated reward during an episode starting from the specified initial position. All error bars represent a 95% confidence interval.
¹ In practice, ridge regression V_{k+1}(s) = φ_s (S^T W S + λI)^{-1} S^T W Q̄_{k+1} is used to avoid numerical instabilities in the regression.
Figure 1: (a) Evaluation of LAWER and CE-based FQI algorithms on the pendulum swing-up task for c_2 = 0.005. The plots are averaged over 10 trials. (b) The same evaluation for c_2 = 0.025. (c) Learned torque trajectories for c_2 = 0.005. (d) Learned torque trajectories for c_2 = 0.025.
5.1
Pendulum swing-up task
In this task, a pendulum needs to be swung up from the position at the bottom to the top position [6]. The state space consists of the angular deviation θ from the top position and the angular velocity θ̇ of the pendulum. The system dynamics are given by 0.5 m l² θ̈ = m g sin(θ) + u; the motor torque u was limited to [−5 N, 5 N]. The mass was set to m = 1 kg and the length of the link to 1 m. The time step was set to 0.05 s. Two experiments with different torque punishments, c_2 = 0.005 and c_2 = 0.025, were performed.
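For reference, a sketch of the simulated system under the stated parameters (the Euler integrator, g = 9.81 and the value of c_1 are our assumptions; the paper does not specify them):

```python
import numpy as np

def pendulum_step(theta, theta_dot, u, m=1.0, l=1.0, g=9.81, dt=0.05):
    """One step of 0.5 m l^2 theta_ddot = m g sin(theta) + u, torque clipped to [-5, 5]."""
    u = float(np.clip(u, -5.0, 5.0))
    theta_ddot = (m * g * np.sin(theta) + u) / (0.5 * m * l ** 2)
    theta_dot += dt * theta_ddot
    theta += dt * theta_dot
    return theta, theta_dot

def reward(theta, u, c1=1.0, c2=0.005):
    """Quadratic reward r = -c1 (x - x_G)^2 - c2 u^2, with the goal at theta = 0."""
    return -c1 * theta ** 2 - c2 * u ** 2
```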
We used L = 150 iterations. The matrices D and D^a were set to D = diag(30, 3) and D^a = diag(2). In the data collection phase, 5 episodes with 150 steps were collected starting from the bottom position and 5 episodes starting from a random position.
A comparison of the LAWER algorithm to CE-based algorithms is shown in Figure 1(a) for c_2 = 0.005 and in Figure 1(b) for c_2 = 0.025. Our algorithm shows a performance comparable to the tree-based FQI algorithm while being computationally much more efficient. All other CE-based FQI algorithms show a slightly decreased performance. In Figures 1(c) and (d) we can see typical examples of learned torque trajectories when starting from the bottom position for the LAWER, the CE-Tree and the CE-LWR algorithms. In Figure 1(c) the trajectories are shown for c_2 = 0.005 and in Figure 1(d) for c_2 = 0.025. All algorithms were able to discover a fast solution with 1 swing-up for the first setting and a more energy-efficient solution with 2 swing-ups for the second setting. Still, there are qualitative differences in the trajectories. Due to the advantage-weighted regression, LAWER was able to produce very smooth trajectories, while the trajectories found by the CE-based methods look more jerky. In Figure 2(a) we can see the influence of the parameter τ on the performance of the LAWER algorithm. The algorithm works for a large range of τ values.
5.2
Acrobot swing-up task
In order to assess the performance of LAWER on a complex, highly non-linear control task, we used the acrobot (for a description of the system, see [1]). The torque was limited to [−5 N, 5 N]. Both masses were set to 1 kg and both lengths of the links to 0.5 m. A time step of 0.1 s was used. L = 100 iterations were used for the FQI algorithms. In the data-collection phase the agent could observe 25 episodes starting from the bottom position and 25 starting from a random position. Each episode had 100 steps. The matrices D and D^a were set to D = diag(20, 23.6, 10, 10.5) and D^a = diag(2).
The comparison of the LAWER and the CE-Tree algorithm is shown in Figure 2(b). Due to the adaptive state discretization, the tree-based algorithm is able to learn faster, but in the end, the LAWER algorithm is able to produce policies of higher quality than the tree-based algorithm.
5.3
Dynamic puddle-world
In the puddle-world task [7], the agent has to find a way to a predefined goal area in a continuous-valued maze world (see Figure 2(c)). The agent gets negative reward when going through puddles. In contrast to the standard puddle-world setting, where the agent has a 2-dimensional state space (the x and y position), we use a more demanding setting. We have created a dynamic version of the puddle-world where the agent can set a force accelerating a k-dimensional point mass (m = 1 kg).
Figure 2: (a) Evaluation of the average reward gained over a whole learning trial on the pendulum swing-up task for different settings of τ. (b) Comparison of the LAWER and the CE-Tree algorithm on the acrobot swing-up task. (c) Setting of the 2-dimensional dynamic puddle-world.
Figure 3: (a) Comparison of the CE-Tree and the LAWER algorithm for the 2-dimensional dynamic puddle-world. (b) Comparison of the CE-Tree and the LAWER algorithm for the 3-dimensional dynamic puddle-world. (c) Torque trajectories for the 3-dimensional puddle-world learned with the LAWER algorithm. (d) Torque trajectories learned with the CE-Tree algorithm.
This was done for k = 2 and k = 3 dimensions. The puddle-world illustrates the scalability of the algorithms to multi-dimensional continuous action spaces (2 and 3 dimensional, respectively). The positions were limited to [0, 1] and the velocities to [−1, 1]. The maximum force that could be applied in one direction was restricted to 2 N and the time step was set to 0.1 s. The setting of the 2-dimensional puddle-world can be seen in Figure 2(c). Whenever the agent was about to leave the predefined area, the velocities were set to zero and an additional reward of −5 was given. We compared the LAWER with the CE-Tree algorithm. L = 50 iterations were used. The matrices D and D^a were set to D = diag(10, 10, 2.5, 2.5) and D^a = diag(2.5, 2.5) for the 2-dimensional and to D = diag(8, 8, 8, 2, 2, 2) and D^a = diag(1, 1, 1) for the 3-dimensional puddle-world. In the data collection phase the agent could observe 20 episodes with 50 steps starting from the predefined initial position and 20 episodes starting from a random position.
In Figure 3(a) we can see the comparison of the CE-Tree and the LAWER algorithm for the 2-dimensional puddle-world, and in Figure 3(b) for the 3-dimensional puddle-world. The results show that the tree-based algorithm has an advantage in the beginning of the learning process. However, the CE-Tree algorithm has problems finding a good policy in the 3-dimensional action space, while the LAWER algorithm still performs well in this setting. This can be seen clearly in the comparison of the learned force trajectories, which are shown in Figure 3(c) for the LAWER algorithm and in Figure 3(d) for the CE-Tree algorithm. The trajectories for the CE-Tree algorithm are very jerky and almost random for the first and third dimensions of the control variable, whereas the trajectories found by the LAWER algorithm look very smooth and goal-directed.
6
Conclusion and future work
In this paper, we focused on solving RL problems with continuous action spaces with fitted Q-iteration based algorithms. The computational complexity of the max operator max_a Q(s, a) often makes FQI algorithms intractable for high-dimensional continuous action spaces. We proposed a new method which circumvents the max operator by the use of a stochastic soft-max policy that allows us to reduce the policy improvement step V(s) = max_a Q(s, a) to a weighted regression problem. Based on this result, we can derive the LAWER algorithm, a new, computationally efficient FQI algorithm based on LWR.
Experiments have shown that the LAWER algorithm is able to produce high-quality smooth policies, even for high-dimensional action spaces where the use of expensive optimization methods for calculating max_a Q(s, a) becomes problematic and only quite suboptimal policies are found. Moreover, the computational costs of using continuous actions for standard FQI are daunting. The LAWER algorithm needed on average 2780 s for the pendulum, 17600 s for the acrobot, 13700 s for the 2D puddle-world and 24200 s for the 3D puddle-world benchmark task. The CE-Tree algorithm needed on average 59900 s, 201900 s, 134400 s and 212000 s, which is an order of magnitude slower than the LAWER algorithm. The CE-Net and CE-LWR algorithms showed running times comparable to the CE-Tree algorithm. A lot of work has been spent to optimize the implementations of the algorithms. The simulations were run on a P4 Xeon with 3.2 GHz.
Still, in comparison to the tree-based FQI approach, our algorithm has handicaps when dealing with high-dimensional state spaces. The distance kernel matrices have to be chosen appropriately by the user. Additionally, the uniform distance measure throughout the state space is not adequate for many complex control tasks and might degrade the performance. Future research will concentrate on combining the AWR approach with the regression trees presented in [8].
7
Acknowledgement
This paper was partially funded by the Austrian Science Fund FWF project # P17229. The first author also wants to thank Bernhard Schölkopf and the MPI for Biological Cybernetics in Tübingen for the academic internship which made this work possible.
References
[1] R. Sutton and A. Barto, Reinforcement Learning. Boston, MA: MIT Press, 1998.
[2] J. A. Boyan and A. W. Moore, "Generalization in reinforcement learning: Safely approximating the value function," in Advances in Neural Information Processing Systems 7, pp. 369-376, MIT Press, 1995.
[3] P. Viviani and T. Flash, "Minimum-jerk, two-thirds power law, and isochrony: Converging approaches to movement planning," Journal of Experimental Psychology: Human Perception and Performance, vol. 21, no. 1, pp. 32-53, 1995.
[4] R. M. Alexander, "A minimum energy cost hypothesis for human arm trajectories," Biological Cybernetics, vol. 76, pp. 97-105, 1997.
[5] C. M. Harris and D. M. Wolpert, "Signal-dependent noise determines motor planning," Nature, vol. 394, pp. 780-784, August 1998.
[6] M. Riedmiller, "Neural fitted Q-iteration - first experiences with a data efficient neural reinforcement learning method," in Proceedings of the European Conference on Machine Learning (ECML), 2005.
[7] R. Sutton, "Generalization in reinforcement learning: Successful examples using sparse coarse coding," in Advances in Neural Information Processing Systems 8, pp. 1038-1044, MIT Press, 1996.
[8] D. Ernst, P. Geurts, and L. Wehenkel, "Tree-based batch mode reinforcement learning," J. Mach. Learn. Res., vol. 6, pp. 503-556, 2005.
[9] A. Antos, R. Munos, and C. Szepesvari, "Fitted Q-iteration in continuous action-space MDPs," in Advances in Neural Information Processing Systems 20, pp. 9-16, Cambridge, MA: MIT Press, 2008.
[10] S. Timmer and M. Riedmiller, "Fitted Q-iteration with CMACs," pp. 1-8, 2007.
[11] J. Peters and S. Schaal, "Policy learning for motor skills," in Proceedings of the 14th International Conference on Neural Information Processing (ICONIP), 2007.
[12] P.-T. de Boer, D. Kroese, S. Mannor, and R. Rubinstein, "A tutorial on the cross-entropy method," Annals of Operations Research, vol. 134, pp. 19-67, January 2005.
[13] J. Peters and S. Schaal, "Reinforcement learning by reward-weighted regression for operational space control," in Proceedings of the International Conference on Machine Learning (ICML), 2007.
[14] C. G. Atkeson, A. W. Moore, and S. Schaal, "Locally weighted learning," Artificial Intelligence Review, vol. 11, no. 1-5, pp. 11-73, 1997.
Structured Ranking Learning using
Cumulative Distribution Networks
Jim C. Huang
Probabilistic and Statistical Inference Group
University of Toronto
Toronto, ON, Canada M5S 3G4
[email protected]
Brendan J. Frey
Probabilistic and Statistical Inference Group
University of Toronto
Toronto, ON, Canada M5S 3G4
[email protected]
Abstract
Ranking is at the heart of many information retrieval applications. Unlike standard
regression or classification in which we predict outputs independently, in ranking
we are interested in predicting structured outputs so that misranking one object
can significantly affect whether we correctly rank the other objects. In practice,
the problem of ranking involves a large number of objects to be ranked and either approximate structured prediction methods are required, or assumptions of
independence between object scores must be made in order to make the problem
tractable. We present a probabilistic method for learning to rank using the graphical modelling framework of cumulative distribution networks (CDNs), where we
can take into account the structure inherent to the problem of ranking by modelling the joint cumulative distribution functions (CDFs) over multiple pairwise
preferences. We apply our framework to the problem of document retrieval in
the case of the OHSUMED benchmark dataset. We will show that the RankNet,
ListNet and ListMLE probabilistic models can be viewed as particular instances
of CDNs and that our proposed framework allows for the exploration of a broad
class of flexible structured loss functionals for learning to rank.
1
Introduction
Ranking is the central problem for many information retrieval applications such as web search,
collaborative filtering and document retrieval [8]. In these problems, we are given a set of objects
to be ranked and a series of observations where each observation consists of some subset of the
objects, a feature vector and some ordering of the objects with highly ranked objects corresponding
to a higher relevance or degree of importance. The goal is to then learn a model which allows
us to assign a score to new test objects: this often takes the form of a ranking function [2, 4]
which assigns a higher score to objects with higher rankings. Unlike the canonical problems of
regression or classification in which we predict outputs independently of one another, in ranking
we are interested in predicting structured outputs, as the rank of one item can only be determined
given the scores of all other items, and so complex inter-dependencies exist between outputs. This
requires measures of loss which are multivariate and structured. However, such ranking measures
are typically difficult to optimize directly [3], making the problem of learning difficult. A previous
approach has been to treat the problem as one of structured prediction [7], where the aim is to directly
optimize ranking measures. Another approach has been to approximate these ranking measures with
smooth differentiable loss functionals by formulating probabilistic models on pairwise preferences
between objects (RankNet; [2]), or on ordered lists of objects (ListNet and ListMLE; [4, 13]). In
practice, these methods either require approximating a learning problem with an intractable number
of constraints, or they require observations containing complete orderings over the objects to be
ranked or one must make independence assumptions on pairwise preferences.
In practice however, we can take advantage of the fact that each observation in the training set
only provides preference information about a small subset of the objects to be ranked, so that a
sensible probabilistic representation would be the probability of observing a partial ordering over
nodes for a given observation. We will show that 1) a probability over orderings is equivalent to a
probability over pairwise inequalities between objects to be ranked and 2) this amounts to specifying
a joint cumulative distribution function (CDF) over pairwise object preferences. We will present a
framework for ranking using the recently-developed probabilistic graphical modelling framework
of CDNs which compactly represents this joint CDF as a product of local functions [5]. While the
problem of inference in CDNs was addressed in [5], here we address the problem of learning in
CDNs in the context of ranking learning where we estimate model parameters under a structured
loss functional that accounts for dependencies between pairwise object preferences. We will then
test the proposed framework on the OHSUMED dataset [8], a benchmark dataset used in information
retrieval research. Finally we will show that the frameworks proposed by [2, 4, 13] can be viewed
as particular types of CDNs so that novel classes of flexible structured loss functionals for ranking
learning can be specified under our framework.
2
Cumulative distribution networks
The CDN [5] is an undirected graphical model in which the joint CDF F (z) over a set of random
variables is represented as a product over functions defined over subsets of these variables. More
formally,
$$F(\mathbf{z}) = \prod_{c \in C} \phi_c(\mathbf{z}_c), \qquad (1)$$
where φ_c(z_c) is a function defined over some subset of variables. An example of a CDN is shown in Figure 1(a), along with an example bivariate density which can be obtained by differentiating a product of 2 Gaussian CDF functions (Figure 1(b)).
In contrast to undirected models for probability density functions, the global normalization constraint on the CDF does not require computing a partition function and can be enforced locally for each φ_c(z_c). Thus, in order for the CDN to represent a valid CDF, it is sufficient that each of the local functions φ_c satisfy all of the properties of a multivariate CDF. These properties include the requirements that each CDN function φ_c be bounded between 0 and 1, and that each φ_c is monotonically non-decreasing with respect to all of its argument variables z_c, so that the joint CDF F(z) is also bounded between 0 and 1 and is monotonically non-decreasing with respect to any and all subsets of variables. In a CDN, disjoint sets of variables A, B are marginally independent if they share no functions in common, and disjoint sets of variables A, B are conditionally independent given variable set C if no path linking any variable in A to any variable in B passes through C. In addition, marginalization of variables in a CDN can be done in constant time via a trivial maximization of the joint CDF with respect to the variables being marginalized. The problem of inference in a CDN can be solved efficiently using a message-passing algorithm called derivative-sum-product. For detailed derivations of the properties of CDNs, including marginal and conditional independence properties, we refer the reader to [5]. The CDN framework provides us with a means to compactly represent multivariate joint CDFs over many variables: in the next section we will formulate a loss functional for learning to rank which takes on such a form.
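As a toy illustration of Equation (1), the following sketch evaluates the CDN of Figure 1(a) below, with each local function chosen as a product of univariate Gaussian CDFs (a choice of ours that satisfies the boundedness and monotonicity conditions):

```python
from scipy.stats import norm

def phi(*z):
    """A valid CDN function: a product of univariate Gaussian CDFs is bounded
    in [0, 1] and monotonically non-decreasing in every argument."""
    p = 1.0
    for zi in z:
        p *= norm.cdf(zi)
    return p

def cdn_joint_cdf(z1, z2, z3, z4, z5):
    """F(z) = phi_a(z2) phi_b(z1,z2,z3) phi_c(z3) phi_d(z4) phi_e(z3,z4,z5) phi_f(z5)."""
    return (phi(z2) * phi(z1, z2, z3) * phi(z3)
            * phi(z4) * phi(z3, z4, z5) * phi(z5))
```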
Figure 1: a) Cumulative distribution network representing the joint CDF F(z_1, z_2, z_3, z_4, z_5) = φ_a(z_2) φ_b(z_1, z_2, z_3) φ_c(z_3) φ_d(z_4) φ_e(z_3, z_4, z_5) φ_f(z_5); b) Example of a bivariate density P(x, y) corresponding to differentiating a CDF F(x, y) obtained from taking the product of 2 Gaussian bivariate CDFs.
3
Structured loss functionals for ranking learning
We now proceed to formulate the problem of learning to rank in a structured setting. Suppose we wish to rank N nodes in the set V = {V_1, ..., V_N} and we are given a set of observations D_1, ..., D_T. Each observation D_t consists of an ordering over the nodes in a subset V_t ⊆ V, where each node is provided with a corresponding feature vector x ∈ ℝ^L which may be specific to the given observation. The orderings could be provided in the form of ordinal node labels¹, or in the form of pairwise node preferences. The orderings can be represented as a directed graph over the nodes in which a directed edge e = (V_i → V_j) is drawn between 2 nodes V_i, V_j iff V_i is preferred to node V_j, which we denote as V_i ≻ V_j. In general, we assume that for any given observation, we observe a partial ordering over nodes, with complete orderings being a special case. We denote the above graph consisting of edges e = (V_i → V_j) ∈ E_t and the node set V_t as the order graph G_t = (V_t, E_t) for observation D_t, so that D_t = {G_t, {x_n^t}_{V_n ∈ V_t}}. A toy example of an observation over 4 nodes is shown in Figure 2(a). Note that under this framework, the absence of an edge between two nodes V_i, V_j in the order graph indicates we cannot assert any preference between the two nodes for the given observation.
Figure 2: a) An example of an order graph over 4 nodes V_1, V_2, V_3, V_4 corresponding to the objects to be ranked. The graph represents the set of preference relationships V_1 ≻ V_2, V_1 ≻ V_3, V_1 ≻ V_4, V_2 ≻ V_4, V_3 ≻ V_4; b) Learning the ranking function from training data. The training data consists of a set of order graphs over subsets of the objects to be ranked. For each order graph, the ranking function ρ maps each node to the real line ℝ. The goal is to learn ρ such that we minimize our probability of misranking on test observations.
We now define ρ : V → ℝ as a ranking function which assigns scores to nodes via their feature vectors, so that for node V_i,

$$S_i = \rho(V_i) + \epsilon_i \qquad (2)$$

where S_i is a scalar and ε_i is a random variable specific to node V_i. We wish to learn such a function given multiple observations D_1, ..., D_T so that we minimize the probability of misranking on test observations (Figure 2(b)). The above model allows us to account for the fact that the amount of uncertainty about a node's rank may depend on unobserved features for that node (e.g.: documents associated with certain keywords might have less variability in their rankings than other documents). Under this model, the preference relation V_i ≻ V_j is completely equivalent to

$$\rho(V_i) + \epsilon_i \ge \rho(V_j) + \epsilon_j \iff \epsilon_{ij} \equiv \epsilon_j - \epsilon_i \le \rho(V_i) - \rho(V_j), \qquad (3)$$

where we have defined ε_ij as a preference variable between nodes V_i, V_j.
For each edge e = (V_i → V_j) ∈ E_t in the order graph, we can define r(ρ; e, D_t) ≡ ρ(V_i) − ρ(V_j) and collect these into the vector r(ρ; G_t) ∈ ℝ^{|E_t|}. Similarly, let ε_e ≡ ε_ij. Having defined the preferences, we must select an appropriate loss measure. A sensible metric here [13] is the joint
1
It is crucial to note that node labels may in general not be directly comparable with one another from one
observation to the next (e.g.: documents with the same rating might not truly have the same degree of relevance
for different queries), or the scale of the labels may be arbitrary.
probability of observing the order graph G_t = (V_t, E_t) corresponding to the partial ordering of nodes in V_t. From Equation (3), this will take the form of a probability measure over events of the type ε_e ≤ r(ρ; e, D_t), so that we obtain

$$\Pr\{E_t \mid V_t, \rho\} = \Pr\Big[ \bigcap_{e \in E_t} \big\{ \epsilon_e \le r(\rho; e, D_t) \big\} \Big] = F_\epsilon\big( \mathbf{r}(\rho; G_t) \big), \qquad (4)$$

where F_ε is the joint CDF over the preference variables ε_e. Given an observation D_t, the goal is to learn the ranking function ρ by maximizing Equation (4). Note that under this framework, the set of edges E_t corresponding to the set of pairwise preferences is treated as a set of random variables which may have a high degree of dependence between one another, so that F_ε(r(ρ; G_t)) is a joint CDF over multiple pairwise preferences. The problem of learning the ranking function then consists of scoring multiple nodes simultaneously whilst accounting for dependencies between node scores.
Now, if we are given multiple independent (but not necessarily identically distributed) observations D = {D_1, ..., D_T}, we can define a structured loss functional

$$\mathcal{L}(\rho, F_\epsilon, \mathcal{D}) = -\sum_{t=1}^{T} \log F_\epsilon\big( \mathbf{r}(\rho; G_t) \big) \qquad (5)$$

where each term in the loss functional depends on multiple preference relationships specified by the order graph for observation t. The problem of learning then consists of solving the optimization problem

$$\inf_{\rho, F_\epsilon} \mathcal{L}(\rho, F_\epsilon, \mathcal{D}). \qquad (6)$$
In general, the above structured loss functional may be difficult to specify, as it takes on the form of
a joint CDF over many random variables with a high degree of inter-dependency which may require
a large number of parameters to specify. We can, however, compactly represent this using the CDN
framework, as we will now show.
3.1
Transforming order graphs into CDNs
Figure 3: Transforming the order graph G_t into a CDN. For each edge e = (V_i → V_j) in the order graph (left), a preference variable ε_ij is created. All such random variables are then connected to one another in a CDN (right), allowing for complex dependencies between preferences.
The representation of the structured loss functional in Equation (5) as a CDN consists of transforming the order graph G_t for each observation into a set of variable nodes in a CDN. More precisely, for each edge e = (V_i → V_j) in the order graph, the preference variable ε_ij is created. All such variables are then connected to one another in a CDN (Figure 3), where the pattern of connectivity used will determine the set of dependencies between these preferences ε_ij, as given by the marginal and conditional independence properties of CDNs [5]. Thus, for any given CDN topology, each preference node ε_e is a member of some neighborhood of preference nodes ε_{e'}, so that neighboring preference nodes are marginally dependent on one another.
One possible concern here is that we may require a fully connected CDN topology over all possible pairwise preferences between all nodes in order to capture all of these dependencies, leading to a model which is cumbersome to learn. In practice, because any observation only conveys information about a small subset of the nodes in V, and because in practice we observe partial orderings between these, the order graph is sparse, and so the number of preference nodes in the CDN for the given observation will be much smaller than the worst-case number of all possible pairwise preferences between nodes. Furthermore, we do not have to store a large CDN in memory during training, as we only need to store a single CDN over a relatively small number of preference variables for the current observation. We can thus perform ranking learning in an online fashion by constructing a single CDN for each observation D_t and optimizing the loss −log F_ε(r(ρ; G_t)) defined by that CDN for the given observation.
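A sketch of this transformation for the fully connected topology (the string naming of preference nodes is purely illustrative):

```python
from itertools import combinations

def order_graph_to_cdn(edges):
    """Each order-graph edge (i, j), meaning V_i is preferred to V_j, becomes a
    preference variable; a fully connected CDN places one function per pair."""
    pref_nodes = [f"eps_{i}_{j}" for (i, j) in edges]
    cdn_functions = list(combinations(pref_nodes, 2))  # one phi per node pair
    return pref_nodes, cdn_functions

# e.g. the order graph of Figure 2(a):
nodes, functions = order_graph_to_cdn([(1, 2), (1, 3), (1, 4), (2, 4), (3, 4)])
```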
4
StructRank: a probabilistic model for structured ranking learning with
node labels
Suppose now that each node in the training set is provided with an ordinal node label y along with a feature vector x. For any given order graph over some subset of the nodes, the node labels y allow us to establish edges in the order graph, so that an edge V_i → V_j exists between two nodes V_i, V_j iff y_i > y_j. We can then parametrically model the ranking function ρ(V) ≡ f(x; a) (where a is a set of parameters) using a Nadaraya-Watson [10, 12] local estimator with a Gaussian kernel, so that

$$f(\mathbf{x}; \mathbf{a}) = \frac{\sum_i K(\mathbf{x}_i, \mathbf{x}; \mathbf{a})\, y_i}{\sum_i K(\mathbf{x}_i, \mathbf{x}; \mathbf{a})}, \qquad K(\tilde{\mathbf{x}}, \mathbf{x}; \mathbf{a}) = \exp\left( -\frac{1}{2} (\tilde{\mathbf{x}} - \mathbf{x})^T \mathbf{A} (\tilde{\mathbf{x}} - \mathbf{x}) \right), \qquad (7)$$

where the summations are taken over all feature vector-label pairs in the training set, with A = diag(a_1², ..., a_L²). Consider now an edge e = (V_i → V_j) in the order graph and define r_e ≡ r_e(a; D_t) = f(x_i^t; a) − f(x_j^t; a). For a given order graph, the structured loss functional L(θ; D_t) is given by
$$\mathcal{L}(\theta; D_t) = -\log F_\epsilon\big( \mathbf{r}(\rho; G_t) \big) = -\sum_{e, e'} \log \phi\big( r_e(\mathbf{a}; D_t),\, r_{e'}(\mathbf{a}; D_t) \big), \qquad (8)$$
where θ = [a, w_1, w_2] is the parameter vector and the function φ(r_1, r_2) is set to a multivariate sigmoidal function, so that

$$\phi(r_1, r_2) = \frac{1}{1 + \exp(-w_1 r_1) + \exp(-w_2 r_2)}, \qquad w_1, w_2 \ge 0, \qquad (9)$$
where w_1, w_2 are weights parameterizing the CDN function φ(r_1, r_2). It can be readily shown that this choice of CDN function φ(r_1, r_2), when combined with the constraints w_1, w_2 > 0, satisfies all of the necessary and sufficient conditions required for the CDN to represent a valid CDF, as 0 ≤ φ(r_1, r_2) ≤ 1 and φ is monotonically non-decreasing with respect to all of its arguments. For the given CDN and ranking functions, the learning problem for the current observation D_t then becomes

$$\inf_{\theta} \sum_t \sum_{e, e'} \log\Big( 1 + \exp\big( -w_1 r_e(\mathbf{a}; D_t) \big) + \exp\big( -w_2 r_{e'}(\mathbf{a}; D_t) \big) \Big) \quad \text{s.t.} \quad \theta \succeq 0, \; \|\theta\|_1 \le t, \qquad (10)$$
where we have introduced a regularizer in the form of an L1-norm constraint. Notice that our model has one parameter per data feature and 2 parameters defining the CDN for any given observation. The gradient ∇_a L(θ; D_t) and the derivatives with respect to the CDN function weights w_1, w_2 for a given observation D_t are provided in the Supplementary Information.
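A compact sketch of Equations (7)-(10) for a single observation; gradients and the L1/positivity constraints are omitted, and the convention of pairing every edge with every edge (including itself) is our reading of the double sum:

```python
import numpy as np

def nw_score(x, X_train, y_train, a):
    """Eq. (7): Nadaraya-Watson score with Gaussian kernel, A = diag(a_l^2)."""
    d = X_train - x
    k = np.exp(-0.5 * np.sum((d * a) ** 2, axis=1))   # K(x_i, x; a)
    return (k @ y_train) / (k.sum() + 1e-12)

def structrank_loss(edges, X, y, a, w1, w2):
    """Eqs. (8)-(10): -sum_{e,e'} log phi(r_e, r_{e'}) with the bivariate sigmoid."""
    f = np.array([nw_score(X[i], X, y, a) for i in range(len(X))])
    r = np.array([f[i] - f[j] for (i, j) in edges])   # r_e for every edge V_i -> V_j
    loss = 0.0
    for r_e in r:
        for r_e2 in r:
            loss += np.log(1.0 + np.exp(-w1 * r_e) + np.exp(-w2 * r_e2))
    return loss
```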
5
Results
To compare the performance of our proposed framework to other methods, we will use the following three metrics commonly in use in information retrieval research: Precision, Mean Average Precision (MAP) and Normalized Discounted Cumulative Gain (NDCG) [6]. The NDCG accounts for the fact that less relevant documents are less likely to be examined by a user by putting more weight on highly relevant documents than marginally relevant ones.
We downloaded the OHSUMED dataset provided as part of the LETOR 2.0 benchmark [8]. The
dataset consists of a set of 106 query-document pairs, with a feature vector and relevance judgment
Figure 4: a) Average NDCG as a function of truncation level n for the OHSUMED dataset. NDCG values are
averaged over 5 cross-validation splits; b) Mean average precision (MAP) as a function of truncation level n;
c) Mean average precision value for several methods.
provided for each pair, where queries correspond to medical searches associated with patient and
topic information. There are a total of 16,140 query-document pairs with relevance judgments provided by humans on three ordinal levels: definitely relevant, partially relevant or not relevant. For
any given query, we used the ordinal labels y for each document in the query in order to establish
preferences between documents for that query. Each node in the order graph is provided with 25
query-specific features including term frequency, document length, BM25 and LMIR features as
well as combinations thereof [1, 11, 14]. In accordance with the nomenclature above, we use the
terms query and observation interchangeably.
The OHSUMED dataset is provided in the form of 5 training/validation/test splits of sizes 63/21/22
observations each. To ensure that features are comparable across all observations, we normalized
each feature vector within each observation as described in [8]. We performed learning of our model
using a constrained stochastic gradient algorithm where, for each observation, we prevent updates from violating the inequality constraints in the optimization problem defined by Equation (10) by reducing the learning rate η until the update becomes feasible. We set the default learning rate to η = 0.5 and we randomly initialized the model parameters a, w_1, w_2 in the range [0, 1]. This optimization was run for 10 epochs (passes through the training set) and η was scaled by 1/√2 at the end of each epoch. We set the regularization parameter using the validation set for a given data
performed learning using 3 random initializations, and we then selected the model which achieved
the best MAP score on the validation set.
We tested a fully connected CDN which models full interdependence between preferences, and a
completely disconnected CDN which models preferences independently of one another. The above
3 performance metrics are shown in Figures 4(a), 4(b) and 4(c), in addition to the performances of seven state-of-the-art methods which are part of the LETOR 2.0 benchmarks. At the time of submission, numerical performance scores for ListMLE [13] were not available and so were not included in these plots. With the exception of ListNet and ListMLE, none of the above methods explicitly model dependencies between pairwise preferences. As can be seen, accounting for dependencies between pairwise preferences provides a significant gain in performance compared to modelling preferences as being independent. Additional results on the TREC2004 dataset from LETOR 2.0
are provided in Supplemental Information.
6
Discussion
We have proposed here a novel framework for ranking learning using structured loss functionals.
We have shown that the problem of learning to rank can be reduced to maximizing a joint CDF
over multiple pairwise preferences. We have shown how to compactly represent this using the CDN
framework and have applied it to the OHSUMED benchmark dataset. We have demonstrated that
representing the dependencies between pairwise preferences leads to improved performance over
modelling preferences as being independent of one another.
6.1
Relation to RankNet and ListNet/ListMLE
The probability models for ranking proposed by [2, 4, 13] can all be expressed as special cases of models defined by different CDNs. In the case of RankNet [2], the corresponding probability over a given pairwise preference V_i ≻ V_j is modelled by a logistic function of f(x_i) − f(x_j) and the model was optimized using cross-entropy loss. The joint probability of preferences can thus be represented as a completely disconnected CDN with logistic functions in which all pairwise object preferences are treated as being independent. In the case of ListNet [4] and ListMLE [13], the probability of observing a complete ordering V_1 ≻ ··· ≻ V_N over N objects is defined as a product of functions of the type
$$P(V_1 \succ \cdots \succ V_N \mid \mathcal{D}) = \prod_{i=1}^{N} \frac{\exp(f(\mathbf{x}_i))}{\sum_{k=i}^{N} \exp(f(\mathbf{x}_k))} = \prod_{i=1}^{N} \frac{1}{1 + \sum_{k=i+1}^{N} \exp\big( -(f(\mathbf{x}_i) - f(\mathbf{x}_k)) \big)} = \prod_{i=1}^{N} \phi_i(\mathbf{r}_i),$$

which we see is equivalent to a CDN with N multivariate sigmoids. As noted by the authors of [13], the above model is also an example of the Plackett-Luce class of probability models over object scores [9]. In addition, the ListNet/ListMLE frameworks both require a complete ordering over objects by definition: under the CDN framework, we can model partial orderings, with complete orderings as a special case. The connections between RankNet, ListNet and ListMLE and the CDN framework are illustrated in Supplementary Figure 2. Our proposed framework unifies the above
views of ranking as different instantiations of a joint CDF over pairwise preferences and hence as
particular types of CDNs. This allows us to consider flexible joint CDFs defined over different
subsets of object preferences and over different families of CDN functions so as to capture various data-specific properties.
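To make the correspondence explicit, the Plackett-Luce log-likelihood of a complete ordering can be written as a sum of log φ_i(r_i) terms (the indexing here is ours):

```python
import numpy as np

def plackett_luce_loglik(scores):
    """log P(V_1 > ... > V_N) for scores f(x_i) listed in ranked order; each
    factor is the multivariate sigmoid 1 / (1 + sum_{k>i} exp(-(f_i - f_k)))."""
    ll = 0.0
    for i in range(len(scores)):
        r = scores[i] - scores[i + 1:]   # f(x_i) - f(x_k) for k > i
        if len(r):
            ll -= np.log1p(np.exp(-r).sum())
    return ll
```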
6.2
Future directions
Our work here suggests several future directions for research. In [13], it was shown that the log-likelihood corresponding to the probability of an ordering is a good surrogate for the 0-1 loss between the predicted ordering and the true ordering, as the former is differentiable and penalizes mis-orderings in a sensible way. One could investigate connections between the structured loss functionals proposed in this paper and other ranking measures such as NDCG. Another possible direction is to generalize StructRank to products over Gaussian multivariate CDFs or other classes of functions which satisfy the requirements of CDN functions, as in this paper we have elected to use a product of bivariate sigmoids φ(r_e, r_{e'}) to represent our loss functional. Also, it may be fruitful to investigate different CDN topologies: for example, we found that averaging randomly connected CDNs is very fast to learn and performs comparably to the fully-connected CDN we used in this paper (data not shown). In addition, we have only investigated representing the loss functional using a single CDN function: this could easily be generalized to K functions. Lastly, alternatives to the Nadaraya-Watson local estimator, such as the neural networks used in [2, 4, 13], can be investigated.
References
[1] R. Baeza-Yates and B. Ribeiro-Neto. Modern information retrieval. Addison Wesley, 1999.
[2] C. J. C. Burges, T. Shaked, E. Renshaw, A. Lazier, M. Deeds, N. Hamilton and G. Hullender. Learning to rank using gradient descent. In Proceedings of the Twenty-Second International Conference on Machine Learning (ICML), 2005.
[3] C. J. C. Burges, R. Ragno and Q. V. Le. Learning to rank with nonsmooth cost functions. In Proceedings of the Nineteenth Annual Conference on Neural Information Processing Systems (NIPS), 2007.
[4] Z. Cao, T. Qin, T. Y. Liu, M. F. Tsai and H. Li. Learning to rank: from pairwise approach to listwise approach. In Proceedings of the Twenty-Fourth International Conference on Machine Learning (ICML), 2007.
[5] J. C. Huang and B. J. Frey. Cumulative distribution networks and the derivative-sum-product algorithm. In Proceedings of the Twenty-Fourth Conference on Uncertainty in Artificial Intelligence (UAI), 2008.
[6] K. Jarvelin and J. Kekalainen. Cumulated evaluation of IR techniques. ACM Information Systems, 2002.
[7] T. Joachims. A support vector method for multivariate performance measures. In Proceedings of the Twenty-Second International Conference on Machine Learning (ICML), 2005.
[8] T. Y. Liu, J. Xu, T. Qin, W. Xiong and H. Li. LETOR: Benchmark dataset for research on learning to rank for information retrieval. LR4IR 2007, in conjunction with SIGIR 2007, 2007.
[9] J. I. Marden. Analyzing and modeling rank data. CRC Press, 1995.
[10] E. A. Nadaraya. On estimating regression. Theory of Probability and its Applications 9(1), pp. 141-142, 1964.
[11] S. E. Robertson. Overview of the OKAPI projects. Journal of Documentation 53(1), pp. 3-7, 1997.
[12] G. S. Watson. Smooth regression analysis. The Indian Journal of Statistics, Series A 26, pp. 359-372, 1964.
[13] F. Xia, T. Y. Liu, J. Wang, W. Zhang and H. Li. Listwise approach to learning to rank - theory and algorithm. In Proceedings of the Twenty-Fifth International Conference on Machine Learning (ICML), 2008.
[14] C. Zhai and J. Lafferty. A study of smoothing methods for language models applied to ad hoc information retrieval. In Proceedings of SIGIR 2001, 2001.
| 3502 |@word norm:1 accounting:2 liu:3 series:2 score:9 document:12 current:2 z2:3 si:2 must:3 readily:1 numerical:1 partition:1 listmle:8 plot:1 update:2 intelligence:1 selected:1 item:2 xk:2 renshaw:1 provides:3 node:41 toronto:6 preference:41 sigmoidal:1 zhang:1 along:2 consists:7 interdependence:1 g4:2 pairwise:19 inter:2 examine:1 discounted:1 decreasing:3 xti:1 ohsumed:6 becomes:2 provided:10 estimating:1 bounded:2 project:1 developed:1 whilst:1 supplemental:1 unobserved:1 assert:1 scaled:1 medical:1 hamilton:1 frey:3 treat:1 local:4 accordance:1 analyzing:1 path:1 ndcg:5 might:2 initialization:1 specifying:1 collect:1 suggests:1 nadaraya:3 cdfs:5 range:1 averaged:1 directed:2 yj:1 practice:5 significantly:1 deed:1 cannot:1 context:1 shaked:1 optimize:2 equivalent:3 map:4 demonstrated:1 fruitful:1 maximizing:2 independently:3 sigir:2 formulate:2 assigns:2 kekalainen:1 estimator:2 parameterizing:1 d1:3 marden:1 suppose:2 user:1 robertson:1 documentation:1 submission:1 solved:1 capture:2 worst:1 wang:1 connected:6 ordering:19 transforming:2 depend:1 solving:1 completely:3 compactly:4 easily:1 joint:16 represented:3 various:1 regularizer:1 derivation:1 fast:1 query:9 artificial:1 neighborhood:1 supplementary:2 nineteenth:1 loglikelihood:1 statistic:1 online:1 hoc:1 advantage:1 differentiable:2 product:9 qin:2 neighboring:1 relevant:6 cao:1 iff:2 requirement:2 r1:6 letor:4 object:24 ij:6 keywords:1 predicted:1 involves:1 direction:3 stochastic:1 exploration:1 human:1 crc:1 require:6 assign:1 summation:1 exp:8 predict:2 label:7 gaussian:4 aim:1 conjunction:1 joachim:1 rank:15 modelling:4 indicates:1 contrast:1 brendan:1 cdns:12 inference:4 plackett:1 dependent:1 okapi:1 typically:1 relation:2 interested:2 classification:2 flexible:3 constrained:1 special:3 art:1 smoothing:1 marginal:2 having:1 represents:2 broad:1 icml:4 jarvelin:1 future:2 nonsmooth:1 inherent:1 modern:1 randomly:2 simultaneously:1 xtj:1 consisting:1 message:1 highly:2 investigate:2 evaluation:1 truly:1 edge:10 partial:5 necessary:1 initialized:1 re:8 penalizes:1 instance:1 modeling:1 maximization:1 cost:1 subset:10 parametrically:1 lazier:1 dependency:10 combined:1 density:3 definitely:1 international:4 probabilistic:8 v4:4 connectivity:1 w1:8 central:1 containing:1 huang:2 derivative:3 leading:1 toy:1 li:3 account:4 satisfy:2 trec2004:1 explicitly:1 ranking:30 vi:21 depends:1 ad:1 performed:2 view:1 observing:3 collaborative:1 minimize:2 ir:1 efficiently:1 judgment:2 correspond:1 generalize:1 modelled:1 unifies:1 comparably:1 marginally:3 none:1 xtn:1 m5s:2 cumbersome:1 definition:1 frequency:1 pp:3 thereof:1 conveys:1 associated:2 psi:2 mi:1 gain:2 dataset:10 wesley:1 higher:3 dt:21 violating:1 listnet:7 specify:2 improved:1 done:1 furthermore:1 lastly:1 until:1 web:1 logistic:2 normalized:2 true:1 former:1 regularization:1 hence:1 illustrated:1 conditionally:1 during:1 interchangeably:1 noted:1 generalized:1 misranking:3 complete:5 l1:1 elected:1 novel:2 recently:1 common:1 functional:9 rl:1 overview:1 linking:1 refer:1 significant:1 z4:3 similarly:1 language:1 gt:11 multivariate:7 optimizing:1 inf:2 store:2 certain:1 nonconvex:1 inequality:2 watson:3 vt:7 yi:2 scoring:1 seen:1 additional:1 determine:1 v3:3 monotonically:3 multiple:7 full:1 lr4ir:1 smooth:2 cross:3 retrieval:9 z5:3 prediction:2 regression:4 patient:1 metric:3 normalization:1 represent:6 kernel:1 achieved:1 addition:4 addressed:1 crucial:1 w2:8 unlike:2 pass:2 undirected:2 member:1 lafferty:1 split:4 identically:1 baeza:1 affect:1 independence:4 
marginalization:1 xj:1 topology:3 luce:1 whether:1 nomenclature:1 passing:1 proceed:1 ranknet:5 detailed:1 amount:2 locally:1 reduced:1 bm25:1 exist:1 canonical:1 notice:1 disjoint:2 correctly:1 per:1 yates:1 group:2 putting:1 drawn:1 prevent:1 v1:7 graph:22 sum:2 enforced:1 run:1 uncertainty:2 fourth:2 family:1 reader:1 vn:4 comparable:2 annual:1 constraint:5 precisely:1 ragno:1 argument:2 formulating:1 relatively:1 structured:18 combination:1 disconnected:2 smaller:1 across:1 making:1 heart:1 taken:1 equation:4 labels1:1 ordinal:4 addison:1 tractable:1 end:1 available:1 apply:1 observe:2 v2:3 appropriate:1 structrank:2 xiong:1 alternative:1 include:1 ensure:1 graphical:3 marginalized:1 a2l:1 establish:2 approximating:1 dependence:1 surrogate:1 gradient:3 sensible:3 topic:1 seven:1 trivial:1 length:1 relationship:2 z3:4 zhai:1 difficult:3 neto:1 twenty:5 perform:2 allowing:1 observation:33 benchmark:6 descent:1 defining:1 variability:1 jim:2 arbitrary:1 canada:2 rating:1 introduced:1 pair:4 required:2 specified:2 z1:2 optimized:1 connection:2 nip:1 address:1 pattern:1 including:2 memory:1 tranforming:1 event:1 ranked:8 treated:2 predicting:2 representing:3 created:2 hullender:1 epoch:2 loss:20 fully:3 filtering:1 cdn:39 validation:5 downloaded:1 degree:4 sufficient:2 share:1 truncation:2 zc:4 allow:1 burges:2 taking:1 differentiating:2 fifth:1 sparse:1 distributed:1 listwise:2 xia:1 default:1 valid:2 cumulative:8 author:1 made:1 commonly:1 ribeiro:1 functionals:6 approximate:2 preferred:1 global:1 instantiation:1 uai:1 xi:5 search:2 learn:6 nature:1 investigated:2 complex:2 necessarily:1 constructing:1 vj:18 diag:1 xu:1 fashion:1 precision:4 wish:2 a21:1 specific:4 list:1 r2:6 concern:1 bivariate:4 intractable:1 exists:1 cumulated:1 importance:1 sigmoids:2 entropy:1 likely:1 expressed:1 ordered:1 partially:1 scalar:1 satisfies:1 acm:1 cdf:17 conditional:2 viewed:2 goal:3 absence:1 feasible:1 included:1 determined:1 reducing:1 averaging:1 called:1 total:1 exception:1 formally:1 select:1 support:1 relevance:4 indian:1 tsai:1 tested:1 |
Simple Local Models for Complex Dynamical Systems
Erik Talvitie
Computer Science and Engineering
University of Michigan
[email protected]
Satinder Singh
Computer Science and Engineering
University of Michigan
[email protected]
Abstract
We present a novel mathematical formalism for the idea of a ?local model? of an
uncontrolled dynamical system, a model that makes only certain predictions in
only certain situations. As a result of its restricted responsibilities, a local model
may be far simpler than a complete model of the system. We then show how
one might combine several local models to produce a more detailed model. We
demonstrate our ability to learn a collection of local models on a large-scale example and do a preliminary empirical comparison of learning a collection of local
models and some other model learning methods.
1 Introduction
Building models that make good predictions about the world can be a complicated task. Humans,
however, seem to have the remarkable ability to split this task up into manageable chunks. For
instance, the activity in a park may have many complex interacting components (people, dogs, balls,
etc.) and answering questions about their joint state would be impossible. It can be much simpler
to answer abstract questions like "Where will the ball bounce?" ignoring most of the detail of what
else might happen in the next moment. Some other questions like "What will the dog do?" may still
be very difficult to answer in general, as dogs are complicated objects and their behavior depends
on many factors. However, in certain situations, it may be relatively easy to make a prediction. If a
ball has just been thrown, one may reasonably predict that the dog will chase it, without too much
consideration of other potentially relevant facts. In short, it seems that humans have a lot of simple,
localized pieces of knowledge that allow them to make predictions about particular aspects of the
world in restricted situations. They can combine these abstract predictions to form more concrete,
detailed predictions. Of course, there has been substantial effort in exploiting locality/independence
structure in AI. Much of it is focused on static domains without temporal concerns (e.g. [1]), though
these ideas have been applied in dynamical settings as well (e.g. [2, 3]). Our main contribution
is to provide a novel mathematical formulation of ?local models? of dynamical systems that make
only certain predictions in only certain situations. We also show how to combine them into a more
complete model. Finally, we present empirical illustrations of the use of our local models.
1.1 Background
In this paper we will focus on learning models of uncontrolled discrete dynamical systems (we leave
consideration of controlled systems to future work). At each time step i the system emits an observation oi from a finite set of observations O. We call sequences of observations tests and let T be
the set of all possible tests of all lengths. At time step i, the history is simply the sequence o1 o2 ... oi
of past observations. We use the letter φ to represent the null history in which no observation has yet
been emitted. A prediction of a test t = oi+1 ... oi+k given a history h = o1 ... oi, which we denote
p(t|h), is the conditional probability that the sequence t will occur, given that the sequence h has
already occurred: p(t|h) = Pr(Oi+1 = oi+1, ..., Oi+k = oi+k | O1 = o1, ..., Oi = oi). The set of all
histories H is defined: H = {t ∈ T : p(t|φ) > 0} ∪ {φ}. We use models to make predictions:
Definition 1. A complete model can generate predictions p(t|h) for all t ∈ T and h ∈ H.
A model that can make every such prediction can make any conditional prediction about the system
[4]. For instance, one may want to make predictions about whether any one of a set of possible
futures will occur (e.g. "Will the man throw a ball any time before he leaves the park?"). We can
represent this type of prediction using a union test (also called a "collective outcome" by Jaeger [5]).
Definition 2. A union test T ⊆ T is a set of tests such that if t ∈ T then no prefix of t is in T. The
prediction of a union test is a sum of predictions: p(T|h) = Σ_{t∈T} p(t|h).
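For intuition, predictions of tests and union tests can be estimated directly from a batch of sampled observation sequences by counting; a rough Monte-Carlo sketch (hypothetical helper names, not from the paper):

```python
def empirical_prediction(trajectories, history, test):
    """Monte-Carlo estimate of p(test | history): among trajectories that
    begin with `history`, the fraction that continue with `test`.
    history and test are tuples of observations."""
    h, t = tuple(history), tuple(test)
    starts = [traj for traj in trajectories if tuple(traj[:len(h)]) == h]
    if not starts:
        return 0.0
    hits = sum(1 for traj in starts
               if tuple(traj[len(h):len(h) + len(t)]) == t)
    return hits / len(starts)

def union_prediction(trajectories, history, union_test):
    """p(T|h) for a union test: a sum over the predictions of its members."""
    return sum(empirical_prediction(trajectories, history, t)
               for t in union_test)
```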
Models may be provided by an expert, or we can learn them from experience with the system (in the
form of a data set of observation sequences emitted by the system). The complexity of representing
and learning a model often depends on the complexity of the system being modeled. The measure
of complexity that we will adopt is called the linear dimension [6] and is defined as the rank of
the "system dynamics matrix" (the infinite matrix of predictions whose ij-th entry is p(tj|hi) for all
tj ∈ T and hi ∈ H). It is also closely related to the number of underlying states in a Hidden Markov
Model. We will not define it more formally here but note that when we say one system is simpler
than another, we mean that it has a smaller linear dimension.
We will now present the main contributions of our work, starting by precisely defining a local model,
and then showing how they can be combined to create a more complete model.
2 Local Models
In contrast to a complete model, a local model has limited prediction responsibilities and hence
makes only certain predictions in certain situations.
Definition 3. Given a set of tests of interest T^I and a set of histories of interest H^I, a local model
is any model that generates the predictions of interest: p(t|h) for all t ∈ T^I and h ∈ H^I.
We will assume, in general, that the tests of interest are union tests. In this paper, we will place a
constraint on H^I ⊆ H which we will call the "semi-Markov" property, due to its close relationship
to the concept of the same name in the "options" literature [7]; this assumption will be relaxed in
future work. In words, we require that, in order to determine if the current history is of interest, we
need only look at what has happened since the preceding history of interest. Put formally,
Definition 4. A set of histories of interest H^I is semi-Markov iff h, h′ ∈ H^I ∪ {φ} and ht ∈ H^I for
some t ∈ T implies that either h′t ∈ H^I or p(h′t|φ) = 0.
As a simple example, consider the 1D Ball Bounce system (see Figure 1). The agent observes a line of pixels, one of which (the location of the "ball") is black; the rest are white. The ball moves along the line, changing direction when it hits the edge. Each time step, with probability 0.5, the ball sticks in place, and with probability 0.5 it moves one square in its current direction.
[Figure 1: 1D Ball Bounce]
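A minimal simulator for this system can be sketched as follows (the function name and observation encoding are our own; the exact moment the direction flips at a wall is one reasonable reading of the description):

```python
import random

def ball_bounce_1d(size=5, steps=100, p_stick=0.5, seed=None):
    """Generate one observation sequence from the 1D Ball Bounce system.
    Each observation is a tuple of pixel colors ('B' where the ball is,
    'W' otherwise). The ball flips direction at the edges and sticks in
    place with probability p_stick each step."""
    rng = random.Random(seed)
    pos, direction = rng.randrange(size), rng.choice([-1, 1])
    observations = []
    for _ in range(steps):
        observations.append(tuple('B' if i == pos else 'W' for i in range(size)))
        if rng.random() >= p_stick:          # move with probability 1 - p_stick
            if not 0 <= pos + direction < size:
                direction = -direction        # bounce off the wall
            pos += direction
    return observations
```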
One natural local model would make one-step predictions about only one pixel, p. It has two tests
of interest: the set of all one-step tests in which the pixel p is black, and the set of all one-step tests
in which p is white. All histories are of interest. This local model answers the question "What is the
chance the ball will be in pixel p next?" Note that, in order to answer this question, we need only
observe the color of the pixels neighboring p. We will refer to this example as Model A.
Another, even more restricted local model would be one that has the same tests of interest, but
whose histories of interest are only those that end with pixel p being black. This local model would
essentially answer the question "When the ball is in pixel p, what is the chance that it will stick?"
In order to make this prediction, the local model can ignore all detail; the prediction for the test of
interest is always 0.5 at histories of interest. We will refer to this local model as Model B.
In general, as in the examples above, we expect that many details about the world are irrelevant to
making the predictions of interest and could be ignored in order to simplify the local model. Taking
an approach similar to that of, e.g., Wolfe & Barto [8], Soni & Singh [9], or Talvitie et al. [10], given
tests and histories of interest, we will show how to convert a primitive observation sequence into an
abstract observation sequence that ignores unnecessary detail. A complete model of the abstracted
system can be used as a local model in the original, primitive system. The abstraction proceeds in
two steps (shown in Figure 2). First, we construct an intermediate system which makes predictions
for all tests, but only updates at histories of interest. Then we further abstract the system by ignoring
details irrelevant to making predictions for just the tests of interest.
2.1 Abstracting Details for Local Predictions
Incorporating Histories Of Interest: Intuitively, since a local model is never asked to make a
prediction at a history outside of H^I, one way to simplify it is to only update its predictions at
histories of interest. Essentially, it "wakes up" whenever a history of interest occurs, sees what
observation sequence happened since it was last awake, updates, and then goes dormant until the
next history of interest. We call the sequences of observations that happen between histories of
interest bridging tests. The set of bridging tests T B is induced by the set of histories of interest.
Definition 5. A test t ∈ T is a bridging test iff for all j < |t| and all h ∈ H^I, ht^[1...j] ∉ H^I (where
t^[1...j] denotes the j-length prefix of t) and either ∃h ∈ H^I such that ht ∈ H^I or |t| = ∞.
Conceptually, we transform the primitive observation sequence into a sequence of abstract observations in which each observation corresponds to a bridging test. We call such a transformed sequence the Temporally Extended or TE sequence (see Figure 2). Note that even when the primitive system has a small number of observations, the TE system can have infinitely many, because there can be an infinity of bridging tests. However, because it does not update between histories of interest, a model of TE may be simpler than a model of the original system.
[Figure 2: Mapping experience in the original system to experience in the TE system, and then to experience in the abstract system.]
To see this, consider again the 1D Ball Bounce of size k. This system has linear dimension O(2k), intuitively because the ball has 2 possible directions and k possible positions. Recall Model B, that
only applies when the ball lands on a particular pixel. The bridging tests, then, are all possible ways
the ball could travel to an edge and back. The probability of each bridging test depends only on the
current direction of the ball. As such, the TE system here has linear dimension 2, regardless of k.
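To make the transformation concrete, a sketch of the primitive-to-TE translation (hypothetical names; is_history_of_interest encodes a semi-Markov H^I as a predicate on histories):

```python
def to_te_sequence(observations, is_history_of_interest):
    """Translate a primitive observation sequence into a TE sequence:
    each TE observation is the bridging test (tuple of primitive
    observations) emitted between consecutive histories of interest.

    is_history_of_interest(history) -> bool, where history is the tuple
    of observations seen so far (assumed semi-Markov as in Definition 4).
    """
    te_sequence, bridge, history = [], [], []
    for obs in observations:
        history.append(obs)
        bridge.append(obs)
        if is_history_of_interest(tuple(history)):
            te_sequence.append(tuple(bridge))  # one bridging test completed
            bridge = []
    return te_sequence  # a trailing incomplete bridge is discarded
```

For Model B above, is_history_of_interest would simply check whether the last observation has the ball on pixel p.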
It is possible to show formally that the TE system is never more complex than the original system.
Proposition 1. If the linear dimension of a dynamical system is n then, given a semi-Markov set of
histories of interest H^I, the linear dimension of the induced TE system, n_TE ≤ n.
Proof. (Sketch) The linear dimension of a system is the rank of the system dynamics matrix (SDM)
corresponding to the system [6]. The matrix corresponding to the TE system is the submatrix of the
SDM of the original system with only columns and rows corresponding to histories and tests that are
sequences of bridging tests. A submatrix never has greater rank than the matrix that contains it.
What good is a model of the TE system? We next show that a model of the TE system can make
predictions for all tests t ∈ T in all histories of interest h ∈ H^I. Specifically, we show that the
prediction for any test in a history of interest can be expressed as a prediction of a union test in
TE. For the following, note that every history of interest h ∈ H^I can be written as a corresponding
sequence of bridging tests, which we will call s_h. Also, we will use the subscript TE to distinguish
predictions p_TE(t|h) in TE from predictions p(t|h) in the original system.
Proposition 2. For any primitive test t ∈ T in the original system, there is a union test S_t in TE
such that p(t|h) = p_TE(S_t|s_h) for all h ∈ H^I.
Proof. We will present a constructive proof. First suppose t can be written as a sequence of bridging
tests s_t. Then trivially S_t = {s_t}. If t does not correspond to a sequence of bridging tests, we can
re-write it as the concatenation of two tests: t = t1 t2 such that t1 is the longest prefix of t that is
a sequence of bridging tests (which may be null) and t2 ∉ T^B. Now, p(t|h) = p(t1|h) p(t2|ht1),
where h, ht1 ∈ H^I. We know already that p(t1|h) = p_TE(s_{t1}|s_h). To calculate p(t2|ht1) note that
there must be a set of bridging tests B_{t2} which have t2 as a prefix: B_{t2} = {b ∈ T^B : b^[1...|t2|] = t2}.
The probability of seeing t2 is the probability of seeing any of the bridging tests in B_{t2}. Thus,
at the history of interest ht1, p(t2|ht1) = Σ_{b∈B_{t2}} p(b|ht1) = Σ_{b∈B_{t2}} p_TE(b|s_h s_{t1}). So, we let
S_t = {s_{t1} b : b ∈ B_{t2}}, which gives us the result.
Since tests of interest are union tests, to make the prediction of interest p(T|h) for some T ∈ T^I
and h ∈ H^I using a model of TE, we have simply p(T|h) = p_TE(S_T|s_h) = Σ_{t∈T} p_TE(S_t|s_h).
A model of TE is simpler than a complete model of the system because it only makes predictions
at histories of interest. However, it still makes predictions for all tests. We can further simplify our
modeling task by focusing on predicting the tests of interest.
Incorporating Tests of Interest: Recall Model A from our example. Since all histories are of
interest, bridging tests are single observations, and TE is exactly equivalent to the original system.
However, note that in order to make the predictions of interest, one must only know whether the ball
is neighboring or on the pixel. So, we need only distinguish observations in which the ball is nearby,
and we can group the rest into one abstract observation: "the ball is far from the pixel."
In general we will attempt to abstract away unnecessary details of bridging tests by aliasing bridging
tests that are equivalent with respect to making the predictions of interest. Specifically, we will
define a partition, or a many-to-one mapping, from TE observations (the bridging tests T^B) to
abstract observations A. We will then use a model of the abstract system with A as its observations
(see Figure 2) as our local model. So, A must have the following properties: (1) we must be able
to express the tests of interest as a union of sequences of abstract observations in A and (2) an
abstracted history must contain enough detail to make accurate predictions for the tests of interest.
Let us first consider how to satisfy (1). For ease of exposition, we will discuss a special case. We
assume that tests of interest are unions of one-step tests (i.e., for any T ∈ T^I, T ⊆ O) and that
T^I partitions O, so every observation is contained within exactly one test of interest. One natural
example that satisfies this assumption is where the local model makes one-step predictions for a
particular dimension of a vector-valued observation. There is no fundamental barrier to treating tests
of interest that are arbitrary union tests, but the development of the general case is more complex.
Note that if a union test T ⊆ O, then the equivalent TE union test, S_T, consists of every bridging
test that begins with an observation in T. So, if T^I partitions O, then S^I = {S_T : T ∈ T^I} partitions
the bridging tests, T^B, according to their first observation. As such, if we chose A = S^I, or any
refinement thereof, we would satisfy criterion (1). However, S^I may not satisfy (2). For instance,
in our 1D Ball Bounce, in order to make accurate predictions for one pixel it does not suffice to
observe that pixel and ignore the rest. We must also distinguish the color of the neighboring pixels.
This problem was treated explicitly by Talvitie et al. [10]. They define an accurate partition:
Definition 6. An observation abstraction A is accurate with respect to T^I iff for any two primitive
histories h1 = o1...ok and h2 = o′1...o′k such that ∀i, oi and o′i are contained within the same
abstract observation Oi ∈ A, we have p(T|h1) = p(T|h2), ∀T ∈ T^I.
The system we are abstracting is TE, so the observations are bridging tests. We require an accurate
refinement of S^I. Any refinement of S^I satisfies criterion (1). Furthermore, an accurate refinement
is one that only aliases two histories if they result in the same predictions for the tests of interest.
Thus, we can use an abstract history to make exactly the same predictions for the tests of interest that
we would make if we had access to the primitive history. So, an accurate refinement also satisfies
criterion (2). Furthermore, an accurate refinement always exists, because the partition that distinguishes every observation is trivially accurate, though in general we expect to be able to abstract
away some detail. Finally, a model of the abstract system may be far simpler than a model of the
original system or the TE system, and can be no more complex:
Proposition 3. If the linear dimension of a dynamical system is n then the linear dimension of any
local model M, n_M ≤ n_TE ≤ n.
Proof. (Sketch) The rows and columns of the SDM corresponding to an abstraction of TE are linear
combinations of rows and columns of the SDM of TE [10]. So, the rank of the abstract SDM can
be no more than the rank of the SDM for TE.
Learning a local model: We are given tests and histories of interest and an accurate abstraction.
To learn a local model, we first translate the primitive trajectories into TE trajectories using the
histories of interest, and then translate the TE trajectories into abstract trajectories using the accurate
abstraction (as in Figure 2). We can then train any model on the abstracted data. In our experiments,
we use POMDPs [11], PSRs [4], and low-order Markov models as local model representations.
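Putting the two translation steps together with the simplest of these representations, a first-order Markov model over abstract observations, gives a pipeline like the following sketch (it reuses the hypothetical to_te_sequence from the earlier sketch; abstract stands for the given accurate refinement A_M):

```python
from collections import Counter, defaultdict

def train_local_model(trajectories, is_history_of_interest, abstract):
    """Learn a first-order Markov local model over abstract observations.

    abstract(bridging_test) -> abstract observation (the accurate
    refinement). Returns conditional frequency estimates
    p(next abstract obs | current abstract obs)."""
    counts = defaultdict(Counter)
    for traj in trajectories:
        te = to_te_sequence(traj, is_history_of_interest)
        abstract_seq = [abstract(b) for b in te]
        for prev, nxt in zip(abstract_seq, abstract_seq[1:]):
            counts[prev][nxt] += 1
    return {prev: {nxt: c / sum(ctr.values()) for nxt, c in ctr.items()}
            for prev, ctr in counts.items()}
```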
2.2 Combining Local Models
Consider a collection of local models M. Each local model M ∈ M has tests of interest T^I_M,
histories of interest H^I_M, and is an exact model of the abstract system induced by a given accurate
refinement, A_M. At any history h, the set of models M_h = {M ∈ M : h ∈ H^I_M} is available
to make predictions for their tests of interest. However, we may wish to make predictions that are
not specifically of interest to any local model. In that case, we must combine the abstract, coarse
predictions made by individual models into more fine-grained joint predictions. We will make a
modeling assumption that allows us to efficiently combine the predictions of local models:
Definition 7. The local models in M_h are mutually conditionally independent, given h, iff for any
subset {M1, M2, ..., Mk} ⊆ M_h, and any T1 ∈ T^I_{M1}, T2 ∈ T^I_{M2}, ..., Tk ∈ T^I_{Mk}, the prediction of
the intersection is equal to the product of the predictions: p(∩_{i=1}^k Ti | h) = Π_{i=1}^k p(Ti | h).
A domain expert specifying the structure of a collection of local models should strive to satisfy
this property as best as possible since, given this assumption, a collection of local models can be
used to make many more predictions than can be made by each individual model. We can compute
the predictions of finer-grained tests (intersections of tests of interest) by multiplying predictions
together. We can also compute the predictions of unions of tests of interest using the standard
formula: Pr(A ∪ B) = Pr(A) + Pr(B) − Pr(A ∩ B). At any history h for which M_h ≠ ∅, a
collection of local models can be used to make predictions for any union test that can be constructed
by unioning/intersecting the tests of interest of the models in M_h. This may not include all tests.
Of course making all predictions may not be practical, or necessary. A collection of local models
can selectively focus on making the most important predictions well, ignoring or approximating less
important predictions to save on representational complexity.
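Under the mutual conditional independence assumption, combining abstract predictions is mechanical; a small sketch (hypothetical names):

```python
def intersection_prediction(local_predictions):
    """p of an intersection of tests of interest, one prediction per local
    model in M_h, under mutual conditional independence: a product."""
    p = 1.0
    for p_i in local_predictions:
        p *= p_i
    return p

def union_prediction_pair(p_a, p_b):
    """p(A or B) via inclusion-exclusion, using the product rule above for
    the intersection term: p(A) + p(B) - p(A)p(B)."""
    return p_a + p_b - intersection_prediction([p_a, p_b])
```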
Of course, a collection of local models can be a complete model. For instance, note that any
model that can make the predictions p(o|h) for every o ∈ O and h ∈ H is a complete model.
This is because every prediction can be expressed in terms of one-step predictions: p(o1...ok|h) =
p(o1|h) p(o2|ho1) ... p(ok|ho1...o_{k−1}). As such, if every one-step test is expressible as an intersection
of tests of interest of models in M_h at every h, then M is a complete model. That said, for a given
M, the mutual conditional independence property may or may not hold. If it does not, predictions
made using M will be approximate, even if each local model in M makes its predictions of interest
exactly. It would be useful, in future work, to explore bounds on the error of this approximation.
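The chain-rule reduction from one-step predictions to arbitrary sequence predictions is likewise easy to sketch (hypothetical interface; one_step(history, obs) returns p(obs|history), e.g. assembled from the local models as above):

```python
def sequence_prediction(one_step, history, test):
    """Chain rule: p(o1 ... ok | h) = prod_i p(o_i | h o1 ... o_{i-1}).

    one_step(history, obs) -> float, the one-step prediction p(obs | history).
    history and test are tuples of observations."""
    h = list(history)
    p = 1.0
    for obs in test:
        p *= one_step(tuple(h), obs)
        h.append(obs)
    return p
```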
When learning a collection of local models in this paper, we assume that tests and histories of interest as well as an accurate refinement for each model are given. We then train each local model
individually on abstract data. This is a fair amount of knowledge to assume as given, though it
is analogous to providing the structure of a graphical model and learning only the distribution parameters, which is common practice. Automatically splitting a system into simple local models is
an interesting, challenging problem, and ripe ground for future research. We hope that casting the
structure learning problem in the light of our framework may illuminate new avenues to progress.
2.3 Relationship to Other Structured Representations
Here we briefly discuss a few especially relevant alternative modeling technologies that also aim to
exploit local and independence structure in dynamical systems.
DBNs: The dynamic Bayes network (DBN) [2] is a representation that exploits conditional independence structure. The main difference between DBNs and our collection of local models is that DBNs
specify independence structure over ?hidden variables? whose values are never observed. Our representation expresses structure entirely in terms of predictions of observations. Thus our structural
assumptions can be verified using statistical tests on the data while DBN assumptions cannot be
directly verified. That said, a DBN does decompose its world state into a set of random variables. It
Table 1: Local model structure for the arcade game

H^I_M: M applies when history ends with: | T^I_M: M makes one-step predictions for: | A_M: M additionally distinguishes bridging tests by:
Ball hitting brick b | Color of 6×4 pixels within b | Type of special bricks hit and type of special brick most recently hit
Ball not hitting brick b | Color of 6×4 pixels within b | None
Ball in position p, coming from direction d | Absence or presence of ball color in 6×6 pixels around p | Configuration of bricks adjacent to p in last step of bridging test
No brick in pixel p and no ball near pixel p | Color of pixel p | None
stores the conditional probability distribution for each variable, given the values in the previous time
step. These distributions are like local models that make one-step predictions about their variable.
For each variable, a DBN also specifies which other variables can be ignored when predicting its
next value. This is essentially our accurate refinement, which identifies details a local model can
ignore. Histories of interest are related to the concept of context-specific independence [12].
Relational Models: Relational models (e.g. [3]) treat the state of the world as a conjunction of
predicates. The state evolves using "update rules," consisting of pre-conditions specifying when the
rule applies and post-conditions (changes to the state). Update rules are essentially local models
with pre and post-conditions playing the roles of histories and tests of interest. Relational models
typically focus on Markov worlds. We address partial observability by essentially generalizing the
"update rule." The main strength of relational models is that they include first-order variables in
update rules, allowing for sophisticated parameter tying and generalization. We use parameter tying
in our experiments, but do not incorporate the formalism of variables into our framework.
Others: Wolfe and Singh recently introduced the Factored PSR [13] which is essentially a special
collection of local models. Also related are maximum entropy models (e.g. [14], [15]) which
represent predictions as weighted products of features of the future and the past.
3 Experimental Results
Large Scale Example: In this section we present preliminary empirical results illustrating the application of collections of local models.
Our first example is a modified, uncontrolled version of an arcade game (see Figure 3). The observations are 64×42 pixel images. In the image is a 2×2 pixel ball and a wall of 6×4 pixel bricks. After the ball hits a brick, the brick disappears. When the ball hits the bottom wall, it bounces at a randomly selected angle. An episode ends when there are no more bricks. In our version there are two types of "special bricks."
[Figure 3: Arcade game]
After the ball hits a dark brick, all bricks require two hits rather than
one to break. After the ball hits a light brick, all bricks require only one hit to break. When they
are first placed, bricks are regular (medium gray) with probability 0.9 and dark or light each with
probability 0.05. This system is stochastic, partially observable (and because of the special bricks,
not short-order Markov). It has roughly 10^20 observations and even more underlying states.
The decomposition into local models is specified in Table 1.¹ Quite naturally, we have local models
to predict how the bricks (rows 1-2), the ball (row 3), and the background (row 4) will behave. This
structure satisfies the mutual conditional independence property, and since every pixel is predicted
by some model at every history, we can make fully detailed 64×42 pixel one-step predictions.
More or less subdivision of models could be applied, the tradeoff being the complexity of individual
models versus the total number of local models. With the structure we have selected there are approximately 25,000 local models. Of course, naively training 25,000 models is impractical. We can
improve our data efficiency and training time through parameter tying. In this system, the behavior
of objects does not depend on their position. To take advantage of this, for each type of local model
1
Note: there are 30 bricks b, 2,688 pixels p, 2,183 possible positions p for the ball, and 9 possible directions
d the ball could come from, including the case in the first step, where the ball simply appears in a pixel.
[Figure 5: Left: Results for the 1D Ball Bounce problem. Two panels (size 5 and size 20) plot average likelihood ratio against the number of training episodes (up to 10000), with curves for Local POMDP, Local PSR, DBN, POMDP, and PSR. Error bars are omitted to avoid graph clutter. Right: DBN structure used. All nodes are binary. The shaded nodes are hidden. Links from "Vel." at t−1 to all nodes at t omitted for simplicity.]
(12 in total, since there is a ball model for each of the 9 directions) we combine all translated trajectories associated with various positions and use them to train a single shared model. Each local
model maintains its own state, but the underlying model parameters are shared across all models of
the same type, associated with different positions. Note that position does matter in the first time
step, since the ball always appears in the same place. As a result, our model makes bad predictions
about the first time step. For clarity of presentation, we will ignore the first time-step in our results.
For the local models themselves, we used lookup table based short-order Markov representations.
Though the overall system is not short-order Markov, each local model is. Our learned local models
were first-order Markov except the one responsible for predicting what will happen to a brick when
the ball hits it. This model was second-order Markov. No local model had more than 200 states.
The learning curve for this collection of local models can be seen in Figure 4. In each trial we train the models on various numbers of episodes (ending when there are no more bricks, or after 1000 steps) and measure the likelihood w.r.t. 50 test episodes. We report the average over 20 trials. Even with parameter tying, our model can assign zero probability to a test sequence, due to data sparsity issues. The solid line shows the likelihood ratio (the log likelihood of the true system divided by the log likelihood of our model) ignoring the episodes that caused an infinite log likelihood. The dashed line shows the proportion of episodes we dropped. The likelihood ratio approaches 1 while the proportion of "bad" episodes approaches 0, implying that we are learning a good model in about 100 episodes.
[Figure 4: Results for the arcade game example. Average likelihood ratio (solid) and average % of episodes dropped (dashed) against the number of training trajectories (up to 250).]
Learning Comparisons: In this experiment, we will compare parameter learning results for collections of local models to a few other methods on a simple example, whose complexity is easily
controlled. Recall the 1D Ball Bounce. We learned a model of the 1D Ball Bounce of size 5 and 20
using two collections of local models with no parameter tying (using PSRs and POMDPs as local
models respectively), two flat models (a PSR and a POMDP), and a DBN.²
Both collections of local models have the following structure: for every pixel, there are two types
of model. One predicts the color of the pixel in the next time step in histories when the ball is not
in the immediate neighborhood about the pixel. This model ignores all pixels other than the one it
is predicting. The other model applies when the ball is in the pixel. It jointly predicts the colors of
the pixel and its two neighbors. This model distinguishes bridging tests in which the ball went to
the left, the right, or stayed on the pixel in the first step. This collection of local models satisfies the
mutual conditional independence property and allows prediction of primitive one-step tests.
As with the arcade game example, in each trial we trained each model on various numbers of
episodes (of length 50) and then measured their log likelihood on 1000 test episodes (also of length
2
We initialized each local POMDP with 5 states and the flat POMDP with 10 and 40 states for the different problem sizes. For the DBN we used the graphical structure shown in Figure 5 (right) and trained using the
Graphical Models Toolkit [16]. We stopped EM after a maximum of 50 iterations. PSR training also has a free
parameter (see [17] for details). Via parameter sweep we chose 0.02 for local PSRs and for the flat PSR 0.175
and 0.005, respectively for the size 5 and size 20 domains.
50). We report the likelihood ratio averaged over 20 trials. The results are shown in Figure 5. The
collections of local models both perform well, outperforming the flat models (dashed lines). Both
of the flat models? performance degrades as the size of the world increases from 5 to 20. The collections of local models are less affected by problem size. The local PSRs seem to take more data than
the local POMDPs to learn a good model, however they ultimately seem to learn a better model.
The unexpected result is that DBN training seemed to perform worse than flat POMDP training. We
have no explanation for this effect, other than the fact that different graphical structures could cause
different local extrema issues for the EM algorithm. Clearly, given these results, a more thorough
empirical comparison across a wider variety of problems is warranted.
Conclusions: We have presented a novel formalization of the idea of a "local model." Preliminary
empirical results show that collections of local models can be learned for large-scale systems and
that the data complexity of parameter learning compares favorably to that of other representations.
Acknowledgments
Erik Talvitie was supported under the NSF GRFP. Satinder Singh was supported by NSF grant IIS0413004. Any opinions, findings, and conclusions or recommendations expressed in this material
are those of the authors and do not necessarily reflect the views of the NSF.
References
[1] Lise Getoor, Nir Friedman, Daphne Koller, and Benjamin Taskar. Learning probabilistic models of relational structure. Journal of Machine Learning Research, 3:679–707, 2002.
[2] Zoubin Ghahramani and Michael I. Jordan. Factorial hidden Markov models. In Advances in Neural Information Processing Systems 8 (NIPS), pages 472–478, 1995.
[3] Hanna M. Pasula, Luke S. Zettlemoyer, and Leslie Pack Kaelbling. Learning symbolic models of stochastic domains. Journal of Artificial Intelligence Research, 29:309–352, 2007.
[4] Michael Littman, Richard Sutton, and Satinder Singh. Predictive representations of state. In Advances in Neural Information Processing Systems 14 (NIPS), pages 1555–1561, 2002.
[5] Herbert Jaeger. Observable operator models for discrete stochastic time series. Neural Computation, 12(6):1371–1398, 2000.
[6] Satinder Singh, Michael R. James, and Matthew R. Rudary. Predictive state representations: A new theory for modeling dynamical systems. In Uncertainty in Artificial Intelligence 20 (UAI), pages 512–519, 2004.
[7] Richard Sutton, Doina Precup, and Satinder Singh. Between MDPs and semi-MDPs: A framework for temporal abstraction in reinforcement learning. Artificial Intelligence, 112:181–211, 1999.
[8] Alicia Peregrin Wolfe and Andrew G. Barto. Decision tree methods for finding reusable MDP homomorphisms. In National Conference on Artificial Intelligence 21 (AAAI), 2006.
[9] Vishal Soni and Satinder Singh. Abstraction in predictive state representations. In National Conference on Artificial Intelligence 22 (AAAI), 2007.
[10] Erik Talvitie, Britton Wolfe, and Satinder Singh. Building incomplete but accurate models. In International Symposium on Artificial Intelligence and Mathematics (ISAIM), 2008.
[11] George E. Monahan. A survey of partially observable Markov decision processes: Theory, models, and algorithms. Management Science, 28(1):1–16, 1982.
[12] Craig Boutilier, Nir Friedman, Moises Goldszmidt, and Daphne Koller. Context-specific independence in Bayesian networks. In Uncertainty in Artificial Intelligence 12 (UAI), pages 115–123, 1996.
[13] Britton Wolfe, Michael James, and Satinder Singh. Approximate predictive state representations. In Autonomous Agents and Multiagent Systems 7 (AAMAS), 2008.
[14] Adam Berger, Stephen Della Pietra, and Vincent Della Pietra. A maximum entropy approach to natural language processing. Computational Linguistics, 22(1):39–71, 1996.
[15] David Wingate and Satinder Singh. Exponential family predictive representations of state. In Advances in Neural Information Processing Systems 20 (NIPS), pages 1617–1624, 2007.
[16] Jeff Bilmes. The graphical models toolkit (GMTK), 2007. http://ssli.ee.washington.edu/~bilmes/gmtk.
[17] Michael James and Satinder Singh. Learning and discovery of predictive state representations in dynamical systems with reset. In International Conference on Machine Learning 21 (ICML), 2004.
MDPs with Non-Deterministic Policies
Mahdi Milani Fard
School of Computer Science
McGill University
Montreal, Canada
[email protected]
Joelle Pineau
School of Computer Science
McGill University
Montreal, Canada
[email protected]
Abstract
Markov Decision Processes (MDPs) have been extensively studied and used in the
context of planning and decision-making, and many methods exist to find the optimal policy for problems modelled as MDPs. Although finding the optimal policy
is sufficient in many domains, in certain applications such as decision support systems where the policy is executed by a human (rather than a machine), finding all
possible near-optimal policies might be useful as it provides more flexibility to
the person executing the policy. In this paper we introduce the new concept of
non-deterministic MDP policies, and address the question of finding near-optimal
non-deterministic policies. We propose two solutions to this problem, one based
on a Mixed Integer Program and the other one based on a search algorithm. We
include experimental results obtained from applying this framework to optimize
treatment choices in the context of a medical decision support system.
1 Introduction
Markov Decision Processes (MDPs) have been extensively studied in the context of planning and
decision-making. In particular, MDPs have emerged as a useful framework for optimizing action
choices in the context of medical decision support systems [1, 2, 3, 4]. Given an adequate MDP
model (or data source), many methods can be used to find a good action-selection policy. This policy is usually a deterministic or stochastic function [5]. But policies of these types face a substantial
barrier in terms of gaining acceptance from the medical community, because they are highly prescriptive and leave little room for the doctor's input. In such cases, where the actions are executed
by a human, it may be preferable to instead provide several (near-)equivalently good action choices,
so that the agent can pick among those according to his or her own heuristics and preferences. 1
To address this problem, this paper introduces the notion of a non-deterministic policy 2 , which is
a function mapping each state to a set of actions, from which the acting agent can choose. We aim
for this set to be as large as possible, to provide freedom of choice to the agent, while excluding
any action that is significantly worse than optimal. Unlike stochastic policies, here we make no
assumptions regarding which action will be executed. This choice can be based on the doctor's
qualitative assessment, patient's preferences, or availability of treatment.
While working with non-deterministic policies, it is important to ensure that by adding some freedom of choice to the policy, the worst-case expected return of the policy is still close enough to the
optimal value. We address this point by providing guarantees on the expected return of the non-deterministic policy. We define a set of optimization problems to find such a policy and provide
two algorithms to solve this problem. The first is based on a Mixed Integer Program formulation,
which provides the best solution?in the sense of maximizing the choice of action, while remaining
1
This is especially useful given that human preferences are often difficult to quantify objectively, and thus
difficult to incorporate in the reward function.
2
Borrowing the term "non-deterministic" from the theory of computation, as opposed to deterministic or
stochastic actions.
within an allowed performance-loss threshold, but with high computational cost. Then we describe
a simple search algorithm that can be much more efficient in some cases.
The main contributions of this work are to introduce the concept of non-deterministic policies, provide solution methods to compute such policies, and demonstrate the usefulness of this new model
for providing acceptable solutions in medical decision support systems. From a practical perspective,
we aim to improve the acceptability of MDP-based decision-support systems.
2 Non-Deterministic Policies
In this section, we formulate the concept of non-deterministic policies and provide some definitions
that are used throughout the paper.
An MDP M = (S, A, T, R) is defined by a set of states S, a function A(s) mapping each state to a
set of actions, a transition function T(s, a, s′) defined as:
T(s, a, s′) = p(s_{t+1} = s′ | s_t = s, a_t = a),  ∀s, s′ ∈ S, a ∈ A(s),    (1)
and a reward function R(s, a) : S × A → [R_min, R_max]. Throughout the paper we assume finite
state, finite action, discounted reward MDPs, with the discount factor denoted by γ.
A deterministic policy is a function from states to actions. The optimal deterministic policy is the
policy that maximizes the expected discounted sum of rewards (Σ_t γ^t r_t) if the agent acts according
to that policy. The value of a state-action pair (s, a) according to the optimal deterministic policy
on an MDP M = (S, A, T, R) satisfies the Bellman optimality equation [6]:
Q*_M(s, a) = R(s, a) + γ Σ_{s′} T(s, a, s′) max_{a′ ∈ A(s′)} Q*_M(s′, a′).    (2)
We further define the optimal value of state s, denoted by V*_M(s), to be max_{a ∈ A(s)} Q*_M(s, a).
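For reference, Q* of a finite MDP can be computed by standard value iteration; a minimal sketch (our own code, assuming a common action set A(s) = A for all states and arrays indexed [state, action]):

```python
import numpy as np

def optimal_q(T, R, gamma, iters=1000):
    """Value iteration for Q* on a finite MDP (Eqn 2).
    T: array [S, A, S] of transition probabilities.
    R: array [S, A] of rewards."""
    n_states, n_actions, _ = T.shape
    Q = np.zeros((n_states, n_actions))
    for _ in range(iters):
        V = Q.max(axis=1)        # V*(s') = max_{a'} Q*(s', a')
        Q = R + gamma * (T @ V)  # T @ V sums over next states s'
    return Q
```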
A non-deterministic policy is a function that maps each state s to a non-empty set of actions
denoted by Π(s) ⊆ A(s). The agent can choose to do any action a ∈ Π(s) whenever the MDP is in
state s. Here we will provide a worst-case analysis, presuming that the agent may choose the worst
action in each state.
The value of a state-action pair (s, a) according to a non-deterministic policy Π on an MDP M =
(S, A, T, R) is given by the recursive definition:
Q^Π_M(s, a) = R(s, a) + γ Σ_{s′} T(s, a, s′) min_{a′ ∈ Π(s′)} Q^Π_M(s′, a′),    (3)
which is the worst-case expected return under the allowed set of actions. We define the value of state
s according to a non-deterministic policy Π, denoted by V^Π_M(s), to be min_{a ∈ Π(s)} Q^Π_M(s, a).
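Eqn 3 is a fixed point that can be computed by a min-backup variant of value iteration; a sketch under the same assumptions and conventions as the previous one:

```python
import numpy as np

def nondeterministic_policy_value(T, R, gamma, policy, iters=1000):
    """Worst-case Q and V for a non-deterministic policy (Eqn 3).
    policy: list of non-empty sets, policy[s] = allowed actions in state s."""
    n_states, n_actions, _ = T.shape
    Q = np.zeros((n_states, n_actions))
    for _ in range(iters):
        # V(s') is the worst allowed action's value in s'.
        V = np.array([min(Q[s, a] for a in policy[s]) for s in range(n_states)])
        Q = R + gamma * (T @ V)
    V = np.array([min(Q[s, a] for a in policy[s]) for s in range(n_states)])
    return Q, V
```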
To calculate the value of a non-deterministic policy, we construct an MDP M′ = (S′, A′, R′, T′)
where S′ = S, A′ = Π, R′ = −R and T′ = T. It is straight-forward to show that:
Q^Π_M(s, a) = −Q*_{M′}(s, a).    (4)
A non-deterministic policy Π is said to be augmented with state-action pair (s, a), denoted by Π′ =
Π + (s, a), if it satisfies:
Π′(s′) = Π(s′) if s′ ≠ s, and Π′(s′) = Π(s′) ∪ {a} if s′ = s.    (5)
If a policy Π can be achieved by a number of augmentations from a policy Π′, we say that Π
includes Π′. The size of a policy Π, denoted by |Π|, is the sum of the cardinalities of the action sets
in Π: |Π| = Σ_s |Π(s)|.
A non-deterministic policy Π is said to be non-augmentable according to a given constraint if and only
if Π satisfies the constraint, and for any state-action pair (s, a), Π + (s, a) does not satisfy it. In this paper we
will be working with constraints that have this particular property: if a policy Π does not satisfy the constraint,
any policy that includes Π does not satisfy it either. We will refer to such constraints as being monotonic.
A non-deterministic policy Π on an MDP M is said to be ε-optimal (ε ∈ [0, 1]) if we have:³
V^Π_M(s) ≥ (1 − ε) V*_M(s),  ∀s ∈ S.    (6)
This can be thought of as a constraint on the space of non-deterministic policies which makes sure
that the worst-case expected return is within some range of the optimal value. It is straightforward
to show that this constraint is monotonic.
A conservative ε-optimal non-deterministic policy Π on an MDP M is a policy that is non-augmentable according to this constraint:
R(s, a) + γ Σ_{s′} T(s, a, s′) (1 − ε) V*_M(s′) ≥ (1 − ε) V*_M(s),  ∀a ∈ Π(s).    (7)
This constraint indicates that we only add those actions to the policy whose reward plus (1 − ε)
of the future optimal return is within the sub-optimal margin. This ensures that the non-deterministic
policy is ε-optimal by using the inequality:
Q^Π_M(s, a) ≥ R(s, a) + γ Σ_{s′} T(s, a, s′) (1 − ε) V*_M(s′),    (8)
instead of solving Eqn 3 and using the inequality constraint in Eqn 6. Applying Eqn 7 guarantees
that the non-deterministic policy is ε-optimal while it may still be augmentable according to Eqn 6,
hence the name conservative. It can also be shown that the conservative policy is unique.
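Since Eqn 7 can be checked directly once Q* and V* are known, the conservative policy is cheap to construct; a sketch (reusing the hypothetical optimal_q from the value-iteration sketch above):

```python
import numpy as np

def conservative_policy(T, R, gamma, epsilon, iters=1000):
    """Conservative eps-optimal non-deterministic policy via Eqn 7:
    keep action a in state s iff
    R(s,a) + gamma * sum_s' T(s,a,s') (1-eps) V*(s') >= (1-eps) V*(s)."""
    Q = optimal_q(T, R, gamma, iters)            # from the earlier sketch
    V = Q.max(axis=1)
    lhs = R + gamma * (T @ ((1 - epsilon) * V))  # shape [S, A]
    keep = lhs >= (1 - epsilon) * V[:, None]
    return [set(np.flatnonzero(keep[s])) for s in range(T.shape[0])]
```

By construction, every action set returned satisfies Eqn 7, so the resulting policy is ε-optimal in the worst case.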
A non-augmentable ε-optimal non-deterministic policy Π on an MDP M is a policy that is not
augmentable according to the constraint in Eqn 6. It is easy to show that any non-augmentable
ε-optimal policy includes the conservative policy. However, non-augmentable ε-optimal policies
are not necessarily unique. In this paper we will focus on a search problem in the space of
non-augmentable ε-optimal policies, trying to maximize some criteria. Specifically, we will be trying
to find non-deterministic policies that give the acting agent more options while staying within an
acceptable sub-optimal margin.
We now present an example that clarifies the concepts introduced so far. To simplify drawing graphs
of the MDP and policies, we assume deterministic transitions in this example. However the concepts
apply to any probabilistic MDP as well. Fig 1 shows a sample MDP. The labels on the arcs show
action names and the corresponding rewards are shown in the parentheses. We assume γ ≈ 1 and
ε = 0.05. Fig 2 shows the optimal policy of this MDP. The conservative ε-optimal non-deterministic
policy of this MDP is shown in Fig 3.
[Figures 1–3 show the example MDP and policies as graphs over states S1–S5, with arcs labeled by
action name and immediate reward, e.g., a(0), b(−3), a(100), b(99).]
Figure 1: Example MDP
Figure 2: Optimal policy
Figure 3: Conservative policy
Fig 4 includes two possible non-augmentable ε-optimal policies. Although both policies in Fig 4 are
ε-optimal, the union of these is not ε-optimal. This is due to the fact that adding an option to one
of the states removes the possibility of adding options to other states, which illustrates why local
changes are not always appropriate when searching in the space of ε-optimal policies.
³In some of the MDP literature, ε-optimality is defined as an additive constraint (Q^Π_M ≥ Q*_M − ε) [7]. The
derivations will be analogous in that case.
[Figure 4 shows two alternative policy graphs over states S1–S5, each including a different subset of
the b(−3) and b(99) options.]
Figure 4: Two non-augmentable policies
3 Optimization Problem
We formalize the problem of finding an ε-optimal non-deterministic policy in terms of an optimization
problem. There are several optimization criteria that can be formulated, while still complying
with the ε-optimal constraint. Notice that the last two problems can be defined both in the space of
all ε-optimal policies or only the non-augmentable ones.
• Maximizing the size of the policy: According to this criterion, we seek non-augmentable
ε-optimal policies that have the biggest overall size. This provides more options to the
agent while still keeping the ε-optimal guarantees. The algorithms proposed in this paper
use this optimization criterion. Notice that the solution to this optimization problem is
non-augmentable according to the ε-optimal constraint, because it maximizes the overall size of
the policy.
• Maximizing the margin: We aim to maximize the margin of a non-deterministic policy Π:

    Φ_M(Π) = min_s min_{a ∈ Π(s), a' ∉ Π(s)} (Q(s, a) − Q(s, a')).    (9)
This optimization criterion is useful when one wants to find a clear separation between the
good and bad actions in each state.
• Minimizing the uncertainty: If we learn the models from data we will have some uncertainty
about the optimal action in each state. We can use some variance estimation on the
value function [8] along with a Z-Test to get some confidence level on our comparisons and
find the probability of having the wrong order when comparing actions according to their
values. Let Q be the value of the true model and Q̂ be our empirical estimate based on
some dataset D. We aim to minimize the uncertainty of a non-deterministic policy Π:

    Θ_M(Π) = max_s max_{a ∈ Π(s), a' ∉ Π(s)} p(Q(s, a) < Q(s, a') | D).    (10)

4 Solving the Optimization Problem
In the following sections we provide algorithms to solve the first optimization problem mentioned
above, which aims to maximize the size of the policy. We focus on this criterion as it seems most
appropriate for medical decision support systems, where it is desirable for the acceptability of the
system to find policies that provide as much choice as possible for the acting agent. We first present
a Mixed Integer Program formulation of the problem, and then present a search algorithm that uses
the monotonic property of the ε-optimal constraint. While the MIP method is useful as a general
formulation of the problem, the search algorithm has potential for further extensions with heuristics.
4.1 Mixed Integer Program
Recall that we can formulate the problem of finding the optimal deterministic policy on an MDP as
a simple linear program [5]:

    min_V α^T V,  subject to
    V(s) ≥ R(s, a) + γ Σ_{s'} T(s, a, s') V(s')  ∀s, a,    (11)
where α can be thought of as the initial distribution over the states. The solution to the above problem
is the optimal value function (denoted by V*). Similarly, having computed V* using Eqn 11, the
problem of a search for an optimal non-deterministic policy according to the size criterion can be
rewritten as a Mixed Integer Program:⁴
    max_{V,Π}  −α^T V + (V_max − V_min) e_s^T Π e_a,  subject to
    V(s) ≥ (1 − ε) V*(s)                                              ∀s
    Σ_a Π(s, a) > 0                                                   ∀s
    V(s) ≤ R(s, a) + γ Σ_{s'} T(s, a, s') V(s') + V_max (1 − Π(s, a))  ∀s, a.    (12)
Here we are overloading the notation Π to define a binary matrix representing the policy. Π(s, a)
is 1 if a ∈ Π(s), and 0 otherwise. We define V_max = R_max/(1 − γ) and V_min = R_min/(1 − γ).
e's are column vectors of 1 with the appropriate dimensions. The first set of constraints makes sure
that we stay within ε of the optimal return. The second set of constraints ensures that at least one
action is selected per state. The third set ensures that for those state-action pairs that are chosen
in any policy, the Bellman constraint holds, and otherwise, the constant V_max makes the constraint
trivial. Notice that the solution to the above maximizes |Π| and the result is non-augmentable. As a
counter argument, suppose that we could add a state-action pair to the solution Π, while still staying
in the ε sub-optimal margin. By adding that pair, the objective function is increased by (V_max − V_min),
which is bigger than any possible decrease in the α^T V term, and thus the objective is improved,
which conflicts with Π being the solution.
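As an illustration only, Eqn 12 can be written down with an off-the-shelf solver wrapper. The sketch below uses the PuLP library, which is our assumption rather than the MATLAB/CPLEX setup of Section 5; the ≥ 1 form of the second constraint replaces the strict inequality, which is equivalent for binary variables.

    import pulp

    def mip_policy(R, T, gamma, eps, V_star, alpha):
        # R: (S, A) rewards, T: (S, A, S) transitions, V_star: optimal values,
        # alpha: initial state distribution.  A sketch of the MIP in Eqn 12.
        S, A = R.shape
        Vmax, Vmin = R.max() / (1 - gamma), R.min() / (1 - gamma)
        prob = pulp.LpProblem("nondeterministic_policy", pulp.LpMaximize)
        V = [pulp.LpVariable(f"V_{s}", Vmin, Vmax) for s in range(S)]
        Pi = [[pulp.LpVariable(f"Pi_{s}_{a}", cat="Binary") for a in range(A)]
              for s in range(S)]
        # objective: -alpha^T V + (Vmax - Vmin) * |Pi|
        prob += (-pulp.lpSum(alpha[s] * V[s] for s in range(S))
                 + (Vmax - Vmin) * pulp.lpSum(Pi[s][a] for s in range(S) for a in range(A)))
        for s in range(S):
            prob += V[s] >= (1 - eps) * V_star[s]                  # stay eps-optimal
            prob += pulp.lpSum(Pi[s][a] for a in range(A)) >= 1    # >= one action per state
            for a in range(A):
                # Bellman constraint when (s, a) is chosen; Vmax makes it trivial otherwise
                prob += (V[s] <= R[s, a]
                         + gamma * pulp.lpSum(T[s, a, sp] * V[sp] for sp in range(S))
                         + Vmax * (1 - Pi[s][a]))
        prob.solve()
        return [[int(Pi[s][a].value()) for a in range(A)] for s in range(S)]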
We can use any MIP solver to solve the above problem. Note however that we do not make use of
the monotonic nature of the constraints. A general purpose MIP solver could end up searching in the
space of all the possible non-deterministic policies, which would require exponential running time.
4.2 Search Algorithm
We can make use of the monotonic property of the ε-optimal constraint to narrow down the search. We
start by computing the conservative policy. We then augment it until we arrive at a non-augmentable
policy. We make use of the fact that if a policy is not ε-optimal, neither is any other policy that
includes it, and thus we can cut the search tree at this point.
The following algorithm is a one-sided recursive depth-first-search-like algorithm that searches in
the space of plausible non-deterministic policies to maximize a function g(Π). Here we assume that
there is an ordering on the set of state-action pairs {p_i} = {(s_j, a_k)}. This ordering can be chosen
according to some heuristic along with a mechanism to cut down some parts of the search space. V*
is the optimal value function and the function V returns the value of the non-deterministic policy
that can be calculated by minimizing Equation 3.
Function getOptimal(Π, startIndex, ε)
    Πo ← Π
    for i ← startIndex to |S||A| do
        (s, a) ← p_i
        if a ∉ Π(s) and V(Π + (s, a)) ≥ (1 − ε)V* then
            Π' ← getOptimal(Π + (s, a), i + 1, ε)
            if g(Π') > g(Πo) then
                Πo ← Π'
            end
        end
    end
    return Πo
We should make a call to the above function passing in the conservative policy Π_m and starting from
the first state-action pair: getOptimal(Π_m, 0, ε).
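A direct Python rendering of this search, under our own naming conventions (policy_value solves Eqn 3 and returns the state-value vector, g is the criterion being maximized, pairs is the ordered list of state-action pairs p_i):

    import numpy as np

    def augment(Pi, s, a):
        # Pi + (s, a) from Eqn 5: copy the policy and add action a at state s.
        Pi2 = {st: set(acts) for st, acts in Pi.items()}
        Pi2[s].add(a)
        return Pi2

    def get_optimal(Pi, start_index, eps, pairs, V_star, policy_value, g):
        best = Pi
        for i in range(start_index, len(pairs)):
            s, a = pairs[i]
            if a not in Pi[s]:
                candidate = augment(Pi, s, a)
                # prune: if the augmented policy is not eps-optimal, neither is
                # any policy that includes it (monotonicity)
                if np.all(policy_value(candidate) >= (1.0 - eps) * V_star):
                    candidate = get_optimal(candidate, i + 1, eps, pairs,
                                            V_star, policy_value, g)
                    if g(candidate) > g(best):
                        best = candidate
        return best

    # e.g. g = lambda Pi: sum(len(acts) for acts in Pi.values())   # |Pi|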
The asymptotic running time of the above algorithm is O((|S||A|)^d (t_m + t_g)), where d is the maximum size of an ε-optimal policy minus the size of the conservative policy, t_m is the time to solve the
original MDP and t_g is the time to calculate the function g. Although the worst-case running time is
still exponential in the number of state-action pairs, the run-time is much less when the search space
is sufficiently small. The |A| term is due to the fact that we check all possible augmentations for
⁴Note that in this MIP, unlike the standard LP for MDPs, the choice of α can affect the solution in cases
where there is a tie in the size of Π.
each state. Note that this algorithm searches in the space of all ε-optimal policies rather than only
the non-augmentable ones. If we set function g(Π) = |Π|, then the algorithm will return the biggest
non-augmentable ε-optimal policy.
This search can be further improved by using heuristics to order the state-action pairs and prune the
search. One can also start the search from any other policy rather than the conservative policy. This
can be potentially useful if we have further constraints on the problem. One way to narrow down
the search is to only add the action that has the maximum value for any state s:
    Π' = Π + (s, argmax_a Q(s, a)).    (13)
This leads to a running time of O(|S|^d (t_m + t_g)). However this does not guarantee that we see all
non-augmentable policies. This is due to the fact that after adding an action, the order of values
might change. If the transition structure of the MDP contains no loop with non-zero probability
(transition graph is directed acyclic, DAG), then this heuristic will produce the optimal result while
cutting down the search time. In other cases, one might do a partial evaluation of the augmented
policy to approximate the value after adding the actions, possibly by doing a few backups rather
than using the original Q values. This offers the possibility of trading-off computation time for
better solutions.
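A sketch of the candidate-generation step implied by Eqn 13, under our own names; tie-breaking and the partial-evaluation refinement mentioned above are omitted:

    import numpy as np

    def greedy_candidates(Pi, Q):
        # For each state, the only augmentation considered is the
        # highest-valued action not yet in the policy (Eqn 13 heuristic).
        candidates = []
        S, A = Q.shape
        for s in range(S):
            mask = np.array([a not in Pi[s] for a in range(A)])
            if mask.any():
                a = int(np.argmax(np.where(mask, Q[s], -np.inf)))
                candidates.append((s, a))
        return candidates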
5 Empirical Evaluation
To evaluate our proposed algorithms, we first test both the MIP and search formulations on MDPs
created randomly, and then test the search algorithm on a real-world treatment design scenario.
To begin, we generated random MDPs with 5 states and 4 actions. The transitions are deterministic
(chosen uniformly at random) and the rewards are random values between 0 and 1, except for one of
the states with reward 10 for one of the actions; γ was set to 0.95. The MIP method was implemented
with MATLAB and CPLEX. Fig 5 shows the solution to the MIP defined in Eqn 12 for a
particular randomly generated MDP. We see that the size of the non-deterministic policy increases as
the performance threshold is relaxed.
[Figure 5 shows the four resulting policy graphs over states S1–S5, with edge labels such as "3, 9.9"
and "2, 0.7"; the policy grows as ε increases.]
Figure 5: MIP solution for different values of ε ∈ {0, 0.01, 0.02, 0.03}. The labels on the edges are
action indices, followed by the corresponding immediate rewards.
To compare the running time of the MIP solver and the search algorithm, we constructed random
MDPs as described above with more state-action pairs. Fig 6 Left shows the running time averaged
over 20 different random MDPs, assuming ε = 0.01. It can be seen that both algorithms have
Figure 6: Left: Running time of MIP and search algorithm as a function of the number of state-action
pairs. Right: Average percentage of state-action pairs that were different in the noisy policy.
exponential running time. The running time of the search algorithm has a bigger constant factor, but
has a smaller exponent base which results in a faster asymptotic running time.
To study how stable non-deterministic policies are to potential noise in the models, we check to see
how much the policy changes when Gaussian noise is added to the reward function. Fig 6 Right
shows the percentage of the total state-action pairs that were either added or removed from the
resulting policy by adding noise to the reward model (we assume a constant ε = 0.02). We see that
the resulting non-deterministic policy changes somewhat, but not drastically, even with noise level
of similar magnitude as the reward function.
Next, we implemented the full search algorithm on an MDP constructed for a medical decision-making task involving real patient data. The data was collected as part of a large (4000+ patients)
multi-step randomized clinical trial, designed to investigate the comparative effectiveness of different treatments provided sequentially for patients suffering from depression [9]. The goal is to find
a treatment plan that maximizes the chance of remission. The dataset includes a large number of
measured outcomes. For the current experiment, we focus on a numerical score called the Quick
Inventory of Depressive Symptomatology (QIDS), which was used in the study to assess levels of
depression (including when patients achieved remission). For the purposes of our experiment, we
discretize the QIDS scores (which range from 5 to 27) uniformly into quartiles, and assume that
this, along with the treatment step (up to 4 steps were allowed), completely describes the patient's
state. Note that the underlying transition graph can be treated as a DAG because the study is limited
to four steps of treatment. There are 19 actions (treatments) in total. A reward of 1 is given if the
patient achieves remission (at any step) and a reward of 0 is given otherwise. The transition and
reward models were generated empirically from the data using a frequentist approach.
Table 1: Policy and running time of the full search algorithm on the medical problem

                    ε = 0.02                 ε = 0.015           ε = 0.01        ε = 0
Time (seconds)      118.7                    12.3                3.5             1.4
5 < QIDS < 9        CT, SER                  CT                  CT              CIT+BUP
9 ≤ QIDS < 12       CIT+CT, VEN              CIT+BUS, CIT+BUP    CIT+BUP, VEN    VEN
12 ≤ QIDS < 16      CT, SER, BUP, CIT+BUS    CIT+BUP, CIT+CT     VEN, CIT+BUS    CT
16 ≤ QIDS ≤ 27      CT, CIT+CT               CT, CIT+CT          CT, CIT+CT      CT
Table 1 shows the non-deterministic policy obtained for each state during the second step of the
trial (each acronym refers to a specific treatment). This is computed using the search algorithm,
assuming different values of ε. Although this problem is not tractable with the MIP formulation
(304 state-action pairs), a full search in the space of ε-optimal policies is still possible. Table 1 also
shows the running time of the algorithm, which as expected increases as we relax the threshold ε.
Here we did not use any heuristics. However, as the underlying transition graph is a DAG, we could
use the heuristic discussed in the previous section (Eqn 13) to get the same policies even faster. An
interesting question is how to set ε a priori. In practice, a doctor may use the full table as a guideline,
using smaller values of ε when s/he wants to rely more on the decision support system, and larger
values when relying more on his/her own assessments.
6 Discussion
This paper introduces a framework for computing non-deterministic policies for MDPs. We believe
this framework can be especially useful in the context of decision support systems to provide more
choice and flexibility to the acting agent. This should improve acceptability of decision support
systems in fields where the policy is used to guide (or advise) a human expert, notably for the
optimization of medical treatments.
The framework we propose relies on two competing objectives. On the one hand we want to provide
as much choice as possible in the non-deterministic policy, while at the same time preserving some
guarantees on the return (compared to the optimal policy). We present two algorithms that can solve
such an optimization problem: a MIP formulation that can be solved by any general MIP solver,
and a search algorithm that uses the monotonic property of the studied constraints to cut down
on the running time. The search algorithm is particularly useful when we have good heuristics to
further prune the search space. Future work will consider different optimizing criteria, such as those
outlined in Section 3, which may be more appropriate for some domains with very large action sets.
A limitation of our current approach is that the algorithms presented so far are limited to relatively
small domains, and scale well only for domains with special properties, such as a DAG structure
in the transition model or good heuristics for pruning the search. This clearly points to future work
in developing better approximation techniques. Nonetheless it is worth keeping in mind that many
domains of application may not be that large (see [1, 2, 3, 4] for examples) and the techniques as
presented can already have a substantial impact.
Finally, it is worth noting that non-deterministic policies can also be useful in cases where the MDP
transition and reward models are imperfectly specified or learned from data, though we have not
explored this case in detail yet. In such a setting, the difference between the optimal and a near
optimal policy may not be computed accurately. Thus, it is useful to find all actions that are close to
optimal so that the real optimal action is not missed. An interesting question here is whether we can
find the smallest non-deterministic policy that will include the optimal policy with some probability
1 − δ. This is similar to the framework in [7], and could be useful in cases where there is not enough
data to compare policies with good statistical significance.
Acknowledgements: The authors wish to thank A. John Rush, Susan A. Murphy, Doina Precup, and
Stephane Ross for helpful discussions regarding this work. Funding was provided by the National
Institutes of Health (grant R21 DA019800) and the NSERC Discovery Grant program.
References
[1] A. Schaefer, M. Bailey, S. Shechter, and M. Roberts. Handbook of Operations Research / Management
Science Applications in Health Care, chapter Medical decisions using Markov decision processes. Kluwer
Academic Publishers, 2004.
[2] M. Hauskrecht and H. Fraser. Planning treatment of ischemic heart disease with partially observable
Markov decision processes. Artificial Intelligence in Medicine, 18(3):221–244, 2000.
[3] P. Magni, S. Quaglini, M. Marchetti, and G. Barosi. Deciding when to intervene: a Markov decision
process approach. International Journal of Medical Informatics, 60(3):237–253, 2000.
[4] D. Ernst, G. B. Stan, J. Concalves, and L. Wehenkel. Clinical data based optimal STI strategies for HIV: a
reinforcement learning approach. In Proceedings of Benelearn, 2006.
[5] D.P. Bertsekas. Dynamic Programming and Optimal Control, Vol 2. Athena Scientific, 1995.
[6] R.S. Sutton and A.G. Barto. Reinforcement Learning: An Introduction. MIT Press, Cambridge, MA, 1998.
[7] M. Kearns and S. Singh. Near-optimal reinforcement learning in polynomial time. Machine Learning, 49, 2002.
[8] S. Mannor, D. Simester, P. Sun, and J.N. Tsitsiklis. Bias and variance in value function estimation. In
Proceedings of ICML, 2004.
[9] M. Fava, A.J. Rush, and M.H. Trivedi et al. Background and rationale for the sequenced treatment
alternatives to relieve depression (STAR*D) study. Psychiatr Clin North Am, 26(2):457–494, 2003.
2,763 | 3,505 | Understanding Brain Connectivity Patterns during
Motor Imagery for Brain-Computer Interfacing
Moritz Grosse-Wentrup
Max Planck Institute for Biological Cybernetics
Spemannstr. 38
72076 T?ubingen, Germany
[email protected]
Abstract
EEG connectivity measures could provide a new type of feature space for inferring
a subject's intention in Brain-Computer Interfaces (BCIs). However, very little is
known on EEG connectivity patterns for BCIs. In this study, EEG connectivity
during motor imagery (MI) of the left and right hand is investigated in a broad frequency
range across the whole scalp by combining Beamforming with Transfer Entropy
and taking into account possible volume conduction effects. Observed connectivity patterns indicate that modulation intentionally induced by MI is strongest
in the γ-band, i.e., above 35 Hz. Furthermore, modulation between MI and rest
is found to be more pronounced than between MI of different hands. This is in
contrast to results on MI obtained with bandpower features, and might provide an
explanation for the so far only moderate success of connectivity features in BCIs.
It is concluded that future studies on connectivity based BCIs should focus on
high frequency bands and consider experimental paradigms that maximally vary
cognitive demands between conditions.
1 Introduction
Brain-Computer Interfaces (BCIs) are devices that enable a subject to communicate without utilizing the peripheral nervous system, i.e., without any overt movement requiring volitional motor
control. The primary goal of research on BCIs is to enable basic communication for subjects unable
to communicate by normal means due to neuro-degenerative diseases such as amyotrophic lateral
sclerosis (ALS). In non-invasive BCIs, this is usually approached by measuring the electric field of
the brain by EEG, and detecting changes intentionally induced by the subject (cf. [1] for a general
introduction to BCIs). The most commonly used experimental paradigm in this context is motor
imagery (MI) [2]. In MI subjects are asked to haptically imagine movements of certain limbs, e.g.,
the left or the right hand. MI is known to be accompanied by a decrease in bandpower (usually most
prominent in the μ-band, i.e., roughly at 8-13 Hz) in that part of the motor cortex representing the
specific limb [3]. These bandpower changes, termed event related (de-)synchronization (ERD/ERS),
can be detected and subsequently used for inferring the subject's intention. This approach to BCIs
has been demonstrated to be very effective in healthy subjects, with only little subject training time
required to achieve classification accuracies close to 100% in two-class paradigms [4?6]. Furthermore, satisfactory classification results have been reported with subjects in early to middle stages
of ALS [7]. However, all subjects diagnosed with ALS and capable of operating a BCI still had
residual motor control that enabled them to communicate without the use of a BCI. Until now, no
communication has been established with a completely locked-in subject, i.e., a subject without any
residual motor control. Establishing communication with a completely locked-in subject arguably
constitutes the most important challenge in research on BCIs.
Unfortunately, reasons for the failure of establishing communication with completely locked-in subjects remain unknown. While cognitive deficits in completely locked-in patients can at present not
be ruled out as the cause of this failure, another possible explanation is abnormal brain activity observed in patients in late stages of ALS [8]. Our own observations indicate that intentionally induced
bandpower changes in the electric field of the brain might be reduced in subjects in late stages of
ALS. To explore the plausibility of this explanation for the failure of current BCIs in completely
locked-in subjects, it is necessary to devise feature extraction algorithms that do not rely on measures of bandpower. In this context, one promising approach is to employ connectivity measures
between different brain regions. It is well known from fMRI-studies that brain activity during MI is
not confined to primary motor areas, but rather includes a distributed network including pre-motor,
parietal and frontal regions of the brain [9]. Furthermore, synchronization between different brain
regions is known to be an essential feature of cognitive processing in general [10]. Subsequently, it
can be expected that different cognitive tasks, such as MI of different limbs, are associated with different connectivity patterns between brain regions. These connectivity patterns should be detectable
from EEG recordings, and thus offer a new type of feature space for inferring a subject's intention.
Since measures of connectivity are, at least in principle, independent of bandpower changes, this
might offer a new approach to establishing communication with completely locked-in subjects.
In recent years, several measures of connectivity have been developed for analyzing EEG recordings
(cf. [11] for a good introduction and a comparison of several algorithms). However, very few studies
exist that analyze connectivity patterns as revealed by EEG during MI [12, 13]. Furthermore, these
studies focus on differences in connectivity patterns between MI and motor execution, which is not
of primary interest for research on BCIs. In the context of non-invasive BCIs, connectivity measures
have been most notably explored in [14] and [15]. However, these studies only consider frequency
bands and small subsets of electrodes known to be relevant for bandpower features, and do not take
into account possible volume conduction effects. This might lead to misinterpreting bandpower
changes as changes in connectivity. Consequently, a better understanding of connectivity patterns
during MI of different limbs as measured by EEG is required to guide the design of new feature
extraction algorithms for BCIs. Specifically, it is important to properly address possible volume
conduction effects, not confine the analysis to a small subset of electrodes, and consider a broad
range of frequency bands.
In this work, these issues are addressed by combining connectivity analysis during MI of the left
and right hand in four healthy subjects with Beamforming methods [6]. Since it is well known that
MI includes primary motor cortex [3], this area is chosen as the starting point of the connectivity
analysis. Spatial filters are designed that selectively extract those components of the EEG originating
in the left and right motor cortex. Then, the concept of Transfer Entropy [16] is used to estimate
class-conditional "information flow" from all 128 employed recording sites into the left and right
motor cortex in frequency bands ranging from 5 - 55 Hz. In this way, spatial topographies are
obtained for each frequency band that depict by how much each area of the brain is influencing the
left/right motor cortex during MI of the left/right hand. Interestingly, the most pronounced changes
in connectivity patterns are not observed in MI of the left vs. the right hand, but rather in rest vs. MI
of either hand. Furthermore, these pattern changes are most pronounced in frequency bands not
usually associated with MI, i.e., in the γ-band above 35 Hz. These results suggest that in order
to fully exploit the capabilities of connectivity measures for BCIs, and establish communication
with completely locked-in subjects, it might be advisable to consider ?-band oscillations and adapt
experimental paradigms as to maximally vary cognitive demands between conditions.
2 Methods
2.1 Symmetric vs. Asymmetric Connectivity Analysis
In analyzing interrelations between time-series data it is important to distinguish symmetric from
asymmetric measures. Consider Fig. 1, depicting two graphs of three random processes s1 to s3 ,
representing three EEG sources. The goal of symmetric connectivity analysis (Fig. 1.a) is to estimate some instantaneous measure of similarity between random processes, i.e., assigning weights
to the undirected edges between the nodes of the graph in Fig. 1.a. Amplitude coupling and phase
synchronization fall into this category, which are the measures employed in [14] and [15] for feature
extraction in BCIs. However, interrelations between EEG sources originating in different regions of
Figure 1: Illustration of symmetric- vs. asymmetric connectivity analysis for three EEG sources
within the brain.
the brain can be expected to be asymmetric, with certain brain regions exerting stronger influence on
other regions than vice versa. For this reason, asymmetric connectivity measures potentially provide
more information on cognitive processes than symmetric measures.
Considering asymmetric relations between random processes requires a definition of how the influence of one process on another process is to be measured, i.e., a quantitative definition of causal
influence. The commonly adopted definition of causality in time-series analysis is that si causes
sj if observing si helps in predicting future observations of sj , i.e., reduces the prediction error of
sj . This implies that cause precedes effect, i.e., that the graph in Fig. 1.b may only contain directed
arrows pointing forward in time. Note that there is some ambiguity in this definition of causality,
since it does not specify a metric for reduction of the prediction error of sj due to observing si . In
Granger causality (cf. [11]), reduction of the variance of the prediction error is chosen as a metric,
essentially limiting Granger causality to linear systems. It should be noted, however, that any other
metric is equally valid. Finally, note that for reasons of simplicity the graph in Fig. 1.b only contains
directed edges from nodes at time t to nodes at time t + 1. In general, directed arrows from nodes at
times t, . . . , t − k to nodes at time t + 1 may be considered, with k the order of the random processes
generating s[t + 1].
To assess Granger causality between bivariate time-series data a linear autoregressive model is fit to
the data, which is then used to compute a 2x2 transfer matrix in the frequency domain (cf. [11]). The
off-diagonal elements of the transfer matrix then provide a measure of the asymmetric interaction
between the observed time-series. Extensions of Granger causality to multivariate time-series data,
termed directed transfer function (DTF) and partial directed coherence (PDC), have been developed
(cf. [11] and the references therein). However, in this work a related but different measure for asymmetric interrelations between time-series is utilized. The concept of Transfer Entropy (TE) [16] defines the causal influence of si on sj as the reduction in entropy of sj obtained by observing si . More
precisely, let s_i and s_j denote two random processes, and let s^k_{i/j}[t] := (s_{i/j}[t], . . . , s_{i/j}[t − k]).
TE is then defined as

    T_k(s_i[t] → s_j[t + 1]) := H(s_j[t + 1] | s^k_j[t]) − H(s_j[t + 1] | s^k_j[t], s^k_i[t]),    (1)

with k the order of the random processes and H(·) the Shannon entropy. TE can thus be understood
as the reduction in uncertainty about the random process s_j at time t + 1 due to observing the past
k samples of the random process s_i. Both Granger causality and TE thus define causal influence
as a reduction in the uncertainty of a process due to observing another process, but employ different
metrics to measure reduction in uncertainty. While TE is a measure that applies to any type of
random processes, it is difficult to compute in practice. Hence, in this study only Gaussian processes
are considered, i.e., it is assumed that (s_j[t + 1], s^k_j[t], s^k_i[t]) is jointly Gaussian distributed. TE can
then be computed as

    T^GP_k(s_i[t] → s_j[t + 1]) = (1/2) log [ det R(s^k_j[t], s^k_i[t]) det R(s_j[t + 1], s^k_j[t]) / ( det R(s_j[t + 1], s^k_j[t], s^k_i[t]) det R(s^k_j[t]) ) ],    (2)
with R(·) the (cross-)covariance matrices of the respective random processes [17]. In comparison
to Granger causality and related measures, TE for Gaussian processes possesses several advantages.
It is easy to compute from a numerical perspective, since it does not require fitting a multivariate
autoregressive model including (implicit) inversion of large matrices. Furthermore, for continuous
processes it is invariant under coordinate transformations [17]. Importantly, this entails invariance
with regard to scaling of the random processes.
Computing TE for Gaussian processes requires estimation of the (cross-)covariance matrices
in (2). Consider a matrix S ∈ ℝ^{2×T×N}, corresponding to data recorded from two EEG
sources during an experimental paradigm with N trials of T samples each. In order to compute
T^GP_k(s_1[t] → s_2[t + 1]) for t = k + 1, . . . , T − k − 1, it is assumed that in each trial s_1[t] and
s_2[t] are i.i.d. samples from the distribution p(s_1[t], s_2[t]), i.e., that the non-stationary Gaussian processes that give rise to the observation matrix S are identical for each of the N repetitions of the
experimental paradigm. For each instant in time, TE can then be evaluated by computing the sample
(cross-)covariance matrices required in (2) across trials. Note that evaluating (2) requires specification of k. In general, k should be chosen as large as possible in order to maximize information on
the random processes contained in the (cross-)covariance matrices. However, choosing k too large
leads to rank deficient matrices with a determinant of zero. Here, for each observation matrix S the
highest possible k is chosen such that none of the matrices in (2) is rank deficient.
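A minimal sketch of this estimator, assuming the data for one source pair is arranged as (T, N) arrays of samples by trials; function and variable names are ours, and the history s^k[t] contains k + 1 samples as in the definition above:

    import numpy as np

    def transfer_entropy_gp(si, sj, k):
        # Gaussian-process TE (Eqn 2), estimated across trials.
        # si, sj: arrays of shape (T, N).  Returns TE at each usable t
        # for the direction s_i[t] -> s_j[t + 1].
        T, N = si.shape

        def logdet_cov(rows):
            # rows: list of (N,) trial samples of each variable
            R = np.cov(np.vstack(rows))          # sample covariance across trials
            sign, ld = np.linalg.slogdet(np.atleast_2d(R))
            return ld

        te = []
        for t in range(k, T - 1):
            sjk = [sj[t - m] for m in range(k + 1)]   # s_j^k[t] = (s_j[t], ..., s_j[t-k])
            sik = [si[t - m] for m in range(k + 1)]
            te.append(0.5 * (logdet_cov(sjk + sik)
                             + logdet_cov([sj[t + 1]] + sjk)
                             - logdet_cov([sj[t + 1]] + sjk + sik)
                             - logdet_cov(sjk)))
        return np.array(te)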
2.2 The Problem of Volume Conduction in EEG Connectivity Analysis
The goal of connectivity analysis in EEG recordings is to estimate connectivity patterns between
different regions of the brain. Unfortunately, EEG recordings do not offer direct access to EEG
sources. Instead, each EEG electrode measures a linear and instantaneous superposition of EEG
sources within the brain [18]. This poses a problem for symmetric connectivity measures, since
these assess instantaneous coupling between electrodes [18]. Asymmetric connectivity measures
such as TE, on the other hand, are not based on instantaneous coupling, but rather consider prediction
errors. It is not obvious that instantaneous volume conduction also poses a problem for this type of
measures. Unfortunately, the following example demonstrates that volume conduction also leads to
incorrect connectivity estimates in asymmetric connectivity analysis based on TE.
Example 1 (Volume Conduction Effects in Connectivity Analysis based on Transfer Entropy)
Consider the EEG signals x_1[t] and x_2[t], recorded at two electrodes placed on the scalp, that
consist of a linear superposition of three EEG sources s_1[t] to s_3[t] situated somewhere within the
brain (Fig. 2.a). Let x[t] = (x_1[t], x_2[t])^T and s[t] = (s_1[t], s_2[t], s_3[t])^T. Then x[t] = A s[t],
with A ∈ ℝ^{2×3} describing the projection strength of each source to each electrode. For sake of
simplicity, assume that A = (1 0 1 ; 0 1 1), i.e., that the first source only projects to the first
electrode with unit strength, the second source only projects to the second electrode with unit
strength, and the third source projects to both electrodes with unit strength. Furthermore, assume
that

    p(s[t + 1], s[t]) = N(0, Σ)  with  Σ =
        [ 1 0 0 0 0 0
          0 1 0 0 0 0
          0 0 1 0 0 ρ
          0 0 0 1 0 0
          0 0 0 0 1 0
          0 0 ρ 0 0 1 ],    (3)

i.e., that all sources have zero mean, unit variance, are mutually independent, and s_1 and s_2 are
uncorrelated in time. Only s_3[t] and s_3[t + 1] are assumed to be correlated with covariance ρ
(Fig. 2.b). In this setting, it would be desirable to obtain zero TE between both electrodes, since
there is no interaction between the sources giving rise to the EEG. However, some rather tedious
algebraic manipulations reveal that in this case

    T^GP_1(x_2[t] → x_1[t + 1]) = (1/2) log(3/2) + (1/2) log((4 − ρ²)/(6 − 2ρ²)).    (4)

Note that (4) is zero if and only if ρ = 0, i.e., if s_3 represents white noise. Otherwise, TE between
the two electrodes is estimated to be greater than zero solely due to volume conduction effects from
source s_3. Further note that qualitatively this result holds independently of the strength of the
projection of the third source to both electrodes.
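The closed form in (4) is easily checked numerically: the joint covariance of (x_1[t + 1], x_1[t], x_2[t]) implied by A and Σ can be plugged into (2) with a single-sample history. A small sketch (the value of ρ is an arbitrary choice of ours):

    import numpy as np

    rho = 0.8  # temporal correlation of s_3; any value in (-1, 1)
    # joint covariance of (x1[t+1], x1[t], x2[t]) implied by x = A s and Eqn 3
    R3 = np.array([[2.0, rho, rho],
                   [rho, 2.0, 1.0],
                   [rho, 1.0, 2.0]])
    det = np.linalg.det
    te = 0.5 * np.log(det(R3[1:, 1:]) * det(R3[:2, :2]) / (det(R3) * R3[1, 1]))
    closed_form = 0.5 * np.log(3 / 2) + 0.5 * np.log((4 - rho**2) / (6 - 2 * rho**2))
    assert np.isclose(te, closed_form)   # spurious TE > 0 whenever rho != 0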
2.3 Attenuation of Volume Conduction Effects via Beamforming
One way to avoid volume conduction effects in EEG connectivity analysis is to perform source
localization on the obtained EEG data, and apply connectivity measures on estimated current density
time-series at certain locations within the brain [11]. This is feasible to test certain hypothesis, e.g.,
to evaluate whether there exists a causal link between two specific points within the brain. However,
testing pairwise causal links between more than just a few points within the brain is computationally
Figure 2: Illustration of volume conduction effects in EEG connectivity analysis.
intractable. Accordingly, attenuation of volume conduction effects via source localization is not
feasible if a complete connectivity pattern considering the whole brain is desired. Here, a different
approach is pursued. It is well known that primary motor cortex is central to MI as measured by
EEG [3]. Accordingly, it is assumed that any brain region involved in MI displays some connectivity
to the primary motor cortex. This (admittedly rather strong) assumption enables a complete analysis
of the connectivity patterns during MI covering the whole brain in the following way. First, two
spatial filters, commonly known as Beamformers, are designed that selectively extract EEG sources
originating within the right and left motor cortex, respectively [6]. In brief, this can be accomplished
by solving the optimization problem

    w* = argmax_{w ∈ ℝ^M} { (w^T R_x̃_{l/r} w) / (w^T R_x w) },    (5)

with R_x ∈ ℝ^{M×M} the covariance of the recorded EEG, and R_x̃_{l/r} ∈ ℝ^{M×M} model-based spatial
covariance matrices of EEG sources originating within the left/right motor cortex. In this way,
spatial filters can be obtained that optimally attenuate the variance of all EEG sources not originating
within the left/right motor cortex. The desired spatial filters are obtained as the eigenvectors with
the largest eigenvalue of the generalized eigenvalue problem R_x̃_{l/r} w = λ R_x w (cf. [6] for a more
detailed presentation).
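A sketch of this filter computation, assuming scipy is available; R_model stands for the model-based covariance R_x̃ of one hemisphere, and the names are ours:

    import numpy as np
    from scipy.linalg import eigh

    def beamformer(R_model, R_x):
        # Leading generalized eigenvector of R_model w = lambda R_x w (Eqn 5).
        eigvals, eigvecs = eigh(R_model, R_x)   # eigenvalues in ascending order
        return eigvecs[:, -1]                   # eigenvector of the largest eigenvalue

    # usage: y = w @ X for EEG data X of shape (channels, samples)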
With EEG sources originating within the left and right motor cortex extracted, TE from all EEG
electrodes into the left and right motor cortex can be computed. In this way, volume conduction
effects from all sources within the brain into the left/right motor cortex can be optimally attenuated.
However, volume conduction effects from the left/right motor cortex to any of the EEG electrodes
still poses a problem. Accordingly, it has to be verified if any positive TE from an EEG electrode
into the left/right motor cortex could be caused by bandpower changes within the left/right motor
cortex. Positive TE from any electrode into the left/right motor cortex can only be considered as a
genuine causal link if it is not accompanied by a bandpower change in the respective motor cortex.
3
Experimental Results
To investigate connectivity patterns during MI the following experimental paradigm was employed.
Subjects sat in a dimly lit and shielded room, approximately two meters in front of a silver screen.
Each trial started with a centrally displayed fixation cross. After three seconds, the fixation cross was
overlaid with a centrally placed arrow pointing to the left or right. This instructed subjects to begin
MI of the left or right hand, respectively. Subjects were explicitly instructed to perform haptic MI,
but the exact choice of the type of imaginary hand movement was left unspecified. After a further
seven seconds the arrow was removed, indicating the end of the trial and start of the next trial. 150
trials per class were carried out by each subject in randomized order. During the experiment, EEG
was recorded at 128 electrodes placed according to the extended 10-20 system with electrode Cz as
reference. EEG data was re-referenced to common average reference offline. Four healthy subjects
participated in the experiment, all of which were male and right-handed with an age of 27 ± 2.5
years. For each subject, electrode locations were recorded with an ultrasound tracking system. No
artifact correction was employed and no trials were rejected.
For each subject, model-based covariance matrices R_x̃_{l/r} for EEG sources within the left/right motor
cortex were computed as described in [6]. The EEG covariance matrix Rx was computed for each
subject using all available data, and the two desired Beamformers, extracting EEG sources from the
left and right motor cortex, were computed by solving (5). The EEG sources extracted from the
left/right motor cortex as well as the unfiltered data recorded at each electrode were then bandpass-filtered with sixth-order Butterworth filters in five frequency bands ranging from 5 to 55 Hz in steps
of 10 Hz. Then, TE was computed from all EEG electrodes into the left/right motor cortex at each
sample point as described in Section 2.1. Furthermore, for each subject class-conditional bandpower
changes (ERD/ERS) of sources extracted from the left/right motor cortex were computed in order
to identify frequency bands with common modulations in bandpower and TE. Two subjects showed
significant modulations of bandpower in all five frequency bands. These were excluded from further
analysis, since any observed positive TEs could have been confounded by volume conduction. The
resulting topographies of mean TE between conditions of the two remaining subjects are shown
in Fig. 3. Here, the first two columns show mean TE from all electrodes into the left/right motor
cortex during MI of either hand (3.5-10s) minus mean TE during baseline (0.5-3s) in each of the
five frequency bands. The last two columns show mean differences in TE into the left/right motor
cortex between MI of the left and right hand (both conditions also baseline corrected). Note that
the topographies in Fig. 3 have been normalized to the maximum difference across conditions to
emphasize differences between conditions. Interestingly, no distinct differences in TE are observed
between MI of the left and right hand. Instead, strongest differences in TE are observed in rest
vs. MI of either hand (left two columns). The amount of decrease in TE during MI relative to
rest increases with higher frequencies, and is most pronounced in the γ-band from 45-55 Hz (last
row, left two columns). Topographically, strongest differences are observed in frontal, pre-central,
and post-central areas. Observed changes in TE are statistically significant with significance level
α = 0.01 at all electrodes in Fig. 3 marked with red crosses (statistical significance was tested nonparametrically and individually for each subject, Beamformer, and condition by one thousand times
randomly permuting the EEG data of each recorded trial in time and testing the null-hypothesis that
changes in TE at least as large as those in Fig.3 are observed without any temporal structure being
present in the data). Due to computational resources only a small subset of electrodes was tested
for significance. The observed changes in TE display opposite modulations in comparison to mean
bandpower changes observed in left/right motor cortex relative to baseline (Fig. 4, only significant
(? = 0.01) bandpower changes relative to baseline (0-3s) plotted). Here, strongest modulation of
bandpower is found in the ?- (? 10 Hz) and ?-band (? 25 Hz). Frequencies above 35 Hz show very
little modulation, indicating that the observed differences in TE at high frequencies in Fig. 3 are not
due to volume conduction but genuine causal links.
4 Discussion
In this study, Beamforming and TE were employed to investigate the topographies of "information flow" into the left and right motor cortex during MI as measured by EEG. To the best of the
author's knowledge, this is the first study investigating asymmetric connectivity patterns between
brain regions during MI of different limbs considering a broad frequency range, a large number of
recordings sites, and properly taking into account volume conduction effects. However, it should
be pointed out that there are several issues that warrant further investigation. First, the presented
results are obtained from only two subjects, since two subjects had to be excluded due to possible
volume conduction effects. Future studies with more subjects are required to validate the obtained
results. Also, no outflow from primary motor cortex and no TE between brain regions not including
primary motor cortex have been considered. Finally, the methodology presented in this study can
not be applied in a straight-forward manner to single-trial data, and is thus only of limited use for
actual feature extraction in BCIs.
Nevertheless, the obtained results indicate that bandpower changes in motor cortex and connectivity between motor cortex and other regions of the brain are processes that occupy distinct spectral
bands and are modulated by different cognitive tasks. In conjunction with the observation of no
distinct changes in connectivity patterns between MI of different limbs, this indicates that in [14]
and [15] bandpower changes might have been misinterpreted as connectivity changes. This is further
supported by the fact that these studies focused on frequency bands displaying significant modulation of bandpower (8-30 Hz) and did not control for volume conduction effects. In conclusion, the
pronounced modulation of connectivity between MI of either hand vs. rest in the γ-band observed in
this study underlines the importance of also considering high frequency bands in EEG connectivity
analysis. Furthermore, since the γ-band is thought to be crucial for dynamic functional connectivity
between brain regions [10], future studies on connectivity patterns in BCIs should consider experimental paradigms that maximally vary cognitive demands in order to activate different networks
within the brain across conditions.
6
Motor Imagery - Rest
Left - Right Motor Imagery
Right MC
Left MC
C3
C4
C3
C4
C3
C4
Left MC
Right MC
1
5-15 Hz
15-25 Hz
0
25-35 Hz
C3
C4
C3
C4
35-45 Hz
45-55 Hz
-1
Figure 3: Topographies of mean Transfer Entropy changes into left/right motor cortex (MC). C3/C4
mark electrodes over left/right motor cortex. Red crosses indicate statistically significant electrodes.
Plotted with [19].
[Figure 4 shows time-frequency maps (0-10 s, 0-50 Hz, color scale ±8 dB) of left and right hand
imagery for the left and right motor cortex.]
Figure 4: Class-conditional mean ERD/ERS in left/right motor cortex relative to baseline (0-3s).
Horizontal line marks start of motor imagery. Plotted with [19].
References
[1] J.R. Wolpaw, N. Birbaumer, D.J. McFarland, G. Pfurtscheller, and T.M. Vaughan. Brain-computer interfaces for communication and control. Clinical Neurophysiology, 113(6):767–791, 2002.
[2] S.G. Mason, A. Bashashati, M. Fatourechi, K.F. Navarro, and G.E. Birch. A comprehensive survey of brain interface technology designs. Annals of Biomedical Engineering, 35(2):137–169, 2007.
[3] G. Pfurtscheller and F.H. Lopes da Silva. Event-related EEG/MEG synchronization and desynchronization: basic principles. Clinical Neurophysiology, 110:1842–1857, 1999.
[4] H. Ramoser, J. Mueller-Gerking, and G. Pfurtscheller. Optimal spatial filtering of single trial EEG during imagined hand movement. IEEE Transactions on Rehab. Eng., 8(4):441–446, 2000.
[5] B. Blankertz, G. Dornhege, M. Krauledat, K.R. Mueller, and G. Curio. The non-invasive Berlin brain-computer interface: Fast acquisition of effective performance in untrained subjects. NeuroImage, 27(2):539–550, 2007.
[6] Moritz Grosse-Wentrup, Klaus Gramann, and Martin Buss. Adaptive spatial filters with predefined region of interest for EEG based brain-computer-interfaces. In B. Schoelkopf, J. Platt, and T. Hoffman, editors, Advances in Neural Information Processing Systems 19, pages 537–544. MIT Press, Cambridge, MA, 2007.
[7] A. Kubler, F. Nijboer, J. Mellinger, T.M. Vaughan, H. Pawelzik, G. Schalk, D.J. McFarland, N. Birbaumer, and J. Wolpaw. Patients with ALS can use sensorimotor rhythms to operate a brain-computer interface. Neurology, 64:1775–1777, 2005.
[8] B. Kotchoubey, S. Lang, S. Winter, and N. Birbaumer. Cognitive processing in completely paralyzed patients with amyotrophic lateral sclerosis. European Journal of Neurology, 10(5):551–558, 2003.
[9] T. Hanakawa, M.A. Dimyan, and M. Hallett. Motor planning, imagery, and execution in the distributed motor network: a time-course study with functional MRI. Cerebral Cortex. Advance online publication.
[10] F. Varela, J.-P. Lachaux, E. Rodriguez, and J. Martinerie. The brainweb: phase synchronization and large-scale integration. Nature Reviews Neuroscience, 2:229–239, 2001.
[11] L. Astolfi, F. Cincotti, D. Mattia, M.G. Marciani, L.A. Baccala, F. de Vico Fallani, S. Salinari, M. Ursino, M. Zavaglia, L. Ding, J.C. Edgar, G.A. Miller, B. He, and F. Babiloni. Comparison of different cortical connectivity estimators for high-resolution EEG recordings. Human Brain Mapping, 28:143–157, 2007.
[12] R. Kus, J.S. Ginter, and K.J. Blinowska. Propagation of EEG activity during finger movement and its imagination. Acta Neurobiologiae Experimentalis, 66:195–206, 2006.
[13] M.L. Stavrinou, L. Moraru, L. Cimponeriu, S. Della Penna, and A. Bezerianos. Evaluation of cortical connectivity during real and imagined rhythmic finger tapping. Brain Topography, 19:137–145, 2007.
[14] E. Gysels and P. Celka. Phase synchronization for the recognition of mental tasks in a brain-computer interface. IEEE Transactions on Rehab. Eng., 12(4):406–415, 2004.
[15] Q. Wei, Y. Wang, X. Gao, and S. Gao. Amplitude and phase coupling measures for feature extraction in an EEG-based brain-computer interface. Journal of Neural Engineering, 4:120–129, 2007.
[16] T. Schreiber. Measuring information transfer. Physical Review Letters, 85(2):461–464, 2000.
[17] A. Kaiser and T. Schreiber. Information transfer in continuous processes. Physica D, 166:43–62, 2002.
[18] P.L. Nunez and R. Srinivasan. Electric Fields of the Brain: The Neurophysics of EEG. Oxford University Press, 2005.
[19] A. Delorme and S. Makeig. EEGLAB: an open source toolbox for analysis of single-trial EEG dynamics including independent component analysis. Journal of Neuroscience Methods, 134(1):9–21, 2004.
Natural Image Denoising with
Convolutional Networks
Viren Jain
Brain & Cognitive Sciences
Massachusetts Institute of Technology
H. Sebastian Seung
Howard Hughes Medical Institute
Massachusetts Institute of Technology
Abstract
We present an approach to low-level vision that combines two main ideas: the
use of convolutional networks as an image processing architecture and an unsupervised learning procedure that synthesizes training samples from specific noise
models. We demonstrate this approach on the challenging problem of natural
image denoising. Using a test set with a hundred natural images, we find that convolutional networks provide comparable and in some cases superior performance
to state of the art wavelet and Markov random field (MRF) methods. Moreover,
we find that a convolutional network offers similar performance in the blind denoising setting as compared to other techniques in the non-blind setting. We also
show how convolutional networks are mathematically related to MRF approaches
by presenting a mean field theory for an MRF specially designed for image denoising. Although these approaches are related, convolutional networks avoid computational difficulties in MRF approaches that arise from probabilistic learning and
inference. This makes it possible to learn image processing architectures that have
a high degree of representational power (we train models with over 15,000 parameters), but whose computational expense is significantly less than that associated
with inference in MRF approaches with even hundreds of parameters.
1 Background
Low-level image processing tasks include edge detection, interpolation, and deconvolution. These
tasks are useful both in themselves, and as a front-end for high-level visual tasks like object recognition. This paper focuses on the task of denoising, defined as the recovery of an underlying image
from an observation that has been subjected to Gaussian noise.
One approach to image denoising is to transform an image from pixel intensities into another representation where statistical regularities are more easily captured. For example, the Gaussian scale
mixture (GSM) model introduced by Portilla and colleagues is based on a multiscale wavelet decomposition that provides an effective description of local image statistics [1, 2].
Another approach is to try and capture statistical regularities of pixel intensities directly using
Markov random fields (MRFs) to define a prior over the image space. Initial work used handdesigned settings of the parameters, but recently there has been increasing success in learning the
parameters of such models from databases of natural images [3, 4, 5, 6, 7, 8]. Prior models can be
used for tasks such as image denoising by augmenting the prior with a noise model.
Alternatively, an MRF can be used to model the probability distribution of the clean image conditioned on the noisy image. This conditional random field (CRF) approach is said to be discriminative, in contrast to the generative MRF approach. Several researchers have shown that the CRF
approach can outperform generative learning on various image restoration and labeling tasks [9, 10].
CRFs have recently been applied to the problem of image denoising as well [5].
The present work is most closely related to the CRF approach. Indeed, certain special cases of convolutional networks can be seen as performing maximum likelihood inference on a CRF [11]. The
advantage of the convolutional network approach is that it avoids a general difficulty with applying
MRF-based methods to image analysis: the computational expense associated with both parameter
estimation and inference in probabilistic models. For example, naive methods of learning MRFbased models involve calculation of the partition function, a normalization factor that is generally
intractable for realistic models and image dimensions. As a result, a great deal of research has
been devoted to approximate MRF learning and inference techniques that meliorate computational
difficulties, generally at the cost of either representational power or theoretical guarantees [12, 13].
Convolutional networks largely avoid these difficulties by posing the computational task within the
statistical framework of regression rather than density estimation. Regression is a more tractable
computation and therefore permits models with greater representational power than methods based
on density estimation. This claim will be argued for with empirical results on the denoising problem,
as well as mathematical connections between MRF and convolutional network approaches.
2 Convolutional Networks
Convolutional networks have been extensively applied to visual object recognition using architectures that accept an image as input and, through alternating layers of convolution and subsampling,
produce one or more output values that are thresholded to yield binary predictions regarding object
identity [14, 15]. In contrast, we study networks that accept an image as input and produce an entire
image as output. Previous work has used such architectures to produce images with binary targets
in image restoration problems for specialized microscopy data [11, 16]. Here we show that similar
architectures can also be used to produce images with the analog fluctuations found in the intensity
distributions of natural images.
Network Dynamics and Architecture
A convolutional network is an alternating sequence of linear filtering and nonlinear transformation
operations. The input and output layers include one or more images, while intermediate layers
contain "hidden" units with images called feature maps that are the internal computations of the
algorithm. The activity of feature map a in layer k is given by
I_{k,a} = f\Big( \sum_b w_{k,ab} * I_{k-1,b} - \theta_{k,a} \Big)    (1)
where I_{k-1,b} are feature maps that provide input to I_{k,a}, and * denotes the convolution operation.
The function f is the sigmoid f(x) = 1/(1 + e^{-x}) and \theta_{k,a} is a bias parameter.
We restrict our experiments to monochrome images and hence the networks contain a single image
in the input layer. It is straightforward to extend this approach to color images by assuming an input
layer with multiple images (e.g., RGB color channels). For numerical reasons, it is preferable to
use input and target values in the range of 0 to 1, and hence the 8-bit integer intensity values of the
dataset (values from 0 to 255) were normalized to lie between 0 and 1. We also explicitly encode
the border of the image by padding an area surrounding the image with values of −1.
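As an illustration of Eq. (1), the following sketch computes one hidden layer of such a network in Python/NumPy. The connection table, filter size, and all names are illustrative assumptions, not the authors' implementation (which, as noted below, is in MATLAB):

    import numpy as np
    from scipy.signal import convolve2d

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def conv_layer(prev_maps, weights, biases, table):
        # prev_maps: list of 2-D feature maps I_{k-1,b} from the previous layer.
        # table[a]:  indices b of the maps feeding output map a (e.g. 8 of 24).
        # weights[a][j]: 5x5 filter w_{k,ab} for the j-th input of map a.
        # biases[a]: scalar bias theta_{k,a}.
        out = []
        for a, inputs in enumerate(table):
            acc = sum(convolve2d(prev_maps[b], weights[a][j], mode='valid')
                      for j, b in enumerate(inputs))
            out.append(sigmoid(acc - biases[a]))
        return out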
Learning to Denoise
Parameter learning can be performed with a modification of the backpropagation algorithm for feedforward neural networks that takes into account the weight-sharing structure of convolutional networks [14]. However, several issues have to be addressed in order to learn the architecture in Figure
1 for the task of natural image denoising.
Firstly, the image denoising task must be formulated as a learning problem in order to train the
convolutional network. Since we assume access to a database of only clean, noiseless images, we
implicitly specify the desired image processing task by integrating a noise process into the training
procedure. In particular, we assume a noise process n(x) that operates on an image xi drawn from a
distribution of natural images X. If we consider the entire convolutional network to be some function
Figure 1 (panel title: "Architecture of CN1 and CN2"; input image, hidden feature maps I_{1,1} through I_{4,24}, output image): Architecture of convolutional network used for denoising. The network has 4 hidden layers and 24 feature maps in each hidden layer. In layers 2, 3, and 4, each feature map is connected to 8 randomly chosen feature maps in the previous layer. Each arrow represents a single convolution associated with a 5 × 5 filter, and hence this network has 15,697 free parameters and requires 624 convolutions to process its forward pass.
F_\theta with free parameters \theta, then the parameter estimation problem is to minimize the reconstruction error of the images subject to the noise process: \min_\theta \sum_i (x_i - F_\theta(n(x_i)))^2.
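As a hedged sketch of this training-set synthesis, a (noisy, clean) pair can be generated on the fly by applying the Gaussian noise process n(x) to a clean image; the [0, 1] scaling follows the conventions stated above, while the function name and default noise level are assumptions:

    import numpy as np

    def make_training_pair(clean_img, sigma=25.0, rng=np.random.default_rng(0)):
        # clean_img is normalized to [0, 1]; sigma is given on the 8-bit scale,
        # matching the noise levels quoted later in the paper.
        noisy = clean_img + rng.normal(0.0, sigma / 255.0, size=clean_img.shape)
        return noisy, clean_img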
Secondly, it is inefficient to use batch learning in this context. The training sets used in the experiments have millions of pixels, and it is not practical to perform both a forward and backward
pass on the entire training set when gradient learning requires many tens of thousands of updates to
converge to a reasonable solution. Stochastic online gradient learning is a more efficient learning
procedure that can be adapted to this problem. Typically, this procedure selects a small number of
independent examples from the training set and averages together their gradients to perform a single
update. We compute a gradient update from 6 × 6 patches randomly sampled from six different
images in the training set. Using a localized image patch violates the independence assumption in
stochastic online learning, but combining the gradient from six separate images yields a 6 × 6 × 6
cube that in practice is a sufficient approximation of the gradient to be effective. Larger patches (we
tried 8 × 8 and 10 × 10) reduce correlations in the training sample but do not improve accuracy. This
scheme is especially efficient because most of the computation for a local patch is shared.
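A minimal sketch of this sampling scheme is given below: one 6 × 6 patch location is drawn from each of six training images, yielding the 6 × 6 × 6 cube of target pixels for a single update. Names and the random-number interface are assumptions:

    import numpy as np

    def sample_update_targets(images, patch=6, n_images=6,
                              rng=np.random.default_rng(0)):
        # Draw one patch location from each of six different training images;
        # the corresponding noisy inputs (with their larger receptive-field
        # context) would be cropped at the same locations.
        idx = rng.choice(len(images), size=n_images, replace=False)
        cube = []
        for i in idx:
            h, w = images[i].shape
            y = rng.integers(0, h - patch + 1)
            x = rng.integers(0, w - patch + 1)
            cube.append(images[i][y:y + patch, x:x + patch])
        return np.stack(cube)  # the 6 x 6 x 6 cube of target pixels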
We found that training time is minimized and generalization accuracy is maximized by incrementally
learning each layer of weights. Greedy, layer-wise training strategies have recently been explored
in the context of unsupervised initialization of multi-layer networks, which are usually fine tuned
for some discriminative task with a different cost function [17, 18, 19]. We maintain the same cost
function throughout. This procedure starts by training a network with a single hidden layer. After
thirty epochs, the weights from the first hidden layer are copied to a new network with two hidden
layers; the weights connecting the hidden layer to the output layer are discarded. The two hidden
layer network is optimized for another thirty epochs, and the procedure is repeated for N layers.
Finally, when learning networks with two or more hidden layers it was important to use a very small
learning rate for the final layer (0.001) and a larger learning rate (0.1) in all other layers.
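The schedule can be summarized by the following sketch; make_net, train, and copy_hidden_weights_from are hypothetical placeholders for whatever network and optimizer implementation is used, and the learning rates for the initial single-hidden-layer stage are an assumption:

    def grow_and_train(make_net, train, n_layers=4, epochs=30):
        # Greedy layer-wise schedule: train a 1-hidden-layer net, copy its
        # hidden weights into a deeper net, retrain, and repeat.
        net = make_net(n_hidden_layers=1)
        train(net, epochs=epochs, lr_last=0.1, lr_rest=0.1)
        for depth in range(2, n_layers + 1):
            deeper = make_net(n_hidden_layers=depth)
            deeper.copy_hidden_weights_from(net)  # output weights discarded
            train(deeper, epochs=epochs, lr_last=0.001, lr_rest=0.1)
            net = deeper
        return net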
Implementation
Convolutional network inference and learning can be implemented in just a few lines of MATLAB
code using multi-dimensional convolution and cross-correlation routines. This also makes the approach especially easy to optimize using parallel computing or GPU computing strategies.
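To illustrate why a few convolution/cross-correlation routines suffice, note that the weight gradient of a convolutional layer is itself a correlation between the layer input and the back-propagated error signal. A hedged NumPy/SciPy sketch, in Python here for illustration, assuming 'valid'-mode convolutions:

    from scipy.signal import correlate2d

    def weight_gradient(layer_input, delta):
        # d(error)/d(filter) for one connection: cross-correlate the 2-D layer
        # input with the back-propagated sensitivity of the layer output
        # (up to a filter flip, depending on whether the forward pass is
        # implemented as convolution or as correlation).
        return correlate2d(layer_input, delta, mode='valid')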
3 Experiments
We derive training and test sets for our experiments from natural images in the Berkeley segmentation database, which has been previously used to study denoising [20, 4]. We restrict our experiments
to the case of monochrome images; color images in the Berkeley dataset are converted to grayscale
by averaging the color channels. The test set consists of 100 images, 77 with dimensions 321 × 481
and 23 with dimensions 481 × 321. Quantitative comparisons are performed using the Peak Signal
Figure 2 (panel title: "Denoising Performance Comparison"; average PSNR of denoised images, 19–31, versus noise σ ∈ {25, 50, 100}, for FoE, BLS-GSM 1, BLS-GSM 2, CN1, CN2, and CNBlind): Denoising results as measured by peak signal to noise ratio (PSNR) for 3 different noise levels. In each case, results are the average denoised PSNR of the hundred images in the test set. CN1 and CNBlind are learned using the same forty image training set as the Field of Experts model (FoE). CN2 is learned using a training set with an additional sixty images. BLS-GSM1 and BLS-GSM2 are two different parameter settings of the algorithm in [1]. All methods except CNBlind assume a known noise distribution.
to Noise Ratio (PSNR): 20 log_{10}(255/\sigma_e), where \sigma_e is the standard deviation of the error. PSNR
has been widely used to evaluate denoising performance [1, 4, 2, 5, 6, 7].
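For reference, PSNR as defined above can be computed as follows, assuming both images are given on the 8-bit 0–255 intensity scale:

    import numpy as np

    def psnr(clean, denoised):
        # sigma_e is the standard deviation (RMS) of the error.
        sigma_e = np.sqrt(np.mean((clean - denoised) ** 2))
        return 20.0 * np.log10(255.0 / sigma_e)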
Denoising with known noise conditions
In this task it is assumed that images have been subjected to Gaussian noise of known variance.
We use this noise model during the training process and learn a five-layer network for each noise
level. Both the Bayes Least Squares-Gaussian Scale Mixture (BLS-GSM) and Field of Experts
(FoE) method also optimize the denoising process based on a specified noise level.
We learn two sets of networks for this task that differ in their training set. In one set of networks,
which we refer to as CN1, the training set is the same subset of the Berkeley database used to learn
the FoE model [4]. In another set of networks, called CN2, this training set is augmented by an
additional sixty images from the Berkeley database. The architecture of these networks is shown in
Fig. 1. Quantitative results from both networks under three different noise levels are shown in Fig.
2, along with results from the FoE and BLS-GSM method (BLS-GSM 1 is the same settings used
in [1] while BLS-GSM 2 is the default settings in the code provided by the authors). For the FoE
results, the number of iterations and magnitude of the step size are optimized for each noise level
using a grid search on the training set. A visual comparison of these results is shown in Fig. 3.
We find that the convolutional network has the highest average PSNR using either training set,
although by a margin that is not statistically significant when the standard error is computed from
the distribution of PSNR values over entire images. However, we believe this is a conservative
estimate of the standard error, which is much smaller when measured on a pixel or patch-wise basis.
Blind denoising
In this task it is assumed that images have been subjected to Gaussian noise of unknown variance.
Denoising in this context is a more difficult problem than in the non-blind situation. We train a single six-layer network, which we refer to as CNBlind, by randomly varying the amount of noise added to each example in the training process, in the range σ ∈ [0, 100]. During inference, the noise level
is unknown and only the image is provided as input. We use the same training set as the FoE model
and CN1. The architecture is the same as that shown in Fig. 1 except with 5 hidden layers instead
of 4. Results for 3 noise levels are shown in Fig. 2. We find that a convolutional network trained for
blind denoising performs well even compared to the other methods under non-blind conditions. In
Fig. 4, we show filters that were learned for this network.
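A sketch of the blind-training noise model, with the noise standard deviation drawn uniformly from [0, 100] (8-bit scale) independently per example; function and parameter names are assumptions:

    import numpy as np

    def blind_training_example(clean_img, rng=np.random.default_rng(0)):
        # sigma is drawn anew for every example; the network never sees it.
        sigma = rng.uniform(0.0, 100.0)  # 8-bit scale
        noisy = clean_img + rng.normal(0.0, sigma / 255.0, size=clean_img.shape)
        return noisy, clean_img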
Figure 3 (panels: clean image, noisy PSNR=14.96, CN2 PSNR=24.25, BLS-GSM PSNR=23.78, FoE PSNR=23.02): Denoising results on an image from the test set. The noisy image was generated by adding Gaussian noise with σ = 50 to the clean image. Non-blind denoising results for the BLS-GSM, FoE, and convolutional network methods are shown. The lower left panel shows results for the outlined region in the upper left panel. The zoomed in region shows that in some areas CN2 output has less severe artifacts than the wavelet-based results and is sharper than the FoE results. CN1 results (PSNR=24.12) are visually similar to those of CN2.
4 Relationship between MRF and Convolutional Network Approaches
In the introduction, we claim that convolutional networks have similar or even greater representational power compared to MRFs. To support this claim, we will show that special cases of convolutional networks correspond to mean field inference for an MRF. This does not rigorously prove that
convolutional networks have representational power greater than or equal to MRFs, since mean field
inference is an approximation. However, it is plausible that this is the case.
Previous work has pointed out that the Field of Experts MRF can be interpreted as a convolutional
network (see [21]) and that MRFs with an Ising-like prior can be related to convolutional networks
(see [11]). Here, we analyze a different MRF that is specially designed for image denoising and
show that it is closely related to the convolutional network in Figure 1. In particular, we consider an
MRF that defines a distribution over analog "visible" variables v and binary "hidden" variables h:

P(v, h) = \frac{1}{Z} \exp\Big( -\frac{1}{2\sigma^2} \sum_i v_i^2 + \frac{1}{\sigma^2} \sum_{i,a} h_i^a (w^a * v)_i + \frac{1}{2} \sum_{i,a,b} h_i^a (w^{ab} * h^b)_i \Big)    (2)
where v_i and h_i^a correspond to the ith pixel location in the image, Z is the partition function, and \sigma is the known standard deviation of the Gaussian noise. Note that by symmetry we have w^{ab}_{i-j} = w^{ba}_{j-i},
Figure 4 (panels: Layer 1, Layer 2): Filters learned for the first 2 hidden layers of network CNBlind. The second hidden layer has 192 filters (24 feature maps × 8 filters per map). The first layer has recognizable structure in the filters, including both derivative filters as well as high frequency filters similar to those learned by the FoE model [4, 6].
and we assume w^{aa}_0 = 0 so there is no self interaction in the model (if this were not the case, one could always transfer this to a term that is linear in h_i^a, which would lead to an additional bias term in the mean field approximation). Hence, P(v, h) constitutes an undirected graphical model which
can be conceptualized as having separate layers for the visible and hidden variables. There are no
intralayer interactions in the visible layer and convolutional structure (instead of full connectivity) in
the intralayer interactions between hidden variables and interlayer interactions between the visible
and hidden layer.
From the definition of P(v, h) it follows that the conditional distribution

P(v \mid h) \propto \exp\Big( -\frac{1}{2\sigma^2} \sum_i \big( v_i - \sum_a (w^a * h^a)_i \big)^2 \Big)    (3)

is Gaussian with mean \bar{v}_i = \sum_a (w^a * h^a)_i. This is also equal to the conditional expectation E[v \mid h].
We can use this model for denoising by fixing the visible variables to the noisy image, computing the most likely hidden variables h* by MAP inference, and regarding the conditional expectation of P(v \mid h*) as the denoised image. To do inference we would like to calculate \max_h P(h \mid v), but this is difficult because of the partition function. However, we can consider the mean field approximation,

h_i^a = f\Big( \frac{1}{\sigma^2} (w^a * v)_i + \sum_b (w^{ab} * h^b)_i \Big)    (4)
which can be solved by regarding the equation as a dynamics and iterating it. If we compare this to
Eq. 1, we find that this is equivalent to a convolutional network in which each hidden layer has the
same weights and each feature map directly receives input from the image.
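A hedged sketch of this mean-field dynamics follows; it iterates Eq. (4) with synchronous updates and returns the conditional mean of Eq. (3) as the denoised estimate. Boundary handling ('same'-mode convolution), the iteration count, and all names are illustrative assumptions:

    import numpy as np
    from scipy.signal import convolve2d

    def conv(img, filt):
        return convolve2d(img, filt, mode='same')

    def mean_field_denoise(v, w_vis, w_hid, sigma, n_iter=50):
        # v: noisy image in [0, 1]; w_vis[a]: filter w^a; w_hid[a][b]: w^{ab}.
        A = len(w_vis)
        h = [np.zeros_like(v) for _ in range(A)]
        for _ in range(n_iter):
            h = [1.0 / (1.0 + np.exp(-(conv(v, w_vis[a]) / sigma ** 2
                 + sum(conv(h[b], w_hid[a][b]) for b in range(A)))))
                 for a in range(A)]
        # conditional mean of Eq. (3), used as the denoised image
        return sum(conv(h[a], w_vis[a]) for a in range(A))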
These results suggest that certain convolutional networks can be interpreted as performing approximate inference on MRF models designed for denoising. In practice, the convolutional network architectures we train are not exactly related to such MRF models because the weights of each hidden
layer are not constrained to be the same, nor is the image an input to any feature map except those
in the first layer. An interesting question for future research is how these additional architectural
constraints would affect performance of the convolutional network approach.
Finally, although the special case of non-blind Gaussian denoising allows for direct integration of the
noise model into the MRF equations, our empirical results on blind denoising suggest that the convolutional network approach is adaptable to more general and complex noise models when specified
implicitly through the learning cost function.
5 Discussion
Prior versus learned structure
Before learning, the convolutional network has little structure specialized to natural images. In
contrast, the GSM model uses a multi-scale wavelet representation that is known for its suitability in
representing natural image statistics. Moreover, inference in the FoE model uses a procedure similar
to non-linear diffusion methods, which have been previously used for natural image processing
without learning. The architecture of the FoE MRF is so well chosen that even random settings of
the free parameters can provide impressive performance [21].
Random parameter settings of the convolutional networks do not produce any clearly useful computation. If the parameters of CN2 are randomized in just the last layer, denoising performance for
the image in Fig. 3 drops from PSNR=24.25 to 14.87. Random parameters in all layers yields even
worse results. This is consistent with the idea that nothing in CN2's representation is specialized to
natural images before training, other than the localized receptive field structure of convolutions. Our
approach instead relies on a gradient learning algorithm to tune thousands of parameters using examples of natural images. One might assume this approach would require vastly more training data
than other methods with more prior structure. However, we obtain good generalization performance
using the same training set as that used to learn the Field of Experts model, which has many fewer
degrees of freedom. The disadvantage of this approach is that it produces an architecture whose
performance is more difficult to understand due to its numerous free parameters. The advantage of
this approach is that it may lead to more accurate performance, and can be applied to novel forms of
imagery that have very different statistics than natural images or any previously studied dataset (an
example of this is the specialized image restoration problem studied in [11]).
Network architecture and using more image context
The amount of image context the convolutional network uses to produce an output value for a specific image location is determined by the number of layers in the network and size of filter in each
layer. For example, the 5 and 6-layer networks explored here respectively use a 20 × 20 and 24 × 24
image patch. This is a relatively small amount of context compared to that used by the FoE and BLSGSM models, both of which permit correlations to extend over the entire image. It is surprising that
despite this major difference, the convolutional network approach still provides good performance.
One explanation could be that the scale of objects in the chosen image dataset may allow for most
relevant information to be captured in a relatively small field of view.
Nonetheless, it is of interest for denoising as well as other applications to increase the amount of
context used by the network. A simple strategy is to further increase the number of layers; however,
this becomes computationally intensive and may be an inefficient way to exploit the multi-scale
properties of natural images. Adding additional machinery in the network architecture may work
better. Integrating the operations of sub-sampling and super-sampling would allow a network to
process the image at multiple scales, while still being entirely amenable to gradient learning.
Computational efficiency
With many free parameters, convolutional networks may seem like a computationally expensive
image processing architecture. On the contrary, the 5-layer CN1 and CN2 architecture (Fig. 1)
requires only 624 image convolutions to process an image. In comparison, the FoE model performs
inference by means of a dynamic process that can require several thousand iterations. One-thousand
iterations of these dynamics requires 48,000 convolutions (for an FoE model with 24 filters).
We also report wall-clock speed by denoising a 512 × 512 pixel image on a 2.16 GHz Intel Core
2 processor. Averaged over 10 trials, CN1/CN2 requires 38.86 ± 0.1 sec., 1,000 iterations of the
FoE requires 1664.35 ± 30.23 sec. (using code from the authors of [4]), the BLS-GSM model with
parameter settings "1" requires 51.86 ± 0.12 sec., and parameter setting "2" requires 26.51 ± 0.15
sec. (using code from the authors of [1]). All implementations are in MATLAB.
It is true, however, that training the convolutional network architecture requires substantial computation. As gradient learning can require many thousands of updates to converge, training the denoising
networks required a parallel implementation that utilized a dozen processors for a week. While this
is a significant amount of computation, it can be performed off-line.
Learning more complex image transformations and generalized image attractors models
In this work we have explored an image processing task which can be easily formulated as a learning
problem by synthesizing training examples from abundantly available noiseless natural images. Can
this approach be extended to tasks in which the noise model has a more variable or complex form?
Our results on blind denoising, in which the amount of noise may vary from little to severe, provides
some evidence that it can. Preliminary experiments on image inpainting are also encouraging.
That said, a major virtue of the image prior approach is the ability to easily reuse a single image
model in novel situations by simply augmenting the prior with the appropriate observation model.
This is possible because the image prior and the observation model are decoupled. Yet explicit probabilistic modeling is computationally difficult and makes learning even simple models challenging.
Convolutional networks forgo probabilistic modeling and, as developed here, focus on specific image to image transformations as a regression problem. It will be interesting to combine the two
approaches to learn models that are "unnormalized priors" in the sense of energy-based image attractors; regression can then be used as a tool for unsupervised learning by capturing dependencies
between variables within the same distribution [22].
Acknowledgements: we are grateful to Ted Adelson, Ce Liu, Srinivas Turaga, and Yair Weiss for
helpful discussions. We also thank the authors of [1] and [4] for making code available.
References
[1] J. Portilla, V. Strela, M.J. Wainwright, E.P. Simoncelli. Image denoising using scale mixtures of Gaussians
in the wavelet domain. IEEE Trans. Image Proc., 2003.
[2] S. Lyu, E.P. Simoncelli. Statistical modeling of images with fields of Gaussian scale mixtures. NIPS*
2006.
[3] S. Geman, D. Geman. Stochastic relaxation, Gibbs distributions and the Bayesian restoration of images.
Pattern Analysis and Machine Intelligence, 1984.
[4] S. Roth, M.J. Black. Fields of Experts: a framework for learning image priors. CVPR 2005.
[5] M.F. Tappen, C. Liu, E.H. Adelson, W.T. Freeman. Learning Gaussian Conditional Random Fields for
Low-Level Vision. CVPR 2007.
[6] Y. Weiss, W.T. Freeman. What makes a good model of natural images? CVPR 2007.
[7] P. Gehler, M. Welling. Product of "edge-perts". NIPS* 2005.
[8] S.C. Zhu, Y. Wu, D. Mumford. Filters, Random Fields and Maximum Entropy (FRAME): Towards a
Unified Theory for Texture Modeling. International Journal of Computer Vision, 1998.
[9] S. Kumar, M. Hebert. Discriminative fields for modeling spatial dependencies in natural images. NIPS*
2004.
[10] X. He, R Zemel, M.C. Perpinan. Multiscale conditional random fields for image labeling. CVPR 2004.
[11] V. Jain, J.F. Murray, F. Roth, S. Turaga, V. Zhigulin, K.L. Briggman, M.N. Helmstaedter, W. Denk, H.S.
Seung. Supervised Learning of Image Restoration with Convolutional Networks. ICCV 2007.
[12] S. Parise, M. Welling. Learning in markov random fields: An empirical study. Joint Stat. Meeting, 2005.
[13] R. Szeliski, R. Zabih, D. Scharstein, O. Veksler, V. Kolmogorov, A. Agarwala, M. Tappen, C. Rother. A
comparative study of energy minimization methods for markov random fields. ECCV 2006.
[14] Y. LeCun, B. Boser, J.S. Denker, D. Henderson, R.E. Howard, W. Hubbard, L.D. Jackel. Backpropagation
Applied to Handwritten Zip Code Recognition. Neural Computation, 1989.
[15] Y. LeCun, F.J. Huang, L. Bottou. Learning methods for generic object recognition with invariance to pose
and lighting. CVPR 2004.
[16] F. Ning, D. Delhomme, Y. LeCun, F. Piano, L. Bottou, P.E. Barbano. Toward Automatic Phenotyping of
Developing Embryos From Videos. IEEE Trans. Image Proc., 2005.
[17] G. Hinton, R. Salakhutdinov. Reducing the dimensionality of data with neural networks. Science, 2006.
[18] M. Ranzato, YL Boureau, Y. LeCun. Sparse feature learning for deep belief networks. NIPS* 2007.
[19] Y. Bengio, P. Lamblin, D. Popovici, H. Larochelle. Greedy Layer-Wise Training of Deep Networks.
NIPS* 2006.
[20] D. Martin, C. Fowlkes, D. Tal, J. Malik. A database of human segmented natural images and its application
to evaluating segmentation algorithms and measuring ecological statistics. ICCV 2001.
[21] S. Roth. High-order markov random fields for low-level vision. PhD Thesis, Brown Univ., 2007.
[22] H.S. Seung. Learning continuous attractors in recurrent networks. NIPS* 1997.
Playing Pinball with non-invasive BCI
Michael W. Tangermann
Machine Learning Laboratory
Berlin Institute of Technology
Berlin, Germany
Matthias Krauledat
Machine Learning Laboratory
Berlin Institute of Technology
Berlin, Germany
[email protected]
[email protected]
Konrad Grzeska
Machine Learning Laboratory
Berlin Institute of Technology
Berlin, Germany
Max Sagebaum
Machine Learning Laboratory
Berlin Institute of Technology
Berlin, Germany
[email protected]
[email protected]
Carmen Vidaurre
Machine Learning Laboratory
Berlin Institute of Technology
Berlin, Germany
Benjamin Blankertz
Machine Learning Laboratory
Berlin Institute of Technology
Berlin, Germany
[email protected]
[email protected]
Klaus-Robert Müller
Machine Learning Laboratory, Berlin Institute of Technology, Berlin, Germany
[email protected]
Abstract
Compared to invasive Brain-Computer Interfaces (BCI), non-invasive BCI systems based on Electroencephalogram (EEG) signals have not been applied successfully for precisely timed control tasks. In the present study, however, we
demonstrate and report on the interaction of subjects with a real device: a pinball
machine. Results of this study clearly show that fast and well-timed control well
beyond chance level is possible, even though the environment is extremely rich
and requires precisely timed and complex predictive behavior. Using machine
learning methods for mental state decoding, BCI-based pinball control is possible
within the first session without the necessity to employ lengthy subject training.
The current study shows clearly that very compelling control with excellent timing
and dynamics is possible for a non-invasive BCI.
1 Introduction
Brain computer interfaces (BCI) have seen a rapid development towards faster and more user-friendly systems for thought-based control of devices such as video games, wheelchairs, robotic
devices etc. While a full control of even complex trajectories has become possible for invasive BCIs
[1, 2, 3], non-invasive EEG-based systems have been considered hardly able to provide such high
information transfer rates between man and machine [4, 5].
This paper will show evidence that real-time BCI control of a machine is possible with little subject
training. The machine studied (a standard pinball machine, see Fig. 1) requires only two classes for
control but a very fast and precise reaction; predictive behavior and learning are mandatory. We
consider it a formidable platform for studying timing and dynamics of brain control in real-time interaction with a physical machine. Furthermore this paradigm is well suited for future investigations
of mental states during complex real-time tasks and decision-making processes.
Figure 1: Left: pinball machine used for the present study. Middle: Close look at the built-in
gadgets of the play field. Right: Zoom into the modified parts of the play field (side walls and
central bump).
Compared to highly controlled and simplified lab settings, a pinball machine provides flow (according to the definition in [6]), a rich and complex feedback, acoustic and visual distractors, and a
challenging behavioral task. These components are well-known ingredients for engaging and immersive game environments [7]. In case of the pinball machine model used in this study, this receives
further evidence from the high sales figures that have made the Addams Family model the all-time most popular pinball machine.
Given the reaction-time critical pinball game and the intrinsic delays imposed on the subjects by the
BCI technology, it is very interesting to observe that subjects can manage to control and maintain
the necessary timing and dynamics. The prediction of upcoming game situations and behavioral
adaptation to the machine and BCI constraints are necessary ingredients to master this difficult task.
The following Sections Sec. 2 and Sec. 3 briefly introduce the used motor paradigm, spatial filter
methods, the experimental paradigm, the decoding and machine learning techniques used, Sec. 4
provides the statistics and results, and finally a brief discussion is given in section Sec. 5.
2 Background
2.1 Neurophysiology
Macroscopic brain activity during resting wakefulness contains distinct rhythms located over various
brain areas. Sensorimotor cortices show rhythmic macroscopic EEG oscillations (µ-rhythm or sensorimotor rhythm, SMR), with spectral peak energies of about 8–14 Hz (µ-band) and/or 16–28 Hz (β-band) localized in the motor and somatosensory cortex ([8]).
A large class of EEG-based BCI systems relies on the fact that amplitude modulations of sensorimotor rhythms can be caused, e.g. by imagining movements. For example, the power of the µ-rhythm
decreases during imagined hand movements in the corresponding representation area which is located in the contralateral sensorimotor cortex. This phenomenon is called event-related desynchronization (ERD, [9, 10]), while the increase of band power is termed event-related synchronization
(ERS). This may be observed, e.g., during motor imagery over flanking sensorimotor areas, possibly reflecting a "surround inhibition" enhancing focal cortical activation, see [11, 10]. The exact location and the exact frequency band of the sensorimotor rhythm is subject-specific. Hence individually optimized filters can increase the signal-to-noise ratio dramatically [12]. To this end, the
CSP technique has proven to be useful.
2.2 Common Spatial Pattern (CSP) Analysis
Common Spatial Pattern and its extensions (e.g. [13, 14, 15, 16, 12]) is a technique to analyze
multi-channel data based on recordings from two classes (conditions). It is used e.g. in BCI systems
based on the modulation of brain rhythms. CSP yields a data-driven supervised decomposition of
the signal parameterized by a matrix W ∈ ℝ^{C×C_0} (C being the number of channels; C_0 ≤ C) that projects the signal x(t) ∈ ℝ^C in the original sensor space to x_CSP(t) ∈ ℝ^{C_0}, which lives in the surrogate sensor space, as follows:

x_{\mathrm{CSP}}(t) = W^\top x(t).
Each column vector of W represents a spatial filter. In particular CSP filters maximize the EEG signal's variance under one condition while simultaneously minimizing it for the other condition. Since
variance of band-pass filtered signals is equal to band power, CSP analysis is applied to band-pass
filtered signals in order to obtain an effective discrimination of mental states that are characterized
by ERD/ERS effects (see above). In the example of left vs. right hand motor imagery, the CSP algorithm will find two groups of spatial filters. The first will show high band power during left hand
motor imagery and low band power during right hand motor imagery, and the second vice versa.
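A minimal sketch of the resulting feature extraction: the variance of each CSP-projected channel of a band-pass filtered window equals its band power, and a log transform is commonly applied before classification. Window length and names are assumptions:

    import numpy as np

    def log_bandpower(X, W):
        # X: [C x T] band-pass filtered EEG window (e.g. the last 500 ms);
        # W: matrix of retained CSP filters. Variance of each projected
        # channel equals its band power; the log makes features more Gaussian.
        return np.log(np.var(W.T @ X, axis=1))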
Let Σ_i be the covariance matrix of the trial-concatenated matrix of dimension [C × T] (where C is the number of electrodes and T is the number of concatenated samples) belonging to the respective class i ∈ {1, 2}. The CSP analysis consists of calculating a matrix W ∈ ℝ^{C×C} and a diagonal matrix D with elements in [0, 1] such that
W^\top \Sigma_1 W = D \quad \text{and} \quad W^\top \Sigma_2 W = I - D    (1)

where I ∈ ℝ^{C×C} is the identity matrix. This can be solved as a generalized eigenvalue problem.
The projection that is given by the i-th column of matrix W has a relative variance of d_i (i-th element of D) for trials of class 1 and relative variance 1 − d_i for trials of class 2. If d_i is near 1, the filter given by the i-th column of W (i.e., the ith spatial filter) maximizes the variance for class 1, and since 1 − d_i is near 0, it also minimizes the variance for class 2. Typically one would retain projections corresponding to two or three of the highest eigenvalues d_i, i.e., CSP filters for class 1, and projections corresponding to the two or three lowest eigenvalues, i.e., CSP filters for class 2.
For a detailed review of the CSP technique with respect to the application in BCI see [12].
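As an illustration, Eq. (1) can be solved with a standard generalized eigenvalue routine; the following Python/SciPy sketch is a minimal stand-in (variable names and the number of retained filters are assumptions, not the exact BBCI implementation):

    import numpy as np
    from scipy.linalg import eigh

    def csp(X1, X2, n_per_class=3):
        # X1, X2: [C x T] band-pass filtered, trial-concatenated signals of
        # the two motor-imagery classes.
        S1, S2 = np.cov(X1), np.cov(X2)
        d, W = eigh(S1, S1 + S2)   # yields W^T S1 W = D, W^T S2 W = I - D
        # eigenvalues d lie in [0, 1] and are returned in ascending order:
        # the first columns are CSP filters for class 2, the last for class 1
        picks = np.r_[np.arange(n_per_class),
                      np.arange(W.shape[1] - n_per_class, W.shape[1])]
        return W[:, picks]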
3 Experiment
3.1 Paradigm
Standard EEG lab experiments typically realize an environment that avoids distractions in order
to have maximum control over all parameters of the experiment. Since the subjects respond to a
small number of artificial stimuli, a stimulus-locked averaging reveals the average characteristics
of their brain response. If we are interested in understanding broader behavioral brain responses
in cognitively demanding natural environments then stimulus/response-locked averaging may no
longer be easily possible. The complexity in interaction may be caused by (1) a large number of
possibilities to respond, (2) a large spread in response times and quality due to a rich environment
(e.g. real objects that have a variety of physical properties), (3) a changing environment where the
underlying nonstationarity is caused by a large number of states, and possibly by even more, but
unknown influencing factors.
While the first steps towards complex paradigms use simulators that show an increased complexity
but still allow complete introspection into the system state, it is evident that the interaction with
real physical devices has an even higher complexity but also provides a rich multi-modal sensory
experience for the user. However, gaining even only partial introspection into the system states
of complex physical devices and into the interaction processes between the system and the mental
processes of the user requires a huge effort.
Here modern machine learning and signal processing methods (e.g. [17, 18, 19, 20]) are helpful,
since they have been developed to analyze EEG on a single trial basis (e.g. [21, 22]). They can adapt
to changing signal characteristics (e.g. [23, 24, 25]) and they can deal with missing and noisy data
[26, 27] – even beyond the field of computational neuroscience and BCI [28].
3.2 Setup
In this study seven subjects played with the pinball machine. They were known for well-classifiable
EEG signals in simple BCI applications. One subject played successfully and enjoyed it, but was
excluded from further analysis as his/her games had not been video-taped. From the remaining six
subjects, three managed to acquire good control, played very successfully and enjoyed this experience. One subject managed to get limited control and reported to enjoy the games although some
of his/her scores were close to chance. The performance of these four subjects was measured in
a rigorous manner. The remaining two subjects could not establish reliable control and were also
excluded from further analysis.
An overview of the technical setup and the data processing steps involved is given by Fig. 2. The experiment was organized in several stages: the calibration of the BCI system (Sec. 3.3), the fine-tuning
of parameters in a simple cursor feedback paradigm (Sec. 3.4), the application of the BCI control
system during pinball games (Sec. 3.5), the pseudo-random control of pinball games (Sec. 3.6), and
ball insertions without any paddle activity (Sec. 3.7).
Figure 2 (processing chain: EEG → amplifier/digitizer → filter (frequency / spatial) → classifier → low-level controller → paddle control signal, with feedback from the machine to the player): Schematic view of the BCI-controlled pinball machine. The user's EEG signals upon motor imagery are amplified, digitized, filtered in the frequency domain and the spatial domain by CSP. Band power features are extracted and classified. The classifier output is translated by a low-level controller into paddle movements.
3.3 Calibration of the BCI system
The BCI system was calibrated individually for each of the subjects (VPMa, VPks, VPzq, VPlf ) to
discriminate two classes of motor imagery (left hand and right hand). The calibration procedure
followed a standard Berlin BCI (BBCI) paradigm based on spatial filters and oscillatory features
that avoids and prevents the use of class-correlated EOG or EMG artefacts (see [29, 28] for details).
Visualizing the spatial filters and the resulting patterns of activity showed that EOG or EMG components were disregarded for the calibration of the BCI system. For the calibration, 100 (VPMa)
or 75 (VPks, VPzq, VPlf ) trials of motor imagery were collected for each class. For every trial of
4–5 s duration, the class of the motor imagery was indicated on a computer screen by visual cues.
The calibration procedure included the determination of a subject-specific frequency band for the
mu-rhythm (see Sec. 2.1), filtering the 64-channel EEG-data to this band, the determination of class-discriminant spatial filters with Common Spatial Pattern (CSP, see Sec. 2.2), and the training of a
regularized linear classification method (LDA) based on the power features of the filtered data. All
subjects showed a crossvalidation error below 10% on the calibration data.
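A hedged sketch of this calibration step in Python; scikit-learn's shrinkage LDA is used here as a stand-in for the regularized LDA of the BBCI system, and all names are illustrative:

    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    def calibrate(trials, labels, W):
        # trials: list of [C x T] band-pass filtered motor-imagery trials;
        # W: CSP filter matrix; features are log band powers (variances).
        feats = np.array([np.log(np.var(W.T @ X, axis=1)) for X in trials])
        lda = LinearDiscriminantAnalysis(solver='lsqr', shrinkage='auto')
        lda.fit(feats, labels)
        return lda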
3.4 Cursor feedback control by BCI
The bias of the classifier, a gain factor and thresholds for an idle-class (for classifier outputs close
to the decision plane) were adapted during a short control task running on a computer screen. The
subject had to control a horizontally moving cursor to a target on the left or right side of the screen
for approximately 2 minutes while fixating a cross in the center. During this procedure the above
mentioned parameters were fine-tuned according to the test person's ratings. The goals were to determine parameter values that translate the classifier output to a suitable range for the final application and, for the test persons, to reach a subjective feeling of control. For an exhaustive study on
the role of bias adaptation in BCI, especially in the context of changing from calibration to feedback,
see [30, 24].
3.5 Pinball control by BCI
A real, physical pinball machine (in our study an Addams Family model) needs good control in
terms of classification accuracy and timing (dynamics).
The subject has to learn the physical properties of the machine to play well. The subject's expectations need to be trained, as bumpers, magnets like "The Power", and many other built-in sources of
surprise (see middle image in Fig. 1) can cause the ball to go into rather unpredictable directions.
This interaction with the pinball machine makes the game interesting and challenging. Fast brain
dynamics that participate in the eye-hand coordination and visual memory play an essential role to
cope with these difficulties. The task difficulty increases further since, as with any game, there is a strong emotional engagement of the subject, which gives rise to non-stationarities in the statistics. Moreover, the physical machine is very noisy and distracting due to its various sources of visual and auditory
stimulation, and only a small percentage of these stimulations is task relevant.
Three modifications were implemented in order to reduce the frequency of manual ball launches (1
and 3) and to increase the frequency of balls passing the paddle areas (1 and 2). While the original
character of the game was not changed, the modifications introduced a slight simplification to conduct
the experiment. The right image of Fig. 1 depicts the modifications:
1. side limits that prevent balls from exiting without passing the paddles
2. a soft central bump in front of the paddles that biases balls to pass one of the paddles rather
than exiting in a perfect vertical trajectory. This is necessary, as the classifier output could
not activate both paddles at exactly the same time.
3. a reduced slope of the game field (about half the original slope), which somewhat slows down
the game speed.
During the BCI-controlled gaming ("bci" control mode), the subject sat in front of the pinball machine, hands resting on the arm rests except for short times when new balls had to be launched with the pulling lever. The EEG signals recorded in the previous 500 ms were translated by the BCI
system into a control signal. A simple low-level control mechanism was implemented in software
that translated the continuous classifier output by thresholding into a three-class signal (left flipper,
idle, right flipper) using the thresholds pre-determined during the cursor control (see Sec. 3.4). Furthermore, it introduced a logic that translated a very long-lasting control signal for the left or right
class into a hold-and-shoot mechanism. This allowed the user to catch slow balls rolling sideways
down towards a paddle. The user played several games of 10 to 12 balls each. Performance was
observed in terms of the playing time per ball, the score per game and the number of high-quality
shots. The latter were defined by the presence of one of the following two conditions, which have
been evaluated in an offline video analysis of the game: (1) a precisely timed shot that hit the ball
by the center of the paddle and drives it into one of the scoring zones of the lower half of the field
and (2) a precisely timed shot that drives the ball directly into the upper half of the field.
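A hedged sketch of such a low-level controller; the threshold values, update rate, and hold duration are assumptions chosen for illustration:

    def paddle_command(y, th_left=-0.5, th_right=0.5):
        # y: continuous classifier output; thresholds define the idle class.
        if y <= th_left:
            return 'left'
        if y >= th_right:
            return 'right'
        return 'idle'

    class HoldAndShoot:
        # Translates a very long-lasting lateral command into hold-and-shoot,
        # so slow balls rolling towards a paddle can be caught and then shot.
        def __init__(self, hold_steps=20):
            self.hold_steps, self.count, self.last = hold_steps, 0, 'idle'

        def step(self, cmd):
            self.count = self.count + 1 if cmd == self.last else 1
            self.last = cmd
            if cmd != 'idle' and self.count >= self.hold_steps:
                self.count = 0
                return cmd + '+shoot'  # release the held paddle to shoot
            return cmd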
3.6 Pseudo random control mode
This "rand" control mode was incorporated into the experimental setup in order to deliver a fair
performance baseline. Here, the BCI system was up and running with the same settings as in the
BCI-controlled pinball game, but no player was present. Instead an EEG file previously recorded
during the BCI-controlled pinball game was fed into the BCI system and generated the control signal
for the pinball machine. These signals produced the same statistics of paddle movements as in the
real feedback setting. But as the balls were launched at random time points, the paddle behavior
was not synchronized with the ball positions. Therefore, the pseudo-random control mode marks
the chance level of the system. In this mode several games of 10-12 balls each were performed. The
same performance measures were applied as for BCI-controlled gaming.
3.7 No control mode
For performance comparisons, two performance ratings (time per ball and points per game) were
also taken for a series of balls that were launched without any paddle movements ("none" control
mode).
4 Results
Since video recordings were available for all four subjects, a detailed analysis of the game performance was possible. It is illustrated here for the best subject, VPMa, in Fig. 3. The
analysis compares three different scoring measures for BCI control (bbci), pseudo-random control
(rand) and no control (none) and shows the histogram of high-quality shots per ball.

[Figure 3 appears here: boxplots of ball duration [s] and quality shots per ball (n=81 bbci, n=112 rand, n=22 none balls), million points per game (n=10, n=10, n=12 games), and normalized histograms of high-quality shots per ball for bci control vs. rand control.]
Figure 3: Performance comparison for three control modes of the pinball machine and the normalized histograms of high-quality shots per ball for subject VPMa.

The average
ball duration (median) is significantly higher for the BCI-controlled gaming (average of 15s over 81
balls) than for the pseudo-random control (average of 8s over 112 balls). A confidence interval is
reflected by the notches above and below the median values in the boxplot of Fig. 3. Boxes whose
notches do not overlap indicate that the medians of the two groups differ at the 5% significance
level. The increased average ball duration under BCI control is caused by the larger number of high-quality shots per ball. While in pseudo-random control only 7% of the balls scored more than one
high-quality shot per ball, this rate rises drastically to 45% under BCI control for subject VPMa. A
comparison of the game scores for 10 games of BCI control and 10 games of pseudo-random control
shows that these differ even more strongly, due to the nonlinear characteristic of the score. The rightmost
plot in Fig. 3 shows the normalized histograms of the high-quality shots.
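The notch criterion used here can be checked directly from per-ball data. The sketch below computes the standard notched-boxplot interval, median plus/minus 1.57 times IQR over the square root of n, which is what common plotting libraries draw; the function names and the assumption that raw per-ball durations are available are ours.

    import numpy as np

    def notch_interval(samples):
        # Approximate 95% interval for the median, as drawn by notched
        # boxplots: median +/- 1.57 * IQR / sqrt(n).
        samples = np.asarray(samples, dtype=float)
        q1, med, q3 = np.percentile(samples, [25, 50, 75])
        half_width = 1.57 * (q3 - q1) / np.sqrt(len(samples))
        return med - half_width, med + half_width

    def notches_overlap(group_a, group_b):
        # Two groups "differ at the 5% level" when their notches do not overlap.
        lo_a, hi_a = notch_interval(group_a)
        lo_b, hi_b = notch_interval(group_b)
        return lo_a <= hi_b and lo_b <= hi_a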
The pooled data of all four subjects in Fig. 4 reflect these performance differences to a large extent. Again, BCI control is significantly superior to pseudo-random control. The difference in
normalized histograms between BCI control and pseudo-random control reveals that, even for the
pooled data, BCI-controlled games more often have a larger number of high-quality shots.
Not surprisingly, the BCI-controlled games showed a number of paddle movements at moments
when no ball was in the vicinity of the paddles. These so-called false hits are indirectly reflected
in the performance measures for the pseudo-random control. As the pseudo-random control mode was
able to gain significantly better results than no control at all (see, e.g., modes rand and none in
Fig. 3), these false hits cannot be neglected. To control for this issue, the pseudo-random control
was based on an EEG file that had previously been recorded during BCI-controlled gaming;
thus the dynamics of the paddle movements were identical during both of these control modes. Under
these very similar conditions, the higher scores of the BCI control must be credited to the control
ability of the BCI user, especially to the precise timing of a large number of paddle shots.
A video of the gaming performance, which provides an impression of the astonishing level of timing
and dynamical control (much better than the figures can show), is available at
http://www.bbci.de/supplementary/. It should be added that for this experiment it was very easy to recruit
highly motivated subjects, who enjoyed the session.
[Figure 4 appears here: boxplots of ball duration [s] (n=490 bbci, n=543 rand, n=346 none balls), million points per game (n=42, n=43, n=42 games), and the difference of normalized histograms, (bci control) - (rand control), over high-quality shots per ball.]
Figure 4: Performance comparison for combined data of four subjects (VPMa, VPks, VPzq, VPlf).
5 Discussion
To date, BCI is mainly perceived as an opportunity for the disabled to regain interaction with their
environment, say, through BCI-actuated spelling or other forms of BCI control.
The present study is relevant to rehabilitation since it explores the limits of BCI with respect to
timing, dynamics and speed of interaction in a difficult real-time task. We would, however, like
to reiterate that machine learning methods developed for BCI should also be considered novel, powerful tools
for the neurosciences: not only when operated invasively for harvesting local field potentials
(LFP) and micro electrode array data [1, 2, 3], or for decoding functional MRI [31], but also for
non-invasive, low-risk EEG-BCI.
An important novel aspect of our study was to analyze EEG recorded during predictive behavior; in
other words, we made use of the subject's expectation and experience of the system delay. Learning
curves and traces of adaptation on the subject side, the use of error potentials, as well as emerging
subject-specific strategy differences and many other exciting questions must remain untouched in this
first study. Emotion, surprise and other mental states or cognitive processes that play an important
role in such complex real-time paradigms still await their consideration in future studies.
Acknowledgments
We thank Brain Products GmbH for funding and for help with the preparation of the pinball machine. Funding by the European Community under the PASCAL Network of Excellence (IST2002-506778) and under the FP7 Programme (TOBI ICT-2007-224631), by the Bundesministerium
für Bildung und Forschung (BMBF) (FKZ 01IBE01A and FKZ 16SV2231) and by the Deutsche
Forschungsgemeinschaft (DFG) (VitalBCI MU 987/3-1) is gratefully acknowledged. Last but not
least, we would like to thank our reviewers for their valuable comments.
References
[1] J. M. Carmena, M. A. Lebedev, R. E. Crist, J. E. O'Doherty, D. M. Santucci, D. F. Dimitrov, P. G. Patil, C. S. Henriquez, and M. A. Nicolelis. Learning to control a brain-machine interface for reaching and grasping by primates. PLoS Biol, E42, 2003.
[2] D. M. Taylor, S. I. Tillery, and A. B. Schwartz. Direct cortical control of 3D neuroprosthetic devices. Science, 296:1829-1832, 2002.
[3] L.R. Hochberg, M.D. Serruya, G.M. Friehs, J.A. Mukand, M. Saleh, A.H. Caplan, A. Branner, D. Chen, R.D. Penn, and J.P. Donoghue. Neuronal ensemble control of prosthetic devices by a human with tetraplegia. Nature, 442(7099):164-171, July 2006.
[4] J. R. Wolpaw and D. J. McFarland. Control of a two-dimensional movement signal by a noninvasive brain-computer interface in humans. Proc Natl Acad Sci USA, 101(51):17849-17854, 2004.
[5] Andrea Kübler and Klaus-Robert Müller. An introduction to brain computer interfacing. In Guido Dornhege et al., editors, Toward Brain-Computer Interfacing, pages 1-25. MIT Press, Cambridge, MA, 2007.
[6] W. A. IJsselsteijn, H. H. Nap, Y. A. W. de Kort, K. Poels, A. Jurgelionis, and F. Bellotti. Characterizing and measuring user experiences in digital games. In Proceedings of the ACE, Salzburg, 2007.
[7] C. Jennett, A. L. Cox, P. Cairns, S. Dhoparee, A. Epps, T. Tijs, and A. Walton. Measuring and defining the experience of immersion in games. International Journal of Human Computer Studies, 2008.
[8] H. Jasper and H.L. Andrews. Normal differentiation of occipital and precentral regions in man. Arch. Neurol. Psychiat. (Chicago), 39:96-115, 1938.
[9] Gert Pfurtscheller and F.H. Lopes da Silva. Event-related EEG/MEG synchronization and desynchronization: basic principles. Clin Neurophysiol, 110(11):1842-1857, Nov 1999.
[10] G. Pfurtscheller, C. Brunner, A. Schlögl, and F.H. Lopes da Silva. Mu rhythm (de)synchronization and EEG single-trial classification of different motor imagery tasks. NeuroImage, 31(1):153-159, 2006.
[11] C. Neuper and G. Pfurtscheller. Evidence for distinct beta resonance frequencies in human EEG related to specific sensorimotor cortical areas. Clin Neurophysiol, 112:2084-2097, 2001.
[12] Benjamin Blankertz, Ryota Tomioka, Steven Lemm, Motoaki Kawanabe, and Klaus-Robert Müller. Optimizing spatial filters for robust EEG single-trial analysis. IEEE Signal Proc Magazine, 25(1):41-56, January 2008.
[13] Keinosuke Fukunaga. Introduction to statistical pattern recognition. Academic Press, Boston, 2nd edition, 1990.
[14] Z. J. Koles. The quantitative extraction and topographic mapping of the abnormal components in the clinical EEG. Electroencephalogr Clin Neurophysiol, 79(6):440-447, 1991.
[15] Steven Lemm, Benjamin Blankertz, Gabriel Curio, and Klaus-Robert Müller. Spatio-spectral filters for improving classification of single trial EEG. IEEE Trans Biomed Eng, 52(9):1541-1548, 2005.
[16] Guido Dornhege, Benjamin Blankertz, Matthias Krauledat, Florian Losch, Gabriel Curio, and Klaus-Robert Müller. Optimizing spatio-temporal filters for improving brain-computer interfacing. In Advances in Neural Inf. Proc. Systems (NIPS 05), volume 18, pages 315-322, Cambridge, MA, 2006. MIT Press.
[17] B. Schölkopf and A.J. Smola. Learning with Kernels. MIT Press, Cambridge, MA, 2002.
[18] K.-R. Müller, S. Mika, G. Rätsch, K. Tsuda, and B. Schölkopf. An introduction to kernel-based learning algorithms. IEEE Neural Networks, 12(2):181-201, May 2001.
[19] Klaus-Robert Müller, Charles W. Anderson, and Gary E. Birch. Linear and non-linear methods for brain-computer interfaces. IEEE Trans Neural Sys Rehab Eng, 11(2):165-169, 2003.
[20] S. Haykin. Neural Networks: A Comprehensive Foundation. Macmillan, New York, 1994.
[21] N.J. Hill, T. N. Lal, M. Tangermann, T. Hinterberger, G. Widman, C. E. Elger, B. Schölkopf, and N. Birbaumer. Classifying event-related desynchronization in EEG, ECoG and MEG signals. In Guido Dornhege et al., editors, Toward Brain-Computer Interfacing, pages 235-260. MIT Press, Cambridge, MA, 2007.
[22] Benjamin Blankertz, Florian Losch, Matthias Krauledat, Guido Dornhege, Gabriel Curio, and Klaus-Robert Müller. The Berlin Brain-Computer Interface: Accurate performance from first-session in BCI-naive subjects. IEEE Trans Biomed Eng, 2008. In press.
[23] Matthias Krauledat, Michael Schröder, Benjamin Blankertz, and Klaus-Robert Müller. Reducing calibration time for brain-computer interfaces: A clustering approach. In B. Schölkopf, J. Platt, and T. Hoffman, editors, Advances in Neural Information Processing Systems 19, pages 753-760, Cambridge, MA, 2007. MIT Press.
[24] Masashi Sugiyama, Matthias Krauledat, and Klaus-Robert Müller. Covariate shift adaptation by importance weighted cross validation. Journal of Machine Learning Research, 8:1027-1061, 2007.
[25] Pradeep Shenoy, Matthias Krauledat, Benjamin Blankertz, Rajesh P. N. Rao, and Klaus-Robert Müller. Towards adaptive classification for BCI. J Neural Eng, 3(1):R13-R23, 2006.
[26] Guido Dornhege, Matthias Krauledat, Klaus-Robert Müller, and Benjamin Blankertz. General signal processing and machine learning tools for BCI. In Guido Dornhege et al., editors, Toward Brain-Computer Interfacing, pages 207-233. MIT Press, Cambridge, MA, 2007.
[27] Matthias Krauledat, Guido Dornhege, Benjamin Blankertz, and Klaus-Robert Müller. Robustifying EEG data analysis by removing outliers. Chaos and Complexity Letters, 2(3):259-274, 2007.
[28] Klaus-Robert Müller, Michael Tangermann, Guido Dornhege, Matthias Krauledat, Gabriel Curio, and Benjamin Blankertz. Machine learning for real-time single-trial EEG-analysis: From brain-computer interfacing to mental state monitoring. J Neurosci Methods, 167(1):82-90, 2008.
[29] Benjamin Blankertz, Guido Dornhege, Matthias Krauledat, Klaus-Robert Müller, and Gabriel Curio. The non-invasive Berlin Brain-Computer Interface: Fast acquisition of effective performance in untrained subjects. NeuroImage, 37(2):539-550, 2007.
[30] Matthias Krauledat, Pradeep Shenoy, Benjamin Blankertz, Rajesh P. N. Rao, and Klaus-Robert Müller. Adaptation in CSP-based BCI systems. In Guido Dornhege et al., editors, Toward Brain-Computer Interfacing, pages 305-309. MIT Press, Cambridge, MA, 2007.
[31] J.D. Haynes and G. Rees. Decoding mental states from brain activity in humans. Nature Reviews Neuroscience, 7:523-534, 2006.
Learning to use Working Memory in Partially Observable Environments through Dopaminergic Reinforcement
Michael T. Todd, Yael Niv, Jonathan D. Cohen
Department of Psychology & Princeton Neuroscience Institute
Princeton University, Princeton, NJ 08544
{mttodd,yael,jdc}@princeton.edu
Abstract
Working memory is a central topic of cognitive neuroscience because it is
critical for solving real-world problems in which information from multiple
temporally distant sources must be combined to generate appropriate
behavior. However, an often neglected fact is that learning to use working
memory effectively is itself a difficult problem. The Gating framework [1-4] is a collection of psychological models that show how dopamine can
train the basal ganglia and prefrontal cortex to form useful working memory
representations in certain types of problems. We unite Gating with machine
learning theory concerning the general problem of memory-based optimal
control [5-6]. We present a normative model that learns, by online temporal
difference methods, to use working memory to maximize discounted future
reward in partially observable settings. The model successfully solves a
benchmark working memory problem, and exhibits limitations similar to
those observed in humans. Our purpose is to introduce a concise, normative
definition of high level cognitive concepts such as working memory and
cognitive control in terms of maximizing discounted future rewards.
1 Introduction
Working memory is loosely defined in cognitive neuroscience as information that is (1)
internally maintained on a temporary or short term basis, and (2) required for tasks in which
immediate observations cannot be mapped to correct actions. It is widely assumed that
prefrontal cortex (PFC) plays a role in maintaining and updating working memory. However,
relatively little is known about how PFC develops useful working memory representations
for a new task. Furthermore, current work focuses on describing the structure and limitations
of working memory, but does not ask why, or in what general class of tasks, is it necessary.
Borrowing from the theory of optimal control in partially observable Markov decision
problems (POMDPs), we frame the psychological concept of working memory as an internal
state representation, developed and employed to maximize future reward in partially
observable environments. We combine computational insights from POMDPs and
neurobiologically plausible models from cognitive neuroscience to suggest a simple
reinforcement learning (RL) model of working memory function that can be implemented
through dopaminergic training of the basal ganglia and PFC.
The Gating framework is a series of cognitive neuroscience models developed to explain
how dopaminergic RL signals can shape useful working memory representations [1-4].
Computationally this framework models working memory as a collection of past
observations, each of which can occasionally be replaced with the current observation, and
addresses the problem of learning when to update each memory element versus maintaining
it. In the original Gating model [1-2] the PFC contained a unitary working memory
representation that was updated whenever a phasic dopamine (DA) burst occurred (e.g., due
to unexpected reward or novelty). That model was the first to connect working memory and
RL via the temporal difference (TD) model of DA firing [7-8], and thus to suggest how
working memory might serve a normative purpose. However, that model had limited
computational flexibility due to the unitary nature of the working memory (i.e., a single-observation memory controlled by a scalar DA signal). More recent work [3-4] has partially
repositioned the Gating framework within the Actor/Critic model of mesostriatal RL [9-10],
positing memory updating as but another cortical action controlled by the dorsal striatal
"actor." This architecture increased computational flexibility by introducing multiple
working memory elements, corresponding to multiple corticostriatal loops, that could be
quasi-independently updated. However, that model combined a number of components
(including supervised and unsupervised learning, and complex neural network dynamics),
making it difficult to understand the relationship between simple RL mechanisms and
working memory function. Moreover, because the model used the Rescorla-Wagner-like
PVLV algorithm [4] rather than TD [7-8] as the model of phasic DA bursts, the model's
behavior and working memory representations were not directly shaped by standard
normative criteria for RL models (i.e., discounted future reward or reward per unit time).
We present a new Gating model, synthesizing the mesostriatal Actor/Critic architecture of
[4] with a normative POMDP framework, and reducing the Gating model to a fourparameter, pure RL model in the process. This produces a model very similar to previous
machine learning work on "model-free" approximate POMDP solvers [5,6], which attempt to
form good solutions without explicit knowledge of the environment's structure or dynamics.
That is, we model working memory as a discrete memory system (a collection of recent
observations) rather than a continuous "belief state" (an inferred probability distribution over
hidden states). In some environments this may permit only an approximate solution.
However, the strength of such a system is that it requires very little prior knowledge, and is
thus potentially useful for animals, who must learn effective behavior and memory-management policies in completely novel environments (i.e., in the absence of a "world
model"). Therefore, we retain the computational flexibility of the more recent Gating models
[3-4], while re-establishing the goal of defining working memory in normative terms [1-2].
To illustrate the strengths and limitations of the model, we apply it to two representative
working-memory tasks. The first is the 12-AX task proposed as a Gating benchmark in [4].
Contrary to previous claims that TD learning is not sufficient to solve this task, we show that
with an eligibility trace (i.e., TD(λ) with 0 < λ < 1), the model can achieve optimal
behavior. The second task highlights important limitations of the model. Since our model is a
POMDP solver and POMDPs are, in general, intractable (i.e., solution algorithms require an
infeasible number of computations), it is clear that our model must ultimately fail to achieve
optimal performance as environments increase even to moderate complexity. However,
human working memory also exhibits sharp limitations. We apply our model to an implicit
artificial grammar learning task [11] and show that it indeed fails in ways reminiscent of
human performance. Moreover, simulating this task with increased working memory
capacity reveals diminishing returns as capacity increases beyond a small number,
suggesting that the "magic number" limited working memory capacity found in humans [12]
might in fact be optimal from a learning standpoint.
2 Model Architecture
As with working memory tasks, a POMDP does not admit an optimal behavior policy based
only on the current observation. Instead, the optimal policy generally depends on some
combination of memory as well as the current observation. Although the type of memory
required varies across POMDPs, in certain cases a finite memory system is a sufficient basis
for an optimal policy. Peshkin, Meuleau, and Kaelbling [6] used an external finite memory
device (e.g., a shopping list) to improve the performance of RL in a model-free POMDP
setting. Their model's "state" variable consisted of the current observation augmented by the
memory device. An augmented action space, consisting of both memory actions and motor
actions, allowed the model to learn effective memory-management and motor policies
simultaneously. We integrate this approach with the Gating model, altering the semantics so
that the external memory device becomes internal working memory (presumed to be supported in PFC), and altering the Gating model so that the role of working memory is explicitly to support optimal behavior (in terms of discounted future reward) in a POMDP.

Like [6], the key difference between our model and standard RL methods is that our state variable includes controlled memory elements (i.e., working memory), which augment the current observation. The action space is similarly augmented to include memory or gating actions, and the model learns by trial-and-error how to update its working memory (to resolve hidden states when such resolution leads to greater rewards) as well as its motor policy. The task for our model, then, is to learn a working memory policy such that the current internal state (i.e., memory and current observation) admits an optimal behavioral policy.

Table 1: Pseudocode of one trial of the model, based on the Actor/Critic architecture with eligibility traces. At each time step:

1. Choose motor action a and gating action g for the current internal state s according to a softmax over the motor and gating action preferences m and n, respectively:
   a ~ Softmax(m(s, ·)), g ~ Softmax(n(s, ·))
2. Update the motor and gating action eligibility traces, e_m and e_n, respectively (update shown for the motor action trace; the gating action trace is analogous):
   e_m(s', a') ← 1 − Pr(a'|s) if s' = s and a' = a; −Pr(a'|s) if s' = s and a' ≠ a; λ·e_m(s', a') otherwise
3. Update the (hidden) environment state h with the motor action, and get the next reward r and observation o:
   h ← Environment(h, a); r, o ← Environment(h)
4. Update the internal state based on the previous state, gating action, and new observation:
   s_new ← Update(s, g, o)
5. Compute the state-value prediction error δ, based on the critic's state-value approximation V(·):
   δ ← r + γ·V(s_new) − V(s)
6. Update the state-value eligibility traces:
   e_v(s') ← λ·e_v(s') + 1 if s' = s; λ·e_v(s') otherwise
7. Update the state values: V(s') ← V(s') + α·δ·e_v(s'), for all s'
8. Update the motor action preferences: m(s', a') ← m(s', a') + α·δ·e_m(s', a'), for all s', a'
9. Update the gating action preferences: n(s', g') ← n(s', g') + α·δ·e_n(s', g'), for all s', g'; then proceed to the next time step (or the next trial).

Following [13], we substitute the critic's state-value prediction error for Williams's (r − b) term [14]. We describe here a single gating actor, but it is straightforward to generalize to an array of independent gating actors as we use in our simulations. γ = discount rate; λ = eligibility trace decay rate; α = learning rate. In all simulations, γ = 0.94, α = 0.1.
Our model (Table 1) consists of a critic, a motor actor, and several gating actors. As in the
standard Actor/Critic architecture, the critic learns to evaluate (internal) states and, based on
the ongoing temporal difference of these values, generates at each time step a prediction
error (PE) signal (thought to correspond to phasic bursts and dips in DA [8]). The PE is used
to train the critic's state values and the policies of the actors. The motor actor also fulfills the
usual role, choosing actions to send to the environment based on its policy and the current
internal state. Finally, gating actors correspond one-to-one with each memory element. At
each time point, each gating actor independently chooses (via a policy based on the internal
state) whether to (1) maintain its element's memory for another time step, or (2) replace
(update) its element's memory with the current observation.
To remain aligned with the Actor/Critic online learning framework of mesostriatal RL [9-10], learning in our model is based on REINFORCE [14] modified for expected discounted
future reward [13], rather than the Monte-Carlo policy learning algorithm in [6] (which is
more suitable for offline, episodic learning). Furthermore, because it has been shown that
eligibility traces are particularly useful when applying TD to POMDPs (e.g., [15-16]), we
used TD(λ), taking the characteristic eligibilities of the REINFORCE algorithm [14] as the
impulse function for a replacing eligibility trace [17]. For simplicity of exposition and
interpretation, we used tabular policy and state-value representations throughout.
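As a concrete reading of Table 1, here is a minimal tabular sketch in Python. The environment interface env.act(action) -> (reward, observation), the per-trial trace reset, the fixed λ value, and all names are illustrative assumptions, not the original implementation.

    import numpy as np
    from collections import defaultdict

    GAMMA, LAMBDA, ALPHA = 0.94, 0.8, 0.1  # discount, trace decay (assumed), learning rate

    def softmax_sample(prefs, rng):
        # Sample an index from a softmax over a preference vector; also
        # return the probabilities (needed for the REINFORCE eligibilities).
        p = np.exp(prefs - prefs.max())
        p /= p.sum()
        return rng.choice(len(prefs), p=p), p

    class GatingAgent:
        # Tabular Actor/Critic with one motor actor and one gating actor per
        # memory slot (gating actions: 0 = maintain, 1 = update with observation).
        def __init__(self, n_motor, n_slots, seed=0):
            self.rng = np.random.default_rng(seed)
            self.V = defaultdict(float)                        # critic
            self.m = defaultdict(lambda: np.zeros(n_motor))    # motor preferences
            self.n = [defaultdict(lambda: np.zeros(2)) for _ in range(n_slots)]
            self.e_v = defaultdict(float)
            self.e_m = defaultdict(lambda: np.zeros(n_motor))
            self.e_n = [defaultdict(lambda: np.zeros(2)) for _ in range(n_slots)]
            self.memory = [None] * n_slots

        def reset(self, obs):
            # Start a trial: clear memory and traces, build the internal state.
            self.memory = [None] * len(self.memory)
            self.e_v.clear()
            self.e_m.clear()
            for e in self.e_n:
                e.clear()
            return (obs, tuple(self.memory))

        def step(self, state, env):
            a, pa = softmax_sample(self.m[state], self.rng)
            gates = [softmax_sample(actor[state], self.rng) for actor in self.n]
            # REINFORCE characteristic eligibilities as replacing-trace impulses.
            for s in list(self.e_m):
                self.e_m[s] *= LAMBDA
            self.e_m[state] = -pa
            self.e_m[state][a] += 1.0
            for slot, (g, pg) in enumerate(gates):
                for s in list(self.e_n[slot]):
                    self.e_n[slot][s] *= LAMBDA
                self.e_n[slot][state] = -pg
                self.e_n[slot][state][g] += 1.0
            reward, obs = env.act(a)               # assumed environment interface
            for slot, (g, _) in enumerate(gates):
                if g == 1:                         # gate open: overwrite the slot
                    self.memory[slot] = obs
            next_state = (obs, tuple(self.memory))
            delta = reward + GAMMA * self.V[next_state] - self.V[state]
            for s in list(self.e_v):
                self.e_v[s] *= LAMBDA
            self.e_v[state] += 1.0
            for s, e in self.e_v.items():
                self.V[s] += ALPHA * delta * e
            for s, e in self.e_m.items():
                self.m[s] += ALPHA * delta * e
            for slot in range(len(self.n)):
                for s, e in self.e_n[slot].items():
                    self.n[slot][s] += ALPHA * delta * e
            return next_state, reward

A trial then interleaves state = agent.reset(first_obs) with repeated calls to state, r = agent.step(state, env).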
Figure 1: 12-AX: Average performance over 40 training runs, each consisting of 2×10^7 time steps. (A) As indicated by reward rate over the last 10^5 time steps, the model learns an optimal policy when the eligibility trace parameter, λ, is between zero and one. (B) The time required for the model to reach 300 consecutive correct trials increases rapidly as λ decreases. (C) Sample sequence of the 12-AX task.
3 Benchmark Performance and Psychological Data
We now describe the model's performance on the 12-AX task proposed as a benchmark for
Gating models [4]. We then turn to a comparison of the model's behavior against actual
psychological data.
3.1 12-AX Performance
The 12-AX task was used in [4] to illustrate the problem of learning a task in which correct
behavior depends on multiple previous observations. In the task (Figure 1C), subjects are
presented with a sequence of observations drawn from the set {1, 2, A, B, C, X, Y, Z}. They
gain rewards by responding L or R according to the following rules: Respond R if (1) the
current observation is an X, the last observation from the set {A, B, C} was an A, and the
last observation from the set {1, 2} was a 1; or (2) the current observation is a Y, the last
observation from the set {A, B, C} was a B, and the last observation from the set {1, 2} was
a 2. Respond L otherwise. In our implementation, reward is 1 for correct responses when the
current observation is X or Y, 0.25 for all other correct responses, and 0 for incorrect
responses.
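Since the 12-AX contingencies are easy to get wrong, a compact Python sketch of the reward rule (with the 1 / 0.25 / 0 magnitudes used here) may help. The stream generator and all names are illustrative assumptions; `history` is taken to be the full observation sequence up to and including the current symbol.

    import random

    def twelve_ax_reward(history, action):
        # Reward for responding `action` ('L' or 'R') to history[-1],
        # under the 12-AX rules with rewards 1 / 0.25 / 0 as in the text.
        obs = history[-1]
        last_digit = next((c for c in reversed(history) if c in "12"), None)
        last_letter = next((c for c in reversed(history) if c in "ABC"), None)
        target = (obs == "X" and last_letter == "A" and last_digit == "1") or \
                 (obs == "Y" and last_letter == "B" and last_digit == "2")
        correct = "R" if target else "L"
        if action != correct:
            return 0.0
        return 1.0 if obs in ("X", "Y") else 0.25

    # Hypothetical stream generator, for illustration only:
    def random_stream(n, seed=0):
        rng = random.Random(seed)
        return [rng.choice("12ABCXYZ") for _ in range(n)]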
We modeled this task using two memory elements, the minimum theoretically necessary for
optimal performance. The results (Figure 1A,B) show that our TD(λ) Gating model can
indeed achieve optimal 12-AX performance. The results also demonstrate the reliance of the
model on the eligibility trace parameter, λ, with best performance at high intermediate
values of λ. When λ = 0, the model finds a suboptimal policy that is only slightly better than
the optimal policy for a model without working memory. With λ = 1 performance is even
worse, as can be expected for an online policy improvement method with non-decaying
traces (a point of comparison with [6] to which we will return in the Discussion). These
results are consistent with previous work showing that TD(0) performs poorly in partially
observable (non-Markovian) settings [15], whereas TD(λ) (without memory) with λ ≈ 0.9
performs best [16]. Indeed, early in training, as our model learns to convert a POMDP to an
MDP via its working memory, the internal state dynamics are not Markovian, and thus an
eligibility trace is necessary.
3.2 Psychological data
We are the first to interpret the Gating framework (and the use of working memory) as an
attempt to solve POMDPs. This brings a large body of theoretical work to bear on the
properties of Gating models. Importantly, it implies that, as task complexity increases, both
the Gating model and humans must fail to find optimal solutions in reasonable time frames due to the generally intractable nature of POMDPs. Given this inescapable conclusion, it is interesting to compare model failures to corresponding human failures: a pattern of failures matching human data would provide support for our model. In this subsection we describe a simulation of artificial grammar learning [11], and then offer an account of the pervasive "magic number" observations concerning limits of working memory capacity (e.g., [12]).

Figure 2: (A) Artificial grammar from [11]. Starting from node 0, the grammar generates a continuing sequence of observations. All nodes with two transitions (edges) make either transition with p=0.5. Edge labels mark grammatical observations. At each transition, the grammatical observation is replaced with a random, ungrammatical, observation with p=0.15. The task is to predict the next observation at each time point. (B) The model shows a gradual increase in sensitivity to sequences of length 2 and 3, but not length 4, replicating the human data. Sensitivity is measured as the probability of choosing the grammatical action for the true state, minus the probability of choosing the grammatical action for the aliased state; 0 indicates complete aliasing, 1 complete resolution. (C) Model performance (reward rate) averaged over training runs with variable numbers of time steps shows diminishing returns as the number of memory elements increases.
In artificial grammar learning, subjects see a seemingly random sequence of observations,
and are instructed to mimic each observation as quickly as possible (or to predict the next
observation) with a corresponding action. Unknown to the subjects, the observation
sequence is generated by a stochastic process called a "grammar" (Figure 2A). Artificial
grammar tasks constitute POMDPs: the (recent) observation history can predict the next
observation better than the current observation alone, so optimal performance requires
subjects to remember information distilled from the history. Although subjects typically
report no knowledge of the underlying structure, after training their reaction times (RTs)
reveal implicit structural knowledge. Specifically, RTs become significantly faster for
"grammatical" as compared to "ungrammatical" observations (see Figure 2).
Cleeremans and McClelland [11] examined the limits of subjects' capacity to detect grammar
structure. The grammar they used is shown in Figure 2A. They found that, although subjects
grew increasingly sensitive to sequences of length two and three throughout training, (as
measured by transient RT increases following ungrammatical observations), they remained
insensitive, even after 60,000 time steps of training, to sequences of length four. This
presumably reflected a failure of subjects' implicit working memory learning mechanisms,
and was confirmed in a second experiment [11]. We replicated these results, as shown in
Figure 2B. To simulate the task, we gave the model two memory elements (results were no
different with three elements), and reward 1 for each correct prediction. We tested the
model's ability to resolve states based on previous observations by contrasting its behavior
across pairs of observation sequences that differed only in the first observation. State
resolution based on sequences of length two, three, and four was represented by VS versus
XS (leading to predictions Q vs. V/P, respectively), SQX versus XQX (S/Q vs. P/T), and
XTVX versus PTVX (S/Q vs. P/T), respectively.
In this task, optimal use of information from sequences of length four or more proved
impossible for the model and, apparently, for humans. To understand intuitively this
limitation, consider a problem of two hidden states, 1 and 2, with optimal actions L and R,
respectively. The states are preceded by identical observation sequences of length n.
However, at n + 1 time steps in the past, observation A precedes state 1, whereas
observation B precedes state 2. The probability that A/B are held in memory for the required
n + 1 time steps decreases geometrically with n, thus the probability of resolving states 1
and 2 decreases geometrically. Because the agent cannot resolve state 1 from state 2, it can
never learn the appropriate 1-L, 2-R action preferences even if it explores those actions, a
more insidious problem than an RL agent faces in a fully observable setting. As a result, the
model can't reinforce optimal gating policies, eventually learning an internal state space and
dynamics that fail to reflect the true environment. The problem is that credit assignment (i.e.,
learning a mapping from working memory to actions) is only useful inasmuch as the internal
state corresponds to the true hidden state of the POMDP, leading to a ?chicken-and-egg?
problem.
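A worked version of this geometric argument, under the simplifying assumption that each gating actor maintains its element with a fixed probability $p$ per time step (in the model this probability is state-dependent):

\[
\Pr(\text{A still in memory after } n+1 \text{ steps}) \;=\; p^{\,n+1},
\]

so that, e.g., $p = 0.9$ and $n = 10$ give $0.9^{11} \approx 0.31$; the chance of ever pairing states 1 and 2 with their correct actions, and hence of reinforcing the 1-L, 2-R preferences, shrinks accordingly.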
Given the preceding argument, one obvious modification that might lead to improved
performance is to increase the number of memory elements. As the number of memory
elements increases, the probability that the model remembers observation A for the required
amount of time approaches one. However, this strategy introduces the curse of
dimensionality due to the rapidly increasing size of the internal state space.
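The cost side of this trade-off can be made explicit as well. Assuming each of $k$ memory elements holds one past observation drawn from an observation set $O$ (ignoring empty slots):

\[
|S| \;=\; |O| \times |O|^{k} \;=\; |O|^{\,k+1},
\]

so the number of state values and policy parameters to be estimated grows exponentially with the number of memory elements; with $|O| = 8$ as in the 12-AX task, each additional element multiplies the tables by another factor of 8.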
This intuitive analysis suggests a normative explanation for the famous "magic number"
limitation observed in human working memory capacity, thought to be about four
independent elements (e.g., [12]). We demonstrate this idea by again simulating the artificial
grammar task, this time averaging performance over a range of training times (1 to 10
million time steps) to capture the idea that humans may practice novel tasks for a typical, but
variable, amount of time. Indeed the averaged results show diminishing returns of increasing
memory elements (Figure 2C). This simulation used tabular (rather than more neurally
plausible) representations and a highly simplified model, so the exact number of policy
parameters and state values to be estimated, time steps, and working memory elements is
somewhat arbitrary in relation to human learning. Still, the model's qualitative behavior
(evidenced by the shape of the resulting curve and the order of magnitude of the optimal
number of working memory elements) is surprisingly reminiscent of human behavior. Based
on this we suggest that the limitation on working memory capacity may be due to a
limitation on learning rather than on storage: it may be impractical to learn to utilize more
than a very small number (i.e., smaller than 10) of independent working memory elements,
due to the curse of dimensionality.
4 Discussion
We have presented a psychological model that suggests that dopaminergic PE signals can
implicitly shape working memory representations in PFC. Our model synthesizes recent
advances in the Gating literature [4] with normative RL theory regarding model-free, finite
memory solutions to POMDPs [6]. We showed that the model learns to behave optimally in
the benchmark 12-AX task. We also related the model's computational limitations to known
limitations of human working memory [11-12].
4.1 Relation to other theoretical work
Other recent work in neural RL has argued that the brain applies memory-based POMDP
solution mechanisms to the real-world problems faced by animals [17-20]. That work
primarily considers model-based mechanisms, in which the temporary memory is a
continuous belief state, and assumes that a function of cerebral cortex is to learn the required
world model, and specifically that PFC should represent temporary goal- or policy-related
information necessary for optimal POMDP behavior. The model that we present here is
related to that line of thinking, demonstrating a model-free, rather than model-based,
mechanism for learning to store policy-related information in PFC. Different learning
systems may form different types of working memory representations. Future work may
investigate the relationship between implicit learning (as in this Gating model) and modelfree POMDP solutions, versus other kinds of learning and model-based POMDP solutions.
Irrespective of the POMDP framework, other work has assumed that there exists a gating
policy that controls task-relevant working memory updating in PFC (e.g., [21]). The present
work further develops a model of how this policy can be learned.
It is interesting to compare our model to previous work on model-free POMDP solutions.
McCallum first emphasized the importance of learning utile distinctions [5], or learning to
resolve two hidden states only if they have different optimal actions. This is an emphasis
that our model shares, at least in spirit. Humans must of course be extremely flexible in their
behavior. Therefore there is an inherent tension between the need to focus cognitive
resources on learning the immediate task, and the need to form a basis of general task
knowledge [3]. It would be interesting for future work to explore how closely the working
memory representations learned by our model align to McCallum's utile (and less
generalizable) distinctions as opposed to more generalizable representations of the
underlying hidden structure of the world, or whether our model could be modified to
incorporate a mixture of both kinds of knowledge, depending on some
exploration/exploitation parameter.
Our model most closely follows the Gating model described in [4], and the theoretical model
described in [6]. Our model is clearly more abstract and less biologically detailed than [4].
However, our intent was to ask whether the important insights and capabilities of that model
could be captured using a four-parameter, pure RL model with a clear normative basis.
Accordingly, we have shown that such a model is comparably equipped to simulate a range
of psychological phenomena. Our model also makes equally testable (albeit different)
predictions about the neural DA signal. Relative to [6], our model places biological and
psychological concerns at the forefront, eliminating the episodic memory requirements of
the Monte-Carlo algorithm. It is perhaps interesting, vis ? vis [6], that our model performed
so poorly when = 1, as this produces a nearly Monte-Carlo scheme. The difference was
likely due to our model's online learning (i.e., we updated the policy at each time step rather
than at the ends of episodes), which invalidates the Monte-Carlo approach. Thus it might be
said that our model is a uniquely psychological variant of that previous architecture.
4.2 Implications for Working Memory and Cognitive Control
Subjects in cognitive control experiments typically face situations in which correct behavior
is indeterminate given only the immediate observation. Working memory is often thought of
as the repository of temporary information that augments the immediate observation to
permit correct behavior, sometimes called goals, context, task set, or decision categories.
These concepts are difficult to define. Here we have proposed a formal theoretical definition
for the cognitive control and working memory constructs. Due to the importance of
temporally distant goals and of information that is not immediately observable, the canonical
cognitive control environment is well captured by a POMDP. Working memory is then the
temporary information, defined and updated by a memory control policy, that the animal
uses to solve these POMDPs. Model-based research might identify working memory with
continuous belief states, whereas our model-free framework identifies working memory with
a discrete collection of recent observations. These may correspond to the products of
different learning systems, but the outcome is the same in either case: cognitive control is
defined as an animal's memory-based POMDP solver, and working memory is defined as the
information, derived from recent history, that the solver requires.
4.3 Psychological and neural validity
Although the intractability of solving a POMDP means that all models such as the one we
present here must ultimately fail to find an optimal solution in a practical amount of time (if
at all), the particular manifestation of computational limitations in our model aligns
qualitatively with that observed in humans. Working memory, the psychological construct
that the Gating model addresses, is famously limited (see [12] for a review). Beyond
canonical working memory capacity limitations, other work has shown subtler limitations
arising in learning contexts (e.g., [11]). The results that we presented here are promising, but
it remains for future work to more fully explore the relation between the failures exhibited
by this model and those exhibited by humans.
In conclusion, we have shown that the Gating framework provides a connection between
high level cognitive concepts such as working memory and cognitive control, systems
neuroscience, and current neural RL theory. The framework's trial-and-error method for
solving POMDPs gives rise to particular limitations that are reminiscent of observed
psychological limits. It remains for future work to further investigate the model's ability to
capture a range of specific psychological and neural phenomena. Our hope is that this link
between working memory and POMDPs will be fruitful in generating new insights, and
suggesting further experimental and theoretical work.
Acknowledgments
We thank Peter Dayan, Randy O'Reilly, and Michael Frank for productive discussions, and
three anonymous reviewers for helpful comments. This work was supported by NIH grant
5R01MH052864 (MT & JDC) and a Human Frontiers Science Program Fellowship (YN)
References
[1] Braver, T. S., & Cohen, J. D. (1999). Dopamine, cognitive control, and schizophrenia: The gating model.
In J. A. Reggia, E. Ruppin, & D. Glanzman (Eds.), Progress in Brain Research (pp. 327-349). Amsterdam,
North-Holland: Elsevier Science.
[2] Braver, T. S., & Cohen, J. D. (2000). On the Control of Control: The Role of Dopamine in Regulating
Prefrontal Function and Working Memory. In S. Monsell, & J. S. Driver (Eds.), Control of Cognitive
Processes: Attention and Performance XVIII (pp. 713-737). Cambridge, MA: MIT Press.
[3] Rougier, A., Noelle, D., Braver, T., Cohen, J., & O'Reilly, R. (2005). Prefrontal Cortex and Flexible
Cognitive Control: Rules Without Symbols. Proceedings of the National Academy of Sciences , 102 (20),
7338-7343.
[4] O'Reilly, R. C., & Frank, M. J. (2006). Making Working Memory Work: A Computational Model of
Learning in the Prefrontal Cortex and Basal Ganglia. Neural Computation , 18, 283-328.
[5] McCallum, A. (1995). Instance-Based Utile Distinctions for Reinforcement Learning with Hidden State.
International Conference on Machine Learning, (pp. 387-395).
[6] Peshkin, L., Meuleau, N., & Kaelbling, L. (1999). Learning Policies with External Memory. Sixteenth
International Conference on Machine Learning, (pp. 307-314).
[7] Montague, P. R., Dayan, P., & Sejnowski, T. J. (1996). A Framework for Mesencephalic Dopamine
Systems Based on Predictive Hebbian Learning. The Journal of Neuroscience , 16 (5), 1936-1947.
[8] Schultz, W., Dayan, P., & Montague, P. R. (1997). A Neural Substrate of Prediction and Reward. Science
, 275, 1593-1599.
[9] Houk, J., Adams, J., & Barto, A. (1995). A Model of how the Basal Ganglia Generate and use Neural
Signals that Predict Reinforcement. In J. Houk, J. Davis, & D. Beiser, Models of Information Processing in
the Basal Ganglia. MIT Press.
[10] Joel, D., Niv, Y., & Ruppin, E. (2002). Actor-critic Models of the Basal Ganglia: New Anatomical and
Computational Perspectives. Neural Networks , 15, 535-547.
[11] Cleeremans, A., & McClelland, J. (1991). Learning the Structure of Event Sequences. Journal of
Experimental Psychology: General , 120 (3), 235-253.
[12] Cowan, N. (2000). The Magical Number 4 in Short-term Memory: A Reconsideration of Mental Storage
Capacity. Behavioral and Brain Sciences , 24, 87-114.
[13] Dayan, P., & Abbott, L. (2001). Theoretical Neuroscience. Cambridge, MA: MIT Press.
[14] Williams, R. (1992). Simple Statistical Gradient-Following Algorithms for Connectionist Reinforcement
Learning. Machine Learning , 8, 229-256.
[15] Singh, S., Jaakkola, T., & Jordan, M. I. (1994). Learning Without State-Estimation in Partially
Observable Markovian Decision Processes. Eleventh International Conference on Machine Learning, (pp.
284-292).
[16] Loch, J., & Singh, S. (1998). Using Eligibility Traces to Find the Best Memoryless Policy in Partially
Observable Markov Decision Processes. Fifteenth International Conference on Machine Learning, (pp. 323-331).
[17] Sutton, R., & Barto, A. (1998). Reinforcement Learning: An Introduction. Cambridge, MA: The MIT
Press.
[18] Daw, N., Courville, A., & Touretzky, D. (2006). Representation and Timing in Theories of the
Dopamine System. Neural Computation , 18, 1637-1677.
[19] Samejima, K., & Doya, K. (2007). Multiple Representations of Belief States and Action Values in
Corticobasal Ganglia Loops. Annals of the New York Academy of Sciences , 213-228.
[20] Yoshida, W., & Ishii, S. (2006). Resolution of Uncertainty in Prefrontal Cortex. Neuron , 50, 781-789.
[21] Dayan, P. (2007). Bilinearity, Rules, and Prefrontal Cortex. Frontiers in Computational Neuroscience ,
1, 1-14.
2,767 | 3,509 | Designing neurophysiology experiments to optimally constrain receptive field models along parametric submanifolds
Jeremy Lewi*
School of Bioengineering
Georgia Institute of Technology
[email protected]
Robert Butera
School of Electrical and Computer Engineering
Georgia Institute of Technology
[email protected]
David M. Schneider
Departments of Neurobiology and Psychology
Columbia University
[email protected]
Sarah M. N. Woolley
Department of Psychology
Columbia University
[email protected]
Liam Paninski†
Department of Statistics and Center for Theoretical Neuroscience
Columbia University
[email protected]
Abstract
Sequential optimal design methods hold great promise for improving the efficiency of neurophysiology experiments. However, previous methods for optimal
experimental design have incorporated only weak prior information about the underlying neural system (e.g., the sparseness or smoothness of the receptive field).
Here we describe how to use stronger prior information, in the form of parametric models of the receptive field, in order to construct optimal stimuli and further
improve the efficiency of our experiments. For example, if we believe that the
receptive field is well-approximated by a Gabor function, then our method constructs stimuli that optimally constrain the Gabor parameters (orientation, spatial
frequency, etc.) using as few experimental trials as possible. More generally, we
may believe a priori that the receptive field lies near a known sub-manifold of the
full parameter space; in this case, our method chooses stimuli in order to reduce
the uncertainty along the tangent space of this sub-manifold as rapidly as possible.
Applications to simulated and real data indicate that these methods may in many
cases improve the experimental efficiency.
1 Introduction
A long standing problem in neuroscience has been collecting enough data to robustly estimate the
response function of a neuron. One approach to this problem is to sequentially optimize a series
of experiments as data is collected [1, 2, 3, 4, 5, 6]. To make optimizing the design tractable, we
typically need to assume our knowledge has some nice mathematical representation. This restriction
often makes it difficult to include the types of prior beliefs held by neurophysiologists; for example
that the receptive field has some parametric form such as a Gabor function [7]. Here we consider
* http://www.lewilab.org
† http://www.stat.columbia.edu/~liam/
[Figure 1: three panels in a two-dimensional parameter space $(\theta_1, \theta_2)$: the Gaussian posterior $p(\vec{\theta}|\vec{\mu}_t, C_t)$; the manifold $\mathcal{M}$ with the projected point $\vec{\mu}_{\mathcal{M},t}$ and its tangent space $T_{\vec{\mu}_{\mathcal{M},t}}\mathcal{M}$; and the restricted posterior $p(\vec{\theta}|\vec{\mu}_{b,t}, C_{b,t})$.]
Figure 1: A schematic illustrating how we use the manifold to improve stimulus design. Our method begins with a Gaussian approximation of the posterior on the full model space after $t$ trials, $p(\vec{\theta}|\vec{\mu}_t, C_t)$. The left panel shows an example of this Gaussian distribution when $\dim(\vec{\theta}) = 2$. The next step involves constructing the tangent space approximation of the manifold $\mathcal{M}$ on which $\vec{\theta}$ is believed to lie, as illustrated in the middle plot; $\mathcal{M}$ is indicated in blue. The MAP estimate (blue dot) is projected onto the manifold to obtain $\vec{\mu}_{\mathcal{M},t}$ (green dot). We then compute the tangent space (dashed red line) by taking the derivative of the manifold at $\vec{\mu}_{\mathcal{M},t}$. The tangent space is the space spanned by vectors in the direction parallel to $\mathcal{M}$ at $\vec{\mu}_{\mathcal{M},t}$. By definition, in the neighborhood of $\vec{\mu}_{\mathcal{M},t}$, moving along the manifold is roughly equivalent to moving along the tangent space. Thus, the tangent space provides a good local approximation of $\mathcal{M}$. In the right panel we compute $p(\vec{\theta}|\vec{\mu}_{b,t}, C_{b,t})$ by evaluating $p(\vec{\theta}|\vec{\mu}_t, C_t)$ on the tangent space. The resulting distribution concentrates its mass on models which are probable under $p(\vec{\theta}|\vec{\mu}_t, C_t)$ and close to the manifold.
the problem of incorporating this strong prior knowledge into an existing algorithm for optimizing
neurophysiology experiments [8].
We start by assuming that a neuron can be modeled as a generalized linear model (GLM). Our
prior knowledge defines a subset of all GLMs in which we expect to find the best model of the
neuron. We represent this class as a sub-manifold in the parameter space of the GLM. We use the
manifold to design an experiment which will provide the largest reduction in our uncertainty about
the unknown parameters. To make the computations tractable we approximate the manifold using
the tangent space evaluated at the maximum a posteriori (MAP) estimate of the parameters projected
onto the manifold. Despite this rather crude approximation of the geometry of the manifold, our
simulations show that this method can significantly improve the informativeness of our experiments.
Furthermore, these methods work robustly even if the best model does not happen to lie directly on
the manifold.
2
Methods
We begin by summarizing the three key elements of an existing algorithm for optimizing neurophysiology experiments. A more thorough discussion is available in [8]. We model the neuron's response function as a mapping between the neuron's input at time $t$, $\vec{s}_t$, and its response, $r_t$. We define the input rather generally as a vector which may consist of terms corresponding to a stimulus, e.g. an image or a sound, or the past activity of the neuron itself, $\{r_{t-1}, r_{t-2}, \ldots\}$. The response, $r_t$, is typically a non-negative integer corresponding to the number of spikes observed in a small time window. Since neural responses are typically noisy, we represent the response function as a conditional distribution, $p(r_t|\vec{s}_t, \vec{\theta})$. In this context, optimizing the experimental design means picking the input for which observing the response will provide the most information about the parameters $\vec{\theta}$ defining the conditional response function.
The first important component of this algorithm is the assumption that $p(r_t|\vec{s}_t, \vec{\theta})$ can be adequately approximated by a generalized linear model [9, 10]. The likelihood of the response depends on the firing rate, $\lambda_t$, which is a function of the input,
$$\lambda_t = E(r_t) = f(\vec{\theta}^T \vec{s}_t), \qquad (1)$$
where $f(\cdot)$ is some nonlinear function which is assumed known.¹ To identify the response function, we need to estimate the coefficients of the linear projection, $\vec{\theta}$. One important property of the GLM is that we can easily derive sufficient conditions to ensure the log-likelihood is concave [11].
The second key component of the algorithm is that we may reasonably approximate the posterior on $\vec{\theta}$ as Gaussian. This approximation is justified by the log-concavity of the likelihood function and asymptotic normality of the posterior distribution given sufficient data [12]. As a result, we can recursively compute a Gaussian approximation of the full posterior, $p(\vec{\theta}|r_{1:t}, s_{1:t}) \approx p(\vec{\theta}|\vec{\mu}_t, C_t)$ [8]. Here $(\vec{\mu}_t, C_t)$ denote the mean and covariance matrix of our Gaussian approximation: $\vec{\mu}_t$ is set to the MAP estimate of $\vec{\theta}$, and $C_t$ to the inverse Hessian of the log-posterior at $\vec{\mu}_t$.
The final component is an efficient method for picking the optimal input on the next trial, $\vec{s}_{t+1}$. Since the purpose of an experiment is to identify the best model, we optimize the design by maximizing the conditional mutual information between $r_{t+1}$ and $\vec{\theta}$ given $\vec{s}_{t+1}$, $I(\vec{\theta}; r_{t+1}|\vec{s}_{t+1})$. The mutual information measures how much we expect observing the response to $\vec{s}_{t+1}$ will reduce our uncertainty about $\vec{\theta}$. We pick the optimal input by maximizing the mutual information with respect to $\vec{s}_{t+1}$; as discussed in [8], this step, along with the updating of the posterior mean and covariance $(\vec{\mu}_t, C_t)$, may be computed efficiently enough for real-time implementation in many cases.
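As a concrete illustration of these three components, the sketch below simulates responses from a Poisson GLM with exponential nonlinearity and performs a one-step Newton (Laplace) update of the Gaussian posterior after each trial. This is a minimal sketch of our own, not the authors' code; the variable names and the i.i.d. unit-power stimulus choice are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 20                                   # dimension of theta (e.g. a vectorized RF)
theta_true = rng.normal(0, 0.3, d)       # ground-truth GLM coefficients

def simulate_response(s, theta):
    """Poisson GLM with exponential nonlinearity: r ~ Poisson(exp(theta' s))."""
    return rng.poisson(np.exp(theta @ s))

def posterior_update(mu, C, s, r):
    """One Newton step giving a Gaussian (Laplace) update of (mu, C).

    For f = exp, the per-trial log-likelihood is r*(theta' s) - exp(theta' s),
    with gradient (r - lam)*s and Hessian -lam*s*s' evaluated at theta = mu.
    """
    lam = np.exp(mu @ s)
    H = np.linalg.inv(C) + lam * np.outer(s, s)   # negative Hessian of log-posterior
    C_new = np.linalg.inv(H)
    mu_new = mu + C_new @ ((r - lam) * s)         # Newton step toward the MAP
    return mu_new, C_new

mu, C = np.zeros(d), np.eye(d)                    # Gaussian prior
for t in range(500):
    s = rng.normal(size=d)
    s /= np.linalg.norm(s)                        # unit-power stimulus (i.i.d. design)
    r = simulate_response(s, theta_true)
    mu, C = posterior_update(mu, C, s, r)
print("estimation error:", np.linalg.norm(mu - theta_true))
```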
2.1 Optimizing experiments to reduce uncertainty along parameter sub-manifolds
For the computation of the mutual information to be tractable, the space of candidate models, $\Theta$, must have some convenient form so that we can derive a suitable expression for the mutual information. Intuitively, to select the optimal design, we need to consider how much information an experiment provides about each possible model. Evaluating the mutual information entails an integral over model space, $\Theta$. The problem with incorporating prior knowledge is that if we restrict the model to some complicated subset of model space we will no longer be able to efficiently integrate over the set of candidate models. We address this problem by showing how local geometric approximations to the parameter sub-manifold can be used to guide optimal sampling while still maintaining a flexible, tractable representation of the posterior distribution on the full model space.
In many experiments, neurophysiologists expect a priori that the receptive field of a neuron will have some low-dimensional parametric structure; e.g. the receptive field might be well-approximated by a Gabor function [13], or by a difference of Gaussians [14], or by a low rank spatiotemporal matrix [15, 13]. We can think of this structure as defining a sub-manifold, $\mathcal{M}$, of the full model space, $\Theta$,
$$\mathcal{M} = \{\vec{\theta} : \vec{\theta} = \phi(\vec{\alpha}),\ \forall \vec{\alpha}\}. \qquad (2)$$
The vector, $\vec{\alpha}$, essentially enumerates the points on the manifold and $\phi(\cdot)$ is a function which maps these points into $\Theta$ space. A natural example is the case where we wish to enforce the constraint that $\vec{\theta}$ has some parametric form, e.g. a Gabor function. The basic idea is that we want to run experiments which can identify exactly where on the manifold the optimal model lies.
Since $\mathcal{M}$ can have some arbitrary nonlinear shape, computing the informativeness of a stimulus using just the models on the manifold is not easy. Furthermore, if we completely restrict our attention to models in $\mathcal{M}$ then we ignore the possibility that our prior knowledge is incorrect. Hence, we do not force the posterior distribution of $\vec{\theta}$ to only have support on the manifold. Rather, we maintain a Gaussian approximation of the posterior on the full space, $\Theta$. However, when optimizing our stimuli we combine our posterior with our knowledge of $\mathcal{M}$ in order to do a better job of maximizing the informativeness of each experiment.
¹ It is worth noting that this simple GLM can be generalized in a number of directions; we may include spike-history effects, nonlinear input terms, and so on [10].
Computing the mutual information $I(r_{t+1}; \vec{\theta}|\vec{s}_{t+1}, s_{1:t}, r_{1:t})$ entails an integral over model space weighted by the posterior probability on each model. We integrate over model space because the informativeness of an experiment clearly depends on what we already know (i.e. the likelihood we assign to each model given the data and our prior knowledge). Furthermore, the informativeness of an experiment will depend on the outcome. Hence, we use what we know about the neuron to make predictions about the experimental outcome. Unfortunately, since the manifold in general has some arbitrary nonlinear shape we cannot easily compute integrals over the manifold. Furthermore, we do not want to continue to restrict ourselves to models on the manifold if the data indicates our prior knowledge is wrong.
We can solve both problems by making use of the tangent space of the manifold, as illustrated in Figure 1 [16]. The tangent space is a linear space which provides a local approximation of the manifold. Since the tangent space is a linear subspace of $\Theta$, integrating over $\vec{\theta}$ in the tangent space is much easier than integrating over all $\vec{\theta}$ on the manifold; in fact, the methods introduced in [8] may be applied directly to this case. The tangent space is a local linear approximation evaluated at a particular point, $\vec{\mu}_{\mathcal{M},t}$, on the manifold. For $\vec{\mu}_{\mathcal{M},t}$ we use the projection of $\vec{\mu}_t$ onto the manifold (i.e., $\vec{\mu}_{\mathcal{M},t}$ is the closest point in $\mathcal{M}$ to $\vec{\mu}_t$). Depending on the manifold, computing $\vec{\mu}_{\mathcal{M},t}$ can be nontrivial; the examples considered in this paper, however, all have tractable numerical solutions to this problem.
The challenge is representing the set of models close to $\vec{\mu}_{\mathcal{M},t}$ in a way that makes integrating over the models tractable. To find models on the manifold close to $\vec{\mu}_{\mathcal{M},t}$ we want to perturb the parameters $\vec{\alpha}$ about the values corresponding to $\vec{\mu}_{\mathcal{M},t}$. Since $\phi$ is in general nonlinear, there is no simple expression for the combination of all such perturbations. However, we can easily approximate the set of $\vec{\theta}$ resulting from these perturbations by taking linear combinations of the partial derivatives of $\phi$ with respect to $\vec{\alpha}$. The partial derivative is the direction in $\Theta$ in which $\vec{\theta}$ moves if we perturb one of the manifold's parameters. Thus, the subspace formed by linear combinations of the partial derivatives approximates the set of models on the manifold close to $\vec{\mu}_{\mathcal{M},t}$. This subspace is the tangent space,
$$T_{\vec{\mu}_{\mathcal{M},t}}\mathcal{M} = \{\vec{\theta} : \vec{\theta} = \vec{\mu}_{\mathcal{M},t} + B\vec{b},\ \forall \vec{b} \in \mathbb{R}^{\dim(\mathcal{M})}\}, \qquad B = \text{orth}\left(\left[\frac{\partial\phi}{\partial\alpha_1} \cdots \frac{\partial\phi}{\partial\alpha_d}\right]\right), \qquad (3)$$
where orth is an orthonormal basis for the column space of its argument. Here $T_x\mathcal{M}$ denotes the tangent space at the point $x$. The columns of $B$ denote the direction in which $\vec{\theta}$ changes if we perturb one of the manifold's parameters. (In general, the directions corresponding to changes in different parameters are not independent; to avoid this redundancy we compute a set of basis vectors for the space spanned by the partial derivatives.)
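Numerically, Eq. 3 amounts to stacking finite-difference estimates of the partial derivatives $\partial\phi/\partial\alpha_i$ into a Jacobian and orthonormalizing its columns. A minimal sketch, assuming a user-supplied function `phi` (the function names and the Gaussian-bump example are our own; `scipy.linalg.orth` computes an orthonormal basis for the column space via the SVD):

```python
import numpy as np
from scipy.linalg import orth

def tangent_basis(phi, alpha, eps=1e-6):
    """Orthonormal basis B for the tangent space of M = {phi(alpha)} at phi(alpha).

    phi maps an m-dimensional parameter vector to a point in the full model
    space; alpha parameterizes the projection mu_{M,t} of the MAP estimate.
    """
    theta0 = phi(alpha)
    J = np.empty((theta0.size, alpha.size))
    for i in range(alpha.size):
        da = np.zeros(alpha.size)
        da[i] = eps
        # finite-difference estimate of the partial derivative d(phi)/d(alpha_i)
        J[:, i] = (phi(alpha + da) - theta0) / eps
    return orth(J)    # orthonormal basis for the column space (Eqn. 3)

# Example: a three-parameter family of Gaussian-bump receptive fields.
grid = np.linspace(-1, 1, 50)
phi = lambda a: a[1] * np.exp(-(grid - a[0]) ** 2 / (2 * a[2] ** 2))
B = tangent_basis(phi, np.array([0.2, 1.0, 0.3]))   # B has shape (50, 3)
```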
We now use our Gaussian posterior on the full parameter space to compute the posterior likelihood of the models in the tangent space. Since the tangent space is a subspace of $\Theta$, restricting our Gaussian approximation, $p(\vec{\theta}|\vec{\mu}_t, C_t)$, to the tangent space means we are taking a slice through our Gaussian approximation of the posterior. Mathematically, we are conditioning on $\vec{\theta} \in T_{\vec{\mu}_{\mathcal{M},t}}\mathcal{M}$. The result is a Gaussian distribution on the tangent space whose parameters may be obtained using the standard Gaussian conditioning formula:
$$p_{tan}(\vec{\theta}|\vec{\mu}_{b,t}, C_{b,t}) = \begin{cases} \mathcal{N}(\vec{b};\ \vec{\mu}_{b,t}, C_{b,t}) & \text{if } \exists\, \vec{b} \text{ s.t. } \vec{\theta} = \vec{\mu}_{\mathcal{M},t} + B\vec{b} \\ 0 & \text{if } \vec{\theta} \notin T_{\vec{\mu}_{\mathcal{M},t}}\mathcal{M} \end{cases} \qquad (4)$$
$$\vec{\mu}_{b,t} = -C_{b,t} B^T C_t^{-1}(\vec{\mu}_{\mathcal{M},t} - \vec{\mu}_t), \qquad C_{b,t} = (B^T C_t^{-1} B)^{-1} \qquad (5)$$
where $\mathcal{N}$ denotes a normal distribution with the specified parameters. Now, rather than optimizing the stimulus by trying to squeeze the uncertainty $p(\vec{\theta}|r_{1:t}, s_{1:t}, \mathcal{M})$ on the nonlinear manifold $\mathcal{M}$ down as much as possible (a very difficult task in general), we pick the stimulus which best reduces the uncertainty $p_{tan}(\vec{\theta}|\vec{\mu}_{b,t}, C_{b,t})$ on the vector space $T_{\vec{\mu}_{\mathcal{M},t}}\mathcal{M}$. We can solve this latter problem directly using the methods presented in [8]. Finally, to handle the possibility that $\vec{\theta} \notin \mathcal{M}$, every so often we optimize the stimulus using the full posterior $p(\vec{\theta}|\vec{\mu}_t, C_t)$. This simple modification ensures that asymptotically we do not ignore directions orthogonal to the manifold; i.e., that we do not get stuck obsessively sampling along the incorrect manifold. As a result, $\vec{\mu}_t$ will always converge asymptotically to the true parameters, even when $\vec{\theta} \notin \mathcal{M}$.

[Figure 2: MAP estimates $\vec{\mu}_t$ at t = 250, 500, 750, and 1000 trials (columns) for the info. max. tangent space, i.i.d., and info. max. full designs (rows); axes are frequency (kHz) vs. time (ms); the true STRF is shown at far right.]
Figure 2: MAP estimates of a STRF obtained using three designs: the new info. max. tangent space design described in the text; an i.i.d. design; and an info. max. design which did not use the assumption that $\vec{\theta}$ corresponds to a low rank STRF. In each case, stimuli were chosen under the spherical power constraint, $||\vec{s}_t||_2 = c$. The true STRF (fit to real zebra finch auditory responses and then used to simulate the observed data) is shown in the last column. (For convenience we rescaled the coefficients to be between -4 and 4.) We see that using the tangent space to optimize the design leads to much faster convergence to the true parameters; in addition, either infomax design significantly outperforms the iid design here. In this case the true STRF did not in fact lie on the manifold $\mathcal{M}$ (chosen to be the set of rank-2 matrices here); thus, these results also show that our knowledge of $\mathcal{M}$ does not need to be exact in order to improve the experimental design.
To summarize, our method proceeds as follows:
0. Initial conditions: start with a log-concave (approximately Gaussian) posterior given $t$ previous trials, summarized by the posterior mean, $\vec{\mu}_t$, and covariance, $C_t$.
1. Compute $\vec{\mu}_{\mathcal{M},t}$, the projection of $\vec{\mu}_t$ on the manifold. (The procedure for computing $\vec{\mu}_{\mathcal{M},t}$ depends on the manifold.)
2. Compute the tangent space of $\mathcal{M}$ at $\vec{\mu}_{\mathcal{M},t}$ using Eqn. 3.
3. Compute the posterior restricted to the tangent space, $p_{tan}(\vec{\theta}|\vec{\mu}_{b,t}, C_{b,t})$, using the standard Gaussian conditioning formula (Eqn. 5).
4. Apply the methods in [8] to find the optimal $t+1$ stimulus, and observe the response $r_{t+1}$.
5. Update the posterior by recursively updating the posterior mean and covariance: $\vec{\mu}_t \to \vec{\mu}_{t+1}$ and $C_t \to C_{t+1}$ (again, as in [8]), and return to step 1.
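Putting steps 0-5 together, the design loop can be sketched as below, reusing `posterior_update`, `tangent_basis`, and `condition_on_tangent_space` from the earlier sketches. The bump-family manifold, the grid-search projection, and the candidate-scoring rule (posterior variance of $\theta^T s$, a crude stand-in for the infomax criterion of [8]) are all illustrative assumptions of ours, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
grid = np.linspace(-1, 1, 50)
phi = lambda a: a[1] * np.exp(-(grid - a[0]) ** 2 / (2 * a[2] ** 2))
theta_true = phi(np.array([0.3, 1.0, 0.2]))        # true model lies on the manifold

def project_onto_manifold(mu):
    """Crude grid-search projection onto the bump family (step 1)."""
    alphas = [np.array([c, amp, wdt])
              for c in np.linspace(-1, 1, 21)
              for amp in np.linspace(0.2, 2.0, 10)
              for wdt in (0.1, 0.2, 0.4)]
    alpha = min(alphas, key=lambda a: np.sum((phi(a) - mu) ** 2))
    return alpha, phi(alpha)

mu, C = np.zeros(grid.size), np.eye(grid.size)     # step 0: Gaussian prior
for t in range(200):
    alpha, mu_M = project_onto_manifold(mu)                    # step 1
    B = tangent_basis(phi, alpha)                              # step 2 (Eqn. 3)
    mu_b, C_b = condition_on_tangent_space(mu, C, mu_M, B)     # step 3 (Eqn. 5)
    C_score = C if t % 10 == 0 else B @ C_b @ B.T              # full posterior every so often
    cands = rng.normal(size=(50, grid.size))
    cands /= np.linalg.norm(cands, axis=1, keepdims=True)      # unit-power candidates
    scores = np.einsum('ij,jk,ik->i', cands, C_score, cands)   # posterior variance of theta' s
    s = cands[np.argmax(scores)]                               # step 4 (stand-in for infomax)
    r = rng.poisson(np.exp(theta_true @ s))                    # simulated response
    mu, C = posterior_update(mu, C, s, r)                      # step 5
```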
3 Results
3.1 Low rank models
To test our methods in a realistic, high-dimensional setting, we simulated a typical auditory neurophysiology [17, 15, 18] experiment. Here, the objective is to identify the spectro-temporal receptive field (STRF) of the neuron. The input and receptive field of the neuron are usually represented in the frequency domain because the cochlea is known to perform a frequency decomposition of sound. The STRF, $\theta(\omega, \tau)$, is a 2-d filter which relates the firing rate at time $t$ to the amount of energy at frequency $\omega$ and time $t - \tau$ in the stimulus. To incorporate this spectrotemporal model in the standard GLM setting, we simply vectorize the matrix $\theta(\omega, \tau)$.
Estimating the STRF can be quite difficult due to its high dimensionality. Several researchers, however, have shown that low-rank assumptions can be used to produce accurate approximations of the receptive field while significantly reducing the number of unknown parameters [19, 13, 15, 20]. A low rank assumption is a more general version of the space-time separable assumption that is often used when studying visual receptive fields [21]. Mathematically, a low-rank assumption means that the matrix corresponding to the STRF can be written as a sum of rank one matrices,
$$\Theta = \text{Mat}(\vec{\theta}) = UV^T \qquad (6)$$
where $\text{Mat}$ indicates the matrix formed by reshaping the vector $\vec{\theta}$ to form the STRF. $U$ and $V$ are low-rank matrices with orthonormal columns. The columns of $U$ and $V$ are the principal components of the column and row spaces of $\Theta$ respectively, and encode the spectral and temporal properties of the STRF, respectively.
We simulated an auditory experiment using an STRF fitted to the actual response of a neuron in the Mesencephalicus lateralis pars dorsalis (MLd) of an adult male zebra finch [18]. To reduce the dimensionality we sub-sampled the STRF in the frequency domain and shortened it in the time domain to yield a $20 \times 21$ STRF. We generated synthetic data by sampling a Poisson process whose instantaneous firing rate was set to the output of a GLM with exponential nonlinearity and $\vec{\theta}$ proportional to the true measured zebra finch STRF.
For the manifold we used the set of $\vec{\theta}$ corresponding to rank-2 matrices. For the STRF we used, the rank-2 assumption turns out to be rather accurate. We also considered manifolds of rank-1 and rank-5 matrices (data not shown), but rank-2 did slightly better. The manifold of rank $r$ matrices is convenient because we can easily project any $\vec{\theta}$ onto $\mathcal{M}$ by reshaping $\vec{\theta}$ as a matrix and then computing its singular value decomposition (SVD). $\vec{\mu}_{\mathcal{M},t}$ is the matrix formed by the first $r$ singular vectors of $\vec{\mu}_t$. To compute the tangent space, Eqn. 3, we compute the derivative of $\vec{\theta}$ with respect to each component of the matrices $U$ and $V$. Using these derivatives we can linearly approximate the effect on $\Theta$ of perturbing the parameters of its principal components.
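Projecting onto the manifold of rank-$r$ matrices is the familiar truncated-SVD (Eckart-Young) operation; a minimal sketch in our own notation:

```python
import numpy as np

def project_rank_r(theta_vec, shape, r):
    """Closest rank-r STRF to Mat(theta): keep the top r singular triplets."""
    Theta = theta_vec.reshape(shape)               # Mat(theta), e.g. shape = (20, 21)
    U, s, Vt = np.linalg.svd(Theta, full_matrices=False)
    Theta_r = (U[:, :r] * s[:r]) @ Vt[:r, :]       # Eckart-Young best rank-r approximation
    return Theta_r.ravel(), U[:, :r], Vt[:r, :].T  # projected theta, plus U and V
```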
In Figure 2 we compare the effectiveness of different experimental designs by plotting the MAP estimate $\vec{\mu}_t$ on several trials. The results clearly show that using the tangent space to design the experiments leads to much faster convergence to the true parameters. Furthermore, using the assumption that the STRF is rank-2 is beneficial even though the true STRF here is not in fact rank-2.
3.2 Real birdsong data
We also tested our method by using it to reshuffle the data collected during an actual experiment to find an ordering which provided a faster decrease in the error of the fitted model. During the experiments, we recorded the responses of MLd neurons when the songs of other birds and ripple noise were presented to the bird (again, as previously described in [18]). We compared a design which randomly shuffled the trials to a design which used our info. max. algorithm to select the order in which the trials are processed. We then evaluated the fitted model by computing the expected log-likelihood of the spike trains, $\sum_\tau E_{\vec{\theta}|\vec{\mu}_t, C_t} \log p(r_\tau|\vec{s}_\tau, \vec{\theta})$, where $\tau$ denotes all the observations made when inputs in a test set are played to the bird.
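The expected test log-likelihood above can be estimated by Monte Carlo, drawing weight vectors from the Gaussian posterior. A sketch for the Poisson/exponential GLM case (our code and variable names, not the authors'):

```python
import numpy as np

def expected_loglik(mu, C, S_test, r_test, n_samples=200, seed=0):
    """Monte Carlo estimate of sum_tau E_{theta~N(mu,C)} log p(r_tau|s_tau, theta)
    for a Poisson GLM with exponential nonlinearity (up to the log r! constant)."""
    rng = np.random.default_rng(seed)
    thetas = rng.multivariate_normal(mu, C, size=n_samples)   # posterior draws
    eta = thetas @ S_test.T                                   # linear predictors, one row per draw
    ll = r_test * eta - np.exp(eta)                           # Poisson log-likelihood terms
    return ll.sum(axis=1).mean()                              # sum over trials, average over draws
```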
To constrain the models we assume the STRF is low-rank and that its principal components are smooth. The smoothing prior means that if we take the Fourier transform of the principal components, the Fourier coefficients of high frequencies should be zero with high probability. In other words, each principal component (the columns of $U$ and $V$) should be a linear combination of sinusoidal functions with low frequencies. In this case we can write the STRF as
$$\Theta = F\alpha\Lambda\beta^T T^T. \qquad (7)$$
Each column of $F$ and $T$ is a sine or cosine function representing one of the basis functions of the principal spectral (columns of $F$) or temporal (columns of $T$) components of the STRF. Each column of $\alpha$ and $\beta$ determines how we form one of the principal components by combining sine and cosine functions. $\Lambda$ is a diagonal matrix which specifies the projection of $\Theta$ onto each principal component.
[Figure 3: expected log-likelihood of test spike trains, $E \log p(r|s_t, \theta_t)$, plotted against trial number (log scale, roughly $10^3$ to $10^4$) for the shuffled design, the info. max. full design, and the info. max. tangent (rank = 2) design.]
Figure 3: Plots comparing the performance of an info. max. design, an info. max. design which uses the tangent space, and a shuffled design. The manifold was the set of rank 2 matrices. The plot shows the expected log-likelihood (prediction accuracy) of the spike trains in response to a birdsong in the test set. Using a rank 2 manifold to constrain the model produces slightly better fits of the data.
The unknown parameters in this case are the matrices $\alpha$, $\beta$, and $\Lambda$. The sinusoidal functions corresponding to the columns of $F$ and $T$ should have frequencies $\{0, \ldots, f_{o,f} m_f\}$ and $\{0, \ldots, f_{o,t} m_t\}$ respectively. $f_{o,f}$ and $f_{o,t}$ are the fundamental frequencies and are set so that one period corresponds to the dimensions of the STRF. $m_f$ and $m_t$ are the largest integers such that $f_{o,f} m_f$ and $f_{o,t} m_t$ are less than the Nyquist frequency. Now to enforce a smoothing prior we can simply restrict the columns of $F$ and $T$ to sinusoids with low frequencies. To project $\Theta$ onto the manifold we simply need to compute $\alpha$, $\beta$ and $\Lambda$ by evaluating the SVD of $F^T \Theta T$.
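A sketch of that projection, assuming $F$ and $T$ hold orthonormalized low-frequency sinusoids in their columns (the basis construction and function names below are our own illustrative choices, not the authors' code):

```python
import numpy as np

def fourier_basis(n, n_freqs):
    """Orthonormal columns: constant plus sine/cosine pairs at frequencies 1..n_freqs."""
    t = np.arange(n) / n
    cols = [np.ones(n)]
    for k in range(1, n_freqs + 1):
        cols += [np.sin(2 * np.pi * k * t), np.cos(2 * np.pi * k * t)]
    Q, _ = np.linalg.qr(np.column_stack(cols))     # orthonormalize the basis
    return Q

def project_smooth_rank_r(Theta, F, T, r):
    """Project Theta onto rank-r STRFs whose components lie in span(F) and span(T)."""
    alpha, lam, beta_t = np.linalg.svd(F.T @ Theta @ T, full_matrices=False)
    # Reconstruct Theta = F alpha Lambda beta' T' keeping the top r components.
    return F @ (alpha[:, :r] * lam[:r]) @ beta_t[:r, :] @ T.T

F = fourier_basis(20, 9)   # low-frequency spectral basis for a 20 x 21 STRF
T = fourier_basis(21, 2)   # low-frequency temporal basis
```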
The results, Figure 3, show that both info. max. designs significantly outperform the randomly shuffled design. Furthermore, incorporating the low-rank assumption using the tangent space improves the info. max. design, albeit only slightly; the estimated STRFs are shown in Figure 4. It is worth noting that in an actual online experiment, we would expect a larger improvement with the info. max. design, since during the experiment we would be free to pick any input. Thus, the different designs could choose radically different stimulus sets; in contrast, when re-analyzing the data offline, all we can do is reshuffle the trials, but the stimulus sets remain the same in the info. max. and iid settings here.
4 Conclusion
We have provided a method for incorporating detailed prior information in existing algorithms for the information-theoretic optimal design of neurophysiology experiments. These methods use realistic assumptions about the neuron's response function and choose significantly more informative stimuli, leading to faster convergence to the true response function using fewer experimental trials. We expect that the inclusion of this strong prior information will help experimentalists contend with the high dimensionality of neural response functions.
5 Acknowledgments
We thank Vincent Vu and Bin Yu for helpful conversations. JL is supported by the Computational Science Graduate Fellowship Program administered by the DOE under contract DE-FG02-97ER25308 and by the NSF IGERT Program in Hybrid Neural Microsystems at Georgia Tech via grant number DGE-0333411. LP is supported by an NSF CAREER award and a Gatsby Initiative in Brain Circuitry Pilot Grant.
[Figure 4: estimated STRFs (scale $\times 10^{-3}$) at trials 1000, 2500, 5000, 7500, 10k, 20k, and 50k for the shuffled design, the info. max. full design, and the info. max. tangent (rank = 2) design; axes are frequency (kHz) vs. time (ms).]
Figure 4: The STRFs estimated using the bird song data. We plot $\vec{\mu}_t$ for trials in the interval over which the expected log-likelihood of the different designs differed the most in Fig. 3. The info. max. designs converge slightly faster than the shuffled design. In these results, we smoothed the STRF by only using frequencies less than or equal to $10 f_{o,f}$ and $2 f_{o,t}$.
References
[1] P. Foldiak, Neurocomputing 38-40, 1217 (2001).
[2] R. C. deCharms, et al., Science 280, 1439 (1998).
[3] T. Gollisch, et al., Journal of Neuroscience 22, 10434 (2002).
[4] F. Edin, et al., Journal of Computational Neuroscience 17, 47 (2004).
[5] C. Machens, et al., Neuron 47, 447 (2005).
[6] K. N. O'Connor, et al., Journal of Neurophysiology 94, 4051 (2005).
[7] D. L. Ringach, J Neurophysiol 88, 455 (2002).
[8] J. Lewi, et al., Neural Computation 21 (2009).
[9] E. Simoncelli, et al., The Cognitive Neurosciences, M. Gazzaniga, ed. (MIT Press, 2004).
[10] L. Paninski, et al., Computational Neuroscience: Theoretical Insights into Brain Function (Elsevier, 2007), chap. Statistical models for neural encoding, decoding, and optimal stimulus design.
[11] L. Paninski, Network: Computation in Neural Systems 15, 243 (2004).
[12] L. Paninski, Neural Computation 17, 1480 (2005).
[13] A. Qiu, et al., J Neurophysiol 90, 456 (2003).
[14] C. Enroth-Cugell, et al., Journal of Physiology 187, 517 (1966).
[15] J. F. Linden, et al., Journal of Neurophysiology 90, 2660 (2003).
[16] J. M. Lee, Introduction to Smooth Manifolds (Springer, 2000).
[17] F. E. Theunissen, et al., Journal of Neuroscience 20, 2315 (2000).
[18] S. M. Woolley, et al., The Journal of Neuroscience 26, 2499 (2006).
[19] D. A. Depireux, et al., Journal of Neurophysiology 85, 1220 (2001).
[20] M. B. Ahrens, et al., Network 19, 35 (2008).
[21] G. C. DeAngelis, et al., J Neurophysiol 69, 1091 (1993).
2,768 | 351 | Connectionist Implementation of a Theory of Generalization
Roger N. Shepard
Department of Psychology
Stanford University
Stanford, CA 94305-2130

Sheila Kannappan
Department of Physics
Harvard University
Cambridge, MA 02138
Abstract
Empirically, generalization between a training and a test stimulus falls off in
close approximation to an exponential decay function of distance between the
two stimuli in the "stimulus space" obtained by multidimensional scaling. Mathematically, this result is derivable from the assumption that an individual takes
the training stimulus to belong to a "consequential" region that includes that
stimulus but is otherwise of unknown location, size, and shape in the stimulus
space (Shepard, 1987). As the individual gains additional information about the
consequential region-by finding other stimuli to be consequential or nOl-the
theory predicts the shape of the generalization function to change toward the
function relating actual probability of the consequence to location in the stimulus
space. This paper describes a natural connectionist implementation of the theory,
and illustrates how implications of the theory for generalization, discrimination,
and classification learning can be explored by connectionist simulation.
1 THE THEORY OF GENERALIZATION
Because we never confront exactly the same situation twice, anything we have learned in
any previous situation can guide us in deciding which action to take in the present situation
only to the extent that the similarity between the two situations is sufficient to justify
generalization of our previous learning to the present situation. Accordingly, principles of
generalization must be foundational for any theory of behavior.
In Shepard (1987) nonarbitrary principles of generalization were sought that would be
optimum in any world in which an object, however distinct from other objects, is generally
a member of some class or natural kind sharing some dispositional property of potential
consequence for the individual. A newly encountered plant or animal might be edible or
poisonous, for example, depending on the hidden genetic makeup of its natural kind.
This simple idea was shown to yield a quantitative explanation of two very general empirical regularities that emerge when generalization data are submitted to methods of multidimensional scaling. The first concerns the shape of the generalization gradient, which describes how response probability falls off with distance of a test stimulus from the training stimulus in the obtained representational space. The second, which is not treated in the present (unidimensional) connectionist implementation, concerns the metric of multidimensional representational spaces. (See Shepard, 1987.)
These results were mathematically derived for the simplest case of an individual who, in the absence of any advance knowledge about particular objects, now encounters one such object and discovers it to have an important consequence. From such a learning event, the individual can conclude that all objects are consequential that are of the same kind as that object and that therefore fall in some consequential region that overlaps the point corresponding to that object in representational space. The individual can only estimate the probability that a given new object is consequential as the conditional probability, given that a region of unknown size and shape overlaps that point, that it also overlaps the point corresponding to the new object. The gradient of generalization then arises because a new object that is closer to the old object in the representational space is more likely to fall within a random region that overlaps the old object.
In order to obtain a quantitative estimate of the probability that the new stimulus is consequential, the individual must integrate over all candidate regions in representational space--with, perhaps, different probabilities assigned, a priori, to different sizes and shapes of region. The results turn out to depend remarkably little on the prior probabilities assigned (Shepard, 1987). For any reasonable choice of these probabilities, integration yields an approximately exponential gradient. And, for the single most reasonable choice in the absence of any advance information about size or shape, namely, the choice of maximum entropy prior probabilities, integration yields exactly the exponential decay function.
These results were obtained by separating the psychological problem of the form of generalization in a psychological space from the psychophysical problem of the mapping from any physical parameter space to that psychological space. The psychophysical mapping, having been shaped by natural selection, would favor a representational space in which regions that correspond to natural kinds, though variously sized and shaped, are not on average systematically elongated or compressed in any particular direction or location of the space. Such a regularized space would provide the best basis for generalization from objects of newly encountered kinds.
The psychophysical mapping thus corresponds to an optimum mapping from input to hidden units in a connectionist system. Indeed, Rumelhart (1990) has recently suggested that the power of the connectionist approach comes from the ability of a set of hidden units to represent the relations among possible inputs according to their significances for the system as a whole rather than according to their superficial relations at the input level. Although in biologically evolved individuals the psychophysical mapping is likely to have been shaped more through evolution than through learning (Shepard, 1989; see also Miller & Todd, 1990), the connectionist implementation to be described here does provide for some fine tuning of this mapping through learning.
Beyond the exponential form of the gradient of generalization following training on a
single stimulus. three basic phenomena of discrimination and classification learning that
Connectionist Implementation of a Theory of Generalization
the theory of generalization should be able to explain are the following: First, when all
and only the stimuli within a compact subset are followed by an important consequence
(reinforcement), an individual should eventually learn to respond to all and only the stimuli
in that subset (Shepard, 1990)-at least to the degree possible, given any noise-induced
uncertainty about locations in the representational space (Shepard, 1986, 1990). Second,
when the positive stimuli do not fonn a compact subset but are interspersed among negative
(nonreinforced) stimuli, generalization should entail a slowing of classification learning
(Nosofsky, 1986; Shepard & Chang, 1963). Third, repeated discrimination or classification
learning, in which a boundary between positive and negative stimuli remains fixed, should
induce a "fine tuning" stretching of the representational space at that boundary such that
any subsequent learning will generalize less fully across that boundary.
Our initial connectionist explorations have been for relatively simple cases using a unidimensional stimulus set and a linear learning rule. These simulations serve to illustrate how information about the probable disposition of a consequential region accrues, in a Bayesian manner, from successive encounters with different stimuli, each of which is or is not followed by the consequence. In complex cases, the cumulative effects on probability of generalized response, on latency of discriminative response, and on fine tuning of the psychophysical mapping may sometimes be easier to establish by simulation than by mathematical derivation. Fortunately for this purpose, the theory of generalization has a connectionist embodiment that is quite direct and neurophysiologically plausible.
2 A CONNECTIONIST IMPLEMENTATION
In the implementation reported here, a linear array of 20 input units represents a set of 20 stimuli differing along a unidimensional continuum, such as the continuum of pitches of tones. The activation level of a given input unit is 1 when its corresponding stimulus is presented and 0 when it is not. (This localist representation of the "input" may be considered the output of a lower-level, massively parallel network for perceptual analysis.)
When such an "input unit" is activated, its activation propagates upward and outward
through successively higher layers of hidden units, giving rise to a cone of activation of that
input unit (Figure la). Higher units are activated by wider ranges of input units (Le., have
larger "receptive fields"). The hidden units thus represent potential consequential regions,
with higher units corresponding to regions of greater sizes in representational space.
The activation from any input unit is also subject to progressive attenuation as it propagates
to succesively higher layers of hidden units. In the present fonn of the model, this attenuation
comes about because the weights of the connections from input to hidden units falloff
exponentially with the heights of the hidden units. (Connection weights are pictorially
indicated in Figure 1 by the heavinesses of the connecting lines.) An exponential falloff
of connection weight with height is natural, in that it corresponds to a decrement of fixed
proportion as the activation propagates through each layer to the next. According to the
generalizaton theory (Shepard, 1987), an exponential falloff is also optimum for the case
of minimum prior knowledge, because it corresponds to the maximum entropy probability
density distribution of possible sizes of a consequential region.
When a response, Rk, is followed by a positive consequence in the presence of a stimulus,
SI, a simple linear rule (either a Hebbian or a delta rule) will increase the weight of the
connection from each representational unit, j, (whether inputor hidden unit) to that response
[Figure 1: panel (a) shows the cone of activation spreading from a single input unit through successively higher layers of hidden units; the horizontal axis is the array of input units corresponding to values on a unidimensional continuum. Panel (b) shows the overlapping cones of activation of a training stimulus S1 and a test stimulus S7 separated by distance d, with their connections to the response unit R.]
Figure 1: Schematic portrayal of the connectionist embodiment. (a) Initial connections from an input unit to hidden units in its "cone of activation." (b) Connections from these hidden units to a response unit following reinforcement of the response.
In the initial implementation considered here, the change in weight from representational unit $j$ to the response unit $R_k$ is simply
$$\Delta w_{jk} = \begin{cases} \lambda a_j (1 - a_k) & \text{upon a positive outcome (reinforcement)} \\ -\lambda a_j a_k & \text{upon a negative outcome (nonreinforcement)} \end{cases}$$
where $\lambda$ is a learning rate parameter and $a_k$ is the current activation level of the response unit $R_k$ (which, tending to be confined between 0 and 1, represents an estimate of the probability of the positive consequence). Following a positive outcome, then, positive weights will connect all the units in the cone of activation for $S_1$ to $R_k$, but with values that decay exponentially with the height of a unit in that cone (Figure 1b).
If, now, a different stimulus, $S_2$, is encountered, some but not all of the representational units that are in the cone of activation of $S_1$ and, hence, that are already connected to $R_k$ will also fall in the cone of activation of $S_2$ (Figure 1b). It is these units in the overlap of the two cones that mediate generalization of the response from $S_1$ to $S_2$. Not only is this simple connectionist scheme neurophysiologically plausible, it is also isomorphic to the theory of generalization (Shepard, 1987) based solely on considerations of optimal behavior in a world consisting of natural kinds.
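A compact way to see the scheme in action is to build the cones of activation explicitly and apply the learning rule above; repeated reinforced training on S10 then yields a roughly exponential generalization gradient across the 20 input units. The following is our own minimal re-implementation, not the authors' original program; the layer count, decay constant, and learning rate are illustrative choices, and we use the delta-rule form of the update.

```python
n_inputs, n_layers, decay, lr = 20, 10, 0.7, 0.5   # illustrative parameters (ours)

def activations(stimulus):
    """Activation of every unit in the cone of activation of `stimulus`.

    Unit (h, c) sits at height h with receptive-field half-width h centred on
    input unit c; its activation decays exponentially with height.
    """
    return {(h, c): decay ** h
            for h in range(n_layers)
            for c in range(n_inputs)
            if abs(c - stimulus) <= h}

w = {}  # weights from representational units to the response unit R

def response(stimulus):
    return min(1.0, sum(w.get(u, 0.0) * a for u, a in activations(stimulus).items()))

def train(stimulus, reinforced):
    """Delta-rule weight update from the text (targets 1 and 0)."""
    a_R = response(stimulus)
    for u, a in activations(stimulus).items():
        w[u] = w.get(u, 0.0) + (lr * a * (1 - a_R) if reinforced else -lr * a * a_R)

for _ in range(20):
    train(10, reinforced=True)                        # repeated reinforced training on S10
gradient = [response(s) for s in range(n_inputs)]     # generalization gradient over stimuli
```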
3 PRELIMINARY CONNECTIONIST EXPLORATIONS
The simulation results for generalization and discrimination learning are summarized in Figure 2. Panel a shows, for different stages of training on stimulus S10, the level of response activation produced by activation of each of the 20 input units. In accordance with theory, this activation decayed exponentially with distance from the training stimulus. The obtained functions differ only by a multiplicative scale factor that increased (toward asymptote) with the amount of training. Following this training, the response connection weights decreased exponentially with the heights of the hidden units (panel b). Later training on a second positive stimulus, S12, created a secondary peak in the activation function (panel c), and still later nonreinforced presentation of a third stimulus, S9, produced a sharp drop in the activation function at the discrimination boundary (panel d).
[Figure 2: panel (a) plots response activation against input unit at successive stages of training on S10; panel (b), titled "Response Connection Weights Following Training on S10," plots the response connection weights against the heights of the hidden units; panel (c) shows the activation function after additional training on S12; panel (d) shows the activation function after nonreinforced presentation of S9.]
Figure 2: Connectionist simulations of generalization and discrimination learning.
Figure 3 presents the results for classification learning in which all stimuli were presented but with response reinforcement for stimuli in the positive set only. When the positive set was compact (panel a) sharp discrimination boundaries formed and response activation approached 1 for all positive stimuli and 0 for all negative stimuli. In accordance with theory and empirical data, generalization entailed slower classification learning when the positive stimuli were dispersed among negative stimuli (panel b)-as shown by a (mean square) error measure (panel c).
[Figure 3: panel (a) plots asymptotic response activation against input unit for a compact positive set; panel (b) for positive stimuli dispersed among negative ones (in each case, the positive set contains 5 out of 20 stimuli); panel (c) plots error against successive learning epochs for the two cases; panel (d) plots activation for a response trained on S10 alone after repeated classification learning, showing sharper drops at the previous discrimination boundaries.]
Figure 3: Connectionist simulations of classification learning.
Finally, panel d illustrates fine tuning of the psychophysical mapping when discrimination boundaries have the same locations for many successively learned classifications. In contrast to the preceding simulations, in which only the response connection weights were allowed to change, here the connection weights from the input units to the hidden units were also allowed to change through "back propagation" (Rumelhart, Hinton, & Williams, 1986). For 400 learning epochs each, each of ten different responses was successively associated with the same five positive stimuli, S10 through S14, while reinforcement was withheld for all the remaining stimuli. Then, yet another response was associated with the single stimulus S10. Although the resulting activation curves for this new response (panel d) are similar to the original generalization curves (Figure 2a), they drop more sharply where classification boundaries were previously located. This fine tuning of the psychophysical mapping proceeded, however, much more slowly than the learning of the classificatory responses themselves.
4 CONCLUDING REMARKS
This is just the beginning of the connectionist exploration of the implications of the generalization theory in more complex cases. In addition to accounting for generalization and classification along a unidimensional continuum, the approach can account for generalization and classification of stimuli differing with respect to multidimensional continua (Shepard, 1987) and also with respect to discrete features (Gluck, 1991; Russell, 1986). Finally, the connectionist implementation should facilitate a proposed extension to the treatment of response latencies as well as probabilities (Shepard, 1987).
Connectionists have sometimes assumed an exponential decay generalization function, and their notion of radial basis functions is not unlike the present concept of consequential regions (see Hanson & Gluck, this volume). What has been advocated here (and in Shepard, 1987) is the derivation of such functions and concepts from first principles.
Acknowledgements
This work was supported by National Science Foundation grant BNS85-11685 to the first author. For help and guidance, we thank Jonathan Bachrach, Geoffrey Miller, Mark Monheit, David Rumelhart, and Steven Sloman.
References
Gluck, M. A. (1991). Stimulus generalization and representation in adaptive network models of category learning. Psychological Science, 2. (In press).
Hanson, S. J. & Gluck, M. A. (1991). Spherical units as dynamic consequential regions: Implications for attention, competition, and categorization. (This volume).
Miller, G. F. & Todd, P. M. (1990). Exploring adaptive agency I: Theory and methods for simulating the evolution of learning. In D. S. Touretzky, J. L. Elman, T. J. Sejnowski, & G. E. Hinton (Eds.), Proceedings of the 1990 Connectionist Models Summer School. San Mateo, CA: Morgan Kaufmann.
Nosofsky, R. M. (1986). Attention, similarity, and the identification-categorization relationship. Journal of Experimental Psychology: General, 114, 39-57.
Rumelhart, D. E. (1990). Representation in connectionist models (The Association Lecture). Attention & Performance Meeting. Ann Arbor, Michigan, July 9.
Rumelhart, D. E., Hinton, G. E., & Williams, R. J. (1986). Learning representations by backpropagating errors. Nature, 323, 533-536.
Russell, S. J. (1986). A quantitative analysis of analogy by similarity. In Proceedings of the National Conference on Artificial Intelligence. Philadelphia, PA: American Association for Artificial Intelligence.
Shepard, R. N. (1986). Discrimination and generalization in identification and classification: Comment on Nosofsky. Journal of Experimental Psychology: General, 115, 50-61.
Shepard, R. N. (1987). Toward a universal law of generalization for psychological science. Science, 237, 1317-1323.
Shepard, R. N. (1989). Internal representation of universal regularities: A challenge for connectionism. In L. Nadel, L. A. Cooper, P. Culicover, & R. M. Harnish (Eds.), Neural Connections, Mental Computation (pp. 103-104). Cambridge, MA: MIT Press.
Shepard, R. N. (1990). Neural nets for generalization and classification: Comment on Staddon and Reid. Psychological Review, 97, 579-580.
Shepard, R. N. & Chang, J. J. (1963). Stimulus generalization in the learning of classifications. Journal of Experimental Psychology, 65, 94-102.
On the Complexity of Linear Prediction:
Risk Bounds, Margin Bounds, and Regularization
Sham M. Kakade
TTI Chicago
Chicago, IL 60637
[email protected]
Karthik Sridharan
TTI Chicago
Chicago, IL 60637
[email protected]
Ambuj Tewari
TTI Chicago
Chicago, IL 60637
[email protected]
Abstract
This work characterizes the generalization ability of algorithms whose predictions are linear in the input vector. To this end, we provide sharp bounds for
Rademacher and Gaussian complexities of (constrained) linear classes, which directly lead to a number of generalization bounds. This derivation provides simplified proofs of a number of corollaries including: risk bounds for linear prediction
(including settings where the weight vectors are constrained by either L2 or L1
constraints), margin bounds (including both L2 and L1 margins, along with more
general notions based on relative entropy), a proof of the PAC-Bayes theorem,
and upper bounds on L2 covering numbers (with Lp norm constraints and relative entropy constraints). In addition to providing a unified analysis, the results
herein provide some of the sharpest risk and margin bounds. Interestingly, our
results show that the uniform convergence rates of empirical risk minimization
algorithms tightly match the regret bounds of online learning algorithms for linear
prediction, up to a constant factor of 2.
1 Introduction
Linear prediction is the cornerstone of an extensive number of machine learning algorithms, including SVMs, logistic and linear regression, the lasso, boosting, etc. A paramount question is to
understand the generalization ability of these algorithms in terms of the attendant complexity restrictions imposed by the algorithm. For example, for the sparse methods (e.g. regularizing based
on L1 norm of the weight vector) we seek generalization bounds in terms of the sparsity level. For
margin based methods (e.g. SVMs or boosting), we seek generalization bounds in terms of either
the L2 or L1 margins. The focus of this paper is to provide a more unified analysis for methods
which use linear prediction.
Given a training set \{(x_i, y_i)\}_{i=1}^n, the paradigm is to compute a weight vector \hat{w} which minimizes the F-regularized \phi-risk. More specifically,
\[ \hat{w} = \operatorname*{argmin}_{w} \; \frac{1}{n}\sum_{i=1}^n \phi(\langle w, x_i \rangle, y_i) + \lambda F(w) \tag{1} \]
where \phi is the loss function, F is the regularizer, and \langle w, x \rangle is the inner product between vectors x and w. In a formulation closely related to the dual problem, we have:
\[ \hat{w} = \operatorname*{argmin}_{w:\, F(w) \le c} \; \frac{1}{n}\sum_{i=1}^n \phi(\langle w, x_i \rangle, y_i) \tag{2} \]
where, instead of regularizing, a hard restriction over the parameter space is imposed (by the constant c). This work provides generalization bounds for an extensive family of regularization functions F.
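To make the two formulations concrete, the following is a minimal sketch (ours, not from the paper; the loss, regularizer, step size, and toy data are all illustrative choices) of solving the regularized problem of Equation (1) for the hinge loss with F(w) = \|w\|_2^2, using plain subgradient descent.

```python
# Minimal sketch (illustrative, not from the paper) of Equation (1):
# hinge loss with F(w) = ||w||_2^2, minimized by subgradient descent.
import numpy as np

def regularized_erm(X, y, lam=0.1, steps=1000, eta=0.01):
    """Minimize (1/n) sum_i hinge(<w, x_i>, y_i) + lam * ||w||_2^2."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(steps):
        margins = y * (X @ w)
        active = margins < 1                    # points where the hinge is active
        # Subgradient of the average hinge loss plus the regularizer.
        grad = -(y[active, None] * X[active]).sum(axis=0) / n + 2 * lam * w
        w -= eta * grad
    return w

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
w_true = rng.normal(size=5)
y = np.sign(X @ w_true)
w_hat = regularized_erm(X, y)
print("train 0-1 loss:", np.mean(np.sign(X @ w_hat) != y))
```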
Rademacher complexities (a measure of the complexity of a function class) provide a direct route
to obtaining such generalization bounds, and this is the route we take. Such bounds are analogous
to VC dimension bounds, but they are typically much sharper and allow for distribution-dependent
bounds. There are a number of methods in the literature to use Rademacher complexities to obtain
either generalization bounds or margin bounds. Bartlett and Mendelson [2002] provide a generalization bound for Lipschitz loss functions. For binary prediction, the results in Koltchinskii and
Panchenko [2002] provide means to obtain margin bounds through Rademacher complexities.
In this work, we provide sharp bounds for Rademacher and Gaussian complexities of linear classes,
with respect to a strongly convex complexity function F (as in Equation 1). These bounds provide
simplified proofs of a number of corollaries: generalization bounds for the regularization algorithm
in Equation 2 (including settings where the weight vectors are constrained by either L2 or L1 constraints), margin bounds (including L2 and L1 margins, and, more generally, for Lp margins), a
proof of the PAC-Bayes theorem, and L2 covering numbers (with Lp norm constraints and relative
entropy constraints). Our bounds are often tighter than previous results and our proofs are all under
this more unified methodology.
Our proof techniques, reminiscent of those for deriving regret bounds for online learning algorithms, are rooted in convex duality (following Meir and Zhang [2003]) and use a more
general notion of strong convexity (as in Shalev-Shwartz and Singer [2006]). Interestingly, the risk
bounds we provide closely match the regret bounds for online learning algorithms (up to a constant
factor of 2), thus showing that the uniform convergence rates of empirical risk minimization algorithms
tightly match the regret bounds of online learning algorithms (for linear prediction). The Discussion
provides a more detailed comparison.
1.1
Related Work
A staggering number of results have focused on this problem in varied special cases. Perhaps the
most extensively studied are margin bounds for the 0-1 loss. For L2-margins (relevant for SVMs,
perceptron based algorithms, etc.), the sharpest bounds are those provided by Bartlett and Mendelson [2002] (using Rademacher complexities) and Langford and Shawe-Taylor [2003], McAllester
[2003] (using the PAC-Bayes theorem). For L1-margins (relevant for boosting, winnow, etc.),
bounds are provided by Schapire et al. [1998] (using a self-contained analysis) and Langford et al.
[2001] (using PAC-Bayes, with a different analysis). Another active line of work is on sparse methods, particularly methods which impose sparsity via L1 regularization (in lieu of the non-convex
L0 norm). For L1 regularization, Ng [2004] provides generalization bounds for this case, which
follow from the covering number bounds of Zhang [2002]. However, these bounds are only stated
as polynomial in the relevant quantities (dependencies are not provided).
Previous to this work, the most unified framework for providing generalization bounds for linear
prediction stem from the covering number bounds in Zhang [2002]. Using these covering number
bounds, Zhang [2002] derives margin bounds in a variety of cases. However, providing sharp generalization bounds for problems with L1 regularization (or L1 constraints in the dual) requires more
delicate arguments. As mentioned, Ng [2004] provides bounds for this case, but the techniques used
by Ng [2004] would result in rather loose dependencies (the dependence on the sample size n would
be n^{-1/4} rather than n^{-1/2}). We discuss this later in Section 4.
2 Preliminaries
Our input space, X, is a subset of a vector space, and our output space is Y. Our samples (X, Y) \in X \times Y are distributed according to some unknown distribution P. The inner product between vectors x and w is denoted by \langle w, x \rangle, where w \in S (here, S is a subset of the dual space to our input vector space). A norm of a vector x is denoted by \|x\|, and the dual norm is defined as \|w\|_\star = \sup\{\langle w, x \rangle : \|x\| \le 1\}. We further assume that for all x \in X, \|x\| \le X.
Let \phi : \mathbb{R} \times Y \to \mathbb{R}_+ be our loss function of interest. Throughout we shall consider linear predictors of the form \langle w, x \rangle. The expected loss of w is denoted by L(w) = \mathbb{E}[\phi(\langle w, x \rangle, y)]. As usual, we are provided with a sequence of i.i.d. samples \{(x_i, y_i)\}_{i=1}^n, and our goal is to minimize our expected loss. We denote the empirical loss as \hat{L}(w) = \frac{1}{n}\sum_{i=1}^n \phi(\langle w, x_i \rangle, y_i).
The restriction we make on our complexity function F is that it is a strongly convex function. In particular, we assume it is strongly convex with respect to our dual norm: a function F : S \to \mathbb{R} is said to be \sigma-strongly convex w.r.t. \|\cdot\|_\star iff \forall u, v \in S, \forall \theta \in [0, 1], we have
\[ F(\theta u + (1-\theta) v) \le \theta F(u) + (1-\theta) F(v) - \frac{\sigma}{2}\,\theta(1-\theta)\,\|u - v\|_\star^2 . \]
See Shalev-Shwartz and Singer [2006] for more discussion on this generalized definition of strong
convexity.
Recall the definition of the Rademacher and Gaussian complexity of a function class F:
\[ R_n(F) = \mathbb{E}\Big[\sup_{f \in F} \frac{1}{n}\sum_{i=1}^n f(x_i)\,\epsilon_i\Big], \qquad G_n(F) = \mathbb{E}\Big[\sup_{f \in F} \frac{1}{n}\sum_{i=1}^n f(x_i)\,\epsilon_i\Big], \]
where, in the former, the \epsilon_i independently take values in \{-1, +1\} with equal probability, and, in the latter, the \epsilon_i are independent, standard normal random variables. In both expectations, (x_1, \ldots, x_n) are i.i.d.
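As an illustration (our own, not the paper's), the empirical Rademacher complexity of an L2-constrained linear class can be estimated by Monte Carlo, since the supremum has a closed form: \sup_{\|w\|_2 \le W} \langle w, \theta \rangle = W \|\theta\|_2.

```python
# Monte Carlo estimate (illustrative) of the empirical Rademacher complexity
# of F_W = {x -> <w, x> : ||w||_2 <= W}, using the closed-form supremum.
import numpy as np

def rademacher_l2(X, W, n_draws=2000, rng=None):
    rng = rng or np.random.default_rng(0)
    n, d = X.shape
    vals = []
    for _ in range(n_draws):
        eps = rng.choice([-1.0, 1.0], size=n)          # Rademacher signs
        vals.append(W * np.linalg.norm(X.T @ eps / n))  # W * ||(1/n) sum eps_i x_i||_2
    return np.mean(vals)

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 20))
X /= np.maximum(np.linalg.norm(X, axis=1, keepdims=True), 1.0)  # enforce ||x|| <= 1
# Compare with the bound X * W * sqrt(1/n), i.e., Equation (3) below with p = 2.
print(rademacher_l2(X, W=1.0), 1.0 / np.sqrt(500))
```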
As mentioned in the Introduction, there are a number of methods in the literature to use Rademacher complexities to obtain either generalization bounds or margin bounds. Two results are particularly useful to us. First, Bartlett and Mendelson [2002] provide the following generalization bound for Lipschitz loss functions. Here, L(f) = \mathbb{E}[\phi(f(x), y)] is the expected loss of f : X \to \mathbb{R}, and \hat{L}(f) = \frac{1}{n}\sum_{i=1}^n \phi(f(x_i), y_i) is the empirical loss.
Theorem 1. (Bartlett and Mendelson [2002]) Assume the loss \phi is Lipschitz (with respect to its first argument) with Lipschitz constant L_\phi, and that \phi is bounded by c. For any \delta > 0 and with probability at least 1 - \delta, simultaneously for all f \in F, we have that
\[ L(f) \le \hat{L}(f) + 2 L_\phi R_n(F) + c \sqrt{\frac{\log(1/\delta)}{2n}} , \]
where R_n(F) is the Rademacher complexity of the function class F, and n is the sample size.
The second result, for binary prediction, from Koltchinskii and Panchenko [2002] provides a margin bound in terms of the Rademacher complexity. The following is a variant of Theorem 2 in Koltchinskii and Panchenko [2002]:
Theorem 2. (Koltchinskii and Panchenko [2002]) The zero-one loss function is given by \phi(f(x), y) = \mathbf{1}[y f(x) \le 0], where y \in \{+1, -1\}. Denote the fraction of the data having \gamma-margin mistakes by K_\gamma(f) := |\{i : y_i f(x_i) < \gamma\}| / n. Assume that \forall f \in F we have \sup_{x \in X} |f(x)| \le C. Then, with probability at least 1 - \delta over the sample, for all margins \gamma > 0 and all f \in F we have,
\[ L(f) \le K_\gamma(f) + 4\,\frac{R_n(F)}{\gamma} + \sqrt{\frac{\log\big(\log_2 \frac{4C}{\gamma}\big)}{n}} + \sqrt{\frac{\log(1/\delta)}{2n}} . \]
(We provide a proof in the appendix.) The above results show that if we provide sharp bounds on the
Rademacher complexities then we obtain sharp generalization bounds. Typically, we desire upper
bounds on the Rademacher complexity that decrease with n.
3 Complexities of Linear Function Classes
Given a subset W \subseteq S, define the associated class of linear functions F_W as F_W := \{x \mapsto \langle w, x \rangle : w \in W\}. Our main theorem bounds the complexity of F_W for certain sets W.
Theorem 3. (Complexity Bounds) Let S be a closed convex set and let F : S \to \mathbb{R} be \sigma-strongly convex w.r.t. \|\cdot\|_\star s.t. \inf_{w \in S} F(w) = 0. Further, let X = \{x : \|x\| \le X\}. Define W = \{w \in S : F(w) \le W_\star^2\}. Then, we have
\[ R_n(F_W) \le X W_\star \sqrt{\frac{2}{\sigma n}}, \qquad G_n(F_W) \le X W_\star \sqrt{\frac{2}{\sigma n}} . \]
The restriction \inf_{w \in S} F(w) = 0 is not a significant one, since adding a constant to F still keeps it strongly convex. Interestingly, the complexity bounds above precisely match the regret bounds for online learning algorithms (for linear prediction), a point which we return to in the Discussion. We first provide a few examples, before proving this result.
3.1 Examples
(1) Lp/Lq norms. Let S = \mathbb{R}^d. Take \|\cdot\|, \|\cdot\|_\star to be the L_p, L_q norms for p \in [2, \infty), 1/p + 1/q = 1, where \|x\|_p := \big(\sum_{j=1}^d |x_j|^p\big)^{1/p}. Choose F(w) = \|w\|_q^2 and note that it is 2(q-1)-strongly convex on \mathbb{R}^d w.r.t. itself. Set X, W as in Theorem 3. Then, we have
\[ R_n(F_W) \le X W_\star \sqrt{\frac{p-1}{n}} . \tag{3} \]
(2) L\infty/L1 norms. Let S = \{w \in \mathbb{R}^d : \|w\|_1 = W_1,\ w_j \ge 0\} be the W_1-scaled probability simplex. Take \|\cdot\|, \|\cdot\|_\star to be the L_\infty, L_1 norms, \|x\|_\infty = \max_{1 \le j \le d} |x_j|. Fix a probability distribution \mu > 0 and let F(w) = \mathrm{entro}_\mu(w) := \sum_j (w_j / W_1) \log(w_j / (W_1 \mu_j)). For any \mu, \mathrm{entro}_\mu(w) is 1/W_1^2-strongly convex on S w.r.t. \|\cdot\|_1. Set X as in Theorem 3 and let W(E) = \{w \in S : \mathrm{entro}_\mu(w) \le E\}. Then, we have
\[ R_n(F_{W(E)}) \le X W_1 \sqrt{\frac{2E}{n}} . \tag{4} \]
Note that if we take \mu to be the uniform distribution, then for any w \in S we have the trivial upper bound \mathrm{entro}_\mu(w) \le \log d. Hence, if we let W := W(\log d) with uniform \mu, then W is the entire scaled probability simplex, and
\[ R_n(F_W) \le X W_1 \sqrt{\frac{2 \log d}{n}} . \tag{5} \]
The restriction w_j \ge 0 can be removed in the definition of S by the standard trick of doubling the dimension of x to include negated copies of each coordinate. So, if we have S = \{w \in \mathbb{R}^d : \|w\|_1 \le W_1\} and we set X as above and W = S, then we get R_n(F_W) \le X W_1 \sqrt{2 \log(2d)/n}. In this way, even though the L_1 norm is not strongly convex (so our previous theorem does not directly apply to it), the class of functions imposed by this L_1 norm restriction is equivalent to that imposed by the above entropy restriction. Hence, we are able to analyze the generalization properties of the optimization problem in Equation 2.
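As a quick numerical illustration (ours, with arbitrary data), the L1-ball bound just derived can be compared against a direct Monte Carlo estimate, using the closed form \sup_{\|w\|_1 \le W_1} \langle w, \theta \rangle = W_1 \|\theta\|_\infty.

```python
# Illustrative check (not from the paper): empirical Rademacher complexity of
# the L1 ball versus the bound X * W1 * sqrt(2 log(2d) / n) derived above.
import numpy as np

rng = np.random.default_rng(0)
n, d, W1 = 1000, 100, 1.0
X = rng.uniform(-1.0, 1.0, size=(n, d))   # ||x||_inf <= 1, so the constant X = 1
draws = [W1 * np.max(np.abs(X.T @ rng.choice([-1.0, 1.0], size=n) / n))
         for _ in range(2000)]
print("Monte Carlo R_n:", np.mean(draws))
print("bound          :", W1 * np.sqrt(2 * np.log(2 * d) / n))
```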
(3) Smooth norms. A norm is (2, D)-smooth on S if for any x, y \in S,
\[ \frac{d^2}{dt^2}\,\|x + t y\|^2 \le 2 D^2 \|y\|^2 . \]
Let \|\cdot\| be a (2, D)-smooth norm and \|\cdot\|_\star be its dual. Lemma 11 in the appendix proves that \|\cdot\|_\star^2 is 2/D^2-strongly convex w.r.t. itself. Set X, W as in Theorem 3. Then, we have
\[ R_n(F_W) \le \frac{X W_\star D}{\sqrt{n}} . \tag{6} \]
(4) Bregman divergences. For a strongly convex F, define the Bregman divergence \Delta_F(w \| v) := F(w) - F(v) - \langle \nabla F(v), w - v \rangle. It is interesting to note that Theorem 3 is still valid if we choose W_v = \{w \in S : \Delta_F(w \| v) \le W_\star^2\} for some fixed v \in S. This is because the Bregman divergence \Delta_F(\cdot \| v) inherits the strong convexity of F.
Except for (5), none of the above bounds depend explicitly on the dimension of the underlying space, and hence they can be easily extended to infinite-dimensional spaces under appropriate assumptions.
3.2 The Proof
First, some background on convex duality is in order. The Fenchel conjugate of F : S \to \mathbb{R} is defined as:
\[ F^\star(\theta) := \sup_{w \in S} \langle w, \theta \rangle - F(w) . \]
A simple consequence of this definition is the Fenchel-Young inequality,
\[ \forall \theta,\ \forall w \in S, \quad \langle w, \theta \rangle \le F(w) + F^\star(\theta) . \]
If F is \sigma-strongly convex, then F^\star is differentiable and
\[ \forall \theta, \eta, \quad F^\star(\theta + \eta) \le F^\star(\theta) + \langle \nabla F^\star(\theta), \eta \rangle + \frac{1}{2\sigma}\,\|\eta\|^2 . \tag{7} \]
See the Appendix of Shalev-Shwartz [2007] for a proof. Using this inequality we can control the expectation of F^\star applied to a sum of independent random variables.
Lemma 4. Let S be a closed convex set and let F : S \to \mathbb{R} be \sigma-strongly convex w.r.t. \|\cdot\|_\star. Let the Z_i be mean-zero independent random vectors such that \mathbb{E}[\|Z_i\|^2] \le V^2. Define S_i := \sum_{j \le i} Z_j. Then F^\star(S_i) - i V^2/2\sigma is a supermartingale. Furthermore, if \inf_{w \in S} F(w) = 0, then \mathbb{E}[F^\star(S_n)] \le n V^2/2\sigma.
Proof. Note that \inf_{w \in S} F(w) = 0 implies F^\star(0) = 0. Inequality (7) gives,
\[ F^\star(S_{i-1} + Z_i) \le F^\star(S_{i-1}) + \langle \nabla F^\star(S_{i-1}), Z_i \rangle + \frac{1}{2\sigma}\,\|Z_i\|^2 . \]
Taking conditional expectation w.r.t. Z_1, \ldots, Z_{i-1} and noting that \mathbb{E}_{i-1}[Z_i] = 0 and \mathbb{E}_{i-1}[\|Z_i\|^2] \le V^2, we get
\[ \mathbb{E}_{i-1}[F^\star(S_i)] \le F^\star(S_{i-1}) + \frac{V^2}{2\sigma} , \]
where \mathbb{E}_{i-1}[\cdot] abbreviates \mathbb{E}[\cdot \mid Z_1, \ldots, Z_{i-1}]. To end the proof, take full expectations, telescope over i, and note that F^\star(S_0) = F^\star(0) = 0.
Like Meir and Zhang [2003] (see Section 5 therein), we begin by using conjugate duality to bound the Rademacher complexity. To finish the proof, we exploit the strong convexity of F by applying the above lemma.
Proof. Fix x_1, \ldots, x_n such that \|x_i\| \le X. Let \theta = \frac{1}{n}\sum_i \epsilon_i x_i, where the \epsilon_i are i.i.d. Rademacher or Gaussian random variables (our proof only requires that \mathbb{E}[\epsilon_i] = 0 and \mathbb{E}[\epsilon_i^2] = 1). Choose arbitrary \lambda > 0. By Fenchel's inequality, we have \langle w, \lambda\theta \rangle \le F(w) + F^\star(\lambda\theta), which implies
\[ \langle w, \theta \rangle \le \frac{F(w)}{\lambda} + \frac{F^\star(\lambda\theta)}{\lambda} . \]
Since F(w) \le W_\star^2 for all w \in W, we have
\[ \sup_{w \in W} \langle w, \theta \rangle \le \frac{W_\star^2}{\lambda} + \frac{F^\star(\lambda\theta)}{\lambda} . \]
Taking expectation (w.r.t. the \epsilon_i), we get
\[ \mathbb{E}\Big[\sup_{w \in W} \langle w, \theta \rangle\Big] \le \frac{W_\star^2}{\lambda} + \frac{1}{\lambda}\,\mathbb{E}[F^\star(\lambda\theta)] . \]
Now set Z_i = \frac{\lambda \epsilon_i x_i}{n} (so that S_n = \lambda\theta) and note that the conditions of Lemma 4 are satisfied with V^2 = \lambda^2 X^2 / n^2, and hence \mathbb{E}[F^\star(\lambda\theta)] \le \frac{\lambda^2 X^2}{2\sigma n}. Plugging this in above, we have
\[ \mathbb{E}\Big[\sup_{w \in W} \langle w, \theta \rangle\Big] \le \frac{W_\star^2}{\lambda} + \frac{\lambda X^2}{2\sigma n} . \]
Setting \lambda = \sqrt{2\sigma n W_\star^2 / X^2} gives
\[ \mathbb{E}\Big[\sup_{w \in W} \langle w, \theta \rangle\Big] \le X W_\star \sqrt{\frac{2}{\sigma n}} , \]
which completes the proof.
4 Corollaries
4.1 Risk Bounds
We now provide generalization error bounds for any Lipschitz loss function \phi with Lipschitz constant L_\phi. Based on the Rademacher generalization bound provided in the Introduction (see Theorem 1) and the bounds on Rademacher complexity proved in the previous section, we obtain the following corollaries.
Corollary 5. Each of the following statements holds with probability at least 1 - \delta over the sample:
- Let W be as in the Lp/Lq norms example. For all w \in W,
\[ L(w) \le \hat{L}(w) + 2 L_\phi X W_\star \sqrt{\frac{p-1}{n}} + L_\phi X W_\star \sqrt{\frac{\log(1/\delta)}{2n}} \]
- Let W be as in the L\infty/L1 norms example. For all w \in W,
\[ L(w) \le \hat{L}(w) + 2 L_\phi X W_1 \sqrt{\frac{2 \log d}{n}} + L_\phi X W_1 \sqrt{\frac{\log(1/\delta)}{2n}} \]
Ng [2004] provides bounds for methods which use L1 regularization. These bounds are only stated as polynomial bounds, and the methods used (covering number techniques from Pollard [1984] and covering number bounds from Zhang [2002]) would provide rather loose bounds (the n dependence would be n^{-1/4}). In fact, even a more careful analysis via Dudley's entropy integral using the covering numbers from Zhang [2002] would result in a worse bound (with additional \log n factors). The above argument is sharp and rather direct.
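To make the scaling concrete, the following small sketch (ours; all constants are illustrative, with unit-norm data and unit-radius classes assumed) evaluates the complexity terms of the two bounds in Corollary 5 as the dimension grows: the p = 2 bound is dimension-free, while the L1 bound grows only logarithmically in d.

```python
# Illustrative comparison (assumed unit constants) of the slack terms in
# Corollary 5 as dimension d grows at fixed sample size n.
import numpy as np

n, L_phi, X_bound, W_bound = 10_000, 1.0, 1.0, 1.0
for d in (10, 1_000, 100_000):
    l2_term = 2 * L_phi * X_bound * W_bound * np.sqrt(1.0 / n)        # p = 2 case
    l1_term = 2 * L_phi * X_bound * W_bound * np.sqrt(2 * np.log(d) / n)
    print(f"d={d:>6}: L2 slack {l2_term:.4f}, L1 slack {l1_term:.4f}")
```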
4.2 Margin Bounds
In this section we restrict ourselves to binary classification, where Y = \{+1, -1\}. Our prediction is given by \mathrm{sign}(\langle w, x \rangle). The zero-one loss function is given by \phi(\langle w, x \rangle, y) = \mathbf{1}[y \langle w, x \rangle \le 0]. Denote the fraction of the data having \gamma-margin mistakes by K_\gamma(w) := |\{i : y_i \langle w, x_i \rangle < \gamma\}| / n. We now demonstrate how to get improved margin bounds using the upper bounds for the Rademacher complexity derived in Section 3.
Based on the Rademacher margin bound provided in the Introduction (see Theorem 2), we get the following corollary, which directly implies the margin bounds we are aiming for. The bound for the p = 2 case has been used to explain the performance of SVMs. Our bound essentially matches the best known bound [Bartlett and Mendelson, 2002], which was an improvement over previous bounds [Bartlett and Shawe-Taylor, 1999] proved using fat-shattering dimension estimates. For the L\infty/L1 case, our bound improves the best known bound [Schapire et al., 1998] by removing a factor of \sqrt{\log n}.
Corollary 6. (Lp Margins) Each of the following statements holds with probability at least 1 - \delta over the sample:
- Let W be as in the Lp/Lq norms example. For all \gamma > 0 and w \in W,
\[ L(w) \le K_\gamma(w) + 4\,\frac{X W_\star}{\gamma}\sqrt{\frac{p-1}{n}} + \sqrt{\frac{\log\big(\log_2 \frac{4 X W_\star}{\gamma}\big)}{n}} + \sqrt{\frac{\log(1/\delta)}{2n}} \]
- Let W be as in the L\infty/L1 norms example. For all \gamma > 0 and w \in W,
\[ L(w) \le K_\gamma(w) + 4\,\frac{X W_1}{\gamma}\sqrt{\frac{2 \log d}{n}} + \sqrt{\frac{\log\big(\log_2 \frac{4 X W_1}{\gamma}\big)}{n}} + \sqrt{\frac{\log(1/\delta)}{2n}} \]
The following result improves the best known results of the same kind, [Langford et al., 2001, Theorem 5] and [Zhang, 2002, Theorem 7], by removing a factor of \sqrt{\log n}. These results themselves were an improvement over previous results obtained using fat-shattering dimension estimates.
Corollary 7. (Entropy Based Margins) Let X be such that for all x \in X, \|x\|_\infty \le X. Consider the class W = \{w \in \mathbb{R}^d : \|w\|_1 \le W_1\}. Fix an arbitrary prior \mu. We have that with probability at least 1 - \delta over the sample, for all margins \gamma > 0 and all weight vectors w \in W,
\[ L(w) \le K_\gamma(w) + 8.5\,\frac{X W_1}{\gamma}\sqrt{\frac{\mathrm{entro}_\mu(w) + 2.5}{n}} + \sqrt{\frac{\log\big(\log_2 \frac{4 X W_1}{\gamma}\big)}{n}} + \sqrt{\frac{\log(1/\delta)}{2n}} , \]
where \mathrm{entro}_\mu(w) := \sum_i \frac{|w_i|}{\|w\|_1} \log\big(\frac{|w_i|}{\mu_i \|w\|_1}\big).
Proof. The proof is provided in the appendix.
4.3 PAC-Bayes Theorem
We now show that (a form of) the PAC-Bayesian theorem [McAllester, 1999] is a consequence of Theorem 3. In the PAC-Bayesian setting, we have a (possibly infinite) set of hypotheses C. We choose some prior distribution \pi over this hypothesis set and, after observing the training data, we choose an arbitrary posterior \rho. The loss we are interested in is \phi_\rho(x, y) = \mathbb{E}_{c \sim \rho}[\phi(c, x, y)], that is, the expectation of the loss when hypotheses c \in C are drawn i.i.d. using distribution \rho. Note that in this section we are considering a more general form of the loss.
The key observation is that we can view \phi_\rho(x, y) as the inner product \langle d\rho(\cdot), \phi(\cdot, x, y) \rangle between the measure d\rho(\cdot) and the loss \phi(\cdot, x, y). This leads to the following straightforward corollary.
Corollary 8. (PAC-Bayes) For a fixed prior \pi over the hypothesis set C, and any loss bounded by 1, with probability at least 1 - \delta over the sample, simultaneously for all choices of the posterior \rho over C we have that,
\[ L_\rho \le \hat{L}_\rho + 4.5\sqrt{\frac{\max\{\mathrm{KL}(\rho \,\|\, \pi),\ 2\}}{n}} + \sqrt{\frac{\log(1/\delta)}{2n}} \tag{8} \]
Proof. The proof is provided in the appendix.
Interestingly, this result is an improvement over the original statement, in which the last term was \sqrt{\log(n/\delta)/n}. Our bound removes this extra \log(n) factor, so, in the regime where we fix \delta and examine large n, this bound is sharper. We note that our goal was not to prove the PAC-Bayes theorem, and we have made little attempt to optimize the constants.
4.4 Covering Number Bounds
It is worth noting that using Sudakov's minoration results we can obtain upper bounds on the L2 (and hence also L1) covering numbers using the Gaussian complexities. The following is a direct corollary of the Sudakov minoration theorem for Gaussian complexities (Theorem 3.18, page 80 of Ledoux and Talagrand [1991]).
Corollary 9. Let F_W be the function class from Theorem 3. There exists a universal constant K > 0 such that its L2 covering number is bounded as follows:
\[ \forall \epsilon > 0, \quad \log\big(N_2(F_W, \epsilon, n)\big) \le \frac{2 K^2 X^2 W_\star^2}{\sigma \epsilon^2} \]
This bound is sharper than those that could be derived from the N_\infty covering number bounds of Zhang [2002].
5 Discussion: Relations to Online, Regret-Minimizing Algorithms
In this section, we make the further assumption that the loss \phi(\langle w, x \rangle, y) is convex in its first argument. We now show that, in the online setting, the regret bounds for linear prediction closely match our risk bounds. The algorithm we consider performs the update,
\[ w_{t+1} = (\nabla F)^{-1}\big(\nabla F(w_t) - \eta\, \nabla_w \phi(\langle w_t, x_t \rangle, y_t)\big) \tag{9} \]
This algorithm captures gradient updates, multiplicative updates, and updates based on the Lp norms, through appropriate choices of F. See Shalev-Shwartz [2007] for discussion.
For the algorithm given by the above update, the following theorem is a bound on the cumulative regret. It is a corollary of Theorem 1 in Shalev-Shwartz and Singer [2006] (and also of Corollary 1 in Shalev-Shwartz [2007]), applied to our linear case.
Corollary 10. (Shalev-Shwartz and Singer [2006]) Let S be a closed convex set and let F : S \to \mathbb{R} be \sigma-strongly convex w.r.t. \|\cdot\|_\star. Further, let X = \{x : \|x\| \le X\} and W = \{w \in S : F(w) \le W_\star^2\}. Then for the update given by Equation 9, if we start with w_1 = \operatorname{argmin}_w F(w), we have that for all sequences \{(x_t, y_t)\}_{t=1}^n,
\[ \sum_{t=1}^n \phi(\langle w_t, x_t \rangle, y_t) - \min_{w \in W} \sum_{t=1}^n \phi(\langle w, x_t \rangle, y_t) \le L_\phi X W_\star \sqrt{\frac{2n}{\sigma}} \]
For completeness, we provide a direct proof in the Appendix. Interestingly, the regret above is precisely n times our complexity bound (when L_\phi = 1), so the average regret matches the complexity bound exactly. Also, our risk bounds are a factor of 2 worse, essentially due to the symmetrization step used in proving Theorem 1.
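For concreteness, the following is a sketch (ours; the toy cost vectors and step size are arbitrary) of one instance of the update in Equation 9: taking F to be the entropy regularizer on the probability simplex, the constrained mirror step reduces to the familiar exponentiated-gradient / multiplicative-weights update.

```python
# Sketch (illustrative) of Equation (9) with the entropy regularizer
# F(w) = sum_j w_j log(w_j / mu_j) on the simplex: the mirror step becomes a
# multiplicative update followed by re-normalization onto the simplex.
import numpy as np

def eg_step(w, grad_loss, eta):
    """One mirror-descent step: w_{t+1} = (grad F)^{-1}(grad F(w_t) - eta * g_t)."""
    w_new = w * np.exp(-eta * grad_loss)
    return w_new / w_new.sum()

# Toy run on linear losses <w, c_t> with random cost vectors c_t.
rng = np.random.default_rng(0)
d, T, eta = 10, 500, 0.1
w = np.full(d, 1.0 / d)
for _ in range(T):
    c = rng.uniform(size=d)
    w = eg_step(w, c, eta)
# Weights end up proportional to exp(-eta * cumulative cost per coordinate).
print(w.round(3))
```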
References
P. L. Bartlett and S. Mendelson. Rademacher and Gaussian complexities: Risk bounds and structural results. Journal of Machine Learning Research, 3:463-482, 2002.
P. L. Bartlett and J. Shawe-Taylor. Generalization performance of support vector machines and other pattern classifiers. In B. Schölkopf, C. J. C. Burges, and A. J. Smola, editors, Advances in Kernel Methods - Support Vector Learning, pages 43-54. MIT Press, 1999.
N. Cesa-Bianchi and G. Lugosi. Prediction, Learning, and Games. Cambridge University Press, 2006.
V. Koltchinskii and D. Panchenko. Empirical margin distributions and bounding the generalization error of combined classifiers. Annals of Statistics, 30(1):1-50, 2002.
J. Langford and J. Shawe-Taylor. PAC-Bayes & margins. In Advances in Neural Information Processing Systems 15, pages 423-430, 2003.
J. Langford, M. Seeger, and N. Megiddo. An improved predictive accuracy bound for averaging classifiers. In Proceedings of the Eighteenth International Conference on Machine Learning, pages 290-297, 2001.
M. Ledoux and M. Talagrand. Probability in Banach Spaces: Isoperimetry and Processes, volume 23 of Ergebnisse der Mathematik und ihrer Grenzgebiete (3). Springer-Verlag, 1991.
D. A. McAllester. Simplified PAC-Bayesian margin bounds. In Proceedings of the Sixteenth Annual Conference on Computational Learning Theory, pages 203-215, 2003.
D. A. McAllester. PAC-Bayesian model averaging. In Proceedings of the Twelfth Annual Conference on Computational Learning Theory, pages 164-170, 1999.
R. Meir and T. Zhang. Generalization error bounds for Bayesian mixture algorithms. Journal of Machine Learning Research, 4:839-860, 2003.
A. Y. Ng. Feature selection, L1 vs. L2 regularization, and rotational invariance. In Proceedings of the Twenty-First International Conference on Machine Learning, 2004.
D. Pollard. Convergence of Stochastic Processes. Springer-Verlag, 1984.
R. E. Schapire, Y. Freund, P. Bartlett, and W. S. Lee. Boosting the margin: A new explanation for the effectiveness of voting methods. The Annals of Statistics, 26(5):1651-1686, October 1998.
S. Shalev-Shwartz. Online Learning: Theory, Algorithms, and Applications. PhD thesis, The Hebrew University, 2007.
S. Shalev-Shwartz and Y. Singer. Convex repeated games and Fenchel duality. In Advances in Neural Information Processing Systems 20, 2006.
M. Warmuth and A. K. Jagota. Continuous versus discrete-time non-linear gradient descent: Relative loss bounds and convergence. In Fifth International Symposium on Artificial Intelligence and Mathematics, 1997.
T. Zhang. Covering number bounds of certain regularized linear function classes. Journal of Machine Learning Research, 2:527-550, 2002.
Signal-to-Noise Ratio Analysis
of Policy Gradient Algorithms
John W. Roberts and Russ Tedrake
Computer Science and
Artificial Intelligence Laboratory
Massachusetts Institute of Technology
Cambridge, MA 02139
Abstract
Policy gradient (PG) reinforcement learning algorithms have strong (local) convergence guarantees, but their learning performance is typically limited by a large
variance in the estimate of the gradient. In this paper, we formulate the variance
reduction problem by describing a signal-to-noise ratio (SNR) for policy gradient
algorithms, and evaluate this SNR carefully for the popular Weight Perturbation
(WP) algorithm. We confirm that SNR is a good predictor of long-term learning performance, and that in our episodic formulation, the cost-to-go function is
indeed the optimal baseline. We then propose two modifications to traditional
model-free policy gradient algorithms in order to optimize the SNR. First, we
examine WP using anisotropic sampling distributions, which introduces a bias
into the update but increases the SNR; this bias can be interpreted as following the
natural gradient of the cost function. Second, we show that non-Gaussian distributions can also increase the SNR, and argue that the optimal isotropic distribution is
a "shell" distribution with a constant magnitude and uniform distribution in direction. We demonstrate that both modifications produce substantial improvements
in learning performance in challenging policy gradient experiments.
1 Introduction
Model-free policy gradient algorithms allow for the optimization of control policies on systems
which are impractical to model effectively, whether due to cost, complexity or uncertainty in the
very structure and dynamics of the system (Kohl & Stone, 2004; Tedrake et al., 2004). However,
these algorithms often suffer from high variance and relatively slow convergence times (Greensmith
et al., 2004). As the same systems on which one wishes to use these algorithms tend to have a
high cost of policy evaluation, much work has been done on maximizing the policy improvement
from any individual evaluation (Meuleau et al., 2000; Williams et al., 2006). Techniques such as
Natural Gradient (Amari, 1998; Peters et al., 2003a) and GPOMDP (Baxter & Bartlett, 2001) have
become popular through their ability to match the performance gains of more basic model-free
policy gradient algorithms while using fewer policy evaluations.
As practitioners of policy gradient algorithms in complicated mechanical systems, our group has a
vested interest in making practical and substantial improvements to the performance of these algorithms. Variance reduction, in itself, is not a sufficient metric for optimizing the performance of PG
algorithms - of greater significance is the magnitude of the variance relative to the magnitude of the
gradient update. Here we formulate a signal-to-noise ratio (SNR) which facilitates simple and fast
evaluations of a PG algorithm's average performance, and facilitates algorithmic performance improvements. Though the SNR does not capture all facets of a policy gradient algorithm's capability
to learn, we show that achieving a high SNR will often result in a superior convergence rate with
less violent variations in the policy.
Through a close analysis of the SNR, and the means by which it is maximized, we find several modifications to traditional model-free policy gradient updates that improve learning performance. The
first of these is the reshaping of distributions such that they are different on different parameters, a
modification which introduces a bias to the update. We show that this reshaping can improve performance, and that the introduced bias results in following the natural gradient of the cost function,
rather than the true point gradient. The second improvement is the use of non-Gaussian distributions for sampling, and through the SNR we find a simple distribution which improves performance
without increasing the complexity of implementation.
2 The weight perturbation update
Consider minimizing a scalar function J(\vec{w}) with respect to the parameters \vec{w} (note that it is possible that J(\vec{w}) is a long-term cost and results from running a system with the parameters \vec{w} until conclusion). The weight perturbation algorithm (Jabri & Flower, 1992) performs this minimization with the update:
\[ \Delta\vec{w} = -\eta\,\big(J(\vec{w} + \vec{z}) - J(\vec{w})\big)\,\vec{z}, \tag{1} \]
where the components of the "perturbation" \vec{z} are drawn independently from a mean-zero distribution, and \eta is a positive scalar controlling the magnitude of the update (the "learning rate").
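Before proceeding with the analysis, a minimal implementation may help fix ideas (this is our own sketch; the learning rate, noise scale, and toy cost are illustrative choices, not values from the paper).

```python
# Minimal weight-perturbation update of Equation (1), with illustrative
# parameters and a toy quadratic cost whose minimum is at the origin.
import numpy as np

def wp_step(J, w, eta=0.1, sigma=0.1, rng=None):
    """One weight-perturbation update on a scalar cost function J."""
    rng = rng or np.random.default_rng()
    z = rng.normal(0.0, sigma, size=w.shape)   # mean-zero perturbation
    return w - eta * (J(w + z) - J(w)) * z

J = lambda w: float(w @ w)
w = np.ones(5)
rng = np.random.default_rng(0)
for _ in range(2000):
    w = wp_step(J, w, rng=rng)
print("final cost:", J(w))                     # decreases toward zero
```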
Performing a first-order Taylor expansion of J(\vec{w} + \vec{z}) yields:
\[ \Delta\vec{w} = -\eta\,\Big(J(\vec{w}) + \sum_i \frac{\partial J}{\partial w_i} z_i - J(\vec{w})\Big)\,\vec{z} = -\eta\,\Big(\sum_i \frac{\partial J}{\partial w_i} z_i\Big)\,\vec{z}. \tag{2} \]
In expectation, this becomes the gradient times a (diagonal) covariance matrix, and reduces to
\[ \mathbb{E}[\Delta\vec{w}] = -\eta\,\sigma^2\,\frac{\partial J}{\partial \vec{w}}, \tag{3} \]
an unbiased estimate of the gradient, scaled by the learning rate and \sigma^2, the variance of the perturbation. However, this unbiasedness comes with a very high variance, as the direction of an update is uniformly distributed. It is only the fact that updates near the direction of the true gradient have a larger magnitude than do those nearly perpendicular to the gradient that allows for the true gradient to be achieved in expectation. Note also that all samples parallel to the gradient are equally useful, whether they be in the same or opposite direction, as the sign does not affect the resulting update.
The WP algorithm is one of the simplest examples of a policy gradient reinforcement learning algorithm, and thus is well suited for analysis. In the special case when \vec{z} is drawn from a Gaussian distribution, weight perturbation can be interpreted as a REINFORCE update (Williams, 1992).
3 SNR for policy gradient algorithms
The SNR is the expected power of the signal (the component of the update in the direction of the true gradient) divided by the expected power of the noise (the component of the update perpendicular to the true gradient). Taking care to ensure that the magnitude of the true gradient does not affect the SNR, we have:
\[ \mathrm{SNR} = \frac{\mathbb{E}\big[\Delta\vec{w}_\parallel^T\, \Delta\vec{w}_\parallel\big]}{\mathbb{E}\big[\Delta\vec{w}_\perp^T\, \Delta\vec{w}_\perp\big]}, \tag{4} \]
where
\[ \Delta\vec{w}_\parallel = \Big(\Delta\vec{w}^T \frac{\vec{J}_w}{\|\vec{J}_w\|}\Big)\frac{\vec{J}_w}{\|\vec{J}_w\|}, \qquad \Delta\vec{w}_\perp = \Delta\vec{w} - \Delta\vec{w}_\parallel, \tag{5} \]
and we write \vec{J}_w(\vec{w}_0) = \frac{\partial J(\vec{w})}{\partial \vec{w}}\big|_{\vec{w} = \vec{w}_0} for convenience.
Intuitively, this expression measures how large a proportion of the update is "useful". If the update were purely in the direction of the gradient the SNR would be infinite, while if the update moved perpendicular to the true gradient it would be zero. As such, all else being equal, a higher SNR should generally perform as well as or better than a lower SNR, and result in less violent swings in cost and policy for the same improvement in performance.
3.1 Weight perturbation with Gaussian distributions
Evaluating the SNR for the WP update in Equation 1 with a deterministic J(\vec{w}) and \vec{z} drawn from a Gaussian distribution yields a surprisingly simple result. If one first considers the numerator:
\[ \mathbb{E}\big[\Delta\vec{w}_\parallel^T \Delta\vec{w}_\parallel\big] = \frac{\eta^2}{\|\vec{J}_w\|^2}\,\mathbb{E}\Big[\sum_{i,j,k,p} J_{w_i} J_{w_j} J_{w_k} J_{w_p}\, z_i z_j z_k z_p\Big] \equiv Q, \tag{6} \]
where we have named this term Q for convenience, as it occurs several times in the expansion of the SNR. We now expand the denominator as follows:
\[ \mathbb{E}\big[\Delta\vec{w}_\perp^T \Delta\vec{w}_\perp\big] = \mathbb{E}\big[\Delta\vec{w}^T \Delta\vec{w} - 2\,\Delta\vec{w}_\parallel^T(\Delta\vec{w}_\parallel + \Delta\vec{w}_\perp) + \Delta\vec{w}_\parallel^T \Delta\vec{w}_\parallel\big] = \mathbb{E}\big[\Delta\vec{w}^T \Delta\vec{w}\big] - 2Q + Q. \tag{7} \]
Substituting Equation (1) into Equation (7) and simplifying results in:
\[ \mathbb{E}\big[\Delta\vec{w}_\perp^T \Delta\vec{w}_\perp\big] = \eta^2\,\mathbb{E}\Big[\sum_{i,j,k} J_{w_i} J_{w_j}\, z_i z_j z_k^2\Big] - Q. \tag{8} \]
We now assume that each component z_i is drawn from a Gaussian distribution with variance \sigma^2. Taking the expected value, Q may be further simplified to:
\[ Q = \frac{\eta^2}{\|\vec{J}_w\|^2}\Big(3\sigma^4\sum_i J_{w_i}^4 + 3\sigma^4\sum_i\sum_{j\ne i} J_{w_i}^2 J_{w_j}^2\Big) = \frac{3\eta^2\sigma^4}{\|\vec{J}_w\|^2}\sum_{i,j} J_{w_i}^2 J_{w_j}^2 = 3\,\eta^2\sigma^4\,\|\vec{J}_w\|^2, \tag{9} \]
\[ \mathbb{E}\big[\Delta\vec{w}_\perp^T \Delta\vec{w}_\perp\big] = \eta^2\sigma^4 (2+N)\,\|\vec{J}_w\|^2 - 3\,\eta^2\sigma^4\,\|\vec{J}_w\|^2 = \eta^2\sigma^4 (N-1)\,\|\vec{J}_w\|^2, \tag{10} \]
where N is the number of parameters. Canceling the common factor \eta^2\sigma^4\|\vec{J}_w\|^2 results in:
\[ \mathrm{SNR} = \frac{3}{N-1}. \tag{11} \]
Thus, for small perturbations the SNR and the parameter number have a simple inverse relationship. This is a particularly concise model for performance scaling in PG algorithms.
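Equation (11) is easy to verify numerically; the following sketch (ours, with arbitrary sample sizes) draws Gaussian perturbations, forms the first-order WP update, and compares the measured SNR to 3/(N-1).

```python
# Monte Carlo check (illustrative) of Equation (11): the WP SNR is 3/(N-1),
# independent of sigma and of the gradient direction.
import numpy as np

def wp_snr(N, sigma=0.05, samples=200_000, rng=None):
    rng = rng or np.random.default_rng(0)
    g = rng.normal(size=N)
    g /= np.linalg.norm(g)                       # unit true gradient
    z = rng.normal(0.0, sigma, size=(samples, N))
    dw = -(z @ g)[:, None] * z                   # first-order WP update (eta = 1)
    par = (dw @ g)[:, None] * g                  # component along the gradient
    perp = dw - par
    return np.mean(np.sum(par**2, axis=1)) / np.mean(np.sum(perp**2, axis=1))

for N in (2, 5, 10):
    print(N, wp_snr(N), 3 / (N - 1))             # the two columns should agree
```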
3.2 Relationship of the SNR to learning performance
To evaluate the degree to which the SNR is correlated with actual learning performance, we ran a number of experiments on a simple quadratic bowl cost function, which may be written as:
\[ J(\vec{w}) = \vec{w}^T A\, \vec{w}, \tag{12} \]
where the optimum is always at the point \vec{0}. The SNR suggests a simple inverse relationship between the number of parameters and the learning performance. To evaluate this claim we performed three tests: 1) true gradient descent on the identity cost function (A set to the identity matrix) as a benchmark, 2) WP on the identity cost function, and 3) WP on 150 randomly generated cost functions (each component drawn from a Gaussian distribution), all of the form given in Equation (12), and for values of N between 2 and 10. For each trial \vec{w} was initially set to \vec{1}. As can be seen in Figure 1a, both the SNR and the reduction in cost after running WP for 100 iterations decrease monotonically as the number of parameters N increases. The fact that this occurs in the case of randomly generated cost functions demonstrates that this effect is not related to the simple form of the identity cost function, but is in fact related to the number of dimensions.
Figure 1: Two comparisons of SNR and learning performance: (A) Relationship as dimension N is increased (Section 3.2). The curves are 15,000 averaged runs, each run 100 iterations. For randomly generated cost functions, 150 A matrices were tested. True gradient descent was run on the identity cost function. The SNR for each case was computed with Equation (11). (B) Relationship as the Gaussian is reshaped by changing variances for the case of a 2D anisotropic cost function (the ratio of gradients in different directions is 5) (Section 4.1.1). The constraint \sigma_1^2 + \sigma_2^2 = 0.1 is imposed, while \sigma_1^2 is varied between 0 and 0.1. For each value of \sigma_1, 15,000 updates were averaged to produce the curve plotted. The plot shows that variances which increase the SNR also improve the performance of the update.
3.3 SNR with parameter-independent additive noise
In many real-world systems, the evaluation of the cost J(\vec{w}) is not deterministic, a property which can significantly affect learning performance. In this section we investigate how additive "noise" in the function evaluation affects the analytical expression for the SNR. We demonstrate that for very high noise WP begins to behave like a random walk, and we find in the SNR the motivation for an improvement in the WP algorithm that will be examined in Section 4.2.
Consider modifying the update seen in Equation (1) to allow for a parameter-independent additive noise term v and a more general baseline b(\vec{w}), and again perform the Taylor expansion. Writing the update with these terms gives:
\[ \Delta\vec{w} = -\eta\,\Big(J(\vec{w}) + \sum_i J_{w_i} z_i - b(\vec{w}) + v\Big)\,\vec{z} = -\eta\,\Big(\sum_i J_{w_i} z_i + \xi(\vec{w})\Big)\,\vec{z}, \tag{13} \]
where we have combined the terms J(\vec{w}), b(\vec{w}) and v into a single random variable \xi(\vec{w}). The new variable \xi(\vec{w}) has two important properties: its mean can be controlled through the value of b(\vec{w}), and its distribution is independent of the parameters \vec{w}; thus \xi(\vec{w}) is independent of all the z_i.
We now essentially repeat the calculation seen in Section 3.1, with the small modification of including the noise term. When we again assume independent z_i, each drawn from identical Gaussian distributions with standard deviation \sigma, we obtain the expression:
\[ \mathrm{SNR} = \frac{\beta + 3}{(N-1)(\beta + 1)}, \qquad \beta = \frac{\big(J(\vec{w}) - b(\vec{w})\big)^2 + \sigma_v^2}{\sigma^2\,\|\vec{J}_w\|^2}, \tag{14} \]
where \sigma_v is the standard deviation of the noise v and we have termed the error component \beta. This expression depends upon the fact that the noise v is mean-zero and independent of the parameters, although, as stated earlier, the assumption that v is mean-zero is not limiting. It is clear that in the limit of small \beta the expression reduces to that seen in Equation (11), while in the limit of very large \beta it becomes the expression for the SNR of a random walk (see Section 3.4). This expression makes it clear that minimizing \beta is desirable, a result that suggests two things: (1) the optimal baseline (from the perspective of the SNR) is the value function (i.e., b^*(\vec{w}) = J(\vec{w})), and (2) higher values of \sigma are desirable, as they reduce \beta by increasing the size of its denominator. However, there is clearly a limit on the size of \sigma due to higher-order terms in the Taylor expansion; very large \sigma will result in samples which do not represent the local gradient. Thus, in the case of noisy measurements, there is some optimal sampling distance that is as large as possible without resulting in poor sampling of the local gradient. This is explored in Section 4.2.1.
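The following small sketch (ours; the constants are arbitrary) evaluates Equation (14) directly, illustrating that the SNR is maximized over baselines at b(w) = J(w).

```python
# Direct evaluation (illustrative) of Equation (14): the baseline b = J(w)
# gives the largest SNR, and a larger sigma shrinks beta via its denominator.
import numpy as np

def snr_noisy(N, J_w, b, sigma_v, sigma, grad_norm):
    beta = ((J_w - b) ** 2 + sigma_v ** 2) / (sigma ** 2 * grad_norm ** 2)
    return (beta + 3.0) / ((N - 1) * (beta + 1.0))

N, J_w, grad_norm, sigma_v = 10, 2.0, 1.0, 0.05
for b in (0.0, 1.0, 2.0):          # b = J(w) = 2.0 yields the highest SNR
    snr = snr_noisy(N, J_w, b, sigma_v, sigma=0.1, grad_norm=grad_norm)
    print(f"b={b}: SNR={snr:.3f}")
```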
3.4 SNR of a Random Walk
Due to the fact that the update is squared in the SNR, only its degree of parallelism to the true gradient is relevant, not its direction. In the case of WP on a deterministic function this is not a concern, as the update is always within 90 degrees of the gradient, and thus the parallel component is always in the correct direction. For a system with noise, however, components of the update parallel to the gradient can in fact be in the incorrect direction, contributing to the SNR even though they do not actually result in learning. This effect only becomes significant when the noise is particularly large, and reaches its extreme in the case of a true random walk (a strong bias in the "wrong" direction is, by this accounting, a good update with an incorrect sign). If one considers moving by a vector drawn from a multivariate Gaussian distribution without any correlation to the cost function, the SNR is particularly easy to compute. Writing \vec{z}_\parallel = \frac{1}{\|\vec{J}_w\|^2}\big(\vec{z}^T\vec{J}_w\big)\vec{J}_w, it takes the form:
\[ \mathrm{SNR} = \frac{\mathbb{E}\big[\vec{z}_\parallel^T\, \vec{z}_\parallel\big]}{\mathbb{E}\big[(\vec{z} - \vec{z}_\parallel)^T(\vec{z} - \vec{z}_\parallel)\big]} = \frac{\sigma^2}{N\sigma^2 - 2\sigma^2 + \sigma^2} = \frac{1}{N-1}. \tag{15} \]
As was discussed in Section 3.3, this value of the SNR is the limiting case of very high measurement noise, a situation which will in fact produce a random walk.
4 Applications of SNR
4.1 Reshaping the Gaussian Distribution
Consider a generalized WP algorithm in which we allow each component z_i to be drawn independently from a separate mean-zero distribution. Returning to the derivation in Section 3.1, we no longer assume each z_i is drawn from an identical distribution, but rather associate each with its own \sigma_i (the vector of the \sigma_i will be referred to as \vec{\sigma}). Removing this assumption results in the SNR:
\[ \mathrm{SNR}(\vec{\sigma}, \vec{J}_w) = \Bigg[\frac{\|\vec{J}_w\|^2\Big(2\sum_i J_{w_i}^2\sigma_i^4 + \sum_{i,j} J_{w_i}^2\sigma_i^2\sigma_j^2\Big)}{3\sum_{i,j} J_{w_i}^2\sigma_i^2\, J_{w_j}^2\sigma_j^2} - 1\Bigg]^{-1}. \tag{16} \]
An important property of this SNR is that it depends only upon the direction of \vec{J}_w and the relative magnitudes of the \sigma_i (as opposed to parameters such as the learning rate \eta and the absolute magnitudes \|\vec{\sigma}\| and \|\vec{J}_w\|).
4.1.1 Effect of reshaping on performance
While the absolute magnitudes of the variance and true gradient do not affect the SNR given in
Equation (16), the relative magnitudes of the different \sigma_i and their relationship to the true gradient
can affect it. To study this property, we investigate a cost function with a significant degree of
anisotropy. Using a cost function of the form given in Equation (12) and N = 2, we choose an A
matrix whose first diagonal component is five times that of the second. We then investigate a series
of possible variances \sigma_1^2 and \sigma_2^2 constrained such that their sum is a constant (\sigma_1^2 + \sigma_2^2 = C). We
observe the performance of the first update (rather than the full trial) as the true gradient can vary
significantly over the course of a trial, thereby having major effects on the SNR even as the variances
are unchanged. As is clear in Figure 1b, as the SNR is increased through the choice of variances the
performance of this update is improved. The variation of the SNR is much more significant than the
change in performance, however this is not surprising as the SNR is infinite if the update is exactly
along the correct direction, while the improvement from this update will eventually saturate.
4.1.2 Demonstration in simulation
The improved performance of the previous section suggests the possibility of a modification to the
WP algorithm in which an estimate of the true gradient is used before each update to select new
variances which are more likely to learn effectively. Changing the shape of the distribution does add
a bias to the update direction, but the resulting biased update is in fact descending the natural gradient
of the cost function. To make use of this opportunity, some knowledge of the likely gradient direction
is required. This knowledge can be provided via a momentum estimate (an average of previous
updates) or through an inaccurate model that is able to capture some facets of the geometry of the
cost function. With this estimated gradient the expression given in Equation (16) can be optimized
over the \sigma_i numerically using a method such as Sequential Quadratic Programming (SQP). Care
must be taken to avoid converging to very narrow distributions (e.g. placing some small minimum
noise on all parameters regardless of the optimization), but ultimately this reshaping of the Gaussian
can provide real performance benefits.
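A sketch of this procedure is given below (this is not the authors' code; the gradient estimate, the variance budget, and the lower bound on each variance are illustrative assumptions, and scipy's SLSQP routine is used as the SQP solver).

```python
# Sketch (illustrative) of reshaping the sampling distribution: maximize the
# SNR of Equation (16) over the per-parameter variances, subject to a fixed
# total variance, using SLSQP (an SQP method) from scipy.
import numpy as np
from scipy.optimize import minimize

def snr(sig2, g):
    """Equation (16), written in terms of the variances sig2_i = sigma_i^2."""
    a = g**2 * sig2
    num = np.linalg.norm(g)**2 * (2 * np.sum(g**2 * sig2**2)
                                  + np.sum(a) * np.sum(sig2))
    return 1.0 / (num / (3 * np.sum(a)**2) - 1.0)

g = np.array([1.0, 0.2, 0.2])      # assumed gradient estimate (e.g., momentum)
C = 0.1                            # assumed total variance budget
cons = {"type": "eq", "fun": lambda s: np.sum(s) - C}
res = minimize(lambda s: -snr(s, g), x0=np.full(3, C / 3),
               bounds=[(1e-4, C)] * 3, constraints=cons, method="SLSQP")
print("reshaped variances:", res.x)  # variance concentrates on the steep axis
```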
Figure 2: (a) The cart-pole system. The task is to apply a horizontal force f to the cart such that
the pole swings to the vertical position. (b) The average of 200 curves showing reduction in cost
versus trial number for both a symmetric Gaussian distribution and a distribution reshaped using the
SNR. The blue shaded region marks the area within one standard deviation for a symmetric Gaussian
distribution, the red region marks one standard deviation for the reshaped distribution and the purple
is within one standard deviation of both. The reshaping began on the eighth trial to give time for the
momentum-based gradient estimate to stabilize.
To demonstrate the improvement in convergence time this reshaping can achieve, weight perturbation was used to develop a barycentric feedback policy for the cart-pole swingup task, where the
cost was defined as a weighted sum of the actuation used and the squared distance from the upright
position. A gradient estimate was obtained through averaging previous updates, and SQP was used
to optimize the SNR prior to each trial. Figure 2 demonstrates the superior performance of the reshaped distribution over a symmetric Gaussian using the same total variance (i.e. the traces of the
covariance matrices for both distributions were the same).
4.1.3 WP with Gaussian distributions follows the natural gradient
The natural gradient for a policy that samples with a mean-zero Gaussian of covariance \Sigma may be written (see Peters et al., 2003b):
\[ \tilde{J}_w = F^{-1} \vec{J}_w, \qquad F_{ij} = \mathbb{E}_{\pi(\vec{\theta};\, \vec{w})}\bigg[\frac{\partial \log \pi(\vec{\theta}; \vec{w})}{\partial w_i}\,\frac{\partial \log \pi(\vec{\theta}; \vec{w})}{\partial w_j}\bigg], \tag{17} \]
where F is the Fisher information matrix, \pi is the sampling distribution, and \vec{\theta} = \vec{w} + \vec{z}. Using the Gaussian form of the sampling distribution, F may be evaluated easily, and becomes \Sigma^{-1}; thus:
\[ \tilde{J}_w = \Sigma\, \vec{J}_w. \tag{18} \]
This is true for all mean-zero multivariate Gaussian distributions; thus the biased update, while no longer following the local point gradient, does follow the natural gradient. It is important to note that the natural gradient is a function of the shape of the sampling distribution, and it is because of this that all sampling distributions of this form can follow the natural gradient.
4.2 Non-Gaussian Distributions
The analysis in Section 3.3 suggests that for a function with noisy measurements there is an optimal sampling distance which depends upon the local noise and gradient as well as the strength of higher-order terms in that region. For a two-dimensional cost function of the form given in Equation (12), Figure 3 shows the SNR's dependence upon the radius of the shell distribution (i.e., the magnitude of the sampling). For various levels of additive mean-zero noise, the SNR was computed for a distribution uniform in angle and fixed in its distance from the mean (this distance is the "sampling magnitude"). The fact that there is a unique maximum for each case suggests the possibility of sampling only at that maximal magnitude, rather than over all magnitudes as is done with a Gaussian, and thus improving SNR and performance. While determining the exact magnitude of maximum SNR may be impractical, by choosing a distribution with uniformly distributed direction and a constant magnitude close to this optimal value, performance can be improved. This idea was tested on the benchmark proposed in (Riedmiller et al., 2007), where comparisons showed it was able to learn at rates similar to optimized RPROP from reasonable initial policies, and was capable of learning from a zero initial policy.
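Sampling from such a shell distribution is straightforward; the following minimal sketch (ours, with an arbitrary radius) draws a direction uniformly and fixes the magnitude, for use in place of the Gaussian z in Equation (1).

```python
# Minimal sampler (illustrative) for the shell distribution: uniformly
# distributed direction with a fixed sampling magnitude r.
import numpy as np

def sample_shell(N, r, rng):
    z = rng.normal(size=N)
    return r * z / np.linalg.norm(z)   # uniform on the radius-r sphere

rng = np.random.default_rng(0)
z = sample_shell(5, r=0.1, rng=rng)
print(np.linalg.norm(z))               # always exactly r
```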
Figure 3: SNR vs. update magnitude for a 2D quadratic cost function. Mean-zero measurement noise is included with variances from 0 to 0.65. As the noise is increased, the sampling magnitude producing the maximum SNR is larger and the SNR achieved is lower. Note that the highest SNR achieved is for the smallest sampling magnitude with no noise, where it approaches the theoretical value (for 2D) of 3. Also note that for small sampling magnitudes and large noises the SNR approaches the random-walk value.
4.2.1 Experimental Demonstration
To provide compelling evidence of improved performance, the shell distribution was implemented
on a laboratory experimental system with actuator limitations and innate stochasticity. We have recently been exploring the use of PG algorithms in an incredibly difficult and exciting control domain, fluid dynamics, and as such applied the shell distribution to a fluid dynamical system. Specifically, we applied learning to a system used to study the dynamics of flapping flight via a wing submerged
in water (see Figure 4 for a description of the system (Vandenberghe et al., 2004)). The task is to
determine the vertical motion producing the highest ratio of rotational displacement to energy input.
Model-free methods are particularly exciting in this domain because direct numerical simulation
can take days (Shelley et al., 2005); in contrast, optimization on the experimental physical flapping
wing can be done in real-time, at the cost of dealing with noise in the evaluation of the cost function;
success here would be enabling for experimental fluid dynamics. We explored the idea of using a
"shell" distribution to improve the performance of our PG learning on this real-world system.
Figure 4: (a) Schematic of the flapping setup. The plate rotates freely about its vertical axis, while
the vertical motion is prescribed by the learnt policy. This vertical motion is coupled with the plate's
rotation through hydrodynamic effects. (b) 5 averaged runs on the flapping plate using Gaussian or
Shell distributions for sampling. The error bars represent one standard deviation in the performance
of different runs at that trial.
Representing the vertical position as a function of time with a 13-point periodic cubic spline, a
5D space was searched (points 1, 7 and 13 were fixed at zero, while points 2 and 8, 3 and 9 etc.
were set to equal and opposite values determined by the control parameters). Beginning with a
smoothed square wave, WP was run for 20 updates using shell distributions and Gaussians. Both
forms of distributions were run 5 times and averaged to produce the curves in Figure 4. The sampling
magnitude of the shell distribution was set to be the expected value of the length of a sample from
the Gaussian distribution, while all other parameters were set equal. With optimized sampling, we
acquired locally optimal policies in as little as 15 minutes, with repeated optimizations from very
different initial policies converging to the same waveform. The result deepened our understanding
of this fluid system and suggests promising applications to other fluid systems of similar complexity.
5 Conclusion
In this paper we presented an expression for the SNR of PG algorithms, and looked in detail at the common case of WP. This expression gives us a quantitative means of evaluating the expected performance of a PG algorithm, although the SNR does not completely capture an algorithm's capacity to learn. SNR analysis revealed two distinct mechanisms for improving the WP update: perturbing different parameters with different distributions, and using non-Gaussian distributions. Both showed real improvement on highly nonlinear problems (the cart-pole example used a very high-dimensional policy), without knowledge of the problem's dynamics and structure. We believe that SNR-optimized PG algorithms show promise for many complicated, real-world applications.
6 Acknowledgements
The authors thank Drs. Lionel Moret and Jun Zhang for valuable assistance with the heaving foil.
References
Amari, S. (1998). Natural gradient works efficiently in learning. Neural Computation, 10, 251–276.
Baxter, J., & Bartlett, P. (2001). Infinite-horizon policy-gradient estimation. Journal of Artificial
Intelligence Research, 15, 319–350.
Greensmith, E., Bartlett, P. L., & Baxter, J. (2004). Variance reduction techniques for gradient
estimates in reinforcement learning. Journal of Machine Learning Research, 5, 1471–1530.
Jabri, M., & Flower, B. (1992). Weight perturbation: An optimal architecture and learning technique
for analog VLSI feedforward and recurrent multilayer networks. IEEE Trans. Neural Netw., 3,
154–157.
Kohl, N., & Stone, P. (2004). Policy gradient reinforcement learning for fast quadrupedal locomotion. Proceedings of the IEEE International Conference on Robotics and Automation (ICRA).
Meuleau, N., Peshkin, L., Kaelbling, L. P., & Kim, K.-E. (2000). Off-policy policy search. NIPS.
Peters, J., Vijayakumar, S., & Schaal, S. (2003a). Policy gradient methods for robot control (Technical Report CS-03-787). University of Southern California.
Peters, J., Vijayakumar, S., & Schaal, S. (2003b). Reinforcement learning for humanoid robotics.
Proceedings of the Third IEEE-RAS International Conference on Humanoid Robots.
Riedmiller, M., Peters, J., & Schaal, S. (2007). Evaluation of policy gradient methods and variants on
the cart-pole benchmark. Symposium on Approximate Dynamic Programming and Reinforcement
Learning (pp. 254–261).
Shelley, M., Vandenberghe, N., & Zhang, J. (2005). Heavy flags undergo spontaneous oscillations
in flowing water. Physical Review Letters, 94.
Tedrake, R., Zhang, T. W., & Seung, H. S. (2004). Stochastic policy gradient reinforcement learning
on a simple 3D biped. Proceedings of the IEEE International Conference on Intelligent Robots
and Systems (IROS) (pp. 2849–2854). Sendai, Japan.
Vandenberghe, N., Zhang, J., & Childress, S. (2004). Symmetry breaking leads to forward flapping
flight. Journal of Fluid Mechanics, 506, 147–155.
Williams, J. L., Fisher, J. W., III, & Willsky, A. S. (2006). Importance sampling actor-critic algorithms.
Proceedings of the 2006 American Control Conference.
Williams, R. (1992). Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8, 229–256.
| 3511 |@word trial:7 proportion:1 simulation:2 covariance:3 simplifying:1 pg:9 concise:1 thereby:1 reduction:5 initial:3 series:1 surprising:1 written:2 must:1 john:1 numerical:1 additive:4 shape:2 plot:1 update:38 v:1 intelligence:2 fewer:1 isotropic:1 beginning:1 meuleau:2 jwi:16 zhang:4 five:1 along:1 direct:1 become:1 symposium:1 incorrect:2 sendai:1 acquired:1 expected:5 ra:1 indeed:1 examine:1 mechanic:1 anisotropy:1 actual:1 little:1 increasing:2 becomes:4 begin:1 provided:1 interpreted:2 impractical:2 guarantee:1 quantitative:1 exactly:1 returning:1 wrong:1 scaled:1 demonstrates:2 k2:3 control:5 producing:2 greensmith:2 positive:1 before:1 local:5 limit:3 examined:1 suggests:6 challenging:1 shaded:1 limited:1 perpendicular:3 averaged:4 practical:1 unique:1 displacement:1 episodic:1 riedmiller:2 area:1 significantly:2 jwk:2 convenience:2 close:2 intially:1 writing:1 descending:1 optimize:2 deterministic:3 imposed:1 maximizing:1 go:1 williams:4 regardless:1 independently:2 incredibly:1 formulate:2 vandenberghe:3 variation:2 limiting:2 controlling:1 spontaneous:1 exact:1 programming:2 locomotion:1 associate:1 particularly:4 capture:3 region:3 wj:1 decrease:1 highest:2 valuable:1 ran:1 substantial:2 complexity:3 seung:1 dynamic:6 ultimately:1 purely:1 upon:4 completely:1 bowl:1 easily:1 various:1 derivation:1 distinct:1 fast:2 guassian:1 artificial:2 choosing:1 whose:1 larger:2 amari:2 ability:1 reshaped:4 itself:1 noisy:2 analytical:1 propose:1 maximal:1 relevant:1 achieve:1 description:1 moved:1 convergence:4 sqp:2 zp:2 lionel:1 produce:4 develop:1 recurrent:1 strong:2 implemented:1 c:1 come:1 direction:16 waveform:1 radius:1 correct:2 modifying:1 stochastic:1 exploring:1 algorithmic:1 claim:1 substituting:1 major:1 vary:1 smallest:1 estimation:1 violent:2 weighted:1 minimization:1 clearly:1 gaussian:23 always:3 rather:4 avoid:1 sudy:1 schaal:3 improvement:10 contrast:1 baseline:3 kim:1 inaccurate:1 typically:1 vlsi:1 expand:1 constrained:1 special:1 equal:3 having:1 sampling:19 identical:2 placing:1 nearly:1 report:1 spline:1 intelligent:1 connectionist:1 randomly:3 individual:1 geometry:1 interest:1 investigate:3 possibility:2 highly:1 evaluation:8 introduces:2 extreme:1 kt:4 capable:1 taylor:3 walk:6 plotted:1 theoretical:1 increased:3 earlier:1 compelling:1 facet:2 cost:30 pole:5 deviation:6 kaelbling:1 snr:65 uniform:2 predictor:1 aw:1 periodic:1 learnt:1 combined:1 unbiasedness:1 international:3 vijayakumar:2 off:1 again:2 squared:2 opposed:1 choose:1 hydrodynamic:1 wing:2 american:1 japan:1 stabilize:1 automation:1 mp:1 depends:3 performed:1 red:1 wave:1 complicated:2 capability:1 parallel:3 purple:1 square:1 variance:18 efficiently:1 maximized:1 yield:2 rus:1 mc:1 j6:1 reach:1 canceling:1 rprop:1 energy:1 pp:2 gain:1 massachusetts:1 popular:2 knowledge:3 improves:1 carefully:1 actually:1 higher:4 day:1 follow:3 flowing:1 improved:4 jw:10 formulation:1 done:3 though:2 evaluated:1 until:1 correlation:1 flight:2 horizontal:1 nonlinear:1 believe:1 innate:1 effect:6 true:16 unbiased:1 swing:2 symmetric:3 laboratory:2 wp:16 assistance:1 numerator:1 generalized:1 stone:2 plate:3 demonstrate:3 performs:1 motion:3 recently:1 began:1 common:1 rotation:1 superior:2 physical:2 perturbing:1 anisotropic:2 discussed:1 analog:1 numerically:1 measurement:4 significant:3 cambridge:1 stochasticity:1 biped:1 moving:1 robot:3 actor:1 longer:2 etc:1 add:1 multivariate:2 own:1 showed:2 perspective:1 optimizing:1 termed:1 success:1 seen:4 minimum:1 greater:1 care:2 freely:1 determine:1 
monotonically:1 signal:4 full:1 desirable:2 reduces:2 technical:1 match:1 calculation:1 long:2 divided:1 reshaping:7 equally:1 controlled:1 schematic:1 converging:2 variant:1 basic:1 denominator:2 essentially:1 metric:1 expectation:2 multilayer:1 iteration:2 represent:2 achieved:3 robotics:2 else:1 biased:2 cart:5 tend:1 undergo:1 facilitates:2 thing:1 gpomdp:1 practitioner:1 near:1 revealed:1 feedforward:1 easy:1 baxter:3 iii:1 affect:5 zi:15 architecture:1 opposite:2 jwp:2 reduce:1 idea:2 whether:2 expression:10 peshkin:1 bartlett:3 suffer:1 peter:5 useful:2 generally:1 clear:3 locally:1 simplest:1 flapping:5 zj:4 sign:2 estimated:1 blue:1 promise:1 group:1 quadrupedal:1 achieving:1 drawn:9 changing:2 k4:1 iros:1 sum:2 run:7 inverse:2 angle:1 uncertainty:1 letter:1 named:1 reasonable:1 oscillation:1 scaling:1 quadratic:3 kohl:2 strength:1 constraint:1 prescribed:1 performing:1 relatively:1 poor:1 wi:1 modification:6 making:1 intuitively:1 taken:1 equation:11 describing:1 eventually:1 mechanism:1 drs:1 zk2:1 gaussians:1 apply:1 observe:1 actuator:1 v2:1 running:2 ensure:1 opportunity:1 icra:1 unchanged:1 occurs:2 looked:1 dependence:1 traditional:2 diagonal:2 southern:1 gradient:61 distance:5 separate:1 reinforce:1 rotates:1 capacity:1 thank:1 argue:1 considers:2 water:2 willsky:1 length:1 relationship:6 ratio:5 minimizing:2 demonstration:2 rotational:1 difficult:1 setup:1 robert:1 trace:1 stated:1 fluid:6 implementation:1 policy:33 perform:2 vertical:6 benchmark:3 enabling:1 descent:2 behave:1 situation:1 barycentric:1 perturbation:9 smoothed:1 introduced:1 mechanical:1 required:1 optimized:4 california:1 narrow:1 nip:1 trans:1 able:2 bar:1 flower:2 dynamical:1 eighth:1 including:1 power:2 natural:10 force:1 representing:1 improve:4 technology:1 axis:1 jun:1 coupled:1 kj:5 prior:1 understanding:1 swingup:1 acknowledgement:1 review:1 contributing:1 relative:3 determining:1 limitation:1 versus:1 humanoid:2 degree:3 sufficient:1 exciting:2 critic:1 heavy:1 foil:1 course:1 surprisingly:1 repeat:1 free:5 bias:6 allow:3 institute:1 taking:3 absolute:2 distributed:2 benefit:1 curve:4 dimension:2 feedback:1 evaluating:2 world:3 author:1 forward:1 reinforcement:8 simplified:1 approximate:1 deepened:1 netw:1 confirm:1 dealing:1 search:1 promising:1 learn:4 zk:2 symmetry:1 improving:2 expansion:4 jabri:2 domain:2 significance:1 motivation:1 noise:23 repeated:1 referred:1 moret:1 cubic:1 slow:1 momentum:2 position:3 wish:1 breaking:1 shelley:2 third:1 removing:1 saturate:1 minute:1 showing:1 explored:2 concern:1 evidence:1 sequential:1 effectively:2 importance:1 magnitude:21 horizon:1 suited:1 jwj:7 likely:2 scalar:2 tedrake:3 ma:1 shell:8 identity:5 fisher:1 change:1 included:1 infinite:3 upright:1 uniformly:2 specifically:1 averaging:1 determined:1 flag:1 total:1 experimental:4 select:1 mark:2 searched:1 actuation:1 evaluate:2 tested:2 correlated:1 |
2,771 | 3,512 | Counting Solution Clusters in Graph Coloring
Problems Using Belief Propagation
Lukas Kroc
Ashish Sabharwal
Bart Selman
Department of Computer Science
Cornell University, Ithaca NY 14853-7501, U.S.A.
{kroc,sabhar,selman}@cs.cornell.edu *
Abstract
We show that an important and computationally challenging solution space feature
of the graph coloring problem (COL), namely the number of clusters of solutions,
can be accurately estimated by a technique very similar to one for counting the
number of solutions. This cluster counting approach can be naturally written in
terms of a new factor graph derived from the factor graph representing the COL
instance. Using a variant of the Belief Propagation inference framework, we can
efficiently approximate cluster counts in random COL problems over a large range
of graph densities. We illustrate the algorithm on instances with up to 100, 000
vertices. Moreover, we supply a methodology for computing the number of clusters exactly using advanced techniques from the knowledge compilation literature.
This methodology scales up to several hundred variables.
1 Introduction
Message passing algorithms, in particular Belief Propagation (BP), have been very successful in
efficiently computing interesting properties of succinctly represented large spaces, such as joint
probability distributions. Recently, these techniques have also been applied to compute properties
of discrete spaces, in particular, properties of the space of solutions of combinatorial problems. For
example, for propositional satisfiability (SAT) and graph coloring (COL) problems, marginal probability information about the uniform distribution over solutions (or similar combinatorial objects)
has been the key ingredient in the success of BP-like algorithms. Most notably, the survey propagation (SP) algorithm utilizes this information to solve very large hard random instances of these
problems [3, 11].
Earlier work on random ensembles of Constraint Satisfaction Problems (CSPs) has shown that the
computationally hardest instances occur near phase boundaries, where instances go from having
many globally satisfying solutions to having no solution at all (a "solution-focused picture"). In
recent years, this picture has been refined and it was found that a key factor in determining the
hardness of instances for search (or sampling) algorithms is the question: how are the solutions
spatially distributed within the search space? This has made the structure of the solution space in
terms of its clustering properties a key factor in determining the performance of combinatorial
search methods (a "cluster-focused picture"). Can BP-like algorithms be used to provide such
cluster-focused information? For example, how many clusters are there in a solution space? How
big are the clusters? How are they organized? Answers to such questions will shed further light into
our understanding of these hard combinatorial problems and lead to better algorithmic approaches
for reasoning about them, be it for finding one solution or answering queries of probabilistic inference about the set of solutions. The study of the solution space geometry has indeed been the focus
* This work was supported by IISI, Cornell University (AFOSR grant FA9550-04-1-0151), DARPA (REAL
grant FA8750-04-2-0216), and NSF (grant 0514429).
of a number of recent papers [e.g. 1, 2, 3, 7, 9, 11], especially by the statistical physics community,
which has developed extensive theoretical tools to analyze such spaces under certain structural assumptions and large size limits. We provide a purely combinatorial method for counting the number
of clusters, which is applicable even to small size problems and can be approximated very well by
message passing techniques.
Solutions can be thought of as "neighbors" if they differ in the value of one variable, and the transitive
closure of the neighbor relation defines clusters in a natural manner. Counting the number of clusters
is a challenging problem. To begin with, it is not even clear what is the best succinct way to represent
clusters. One relatively crude but useful way is to represent a cluster by the set of "backbone"
variables in that cluster, i.e., variables that take a fixed value in all solutions within the cluster.
Interestingly, while it is easy (polynomial time) to verify whether a variable assignment is indeed a
solution of a CSP, the same check is much harder for a candidate cluster represented by the set of its
backbone variables.
We propose one of the first scalable methods for estimating the number of clusters of solutions of
graph coloring problems using a belief propagation like algorithm. While the naive method, based
on enumeration of solutions and pairwise distances, scales to graph coloring problems with 50 or so
nodes and a recently proposed local search based method provides estimates up to a few hundred
node graphs [7], our approach, being based on BP, easily provides fast estimates for graphs with
100,000 nodes. We validate the accuracy of our approach by also providing a fairly non-trivial
exact counting method for clusters, utilizing advanced knowledge compilation techniques. Our
approach works with the factor graph representation of the graph coloring problem. Yedidia et al.
[12] showed that if one can write the so-called "partition function", Z, for a quantity of interest
in a factor graph with non-negative weights, then there is a fairly mechanical variational method
derivation that yields belief propagation equations for estimating Z. Under certain assumptions,
we derive a partition function style quantity, Z(-1), to count the number of clusters. We then use
the variational method to obtain BP equations for estimating Z(-1). Our experiments with random
graph coloring problems show that Z(-1) itself is an extremely accurate estimate of the number of
clusters, and so is its approximation, ZBP(-1), obtained from our BP equations.
2 Preliminaries
The graph coloring problem can be expressed in the form of a factor graph, a bipartite graph with
two kinds of nodes. The variable nodes, $\vec{x} = (x_1, \ldots, x_n)$, represent the variables in the problem ($n$
vertices to be colored) with their discrete domain $\mathrm{Dom} = \{c_1, \ldots, c_k\}$ ($k$ colors). The factor nodes
$\alpha, \ldots$, with associated factor functions $f_\alpha, \ldots$, represent the constraints of the problem (no two
adjacent vertices have the same color). Each factor function is a Boolean function with arguments
$\vec{x}_\alpha$ (a subset of variables from $\vec{x}$) and range $\{0, 1\}$, and evaluates to 1 if and only if (iff) the associated
constraint is satisfied. An edge connects a variable $x_i$ with factor $f_\alpha$ iff the variable appears in the
constraint represented by the factor node, which we denote by $i \in \alpha$. In the graph coloring problem,
each factor function has exactly two variables.
In the factor representation, each variable assignment $\vec{x}$ is thought of as having a weight equal to the
product of the values that all factors evaluate to. We denote this product by $F(\vec{x}) := \prod_\alpha f_\alpha(\vec{x}_\alpha)$.
In our case, the weight of an assignment $\vec{x}$ is 1 if all of the factors have value 1, and 0 otherwise.
The assignments with weight 1 correspond precisely to legal colorings, or solutions to the problem.
The number of solutions can thus be expressed as the weighted sum across all possible assignments.
We denote this quantity by Z, the so-called partition function:
$$Z := \sum_{\vec{x} \in \mathrm{Dom}^n} F(\vec{x}) = \sum_{\vec{x} \in \mathrm{Dom}^n} \prod_\alpha f_\alpha(\vec{x}_\alpha) \qquad (1)$$
We define the solution space of a graph coloring problem to be the set of all its legal colorings. Two
legal colorings (or solutions) are called neighbors if they differ in the color of one vertex.
Definition 1 (Solution Cluster). A set of solutions $C \subseteq S$ of a solution space $S$ is a cluster if it is
a maximal subset such that any two solutions in $C$ can be connected by a sequence from $C$ where
consecutive solutions are neighbors.
In other words, clusters are connected components of the "solution graph" which has solutions as
nodes and an edge between two solutions if they differ in the value of exactly one variable.
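For intuition, both definitions can be checked directly on tiny instances by enumeration. The following sketch (hypothetical helper names; exponential-time and meant only for a handful of vertices) builds the solution graph implicitly and counts its connected components with union-find.

```python
import itertools

def legal_colorings(n, k, edges):
    """All proper k-colorings of a graph on n vertices (brute force)."""
    for x in itertools.product(range(k), repeat=n):
        if all(x[u] != x[v] for u, v in edges):
            yield x

def count_clusters(n, k, edges):
    """Connected components of the solution graph: two colorings are
    neighbors iff they differ in the color of exactly one vertex."""
    sols = list(legal_colorings(n, k, edges))
    index = {s: i for i, s in enumerate(sols)}
    parent = list(range(len(sols)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i

    for i, s in enumerate(sols):
        for pos in range(n):
            for c in range(k):
                if c != s[pos]:
                    j = index.get(s[:pos] + (c,) + s[pos + 1:])
                    if j is not None:
                        parent[find(i)] = find(j)   # union neighbors
    return len({find(i) for i in range(len(sols))})
```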
3 A Partition Function Style Expression for Counting Clusters
In this section we consider a method for estimating the number of solution clusters of a graph
coloring problem. We briefly describe the concepts here; a more in-depth treatment, including
formal results, may be found in [8]. First let us extend the definition of the function $F$ so that it
may be evaluated on an extended domain $\mathrm{DomExt} := \mathcal{P}(\{c_1, \ldots, c_k\}) \setminus \emptyset$, where $c_1, \ldots, c_k$ are
the $k$ domain values (colors) of each of the problem variables, and $\mathcal{P}$ is the power set operator
(so $|\mathrm{DomExt}| = 2^k - 1$). Each generalized assignment $\vec{y} \in \mathrm{DomExt}^n$ thus associates a (non-empty) set of values with each original variable, defining a hypercube in the search space for $F$.
We generalize $F$ and $f_\alpha$ to this extended domain in the natural way, $F^*(\vec{y}) := \prod_{\vec{x} \in \vec{y}} F(\vec{x})$ and
$f^*_\alpha(\vec{y}_\alpha) := \prod_{\vec{x}_\alpha \in \vec{y}_\alpha} f_\alpha(\vec{x}_\alpha)$, where the relation $\in$ is applied point-wise, as will be the case with any
relational operators used on vectors in this text. This means that $F^*$ evaluates to 1 on a hypercube
iff $F$ evaluates to 1 on all points within that hypercube.
Let us first assume that the solution space we work with decomposes into a set of separated hypercubes, so clusters correspond exactly to the hypercubes; by separated hypercubes, we mean
that points in one hypercube differ from points in others in at least two values. E.g., $\vec{y}_1 =
(\{c_1\}, \{c_1\}, \{c_1\})$ and $\vec{y}_2 = (\{c_2\}, \{c_3\}, \{c_1, c_2\})$ are separated hypercubes in three dimensions.
This allows us to develop a surprisingly simple expression for counting the number of clusters, and
we will later see that the same expression applies with high precision also to solution spaces of much
more complex instances of graph coloring problems. Consider the indicator function $\chi(\vec{y})$ for the
property that $\vec{y} \in \mathrm{DomExt}^n$ is a maximal solution hypercube contained in the solution space:
$$\chi(\vec{y}) := \underbrace{F^*(\vec{y})}_{\vec{y}\text{ is legal}} \cdot \underbrace{\prod_i \prod_{v_i \notin y_i} \Bigl(1 - F^*\bigl(\vec{y}[y_i \to y_i \cup \{v_i\}]\bigr)\Bigr)}_{\text{no point-wise generalization is legal}}$$
Here $\vec{y}[y_i \to y_i']$ denotes the substitution of $y_i'$ for $y_i$ in $\vec{y}$. Note that if the solution clusters are in
fact hypercubes, then variable values that can be "extended" independently can also be extended all
at once; that is, $F^*(\vec{y}[y_i \to y_i \cup \{v_i\}]) = 1$ and $F^*(\vec{y}[y_j \to y_j \cup \{v_j\}]) = 1$ implies
$F^*(\vec{y}[y_i \to y_i \cup \{v_i\},\ y_j \to y_j \cup \{v_j\}]) = 1$. Moreover, $F^*(\vec{y}[y_i \to y_i \cup \{v_i\}]) = 1$ implies $F^*(\vec{y}) = 1$. Using these
observations, $\chi(\vec{y})$ can be reformulated by factoring out the product as follows. Here $\#_o(\vec{y})$ denotes
the number of odd-size elements of $\vec{y}$, and $\#_e(\vec{y})$ the number of even-size ones.
$$\chi(\vec{y}) = F^*(\vec{y}) \sum_{\substack{\vec{y}\,' \in (\mathcal{P}(\mathrm{Dom}))^n \\ y_i' \subseteq \mathrm{Dom} \setminus y_i}} (-1)^{\#_o(\vec{y}\,')} \underbrace{\prod_i \prod_{v_i \in y_i'} F^*\bigl(\vec{y}[y_i \to y_i \cup \{v_i\}]\bigr)}_{=\,F^*(\vec{y}\,\cup\,\vec{y}\,')\ \text{by hypercube assumption}}$$
$$\overset{\vec{z} := \vec{y} \cup \vec{y}\,'}{=} \sum_{\vec{z} \supseteq \vec{y}} (-1)^{\#_o(\vec{z} \setminus \vec{y})} F^*(\vec{z}) \;=\; (-1)^{\#_e(\vec{y})} \sum_{\vec{z} \supseteq \vec{y}} (-1)^{\#_e(\vec{z})} F^*(\vec{z})$$
Finally, to count the number of maximal hypercubes fitting into the set of solutions, we sum the
indicator function $\chi(\vec{y})$ across all vectors $\vec{y} \in \mathrm{DomExt}^n$:
$$\sum_{\vec{y}} \chi(\vec{y}) = \sum_{\vec{y}} (-1)^{\#_e(\vec{y})} \sum_{\vec{z} \supseteq \vec{y}} (-1)^{\#_e(\vec{z})} F^*(\vec{z}) = \sum_{\vec{z}} (-1)^{\#_e(\vec{z})} F^*(\vec{z}) \sum_{\vec{y}:\ \emptyset \notin \vec{y} \subseteq \vec{z}} (-1)^{\#_e(\vec{y})}$$
$$= \sum_{\vec{z}} (-1)^{\#_e(\vec{z})} F^*(\vec{z}) \prod_i \underbrace{\sum_{\emptyset \neq y_i \subseteq z_i} (-1)^{\delta(|y_i|\ \text{even})}}_{=1} = \sum_{\vec{z}} (-1)^{\#_e(\vec{z})} F^*(\vec{z})$$
The expression above is important for our study, and we denote it by $Z_{(-1)}$:
$$Z_{(-1)} := \sum_{\vec{z} \in \mathrm{DomExt}^n} (-1)^{\#_e(\vec{z})} F^*(\vec{z}) = \sum_{\vec{y} \in \mathrm{DomExt}^n} (-1)^{\#_e(\vec{y})} \prod_\alpha f^*_\alpha(\vec{y}_\alpha) \qquad (2)$$
The notation $Z_{(-1)}$ is chosen to emphasize its relatedness to the partition function (1) denoted by
$Z$, and indeed the two expressions differ only in the $(-1)$ term. It is easily seen that if the solution
space consists of a set of separated hypercubes, then $Z_{(-1)}$ exactly captures the number of clusters
(each separated hypercube is a cluster). Surprisingly, this number is remarkably accurate even for
random coloring problems as we will see in Section 6, Figure 1.
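Equation (2) can be evaluated verbatim on tiny instances. The sketch below (our own illustration, not the authors' code) uses the fact that for graph coloring, $F^*(\vec{y}) = 1$ exactly when the color sets at the two endpoints of every edge are disjoint; on a triangle with k = 3 it returns 6, matching the six isolated solution clusters.

```python
import itertools

def z_minus_one(n, k, edges):
    """Brute-force Z_(-1) from Eq. (2): sum over generalized assignments
    (a non-empty set of colors per vertex) of (-1)^{#e(y)} F*(y), where
    F*(y) = 1 iff adjacent vertices receive disjoint color sets."""
    subsets = [frozenset(s) for r in range(1, k + 1)
               for s in itertools.combinations(range(k), r)]
    total = 0
    for y in itertools.product(subsets, repeat=n):
        if all(not (y[u] & y[v]) for u, v in edges):
            num_even = sum(1 for yi in y if len(yi) % 2 == 0)
            total += (-1) ** num_even
    return total
```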
4 Exact Computation of the Number of Clusters and Z(-1)
Obtaining the exact number of clusters for reasonable size problems is crucial for evaluating our
proposed approach based on $Z_{(-1)}$ and the corresponding BP equations to follow in Section 5. A
naive way is to explicitly enumerate all solutions, compute their pairwise Hamming distances, and
infer the cluster structure. Not surprisingly, this method does not scale well because the number of
solutions typically grows exponentially as the number of variables of the graph coloring problem
increases. We discuss here a much more scalable approach that uses two advanced techniques to
this effect: decomposable negation normal form (DNNF) and binary decision diagrams (BDDs). Our
method scales to graph coloring problems with a few hundred variables (see experimental results)
for computing both the exact number of clusters and the exact value of $Z_{(-1)}$.
Both DNNF [6] and BDD [4] are graph based data structures that have proven to be very effective
in "knowledge compilation", i.e., in converting a 0-1 function F into a (potentially exponentially
long, but often reasonably sized) standard form from which various interesting properties of F can
be inferred easily, often in linear time in the size of the DNNF formula or BDD. For our purposes,
we use DNNF to succinctly represent all solutions of F and a set of BDDs to represent solution
clusters that we create as we traverse the DNNF representation. The only relevant details for us of
these two representations are the following: (1) DNNF is represented as an acyclic directed graph
with variables and their negations at the leaves and two kinds of internal nodes, "or" and "and"; "or"
nodes split the set of solutions such that they differ in the value of the variable labeling the node but
otherwise have identical variables; "and" nodes partition the space into disjoint sets of variables; (2)
BDDs represent arbitrary sets of solutions and support efficient intersection and projection (onto a
subset of variables) operations on these sets.
We use the compiler c2d [5] to obtain the DNNF form for F. Since c2d works on Boolean formulas
and our F often has non-Boolean domains, we first convert F to a Boolean function F' using a
unary encoding, i.e., by replacing each variable $x_i$ of F with domain size t with t Boolean variables
$x'_{i,j}$, $1 \le j \le t$, respecting the semantics: $x_i = j$ iff $x'_{i,j} = 1$. In order to ensure that F and F' have
similar cluster structure of solutions, we relax the usual condition that only one of $x'_{i,1}, \ldots, x'_{i,t}$
may be 1, thus effectively allowing the original $x_i$ to take multiple values simultaneously. This
yields a generalized function: the domains of the variables of F' correspond to the power sets of the
domains of the respective variables of F. This generalization has the following useful property: if
two solutions $\vec{x}^{(1)}$ and $\vec{x}^{(2)}$ are neighbors in the solution space of F, then the corresponding solutions
$\vec{x}'^{(1)}$ and $\vec{x}'^{(2)}$ are in the same cluster in the solution space of F'.
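A minimal sketch of this relaxed unary encoding as DIMACS-style clauses follows; the variable numbering convention is our own assumption.

```python
def unary_clauses(n, k, edges):
    """Relaxed unary encoding of a k-coloring instance as CNF clauses over
    Booleans x[i][j] ("vertex i allows color j"), numbered i*k + j + 1.
    Each vertex must allow at least one color; for each edge and color,
    the two endpoints may not both allow it. The usual at-most-one
    constraint per vertex is deliberately dropped, as described above."""
    var = lambda i, j: i * k + j + 1
    clauses = [[var(i, j) for j in range(k)] for i in range(n)]
    for (u, v) in edges:
        for j in range(k):
            clauses.append([-var(u, j), -var(v, j)])
    return clauses
```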
Computing the number of clusters. Given F', we run c2d on it to obtain an implicit representation
of all solutions as a DNNF formula F''. Next, we traverse F'' from the leaf nodes up, creating
clusters as we go along. Specifically, with each node U of F'', we associate a set $S_U$ of BDDs,
one for each cluster in the sub-formula contained under U. The set of BDDs for the root node of
F'' then corresponds precisely to the set of solution clusters of F', and thus of F. These BDDs are
computed as follows. If U is a leaf node of F'', it represents a Boolean variable or its negation and
$S_U$ consists of the single one-node BDD corresponding to this Boolean literal. If U is an internal
node of F'' labeled with the variable $x_U$ and with children L and R, the set of BDDs $S_U$ is computed
as follows. If U is an "or" node, then we consider the union $S_L \cup S_R$ of the two sets of BDDs and
merge any two of these BDDs if they are adjacent, i.e., have two solutions that are neighbors in the
solution space (since the DNNF form guarantees that the BDDs in $S_L$ and $S_R$ must already differ
in the value of the variable $x_U$ labeling U, the adjacency check is equivalent to testing whether the
two BDDs, with $x_U$ projected out, have a solution in common; this is a straightforward projection
and intersection operation for BDDs); in the worst case, this leads to $|S_L| + |S_R|$ cluster BDDs
in $S_U$. Similarly, if U is an "and" node, then $S_U$ is constructed by considering the cross product
$\{b_L \wedge b_R \mid b_L \in S_L,\ b_R \in S_R\}$ of the two sets of BDDs and merging adjacent resulting BDDs as
before; in the worst case, this leads to $|S_L| \cdot |S_R|$ cluster BDDs in $S_U$.
Evaluating $Z_{(-1)}$. The exact value of $Z_{(-1)}$ on F' can also be evaluated easily once we have the
DNNF representation F''. In fact, as is reflected in our experimental results, evaluation of $Z_{(-1)}$
is a much more scalable process than counting clusters because it requires a simple traversal of F''
without the need for maintaining BDDs. With each node U of F'', we associate a value $V_U$ which
equals precisely the difference between the number of solutions below U with an even number
of positive literals and those with an odd number of positive literals; $Z_{(-1)}$ then equals $(-1)^N$
times the value thus associated with the root node of F''. These values are computed bottom-up as
follows. If U is a leaf node labeled with a positive (or negative) literal, then $V_U = -1$
(or $1$, resp.). If U is an "or" node with children L and R, then $V_U = V_L + V_R$. This works
because L and R have identical variables. Finally, if U is an "and" node with children L and R,
then $V_U = V_L V_R$. This last computation works because L and R are on disjoint sets of variables
and because of the following observation. Suppose L has $V_L^e$ solutions with an even number of
positive literals and $V_L^o$ solutions with an odd number of positive literals; similarly for R. Then
$V_U = (V_L^e V_R^e + V_L^o V_R^o) - (V_L^e V_R^o + V_L^o V_R^e) = (V_L^e - V_L^o)(V_R^e - V_R^o) = V_L V_R$.
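The traversal amounts to a memoized fold over the DNNF DAG. A sketch, with a node encoding of our own choosing (the compiler's actual file format differs):

```python
def dnnf_value(root):
    """Bottom-up V values on a DNNF DAG, following the rules in the text:
    leaves: V = -1 for a positive literal, +1 for a negative one;
    'or' nodes: V = V_L + V_R;  'and' nodes: V = V_L * V_R.
    Z_(-1) is then (-1)^N times the root value (N = number of variables).
    Assumed node encoding: ('lit', sign) with sign in {+1, -1}, or
    ('or', left, right), or ('and', left, right)."""
    memo = {}
    def val(u):
        if id(u) in memo:               # nodes are shared in a DAG
            return memo[id(u)]
        tag = u[0]
        if tag == 'lit':
            v = -1 if u[1] > 0 else 1
        elif tag == 'or':
            v = val(u[1]) + val(u[2])
        else:                           # 'and'
            v = val(u[1]) * val(u[2])
        memo[id(u)] = v
        return v
    return val(root)
```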
5 Belief Propagation Inference for Clusters
We present a version of the Belief Propagation algorithm that allows us to deal with the alternating
signs of $Z_{(-1)}$. The derivation follows closely the one given by Yedidia et al. [12] for standard BP,
i.e., we will write equations for a stationary point of KL divergence of two sequences (not necessarily
probability distributions in our case). Since the $Z_{(-1)}$ expression involves both positive and negative
terms, we must appropriately generalize some of the steps.
Given a function $p(\vec{y})$ (the target function, with real numbers as its range) on $\mathrm{DomExt}^n$ that is
known up to a normalization constant but with unknown marginal sums, we seek a function $b(\vec{y})$
(the trial function) to approximate $p(\vec{y})$, such that $b$'s marginal sums are known. The target function
$p(\vec{y})$ is defined as $p(\vec{y}) := \frac{1}{Z_{(-1)}} (-1)^{\#_e(\vec{y})} \prod_\alpha f^*_\alpha(\vec{y}_\alpha)$. We adopt previously used notation [12]:
$\vec{y}_\alpha$ are the values in $\vec{y}$ of variables that appear in factor (i.e., vertex) $f^*_\alpha$; $\vec{y}_{\setminus i}$ are the values of all variables
in $\vec{y}$ except $y_i$. The marginal sums can be extended in a similar way to allow for any number of
variables fixed in $\vec{y}$, specified by the subscript. When convenient, we treat the symbol $\alpha$ as a set of
indices of variables in $f^*_\alpha$, to be able to index them. We begin by listing the assumptions used in the
derivation, both the ones that are used in the "standard" BP, and two additional ones needed for the
generalization. An assumption on $b(\vec{y})$ is legitimate if the corresponding condition holds for $p(\vec{y})$.
Assumptions: The standard assumptions, present in the derivation of standard BP [12], are:
• Marginalization: $b_i(y_i) = \sum_{\vec{y}_{\setminus i}} b(\vec{y})$ and $b_\alpha(\vec{y}_\alpha) = \sum_{\vec{y}_{\setminus \alpha}} b(\vec{y})$. This condition is legitimate,
but cannot be enforced with a polynomial number of constraints. Moreover, it might happen that
the solution found by BP does not satisfy it, which is a known problem with BP [10].
• Normalization: $\sum_{y_i} b_i(y_i) = \sum_{\vec{y}_\alpha} b_\alpha(\vec{y}_\alpha) = 1$. This is legitimate and explicitly enforced.
• Consistency: $\forall \alpha,\ i \in \alpha,\ y_i:\ b_i(y_i) = \sum_{\vec{y}_{\alpha \setminus i}} b_\alpha(\vec{y}_\alpha)$. This is legitimate and explicitly enforced.
• Tree-like decomposition: says that the weights $b(\vec{y})$ of each configuration can be obtained from
the marginal sums as follows ($d_i$ is the degree of the variable node $y_i$ in the factor graph):
$|b(\vec{y})| = \prod_\alpha |b_\alpha(\vec{y}_\alpha)| \,/\, \prod_i |b_i(y_i)|^{d_i - 1}$. (The standard assumption is without the absolute values.) This assumption is not legitimate, and it is built-in, i.e., it is used in the derivation of the BP equations.
To appropriately handle the signs of $b(\vec{y})$ and $p(\vec{y})$, we have two additional assumptions. These are
necessary for the BP derivation applicable to $Z_{(-1)}$, but not for the standard BP equations.
• Sign-correspondence: For all configurations $\vec{y}$, $b(\vec{y})$ and $p(\vec{y})$ have the same sign (zero, being a
singular case, is treated as having a positive sign). This is a built-in assumption and legitimate.
• Sign-alternation: $b_i(y_i)$ is negative iff $|y_i|$ is even, and $b_\alpha(\vec{y}_\alpha)$ is negative iff $\#_e(\vec{y}_\alpha)$ is odd.
This is also a built-in assumption, but not necessarily legitimate; whether or not it is legitimate
depends on the structure of the solution space of a particular problem.
The Sign-alternation assumption can be viewed as an application of the inclusion-exclusion principle, and is easy to illustrate on a graph coloring problem with only two colors. In this case, if
$F^*(\vec{y}) = 1$, then $y_i = \{c_1\}$ means that $y_i$ can have color 1, $y_i = \{c_2\}$ that $y_i$ can have color 2,
and $y_i = \{c_1, c_2\}$ that $y_i$ can have both colors. The third event is included in the first two, and its
probability must thus appear with a negative sign if the sum of probabilities is to be 1.
Kullback-Leibler divergence: The KL-divergence is traditionally defined for probability distributions, for sequences of non-negative terms in particular. We need a more general measure, as our
sequences $p(\vec{y})$ and $b(\vec{y})$ have alternating signs. But using the Sign-correspondence assumption, we
observe that the usual definition of KL-divergence is still applicable, since the term in the logarithm
is non-negative:
$$D(b \,\|\, p) := \sum_{\vec{y} \in \mathrm{DomExt}^n} b(\vec{y}) \log \frac{b(\vec{y})}{p(\vec{y})} = \sum_{\vec{y} \in \mathrm{DomExt}^n} b(\vec{y}) \log \frac{|b(\vec{y})|}{|p(\vec{y})|}.$$
Moreover, the following Lemma shows that the two properties of KL-divergence that make it suitable for
distance-minimization are still valid.
Lemma 1. Let $b(\cdot)$ and $p(\cdot)$ be (possibly negative) weight functions on the same domain $D$, with the
property that they agree on signs for all states (i.e., $\forall \vec{y} \in D:\ \mathrm{sign}(b(\vec{y})) = \mathrm{sign}(p(\vec{y}))$), and that
they sum to the same constant (i.e., $\sum_{\vec{y}} b(\vec{y}) = \sum_{\vec{y}} p(\vec{y}) = c$). Then the KL-divergence $D(b \,\|\, p)$
satisfies $D(b \,\|\, p) \ge 0$ and $D(b \,\|\, p) = 0 \Leftrightarrow b \equiv p$.
The proof is essentially identical to that of the equivalent statement about the KL-divergence of probability distributions. We omit it here for lack of space.
Minimizing $D(b \,\|\, p)$: We write $p(\vec{y}) = \mathrm{sign}(p(\vec{y})) \cdot |p(\vec{y})|$, and analogously for $b(\vec{y})$. This allows
us to isolate the signs, and the minimization follows exactly the steps of the standard BP derivation,
namely we write a set of equations characterizing stationary points of $D(b \,\|\, p)$. At the end, using
the Sign-alternation assumption, we are able to implant the signs back.
BP equations: The resulting modified BP updates (denoted BP(-1)) are, for $y_i \in \mathrm{DomExt}$:
$$n_{i \to \alpha}(y_i) = \prod_{\beta \ni i,\ \beta \neq \alpha} m_{\beta \to i}(y_i) \qquad (3)$$
$$m_{\alpha \to i}(y_i) \propto \sum_{\vec{y}_{\alpha \setminus i} \in \mathrm{DomExt}^{|\alpha|-1}} f^*_\alpha(\vec{y}_\alpha) \prod_{j \in \alpha \setminus i} (-1)^{\delta(|y_j|\ \text{is even})}\, n_{j \to \alpha}(y_j) \qquad (4)$$
(Almost equivalent to standard BP, except for the $(-1)$ term.) One would iterate these equations
from a suitable starting point to find a fixed point, and then obtain the beliefs $b_i(y_i)$ and $b_\alpha(\vec{y}_\alpha)$ (i.e.,
estimates of marginal sums) using the Sign-alternation assumption and the standard BP relations:
$$b_i(y_i) \propto (-1)^{\delta(|y_i|\ \text{is even})} \prod_{\alpha \ni i} m_{\alpha \to i}(y_i), \qquad b_\alpha(\vec{y}_\alpha) \propto (-1)^{\#_e(\vec{y}_\alpha)} f^*_\alpha(\vec{y}_\alpha) \prod_{i \in \alpha} n_{i \to \alpha}(y_i) \qquad (5)$$
To approximately count the number of clusters in large problems for which exact cluster count or
exact $Z_{(-1)}$ evaluation is infeasible, we employ the generic BP(-1) scheme derived above. We substitute the extended factors $f^*_\alpha(\vec{y}_\alpha)$ into Equations (3) and (4), iterate from a random initial starting
point to find a fixed point, and then use Equations (5) to compute the beliefs. The actual estimate of
$Z_{(-1)}$ is obtained with the standard BP formula (with signs properly taken care of), where $d_i$ is the
degree of the variable node $y_i$ in the factor graph:
$$\log Z_{\mathrm{BP}(-1)} := -\sum_\alpha \sum_{\vec{y}_\alpha} b_\alpha(\vec{y}_\alpha) \log |b_\alpha(\vec{y}_\alpha)| + \sum_i (d_i - 1) \sum_{y_i} b_i(y_i) \log |b_i(y_i)| \qquad (6)$$
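For concreteness, the following sketch implements a single factor-to-variable update of Equation (4) for one coloring factor (one edge); it is our own illustration, with normalization and the message schedule of the full algorithm omitted, and the helper names are hypothetical.

```python
import itertools

def domext(k):
    """The extended domain: all non-empty subsets of the k colors."""
    return [frozenset(s) for r in range(1, k + 1)
            for s in itertools.combinations(range(k), r)]

def factor_to_var_message(n_j, k):
    """One BP(-1) update (Eq. 4) for a coloring factor on edge (i, j):
    the generalized factor is 1 iff the two color sets are disjoint.
    n_j maps each extended value y_j to the incoming message
    n_{j->alpha}(y_j); messages may be negative, by design."""
    m = {}
    for yi in domext(k):
        total = 0.0
        for yj in domext(k):
            if not (yi & yj):                    # f*_alpha(y_i, y_j) = 1
                sign = -1.0 if len(yj) % 2 == 0 else 1.0
                total += sign * n_j[yj]
        m[yi] = total
    return m
```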
6 Experimental Evaluation
We empirically evaluate the accuracy of our $Z_{(-1)}$ and $Z_{\mathrm{BP}(-1)}$ approximations on an ensemble of
random graph 3-coloring instances. The results are discussed in this section.
Z(-1) vs. the number of clusters. The left panel of Figure 1 compares the number of clusters (on the
x-axis, log-scale) with $Z_{(-1)}$ (on the y-axis, log-scale) for 2,500 colorable random 3-COL instances
on graphs with 20, 50, and 100 vertices with average vertex degree ranging between 1.0 and 4.7 (the
threshold for 3-colorability). As can be seen, the $Z_{(-1)}$ expression captures the number of clusters
almost exactly. The inaccuracies come mostly from low graph density regions; in all instances we
tried with density > 3.0, the $Z_{(-1)}$ expression was exact. We remark that although uncolorable
instances were not considered in this comparison, $Z_{(-1)} = 0 = $ num-clusters by construction.
It is worth noting that for tree-structured graphs (with more than one vertex), the $Z_{(-1)}$ expression
gives 0 for any $k \ge 3$ colors although there is exactly one solution cluster. Moreover, given a
disconnected graph with at least one tree component, $Z_{(-1)}$ also evaluates to 0 as it is the product
of $Z_{(-1)}$ values over different components. We have thus removed all tree components from the
generated graphs prior to computing $Z_{(-1)}$; tree components are easily identified and removing
them does not change the number of clusters. For low graph densities, there are still some instances
[Figure 1: two scatter plots; left panel axes: number of clusters (x, log scale) vs. Z(-1) (y, log scale), legend |V| = 20, 50, 100; right panel axes: cluster marginals (x) vs. Z(-1)-marginals (y).]
Figure 1: Left: Z(-1) vs. number of clusters in random 3-COL problems with 20, 50 and 100 vertices, and average vertex degree between 1.0 and 4.7. Right: cluster marginals vs. Z(-1)-marginals for one instance of a random 3-COL problem with 100 vertices.
[Figure 2: curves of average log(Z)/N vs. average vertex degree (1 to 4.7); legend: ZBP(-1) with |V| = 100K, ZBP(-1) with |V| = 100, Z(-1) with |V| = 100.]
Figure 2: Average ZBP(-1) and Z(-1) for 3-COL vs. average vertex degrees for small and large random graphs.
for which $Z_{(-1)}$ evaluates to 0; these instances are not visible in Figure 1 due to the log-log scale.
In fact, all our instances with fewer than 5 clusters have $Z_{(-1)} = 0$. This is because of other
substructures for which $Z_{(-1)}$ evaluates to 0, e.g., chordless cycles of length not divisible by 3 (for
$k = 3$ coloring) with attached trees. These structures, however, become rare as the density increases.
Z(-1) marginals vs. cluster marginals. For a given problem instance, we can define the cluster
marginal of a variable $x_i$ to be the fraction of solution clusters in which $x_i$ only appears with one
particular value (i.e., $x_i$ is a backbone of the cluster). Since $Z_{(-1)}$ counts well the number of clusters,
it is natural to ask whether it is also possible to obtain the marginals information from it. Indeed,
$Z_{(-1)}$ does provide an estimate of the cluster marginals, and we call them $Z_{(-1)}$-marginals. Recall
that the semantics of factors in the extended domain is such that a variable can assume a set of values
only if every value in the set yields a solution to the problem. This extends to the $Z_{(-1)}$ estimate of
the number of clusters, and one can therefore use the principle of inclusion-exclusion to compute the
number of clusters where a variable can only assume one particular value. The definition of $Z_{(-1)}$
conveniently provides for correct signs, and the number of clusters where $x_i$ is fixed to $v_i$ is thus
estimated by $\sum_{y_i \ni v_i} Z_{(-1)}(y_i)$, where $Z_{(-1)}(y_i)$ is the marginal sum of $Z_{(-1)}$. The $Z_{(-1)}$-marginal
is obtained by dividing this quantity by $Z_{(-1)}$.
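On tiny instances this estimate can again be checked by brute force. The sketch below (hypothetical names, same conventions as the earlier Z(-1) sketch) computes the inclusion-exclusion sum for one variable-value pair; dividing it by z_minus_one(n, k, edges) gives the Z(-1)-marginal.

```python
import itertools

def backbone_cluster_count(n, k, edges, i, v):
    """Inclusion-exclusion estimate of the number of clusters in which
    vertex i is frozen to color v: the sum of Z_(-1) marginal sums over
    all extended values y_i that contain v (brute force, tiny instances)."""
    subsets = [frozenset(s) for r in range(1, k + 1)
               for s in itertools.combinations(range(k), r)]
    total = 0
    for y in itertools.product(subsets, repeat=n):
        if v in y[i] and all(not (y[u] & y[w]) for u, w in edges):
            num_even = sum(1 for yi in y if len(yi) % 2 == 0)
            total += (-1) ** num_even
    return total
```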
The right panel of Figure 1 shows the results on one random 3-COL problem with 100 vertices. The
plot shows cluster marginals and $Z_{(-1)}$-marginals for one color; the points correspond to individual variables. The $Z_{(-1)}$-marginals are close to perfect. This is a typical situation, although it is
important to mention that $Z_{(-1)}$-marginals are not always correct, or even non-negative. They are
merely an estimate of the true cluster marginals, and how well they work depends on the solution
space structure at hand. They are exact if the solution space decomposes into separated hypercubes
and, as the figure shows, remarkably accurate also for random coloring instances.
The number of clusters vs. ZBP(-1). Figure 3 depicts a comparison between $Z_{\mathrm{BP}(-1)}$ and $Z_{(-1)}$
for the 3-COL problem on colorable random graphs of various sizes and graph densities. It compares $Z_{(-1)}$ (on the x-axis, log-scale) with $Z_{\mathrm{BP}(-1)}$ (y-axis, log-scale) for 1,300 colorable 3-COL
instances on random graphs with 50, 100, and 200 vertices, with average vertex degree ranging from
1.0 to 4.7. The plots show that BP is quite accurate in estimating $Z_{(-1)}$ for individual instances,
which in turn captures the number of clusters. Instances which are not 3-colorable are not shown,
and BP in general incorrectly estimates a non-zero number of clusters for them.
Estimates on very large graphs and for various graph densities. Figure 2 shows similar data
from a different perspective: what is shown is a rescaled average estimate of the number of clusters
(y-axis) for average vertex degrees 1.0 to 4.7 (x-axis). The average is taken across different colorable
instances of a given size, and the rescaling assumes that the number of clusters $= \exp(|V| \cdot \Sigma)$ where
$\Sigma$ is a constant independent of the number of vertices [3]. The three curves show, respectively, BP's
estimate for graphs with 100,000 vertices, BP's estimate for graphs with 100 vertices, and $Z_{(-1)}$ for
the same graphs of size 100. The averages are computed across 3,000 instances of the small graphs,
and only 10 instances of the large ones where the instance-to-instance variability is practically nonexistent. The fact that the curves nicely overlay shows that BP(-1) computes $Z_{(-1)}$ very accurately
[Figure 3: three log-log scatter plots, one per graph size (|V| = 50, 100, 200); axes Z(-1) (x) vs. ZBP(-1) (y), both ranging from 1e+00 to 1e+09.]
Figure 3: ZBP(-1) compared to Z(-1) for the 3-COL problem on random graphs with 50, 100 and 200 vertices and average vertex degree in the range 1.0 to 4.7.
on average for colorable instances (where we can compare it with exact values), and that the estimate remains accurate for large problems. Note that the Survey Propagation algorithm developed
by Braunstein et al. [3] also aims at computing the number of certain clusters in the solution space.
However, SP counts only the number of clusters with a "typical size", and would show non-zero
values in Figure 2 only for average vertex degrees between 4.42 and 4.7. Our algorithm counts
clusters of all sizes, and is very accurate in the entire range of graph densities.
7 Conclusion
We discuss a purely combinatorial construction for estimating the number of solution clusters in
graph coloring problems with very high accuracy. The technique uses a hypercube-based inclusionexclusion argument coupled with solution counting, and lends itself to an application of a modified
belief propagation algorithm. This way, the number of clusters in huge random graph coloring
instances can be accurately and efficiently estimated. Our preliminary investigation has revealed
that it is possible to use combinatorial arguments to formally prove that the cluster counts estimated
by Z(?1) are exact on certain kinds of solution spaces (not necessarily only for graph coloring). We
hope that such insights and the cluster-focused picture will lead to new techniques for solving hard
combinatorial problems and for bounding solvability transitions in random problem ensembles.
References
[1] D. Achlioptas and F. Ricci-Tersenghi. On the solution-space geometry of random constraint satisfaction
problems. In 38th STOC, pages 130–139, Seattle, WA, May 2006.
[2] J. Ardelius, E. Aurell, and S. Krishnamurthy. Clustering of solutions in hard satisfiability problems. J.
Statistical Mechanics, P10012, 2007.
[3] A. Braunstein, R. Mulet, A. Pagnani, M. Weigt, and R. Zecchina. Polynomial iterative algorithms for
coloring and analyzing random graphs. Physical Review E, 68:036702, 2003.
[4] R. E. Bryant. Graph-based algorithms for Boolean function manipulation. IEEE Transactions on Computers, 35(8):677–691, 1986.
[5] A. Darwiche. New advances in compiling CNF into decomposable negation normal form. In 16th European Conf. on AI, pages 328–332, Valencia, Spain, Aug. 2004.
[6] A. Darwiche. Decomposable negation normal form. J. ACM, 48(4):608–647, 2001.
[7] A. Hartmann, A. Mann, and W. Radenback. Clusters and solution landscapes for vertex-cover and SAT
problems. In Workshop on Physics of Distributed Systems, Stockholm, Sweden, May 2008.
[8] L. Kroc, A. Sabharwal, and B. Selman. Counting solution clusters of combinatorial problems using belief
propagation, 2008. (in preparation).
[9] F. Krzakala, A. Montanari, F. Ricci-Tersenghi, G. Semerjian, and L. Zdeborova. Gibbs states and the set
of solutions of random constraint satisfaction problems. PNAS, 104(25):10318–10323, June 2007.
[10] D. Mackay, J. Yedidia, W. Freeman, and Y. Weiss. A conversation about the Bethe free energy and
sum-product, 2001. URL citeseer.ist.psu.edu/mackay01conversation.html.
[11] M. Mézard, G. Parisi, and R. Zecchina. Analytic and algorithmic solution of random satisfiability problems. Science, 297(5582):812–815, 2002.
[12] J. S. Yedidia, W. T. Freeman, and Y. Weiss. Constructing free-energy approximations and generalized
belief propagation algorithms. IEEE Transactions on Information Theory, 51(7):2282–2312, 2005.
| 3512 |@word trial:1 version:1 briefly:1 polynomial:3 closure:1 seek:1 tried:1 decomposition:1 citeseer:1 mention:1 harder:1 nonexistent:1 substitution:1 configuration:2 initial:1 interestingly:1 fa8750:1 written:1 must:3 visible:1 partition:6 happen:1 analytic:1 plot:2 update:1 bart:1 stationary:2 v:6 leaf:4 fewer:1 fa9550:1 colored:1 num:1 provides:3 node:27 traverse:2 along:1 c2:4 constructed:1 become:1 supply:1 cordless:1 consists:2 prove:1 fitting:1 darwiche:2 krzakala:1 manner:1 pairwise:2 notably:1 indeed:4 hardness:1 mechanic:1 pagnani:1 globally:1 freeman:2 actual:1 enumeration:1 considering:1 begin:2 estimating:6 moreover:5 notation:2 xx:1 panel:2 spain:1 what:2 backbone:3 kind:3 developed:2 finding:1 nj:1 guarantee:1 zecchina:2 every:1 zdeborova:1 shed:1 bryant:1 exactly:8 grant:3 omit:1 appear:2 before:1 positive:7 local:1 treat:1 limit:1 encoding:1 analyzing:1 subscript:1 merge:1 approximately:1 might:1 challenging:2 range:5 bi:9 directed:1 yj:6 testing:1 union:1 vu:5 braunstein:2 thought:2 projection:2 convenient:1 word:1 onto:1 cannot:1 close:1 operator:2 equivalent:3 go:2 straightforward:1 starting:2 independently:1 colorable:6 survey:2 focused:4 decomposable:2 legitimate:8 insight:1 utilizing:1 handle:1 traditionally:1 krishnamurthy:1 resp:1 target:2 suppose:1 construction:2 exact:12 us:2 associate:3 element:1 satisfying:1 approximated:1 labeled:2 disjunctive:1 capture:3 worst:2 region:1 connected:2 cycle:1 removed:1 rescaled:1 respecting:1 iisi:1 constrains:1 traversal:1 dom:2 ezard:1 solving:1 purely:2 bipartite:1 easily:5 joint:1 darpa:1 represented:4 various:3 derivation:7 separated:6 fast:1 describe:1 effective:1 query:1 labeling:2 refined:1 quite:1 solve:1 say:1 relax:1 otherwise:2 itself:2 sequence:4 parisi:1 propose:1 product:6 maximal:3 relevant:1 iff:6 validate:1 seattle:1 cluster:84 bdds:17 perfect:1 object:1 illustrate:2 derive:1 develop:1 odd:4 aug:1 dividing:1 c:1 involves:1 implies:2 come:1 differ:7 sabharwal:2 closely:1 correct:2 mann:1 adjacency:1 ricci:2 generalization:3 preliminary:2 investigation:1 stockholm:1 hold:1 practically:1 considered:1 normal:3 exp:1 algorithmic:2 consecutive:1 adopt:1 purpose:1 applicable:3 combinatorial:9 create:1 tool:1 weighted:1 minimization:2 hope:1 always:1 aim:1 csp:1 ck:3 modified:2 cornell:3 derived:2 focus:1 june:1 properly:1 check:2 inference:3 factoring:1 unary:1 vl:3 typically:1 entire:1 relation:3 semantics:2 hartmann:1 html:1 denoted:2 fairly:2 mackay:1 marginal:9 equal:3 once:2 having:4 nicely:1 sampling:1 psu:1 identical:3 represents:1 hardest:1 others:1 few:2 employ:1 simultaneously:1 ve:2 divergence:7 individual:2 phase:1 geometry:2 connects:1 negation:5 interest:1 message:2 huge:1 evaluation:3 light:1 compilation:3 accurate:6 edge:2 vro:3 necessary:1 respective:1 sweden:1 tree:6 logarithm:1 forp:1 theoretical:1 instance:27 earlier:1 boolean:8 cover:1 assignment:6 vertex:24 subset:3 rare:1 hundred:3 uniform:1 successful:1 answer:1 hypercubes:8 density:8 probabilistic:1 physic:2 ashish:1 analogously:1 na:2 satisfied:1 possibly:1 literal:6 conf:1 creating:1 style:2 rescaling:1 satisfy:1 explicitly:3 vi:9 depends:2 later:1 root:2 tion:1 analyze:1 compiler:1 substructure:1 ni:2 accuracy:3 efficiently:3 ensemble:3 yield:3 correspond:4 listing:1 landscape:1 vlo:4 generalize:2 accurately:3 worth:1 weigt:1 definition:4 evaluates:6 energy:2 naturally:1 associated:3 di:4 proof:1 hamming:1 treatment:1 ask:1 recall:1 knowledge:3 color:10 conversation:1 satisfiability:3 organized:1 back:1 coloring:26 appears:2 follow:1 
methodology:2 reflected:1 wei:2 evaluated:2 implicit:1 achlioptas:1 hand:1 replacing:1 su:6 propagation:12 lack:1 vre:3 defines:1 grows:1 effect:1 verify:1 concept:1 y2:1 true:1 zbp:13 spatially:1 alternating:2 leibler:1 deal:1 adjacent:3 generalized:3 reasoning:1 ranging:2 variational:2 wise:2 bdd:3 recently:2 common:1 empirically:1 physical:1 attached:1 exponentially:2 extend:1 discussed:1 marginals:14 vle:4 gibbs:1 ai:1 consistency:1 similarly:2 inclusion:2 solvability:1 recent:2 csps:1 showed:1 exclusion:2 perspective:1 manipulation:1 certain:4 binary:1 success:1 alternation:4 yi:49 seen:2 additional:2 care:1 converting:1 multiple:1 pnas:1 infer:1 cross:1 long:1 variant:1 scalable:3 essentially:1 represent:7 normalization:2 c1:9 remarkably:2 diagram:1 singular:1 crucial:1 ithaca:1 appropriately:2 sr:5 isolate:1 valencia:1 call:1 structural:1 near:1 counting:11 noting:1 revealed:1 split:1 easy:2 divisible:1 iterate:2 marginalization:1 zi:1 identified:1 br:2 whether:4 expression:9 url:1 reformulated:1 passing:2 cnf:1 remark:1 enumerate:1 useful:2 clear:1 dnnf:10 sl:5 overlay:1 nsf:1 sign:20 estimated:4 disjoint:2 discrete:2 write:4 ist:1 key:3 threshold:1 graph:53 merely:1 fraction:1 year:1 sum:11 convert:1 run:1 enforced:3 extends:1 almost:2 reasonable:1 utilizes:1 decision:1 correspondence:2 occur:1 constraint:6 precisely:3 bp:28 argument:3 extremely:1 relatively:1 department:1 structured:1 disconnected:1 across:4 taken:2 computationally:2 equation:12 legal:5 previously:1 agree:1 discus:2 count:9 nonempty:1 turn:1 needed:1 remains:1 end:1 operation:2 yedidia:4 observe:1 generic:1 compiling:1 original:2 c2d:3 denotes:2 clustering:2 ensure:1 substitute:1 assumes:1 maintaining:1 especially:1 hypercube:8 bl:2 question:2 quantity:4 already:1 usual:2 lends:1 distance:3 trivial:1 length:1 index:2 providing:1 minimizing:1 mostly:1 potentially:1 statement:1 stoc:1 negative:10 unknown:1 allowing:1 observation:2 incorrectly:1 defining:1 extended:7 relational:1 situation:1 variability:1 y1:1 arbitrary:1 community:1 inferred:1 propositional:1 namely:2 mechanical:1 kl:6 extensive:1 c3:1 specified:1 inaccuracy:1 able:2 below:1 built:3 including:1 belief:12 power:2 event:1 satisfaction:3 natural:3 treated:1 suitable:2 indicator:2 advanced:3 representing:1 scheme:1 picture:4 axis:6 transitive:1 coupled:1 text:1 prior:1 literature:1 understanding:1 review:1 determining:2 afosr:1 interesting:2 aurell:1 proven:1 acyclic:1 ingredient:1 degree:10 principle:2 succinctly:2 supported:1 surprisingly:3 last:1 free:2 infeasible:1 formal:1 allow:1 neighbor:6 lukas:1 characterizing:1 absolute:1 distributed:2 boundary:1 depth:1 xn:1 dimension:1 evaluating:2 valid:1 curve:2 computes:1 selman:3 made:2 transition:1 projected:1 transaction:2 approximate:2 emphasize:1 relatedness:1 kullback:1 sat:2 xi:11 bottomup:1 search:5 iterative:1 decomposes:2 bethe:1 reasonably:1 obtaining:1 complex:1 necessarily:3 european:1 domain:10 vj:2 constructing:1 sp:2 montanari:1 big:1 bounding:1 succinct:1 child:3 x1:1 xu:3 depicts:1 ny:1 vr:3 precision:1 sub:1 col:12 candidate:1 crude:1 answering:1 third:1 formula:5 removing:1 symbol:1 workshop:1 merging:1 effectively:1 implant:1 intersection:2 conveniently:1 expressed:2 contained:2 applies:1 corresponds:1 satisfies:1 tersenghi:2 acm:1 sized:1 viewed:1 hard:4 change:1 included:1 specifically:1 except:2 typical:2 colorability:1 lemma:2 called:3 experimental:3 formally:1 internal:2 support:1 preparation:1 evaluate:2 |
2,772 | 3,513 | Dependence of Orientation Tuning on Recurrent
Excitation and Inhibition in a Network Model of V1
Klaus Wimmer1*, Marcel Stimberg1*, Robert Martin1, Lars Schwabe2, Jorge Mariño3,
James Schummers4, David C. Lyon5, Mriganka Sur4, and Klaus Obermayer1
1 Bernstein Center for Computational Neuroscience and Technische Universität Berlin, Germany
2 Dept of Computer Science and Electrical Engineering, University of Rostock, Germany
3 Dept of Medicine, Neuroscience, and Motor Control Group, Univ. A Coruña, Spain
4 Dept of Brain and Cognitive Sci and Picower Ctr for Learning and Memory, MIT, Cambridge
5 Dept of Anatomy and Neurobiology, University of California, Irvine, USA
[klaus, mst]@cs.tu-berlin.de
Abstract
The computational role of the local recurrent network in primary visual cortex is
still a matter of debate. To address this issue, we analyze intracellular recording data of cat V1, which combine measuring the tuning of a range of neuronal
properties with a precise localization of the recording sites in the orientation preference map. For the analysis, we consider a network model of Hodgkin-Huxley
type neurons arranged according to a biologically plausible two-dimensional topographic orientation preference map. We then systematically vary the strength
of the recurrent excitation and inhibition relative to the strength of the afferent
input. Each parametrization gives rise to a different model instance for which the
tuning of model neurons at different locations of the orientation map is compared
to the experimentally measured orientation tuning of membrane potential, spike
output, excitatory, and inhibitory conductances. A quantitative analysis shows
that the data provides strong evidence for a network model in which the afferent input is dominated by strong, balanced contributions of recurrent excitation
and inhibition. This recurrent regime is close to a regime of "instability", where
strong, self-sustained activity of the network occurs. The firing rate of neurons
in the best-fitting network is particularly sensitive to small modulations of model
parameters, which could be one of the functional benefits of a network operating
in this particular regime.
1 Introduction
One of the major tasks of primary visual cortex (V1) is the computation of a representation of
orientation in the visual field. Early models [1], combining the center-surround receptive fields of
lateral geniculate nucleus to give rise to orientation selectivity, have been shown to be over-simplistic
[2; 3]. Nonetheless, a debate remains regarding the contribution of afferent and recurrent excitatory
and inhibitory influences [4; 5]. Information processing in cortex changes dramatically with this
?cortical operating regime?, i. e. depending on the relative strengths of the afferent and the different
recurrent inputs [6; 7]. Recently, experimental and theoretical studies have investigated how a cell?s
orientation tuning depends on its position in the orientation preference map [7?10]. However, the
computation of orientation selectivity in primary visual cortex is still a matter of debate.
The wide range of models operating in different regimes that are discussed in the literature are an
indication that models of V1 orientation selectivity are underconstrained. Here, we assess whether
the specific location dependence of the tuning of internal neuronal properties can provide sufficient
* K. Wimmer and M. Stimberg contributed equally to this work.
constraints to determine the corresponding cortical operating regime. The data originates from intracellular recordings of cat V1 [9], combined with optical imaging. This allowed to measure, in
vivo, the output (firing rate) of neurons, the input (excitatory and inhibitory conductances) and a
state variable (membrane potential) as a function of the position in the orientation map. Figure 1
shows the experimentally observed tuning strength of each of these properties depending on the
distribution of orientation selective cells in the neighborhood of each neuron. The x-axis spans the
range from pinwheels (0) to iso-orientation domains (1), and each y-axis quantifies the sharpness
of tuning of the individual properties (see section 2.2). The tuning of the membrane potential (Vm )
as well as the tuning of the total excitatory (ge ) and inhibitory (gi ) conductances vary strongly with
map location, whereas the tuning of the firing rate (f ) does not. Specifically, the conductances and
the membrane potential are sharper tuned for neurons within an iso-orientation domain, where the
neighboring neurons have very similar orientation preferences, as compared to neurons close to a
pinwheel center, where the neighboring neurons show a broad range of orientation preferences.
Figure 1: Variation of the orientation selectivity indices (OSI, cf. Equation 2) of the firing rate (f ),
the average membrane potential (Vm ), and the excitatory (ge ) and inhibitory (gi ) input conductances
of neurons in cat V1 with the map OSI (the orientation selectivity index of the orientation map at the
location of the measured neuron). Dots indicate the experimentally measured values from 18 cells
[9]. Solid lines show the result of a linear regression. The slopes (values ± 95% confidence interval)
are −0.02 ± 0.24 (f), 0.27 ± 0.22 (Vm), 0.49 ± 0.20 (ge), 0.44 ± 0.19 (gi).
This paper focuses on the constraints that this specific map-location dependence of neuronal properties imposes on the operating regime of a generic network composed of Hodgkin-Huxley type
model neurons. The model takes into account that the lateral inputs a cell receives are determined
(1) by the position in the orientation map and (2) by the way that synaptic inputs are pooled across
the map. The synaptic pooling radius has been shown experimentally to be independent of map
location [9], resulting in essentially different local recurrent networks depending on whether the
neighborhood is made up of neurons with similar preferred orientation, such as in an iso-orientation
domain, or is highly non-uniform, such as close to a pinwheel. The strength of lateral connections,
on the other hand, is unknown. Mari?o et al. [9] have shown that their data is compatible with a
model showing strong recurrent excitation and inhibition. However, this approach cannot rule out
alternative explanations accounting for the emergence of orientation tuning in V1. Here, we systematically explore the model space, varying the strength of the recurrent excitation and inhibition.
This, in effect, allows us to test the full range of models, including feed-forward-, inhibition- and
excitation-dominated models as well as balanced recurrent models, and to determine those that are
compatible with the observed data.
2 Methods
2.1 Simulation: The Hodgkin-Huxley network model
The network consists of Hodgkin-Huxley type point neurons and includes three voltage dependent
currents (Na+ and K+ for generation of action potentials, and a non-inactivating K+ -current that
is responsible for spike-frequency adaptation). Spike-frequency adaptation was reduced by a factor
0.1 for inhibitory neurons. For a detailed description of the model neuron and the parameter values,
see Destexhe et al. [11]. Every neuron receives afferent, recurrent and background input. We
2
used exponential models for the synaptic conductances originating from GABAA -like inhibitory
and AMPA-like excitatory synapses [12]. Slow NMDA-like excitatory synapses are modeled by a
difference of two exponentials (parameters are summarized in Table 1). Additional conductances
represent background activity (Ornstein-Uhlenbeck conductance noise, cf. Destexhe et al. [11]).
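To make the synapse models concrete, the sketch below implements the two conductance shapes described above: a single exponential for the AMPA- and GABA_A-like synapses and a difference of two exponentials for the NMDA-like synapses, using the time constants from Table 1. Normalizing the double exponential so that its peak equals g_peak is our own convention and is not stated in the text.

```python
import numpy as np

def g_single_exp(t, g_peak, tau):
    """Single-exponential conductance (AMPA- or GABA_A-like) after a spike at t = 0 (t in ms)."""
    return g_peak * np.exp(-t / tau) * (t >= 0)

def g_double_exp(t, g_peak, tau1=80.0, tau2=2.0):
    """Difference-of-two-exponentials conductance (NMDA-like), normalized so its peak is g_peak."""
    t_max = tau1 * tau2 / (tau1 - tau2) * np.log(tau1 / tau2)  # time of the maximum
    norm = np.exp(-t_max / tau1) - np.exp(-t_max / tau2)
    return g_peak * (np.exp(-t / tau1) - np.exp(-t / tau2)) / norm * (t >= 0)

t = np.arange(0.0, 200.0, 0.25)                 # ms, matching the 0.25 ms simulation step
g_ampa = g_single_exp(t, g_peak=1.0, tau=5.0)   # tau_E = 5 ms (Table 1)
g_nmda = g_double_exp(t, g_peak=1.0)            # tau_1 = 80 ms, tau_2 = 2 ms (Table 1)
```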
Table 1: Parameters of the Hodgkin-Huxley type neural network.

PARAMETER     DESCRIPTION                                                 VALUE
N_Aff         Number of afferent exc. synaptic connections per cell       20
N_E           Number of recurrent exc. synaptic connections per cell      100
N_I           Number of recurrent inh. synaptic connections per cell      50
σ_E = σ_I     Spread of recurrent connections (std. dev.)                 4 units (125 µm)
E_e           Reversal potential excitatory synapses                      0 mV
E_i           Reversal potential inhibitory synapses                      -80 mV
τ_E           Time constant of AMPA-like synapses                         5 ms
τ_I           Time constant of GABA_A-like synapses                       5 ms
τ_1           Time constant of NMDA-like synapses                         80 ms
τ_2           Time constant of NMDA-like synapses                         2 ms
d̄_E, σ_dE     Mean and standard deviation of excitatory synaptic delay    4 ms, 2 ms
d̄_I, σ_dI     Mean and standard deviation of inhibitory synaptic delay    1.25 ms, 1 ms
g_E^Aff       Peak conductance of afferent input to exc. cells            141 nS
g_I^Aff       Peak conductance of afferent input to inh. cells            0.73 g_E^Aff
g_II          Peak conductance from inh. to inh. cells                    1.33 g_E^Aff
g_EI          Peak conductance from inh. to exc. cells                    1.33 g_E^Aff
The network was composed of 2500 excitatory cells arranged on a 50 × 50 grid and 833 inhibitory
neurons placed at random grid locations, thus containing 75% excitatory and 25% inhibitory cells.
The complete network modeled a patch of cortex 1.56 × 1.56 mm² in size. Connection probabilities
for all recurrent connections (between the excitatory and inhibitory population and within the populations) were determined from a spatially isotropic Gaussian probability distribution (for parameters,
see Table 1) with the same spatial extent for excitation and inhibition, consistent with experimental
measurements [9]. In order to avoid boundary effects, we used periodic boundary conditions. Recurrent excitatory conductances were modeled as arising from 70% fast (AMPA-like) versus 30%
slow (NMDA-like) receptors. If a presynaptic neuron generated a spike, this spike was transferred
to the postsynaptic neuron with a certain delay (parameters are summarized in Table 1).
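A minimal sketch of how such a wiring scheme could be sampled, assuming the 50 × 50 excitatory grid, σ = 4 grid units, and N_E = 100 from Table 1; excluding self-connections is our assumption, as the paper does not state it.

```python
import numpy as np

rng = np.random.default_rng(0)
GRID, SIGMA, N_E = 50, 4.0, 100   # 50 x 50 grid, sigma = 4 units, N_E recurrent exc. inputs

def gaussian_profile(pre_xy, post_xy):
    """Isotropic Gaussian connection profile with periodic (toroidal) boundaries."""
    d = np.abs(pre_xy - post_xy)
    d = np.minimum(d, GRID - d)               # wrap-around distance
    return np.exp(-(d ** 2).sum(axis=-1) / (2.0 * SIGMA ** 2))

coords = np.stack(np.meshgrid(np.arange(GRID), np.arange(GRID)), axis=-1).reshape(-1, 2)
post = np.array([25, 25])
p = gaussian_profile(coords, post)
p[(coords == post).all(axis=1)] = 0.0         # forbid self-connections (our assumption)
presyn = rng.choice(len(coords), size=N_E, replace=False, p=p / p.sum())
```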
The afferent inputs to excitatory and inhibitory cortical cells were modeled as Poisson spike trains
with a time-independent firing rate f_Aff given by

    f_Aff(θ_stim) = 30 Hz · [ r_base + (1 − r_base) exp( −(θ_stim − θ)² / (2σ_Aff)² ) ],        (1)
where θ_stim is the orientation of the presented stimulus, θ is the preferred orientation of the cell,
r_base = 0.1 is a baseline firing rate, and σ_Aff = 27.5° is the tuning width. These input spike
trains exclusively trigger fast, AMPA-like excitatory synapses. The orientation preference for each
neuron was assigned according to its location in an artificial orientation map (Figure 2A). This map
was calibrated such that the pinwheel distance and the spread of recurrent connections match
experimental data [9].
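For illustration, Eq. (1) and the Poisson input generation could look as follows; wrapping the orientation difference into ±90° is our assumption about how the circular orientation variable is handled.

```python
import numpy as np

def f_aff(theta_stim, theta_pref, r_base=0.1, sigma_aff=27.5):
    """Afferent rate of Eq. (1) in Hz; angles in degrees."""
    d = (theta_stim - theta_pref + 90.0) % 180.0 - 90.0   # wrap into (-90, 90] (assumption)
    return 30.0 * (r_base + (1.0 - r_base) * np.exp(-d ** 2 / (2.0 * sigma_aff) ** 2))

dt, T = 0.25e-3, 1.5                                      # 0.25 ms steps, 1.5 s of input
rate = f_aff(theta_stim=22.5, theta_pref=0.0)
spikes = np.random.default_rng(0).random(int(T / dt)) < rate * dt
```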
In order to measure the orientation tuning curves of f , Vm , ge , and gi , the response of the network
to inputs with different orientations was computed for 1.5 s with 0.25 ms resolution (usually, the
network settled into a steady state after a few hundred milliseconds). We then calculated the firing
rate, the average membrane potential, and the average total excitatory and inhibitory conductances
for every cell in an interval between 0.5 s and 1.5 s.
2.2 Quantitative evaluation: Orientation selectivity index (OSI) and OSI-OSI slopes
We analyze orientation tuning using the orientation selectivity index [13], which is given by

    OSI = √[ ( Σ_{i=1}^N R(θ_i) cos(2θ_i) )² + ( Σ_{i=1}^N R(θ_i) sin(2θ_i) )² ] / Σ_{i=1}^N R(θ_i).        (2)
Figure 2: (A) Artificial orientation map with four pinwheels of alternating handedness arranged on
a 2-dimensional grid. The white (black) circle denotes the one- (two-) σ-area corresponding to the
radial Gaussian synaptic connection profile (σ_E = σ_I = 125 µm). (B) Map OSI of the artificial
orientation map. Pinwheel centers appear in black.
R(θ_i) is the value of the quantity whose tuning is considered, in response to a stimulus of orientation θ_i (e. g. the spiking activity). For all measurements, eight stimulus orientations θ_i ∈
{−67.5, −45, −22.5, 0, 22.5, 45, 67.5, 90} were presented. The OSI is then a measure of tuning
sharpness ranging from 0 (unselective) to 1 (perfectly selective). In addition, the OSI was used to
characterize the sharpness of the recurrent input a cell receives based on the orientation preference
map. To calculate this map OSI, we estimate the local orientation preference distribution by binning
the orientation preference of all pixels within a radius of 250 µm around a cell into bins of 10° size;
the number of cells in each bin replaces R(θ_i). Figure 2 shows the artificial orientation map and the
map OSI for the cells in our network model. The map OSI ranges from almost 0 for cells close to
pinwheel centers to almost 1 in the linear zones of the iso-orientation domains.
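A direct implementation of Eq. (2), assuming the eight stimulus orientations listed above; the function and variable names are ours.

```python
import numpy as np

THETAS = np.array([-67.5, -45.0, -22.5, 0.0, 22.5, 45.0, 67.5, 90.0])  # degrees

def osi(responses, thetas_deg=THETAS):
    """Orientation selectivity index of Eq. (2): 0 = unselective, 1 = perfectly selective."""
    th = np.deg2rad(thetas_deg)
    c = np.sum(responses * np.cos(2 * th))
    s = np.sum(responses * np.sin(2 * th))
    return np.sqrt(c ** 2 + s ** 2) / np.sum(responses)

# a broadly tuned response has low OSI, a sharply tuned one high OSI
broad = 1.0 + 0.1 * np.cos(2 * np.deg2rad(THETAS))
sharp = np.exp(-((THETAS / 15.0) ** 2))
print(osi(broad), osi(sharp))
```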
The dependence of each tuning property on the local map OSI was then described by a linear regression line using the least squares method. These linear fits provided a good description of the
relationship between map OSI and the tuning of the neuronal properties in the simulations (mean
squared deviation around the regression lines was typically below 0.0025 and never above a value
of 0.015) as well as in the experimental data (mean squared deviation was between 0.009 (gi ) and
0.015 (f )). In order to find the regions of parameter space where the linear relationship predicted by
the models is compatible with the data, the confidence interval for the slope of the linear fit to the
data was used.
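The slope fits could be reproduced along these lines; the confidence interval here uses a normal approximation to the slope's standard error, since the text does not specify the exact procedure.

```python
import numpy as np

def slope_with_ci(map_osi, prop_osi, z=1.96):
    """Least-squares slope of the OSI-OSI relation with an approximate 95% CI."""
    x, y = np.asarray(map_osi), np.asarray(prop_osi)
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (slope * x + intercept)
    # standard error of the slope (simple normal approximation)
    se = np.sqrt(resid.var(ddof=2) / ((x - x.mean()) ** 2).sum())
    return slope, z * se
```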
3 Results
The parameter space of the class of network models considered in this paper is spanned by the peak
conductance of synaptic excitatory connections to excitatory (g EE ) and inhibitory (g IE ) neurons.
We shall first characterize the operating regimes found in this model space, before comparing the
location dependence of tuning observed in the different models with that found experimentally.
3.1 Operating regimes of the network model
The operating regimes of a firing rate model can be defined in terms of the strength and shape of the
effective recurrent input [7]. The definitions of Kang et al. [7], however, are based on the analytical
solution of a linear firing rate model where all neurons are above threshold and cannot be applied
to the non-linear Hodgkin-Huxley network model used here. Therefore, we characterize the parameter space explored here using a numerical definition of the operating regimes. This definition is
based on the orientation tuning of the input currents to the excitatory model cells in the orientation
domain (0.6 < map OSI < 0.9). Specifically, if the sum of input currents is positive (negative) for
all presented orientations, recurrent excitation (inhibition) is dominant, and the regime thus excitatory (EXC; respective inhibitory, INH). If the sum of input currents has a positive maximum and a
negative minimum (i. e. Mexican-hat like), a model receives significant excitation as well as inhibi-
Figure 3: (A) Operating regimes of the network model as a function of the peak conductance of
synaptic excitatory connections to excitatory (g EE) and inhibitory (g IE) neurons: FF – feed-forward,
EXC – recurrent excitatory dominated, INH – recurrent inhibitory dominated, REC – strong recurrent excitation and inhibition, and unstable. The conductances are given as multiples of the
afferent peak conductance of excitatory neurons (g_E^Aff). The figure summarizes simulation results
for 38 × 28 different values of g EE and g IE. (B) Tuning curves for one example network in the REC
regime (marked by a cross in A). Mean responses across cells are shown for the firing rate (f ), the
membrane potential (Vm ), the total excitatory (ge ), and the total inhibitory conductance (gi ), separately for cells in iso-orientation domains (0.6 < map OSI < 0.9, thick lines) and cells close to
pinwheel centers (map OSI < 0.3, thin lines). For each cell, responses were individually aligned
to its preferred orientation and normalized to its maximum response; for the Vm tuning curve, the
mean membrane potential without any stimulation (Vm = −64.5 mV) was subtracted beforehand.
To allow comparison of the magnitude of gi and ge responses, both types of conductances were
normalized to the maximum gi response.
tion and we refer to such a model as operating in the recurrent regime (REC). An example for the
orientation tuning properties observed in the recurrent regime is shown in Figure 3B. Finally, if the
sum of the absolute values of the currents through excitatory and inhibitory recurrent synapses of
the model cells (at preferred orientation) is less than 30% of the current through afferent synapses,
the afferent drive is dominant and we call such regimes feed-forward (FF).
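The regime labels can be summarized as a small decision rule. The sketch below follows the numerical definitions given above; checking the feed-forward condition first is our choice of precedence, which the text leaves implicit.

```python
def classify_regime(net_current, abs_recurrent, abs_afferent):
    """Label the operating regime from the tuning of the summed input currents.

    net_current: summed recurrent input current for each presented orientation
    abs_recurrent / abs_afferent: |current| through recurrent / afferent synapses
    """
    if abs_recurrent < 0.3 * abs_afferent:
        return "FF"                      # afferent drive dominates
    if min(net_current) > 0:
        return "EXC"                     # net excitatory at all orientations
    if max(net_current) < 0:
        return "INH"                     # net inhibitory at all orientations
    return "REC"                         # Mexican-hat like: both signs present
```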
The regions of parameter space corresponding to these operating regimes are depicted in Figure 3A
as a function of the peak conductance of synaptic excitatory connections to excitatory (g EE ) and
inhibitory (g IE) neurons. We refer to the network as "unstable" if the model neurons show strong
responses (average firing rate exceeds 100 Hz) and remain at high firing rates if the afferent input
is turned off; i. e. the network shows self-sustained activity. In this regime, the model neurons lose
their orientation tuning.
3.2 Orientation tuning properties in the different operating regimes
We analyzed the dependence of the orientation tuning properties on the operating regimes and compared them to the experimental data. For every combination of g EE and g IE , we simulated the responses of neurons in the network model to oriented stimuli in order to measure the orientation
tuning of Vm , f , ge and gi (see Methods). The OSI of each of the four quantities can then be plotted
against the map OSI to reveal the dependence of the tuning on the map location (similar to the experimental data shown in Figure 1). The slope of the linear regression of this OSI-OSI dependence
was used to characterize the different operating points of the network. Figure 4 shows these slopes
for the tuning of f , Vm , ge and gi , as a function of g EE and g IE of the respective Hodgkin-Huxley
network models (gray scale). Model networks with strong recurrent excitation (large values of g EE ),
as in the REC regime, predict steeper slopes than networks with less recurrent excitation. In other
words, as the regime becomes increasingly more recurrently dominated, the recurrent contribution
leads to sharper tuning in neurons within iso-orientation domains as compared to neurons near the
Figure 4: Location dependence of orientation tuning of the conductances, the membrane potential,
and the firing rate in the network model. The figure shows the slope values of the OSI-OSI regression lines (in gray values) as a function of the peak conductance of synaptic excitatory connections
to excitatory (g EE ) and inhibitory (g IE ) neurons, separately for the spike rate (A), the membrane potential (B), the total synaptic excitatory (C), and inhibitory conductance (D). The conductances are
given as multiples of the afferent peak conductance of excitatory neurons (g_E^Aff). Thin lines denote
the borders of the different operating regimes (cf. Figure 3). The region delimited by the thick
yellow line corresponds to slope values within the 95% confidence interval of the corresponding
experimental data. Note that in (A) this region covers the whole range of operating regimes except
the unstable regime.
pinwheel centers. However, closer yet to the line of instability, the map-dependence of the tuning
almost vanishes (slope approaching zero). This reflects the strong excitatory recurrent input in the
EXC regime which leads to an overall increase in the network activity that is almost untuned and
therefore provides very similar input to all neurons, regardless of map location. Also, the strongly
inhibitory-dominated regimes (large values of g IE ) at the bottom right corner of Figure 4 are of interest. Here, the slope of the location dependence becomes negative for the tuning of firing rate f
and membrane potential Vm . Such a sharpening of the tuning close to pinwheels in an inhibition
dominated regime has been observed elsewhere [8].
Comparing the slope of the OSI-OSI regression lines to the 95% confidence interval of the slopes
estimated from the experimental data (Figure 1) allows us to determine those regions in parameter space that are compatible with the data (yellow contours in Figure 4). The observed locationindependence of the firing rate tuning is compatible with all stable models in the parameter space
(Figure 4A) and therefore does not constrain the model class. In contrast to this, the observed
location-dependence of the membrane potential tuning (Figure 4B) and the inhibitory conductance
tuning (Figure 4D) excludes most of the feed-forward and about half of the inhibitory-dominated
regime. Most information, however, is gained from the observed location-dependence of the excitatory conductance tuning (Figure 4C). It constrains the network to operate in either a recurrent
regime with strong excitation and inhibition or in a slightly excitatory-dominated regime.
3.3 Only the strongly recurrent regime satisfies all constraints
Combining the constraints imposed by the OSI-OSI relationship of the four measured quantities (yellow contour in both panels of Figure 5), we can conclude that the experimental data constrains the
network to operate in a recurrent operating regime, with recurrent excitation and inhibition strong,
approximately balanced, and dominating the afferent input. In addition, we calculated the sum of
squared differences between the data points (Figure 1) and the OSI-OSI relationship predicted by
the model, for each operating regime. The "best fitting" operating regime, which had the lowest
squared difference, is marked with a cross in Figure 5. The corresponding simulated tuning curves
for orientation domain and pinwheel cells are shown in Figure 3B.
Figure 5: Ratio between (A) the excitatory current through the recurrent synapses and the current through afferent synapses of excitatory model cells and between (B) the inhibitory recurrent and the excitatory afferent current (in gray values). Currents were calculated for stimuli
at the cells? preferred orientations, and averaged over all model cells within orientation domains
(0.6 < map OSI < 0.9). The region delimited by the thick yellow line corresponds to slope values that are in the 95% confidence interval for each experimentally measured quantity (spike rate,
membrane potential, the total synaptic excitatory, and inhibitory conductance). The white cross at
(2.0, 1.7) denotes the combination of model parameters that yields the best fit to the experimental
data (see text). Thin lines denote the borders of the different operating regimes (cf. Figure 3).
In line with the definition of the operating regimes, the excitatory current through the recurrent
synapses (gray values in Figure 5A) plays a negligible role in the feed-forward and in most of the
inhibitory-dominated regimes. Only in the recurrent and the excitatory-dominated regime is the
recurrent current stronger than the afferent current. A similar observation holds for the inhibitory
current (Figure 5B). The strong recurrent currents in the excitatory-dominated regime reflect the
strong overall activity that reduces the map-location dependence of the total excitatory and inhibitory
conductances (cf. Figure 4C and D).
4 Discussion
Although much is known about the anatomy of lateral connections in the primary visual cortex of
cat, the strengths of synapses formed by short-range connections are largely unknown. In our study,
we use intracellular physiological measurements to constrain the strengths of these connections.
Extensively exploring the parameter space of a spiking neural network model, we find that neither
feed-forward dominated, nor recurrent excitatory- or inhibitory-dominated networks are consistent
with the tuning properties observed in vivo. We therefore conclude that the cortical network in cat
V1 operates in a regime with a dominant recurrent influence that is approximately balanced between
inhibition and excitation.
The analysis presented here focuses on the steady state the network reaches when presented with
one non-changing orientation. In this light, it is very interesting, that a comparable operating regime
has been indicated in an analysis of the dynamic properties of orientation tuning in cat V1 [14].
Our main finding – tuning properties of cat V1 are best explained by a network operating in a regime
with strong recurrent excitation and inhibition – is robust against variation of the values chosen for
other parameters not varied here, e. g. g II and g EI (data not shown). Nevertheless, the network architecture is based on a range of basic assumptions: e. g. all neurons in the network receive equally
sharply tuned input. The explicit inclusion of location dependence of the input tuning might well
lead to tuning properties compatible with the experimental data in different operating regimes. However, there is no evidence supporting such a location dependence of the afferent input and therefore
assuming location-independent input seemed the most prudent basis for this analysis. Another assumption is the absence of untuned inhibition, since the inhibitory neurons in the network presented
here receive tuned afferent input, too. The existence of an untuned inhibitory subpopulation is still
a matter of debate (compare e. g. [15] and [16]). Naturally, such an untuned component would
considerably reduce the location dependence of the inhibitory conductance gi . Given that in our
exploration only a small region of parameter space exists where the slope of gi is steeper than in the
experiment, a major contribution of such an untuned inhibition seems incompatible with the data.
Our analysis demonstrates that the network model is compatible with the data only if it operates in a
regime that ? due to the strong recurrent connections ? is close to instability. Such a network is very
sensitive to changes in its governing parameters, e. g. small changes in connection strengths lead to
large changes in the overall firing rate: In the regimes close to the line of instability, increasing g EE
by just 5% typically leads to increases in firing rate of around 40% (EXC), respectively 20% (REC).
In the other regimes (FF and INH) firing rate only changes by around 2–3%. In the "best fitting"
operating regime, a 10% change in firing rate, which is of similar magnitude as observed firing rate
changes under attention in macaque V1 [17], is easily achieved by increasing g EE by 2%. It therefore
seems plausible that one benefit of being in such a regime is the possibility of significantly changing
the "operating point" of the network through only small adjustments of the underlying parameters.
Candidates for such an adjustment could be contextual modulations, adaptation or attentional effects.
The analysis presented here is based on data for cat V1. However, the ubiquitous nature of some
of the architectural principles in neocortex suggests that our results may generalize to other cortical
areas, functions and species.
References
[1] Hubel, D. H & Wiesel, T. N. (1962) J Physiol 160, 106–154.
[2] Sompolinsky, H & Shapley, R. (1997) Curr Opin Neurobiol 7, 514–522.
[3] Ferster, D & Miller, K. D. (2000) Annu Rev Neurosci 23, 441–471.
[4] Martin, K. A. C. (2002) Curr Opin Neurobiol 12, 418–425.
[5] Teich, A. F & Qian, N. (2006) J Neurophysiol 96, 404–419.
[6] Ben-Yishai, R, Bar-Or, R. L, & Sompolinsky, H. (1995) Proc Natl Acad Sci U S A 92, 3844–3848.
[7] Kang, K, Shelley, M, & Sompolinsky, H. (2003) Proc Natl Acad Sci U S A 100, 2848–2853.
[8] McLaughlin, D, Shapley, R, Shelley, M, & Wielaard, D. J. (2000) Proc Natl Acad Sci U S A 97, 8087–92.
[9] Mariño, J, Schummers, J, Lyon, D. C, Schwabe, L, Beck, O, Wiesing, P, Obermayer, K, & Sur, M. (2005) Nat Neurosci 8, 194–201.
[10] Nauhaus, I, Benucci, A, Carandini, M, & Ringach, D. L. (2008) Neuron 57, 673–679.
[11] Destexhe, A, Rudolph, M, Fellous, J, & Sejnowski, T. (2001) Neuroscience 107, 13–24.
[12] Destexhe, A, Mainen, Z. F, & Sejnowski, T. J. (1998) in Methods in neuronal modeling, eds. Koch, C & Segev, I. (MIT Press, Cambridge, Mass), 2nd edition, pp. 1–25.
[13] Swindale, N. V. (1998) Biol Cybern 78, 45–56.
[14] Schummers, J, Cronin, B, Wimmer, K, Stimberg, M, Martin, R, Obermayer, K, Koerding, K, & Sur, M. (2007) Frontiers in Neuroscience 1, 145–159.
[15] Cardin, J. A, Palmer, L. A, & Contreras, D. (2007) J Neurosci 27, 10333–10344.
[16] Nowak, L. G, Sanchez-Vives, M. V, & McCormick, D. A. (2008) Cereb Cortex 18, 1058–1078.
[17] McAdams, C. J & Maunsell, J. H. (1999) J Neurosci 19, 431–441.
From Online to Batch Learning with
Cutoff-Averaging
Anonymous Author(s)
Affiliation
Address
email
Abstract
We present cutoff averaging, a technique for converting any conservative online
learning algorithm into a batch learning algorithm. Most online-to-batch conversion techniques work well with certain types of online learning algorithms and not
with others, whereas cutoff averaging explicitly tries to adapt to the characteristics
of the online algorithm being converted. An attractive property of our technique
is that it preserves the efficiency of the original online algorithm, making it appropriate for large-scale learning problems. We provide a statistical analysis of our
technique and back our theoretical claims with experimental results.
1 Introduction
Batch learning (also called statistical learning) and online learning are two different supervised
machine-learning frameworks. In both frameworks, a learning problem is primarily defined by an
instance space X and a label set Y, and the goal is to assign labels from Y to instances in X . In batch
learning, we assume that there exists a probability distribution over the product space X ? Y, and
that we have access to a training set drawn i.i.d. from this distribution. A batch learning algorithm
uses the training set to generate an output hypothesis, which is a function that maps instances in
X to labels in Y. We expect a batch learning algorithm to generalize, in the sense that its output
hypothesis should accurately predict the labels of previously unseen examples, which are sampled
from the distribution.
On the other hand, in the online learning framework, we typically make no statistical assumptions
regarding the origin of the data. An online learning algorithm receives a sequence of examples and
processes these examples one-by-one. On each online-learning round, the algorithm receives an
instance and predicts its label using an internal hypothesis, which it keeps in memory. Then, the
algorithm receives the correct label corresponding to the instance, and uses the new instance-label
pair to update and improve its internal hypothesis. There is no notion of statistical generalization,
as the algorithm is only expected to accurately predict the labels of examples it receives as input.
The sequence of internal hypotheses constructed by the online algorithm from round to round plays
a central role in this paper, and we refer to this sequence as the online hypothesis sequence.
Online learning algorithms tend to be computationally efficient and easy to implement. However,
many real-world problems fit more naturally in the batch learning framework. As a result, we are
sometimes tempted to use online learning algorithms as if they were batch learning algorithms. A
common way to do this is to present training examples one-by-one to the online algorithm, and
use the last hypothesis constructed by the algorithm as the output hypothesis. We call this technique the last-hypothesis online-to-batch conversion technique. The appeal of this technique is that
it maintains the computational efficiency of the original online algorithm. However, this heuristic technique generally comes with no theoretical guarantees, and the online algorithm's inherent
disregard for out-of-sample performance makes it a risky practice.
In addition to the last-hypothesis heuristic, various principled techniques for converting online algorithms into batch algorithms have been proposed. Each of these techniques essentially wraps the
online learning algorithm with an additional layer of instructions that endow it with the ability to
generalize. One approach is to use the online algorithm to create the online hypothesis sequence, and
then to choose a single good hypothesis from this sequence. For instance, the longest survivor technique [8] (originally called the pocket algorithm) chooses the hypothesis that survives the longest
number of consecutive online rounds before it is replaced. The validation technique [12] uses a
validation set to evaluate each online hypothesis and chooses the hypothesis with the best empirical
performance. Improved versions of the validation technique are given in [2, 3], where the wasteful
need for a separate validation set is resolved. All of these techniques follow the single hypothesis
approach. We note in passing that a disadvantage of the various validation techniques [12, 2, 3] is
that their running time scales quadratically with the number of examples. We typically turn to online
algorithms for their efficiency, and often a quadratic running time can be problematic.
Another common online-to-batch conversion approach, which we call the ensemble approach, uses
the online algorithm to construct the online hypothesis sequence, and combines the hypotheses in
the sequence by taking a majority [7] or by averaging [2, Sec. 2.A]. When using linear hypotheses,
averaging can be done on-the-fly, while the online algorithm is constructing the online hypothesis
sequence. This preserves the computational efficiency of the online algorithm. Taking the majority
or the average over a rich set of hypotheses promotes robustness and stability. Moreover, since we
do not truly know the quality of each online hypothesis, building an ensemble allows us to hedge
our bets, rather than committing to a single online hypothesis.
Sometimes the ensemble approach outperforms the single hypothesis approach, while other times
we see the opposite behavior (see Sec. 4 and [9]). Ideally, we would like a conversion technique
that enjoys the best of both worlds: when a single good online hypothesis can be clearly identified,
it should be chosen as the output hypothesis, but when a good hypothesis cannot be identified, we
should play it safe and construct an ensemble.
A first step in this direction was taken in [10, 5], where the conversion technique selectively chooses
which subset of online hypotheses to include in the ensemble. For example, the suffix averaging
conversion [5] sets the output hypothesis to be the average over a suffix of the online hypothesis
sequence, where the suffix length is determined by minimizing a theoretical upper-bound on the
generalization ability of the resulting hypothesis. One extreme of this approach is to include the
entire online hypothesis sequence in the ensemble. The other extreme reduces to the last-hypothesis
heuristic. By choosing the suffix that gives the best theoretical guarantee, suffix averaging automatically balances the trade-off between these two extremes. Regretfully, this technique suffers from
a computational efficiency problem. Specifically, the suffix averaging technique only chooses the
suffix length after the entire hypothesis sequence has been constructed. Therefore, it must store
the entire sequence in memory before it constructs the output hypothesis, and its memory footprint
grows linearly with training set size. This is in sharp contrast to the last-hypothesis heuristic, which
uses no memory aside from the memory used by the online algorithm itself. When the training set
is massive, storing the entire online hypothesis sequence in memory is impossible.
In this paper, we present and analyze a new conversion technique called cutoff averaging. Like
suffix averaging, it attempts to enjoy the best of the single hypothesis approach and of the ensemble
approach. One extreme of our technique reduces to the simple averaging conversion technique,
while the other extreme reduces to the longest-survivor conversion technique. Like suffix averaging,
we search for the sweet-spot between these two extremes by explicitly minimizing a tight theoretical
generalization bound. The advantage of our technique is that much of it can be performed on-the-fly,
as the online algorithm processes the data. The memory required by cutoff averaging scales with
the square root of the number of training examples in the worst case, and is far less in the typical case.
This paper is organized as follows. In Sec. 2 we formally present the background for our approach.
In Sec. 3 we present the cutoff averaging technique and provide a statistical generalization analysis
for it. Finally, we demonstrate the merits of our approach with a set of experiments in Sec. 4.
2 Preliminaries
Recall that X is an instance domain and that Y is a set of labels, and let H be a hypothesis class,
where each h ∈ H is a mapping from X to Y. For example, we may be faced with a confidence-rated binary classification problem, where H is the class of linear separators. In this case, X is a
subset of the Euclidean space R^n, Y is the real line, and each hypothesis in H is a linear function
parametrized by a weight vector w ∈ R^n and defined as h(x) = ⟨w, x⟩. We interpret sign(h(x)) as
the actual binary label predicted by h, and |h(x)| as the degree of confidence in this prediction.
The quality of the predictions made by h is measured using a loss function ℓ. We use ℓ(h; (x, y))
to denote the penalty incurred for predicting the label h(x) when the correct label is actually y.
Returning to the example of linear separators, a common choice of loss function is the zero-one loss,
which is simply the indicator function of prediction mistakes. Another popular loss function is the
hinge loss, defined as

    ℓ(h; (x, y)) = 1 − y⟨w, x⟩  if y⟨w, x⟩ ≤ 1,  and 0 otherwise.
As noted above, in batch learning we assume the existence of a probability distribution D over the
product space X × Y. The input of a batch learning algorithm is a training set, sampled from D^m.
The risk of a hypothesis h, denoted by ℓ(h; D), is defined as the expected loss incurred by h over
examples sampled from D. Formally,

    ℓ(h; D) = E_{(X,Y)∼D} [ ℓ(h; (X, Y)) ].
We can talk about the zero-one-risk or the hinge-loss-risk, depending on which loss function we
choose to work with. The goal of a batch learning algorithm for the hypothesis class H and for the
loss function ℓ is to find a hypothesis h* ∈ H whose risk is as close as possible to inf_{h∈H} ℓ(h; D).
In online learning, the labeled examples take the form of a sequence S = ((x_i, y_i))_{i=1}^m. We typically
refrain from making any assumptions on the process that generates S; it could very well be a stochastic process but it doesn't have to be. The online algorithm observes the examples in the sequence
one-by-one, and incrementally constructs the sequence of online hypotheses (h_i)_{i=0}^m, where each
h_i ∈ H. The first hypothesis, h_0, is a default hypothesis, which is defined in advance. Before round
t begins, the algorithm has already constructed the prefix (h_i)_{i=0}^{t−1}. At the beginning of round t, the
algorithm observes x_t and makes the prediction h_{t−1}(x_t). Then, the correct label y_t is revealed and
the algorithm suffers a loss of ℓ(h_{t−1}; (x_t, y_t)). Finally, the algorithm uses the new example (x_t, y_t)
to construct the next hypothesis h_t. The update rule used to construct h_t is the main component of
the online learning algorithm. In this paper, we make the simplifying assumption that the update
rule is deterministic, and we note that our derivation can be extended to randomized update rules.
Since S is not necessarily generated by any distribution D, we cannot define the risk of an online
hypothesis. Instead, the performance of an online algorithm is measured using the game-theoretic
notion of regret. The regret of an online algorithm is defined as
    (1/m) Σ_{i=1}^m ℓ(h_{i−1}; (x_i, y_i))  −  min_{h̃∈H} (1/m) Σ_{i=1}^m ℓ(h̃; (x_i, y_i)).        (1)
In words, regret measures how much better the algorithm could have done by using the best fixed
hypothesis in H on all m rounds. The goal of an online learning algorithm is to minimize regret.
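As a concrete reading of Eq. (1), the first (online-loss) term can be computed directly from the hypothesis sequence; the comparator term requires a separate offline minimization over H, which we omit here. Function names are ours.

```python
import numpy as np

def hinge(w, x, y):
    return max(0.0, 1.0 - y * (w @ x))

def average_online_loss(hyps, X, Y):
    """First term of Eq. (1): hypothesis h_{i-1} is charged on example i."""
    return np.mean([hinge(w, x, y) for w, x, y in zip(hyps[:-1], X, Y)])
```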
To make things more concrete, we focus on two online learning algorithms for binary classification.
The first is the classic Perceptron algorithm [13] and the second is a finite-horizon margin-based
variant of the Perceptron, which closely resembles algorithms given in [11, 4]. The term finite-horizon indicates that the algorithm knows the total length of the sequence of examples before observing any data. The term margin-based indicates that the algorithm is concerned with minimizing
the hinge-loss, unlike the classic Perceptron, which deals directly with the zero-one loss. Pseudocode for both algorithms is given in Fig. 1. We chose these two particular algorithms because they
exhibit two extreme behaviors when converted into batch learning algorithms. Specifically, if we
were to present the classic Perceptron with an example-sequence S drawn i.i.d. from a distribution
D, we would typically see large fluctuations in the zero-one-risk of the various online hypotheses.
(see Sec. 4). Due to these fluctuations, the ensemble approach suits the classic Perceptron very well,
PERCEPTRON
    input S = ((x_i, y_i))_{i=1}^m
    set w_0 = (0, . . . , 0)
    for i = 1, . . . , m
        receive x_i, predict sign⟨w_{i−1}, x_i⟩
        receive y_i ∈ {−1, +1}
        if sign⟨w_{i−1}, x_i⟩ ≠ y_i
            w_i ← w_{i−1} + y_i x_i

FINITE-HORIZON MARGIN-BASED PERCEPTRON
    input S = ((x_i, y_i))_{i=1}^m  s.t.  ||x_i||_2 ≤ R
    set w_0 = (0, . . . , 0)
    for i = 1, . . . , m
        receive x_i, predict sign⟨w_{i−1}, x_i⟩
        receive y_i ∈ {−1, +1}
        if ℓ(w_{i−1}; (x_i, y_i)) > 0
            w̃_{i−1} ← w_{i−1} + y_i x_i / (√m R)
            w_i ← w̃_{i−1} / ||w̃_{i−1}||_2
Figure 1: Two versions of the Perceptron algorithm.
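A runnable rendering of both algorithms, assuming NumPy arrays for the data; in the margin-based variant we read the final step of Fig. 1 as a projection back onto the unit sphere, and we assume every x_i is nonzero so the normalization is well defined.

```python
import numpy as np

def perceptron(X, Y):
    """Classic Perceptron; returns the online hypothesis sequence w_0, ..., w_m."""
    w, ws = np.zeros(X.shape[1]), []
    ws.append(w.copy())
    for x, y in zip(X, Y):
        if np.sign(w @ x) != y:            # mistake-driven (conservative) update
            w = w + y * x
        ws.append(w.copy())
    return ws

def margin_perceptron(X, Y, R):
    """Finite-horizon margin-based Perceptron; assumes every x is nonzero."""
    m = len(X)
    w, ws = np.zeros(X.shape[1]), []
    ws.append(w.copy())
    for x, y in zip(X, Y):
        if 1.0 - y * (w @ x) > 0.0:        # positive hinge loss
            v = w + y * x / (np.sqrt(m) * R)
            w = v / np.linalg.norm(v)      # our reading: project back onto the unit sphere
        ws.append(w.copy())
    return ws
```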
and typically outperforms any single hypothesis approach. On the other hand, if we were to repeat
this experiment with the margin-based Perceptron, using hinge-loss-risk, we would typically see a
monotonic decrease in risk from round to round. A possible explanation for this is the similarity
between the margin-based Perceptron and some incremental SVM solvers [14]. The last hypothesis
constructed by the margin-based Perceptron is typically better than any average. This difference
between the classic Perceptron and its margin-based variant was previously observed in [9]. Ideally,
we would like a conversion technique that performs well in both cases.
From a theoretical standpoint, the purpose of an online-to-batch conversion technique is to turn an
online learning algorithm with a regret bound into a batch learning algorithm with a risk bound. We
state a regret bound for the margin-based Perceptron, so that we can demonstrate this idea in the
next section.
Theorem 1. Let S = ((x_i, y_i))_{i=1}^m be a sequence of examples such that x_i ∈ R^n and y_i ∈ {−1, +1},
and let ℓ denote the hinge loss. Let H be the set of linear separators defined by weight vectors in
the unit L_2 ball. Let (h_i)_{i=0}^m be the online hypothesis sequence generated by the margin-based
Perceptron (see Fig. 1) when it processes S. Then, for any h̃ ∈ H,

    (1/m) Σ_{i=1}^m ℓ(h_{i−1}; (x_i, y_i))  −  (1/m) Σ_{i=1}^m ℓ(h̃; (x_i, y_i))  ≤  R/√m.
The proof of Thm. 1 is not much different from other regret bounds for Perceptron-like algorithms;
for completeness we give the proof in [1].
3 Cutoff Averaging
We now present the cutoff averaging conversion technique. This technique can be applied to any
conservative online learning algorithm that uses a convex hypothesis class H. A conservative algorithm is one that modifies its online hypotheses only on rounds where a positive loss is suffered.
On rounds where no loss is suffered, the algorithm keeps its current hypothesis, and we say that
the hypothesis survived the round. The survival time of each distinct online hypothesis is the number of consecutive rounds it survives before the algorithm suffers a loss and replaces it with a new
hypothesis.
Like the conversion techniques mentioned in Sec. 1, we start by applying the online learning algorithm to an i.i.d. training set, and obtaining the online hypothesis sequence (h_i)_{i=0}^{m−1}. Let k be an
arbitrary non-negative integer, which we call the cutoff parameter. Ultimately, our technique will
set k automatically, but for the time-being, assume k is a predefined constant. Let Ψ ⊆ (h_i)_{i=0}^{m−1} be
the set of distinct hypotheses whose survival time is greater than k. The cutoff averaging technique
defines the output hypothesis h* as a weighted average over the hypotheses in Ψ, where the weight
of a hypothesis with survival time s is proportional to s − k. Intuitively, each hypothesis must qualify for the ensemble, by suffering no loss for k consecutive rounds. The cutoff parameter k sets the
bar for acceptance into the ensemble. Once a hypothesis is included in the ensemble, its weight is
determined by the number of additional rounds it perseveres after qualifying.
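In code, the weighting rule amounts to the following sketch; the hypothesis vectors (e.g. NumPy arrays) and their survival times are assumed given, and the default hypothesis h_0 is covered if it is passed in with its own survival time.

```python
def cutoff_average(hypotheses, survival_times, k):
    """Eq.-(4)-style average: a hypothesis surviving s > k rounds gets weight s - k."""
    members = [(h, s - k) for h, s in zip(hypotheses, survival_times) if s > k]
    total = sum(w for _, w in members)
    return sum(w * h for h, w in members) / total
```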
4
We present a statistical analysis of the cutoff averaging technique. We use capital-letter notation
throughout our analysis to emphasize that our input is stochastic and that we are essentially analyzing random variables.
First, we represent the sequence of examples as a sequence of random variables ((X_i, Y_i))_{i=1}^m.
Once this sequence is presented to the online algorithm, we obtain the online hypothesis sequence
(H_i)_{i=1}^m, which is a sequence of random functions. Note that each random function H_i is
deterministically defined by the random variables ((X_j, Y_j))_{j=1}^i. Therefore, the risk of H_i is also
a deterministic function of ((X_j, Y_j))_{j=1}^i. Since (X_{i+1}, Y_{i+1}) is sampled from D independently
of ((X_j, Y_j))_{j=1}^i, we observe that

    ℓ(H_i; D) = E[ ℓ(H_i; (X_{i+1}, Y_{i+1})) | ((X_j, Y_j))_{j=1}^i ].        (2)
In words, the risk of the random function Hi equals the conditional expectation of the online loss
suffered on round i + 1, conditioned on the random examples 1 through i. This simple observation
relates statistical risk with online loss, and is the key to converting regret bounds into risk bounds.
Define the sequence of binary random variables (B_i)_{i=0}^{m−1} as follows:

    B_i = 1  if i = 0, or if i ≥ k and H_{i−k} = H_{i−k+1} = · · · = H_i;   B_i = 0 otherwise.        (3)
Now define the output hypothesis

    H_k* = ( Σ_{i=0}^{m−1} B_i )^{−1} Σ_{i=0}^{m−1} B_i H_i.        (4)
Note that we automatically include the default hypothesis H_0 in the definition of H_k*. This technical
detail makes our analysis more elegant, and is otherwise irrelevant. Also note that setting k = 0
results in Bi = 1 for all i, and would reduce our conversion technique to the standard averaging
conversion technique. At the other extreme, as k increases, our technique approaches the longest
survivor conversion technique.
The following theorem bounds the risk of H_k* using the online loss suffered on rounds where B_i = 1.
The theorem holds only when the loss function ℓ is convex in its first argument and bounded in [0, C].
Note that this is indeed the case for the margin-based Perceptron and the hinge loss function. Since
the margin-based Perceptron enforces ||w_i||_2 ≤ 1, and assuming that ||x_i||_2 ≤ R, it follows from the
Cauchy-Schwarz inequality that ℓ ∈ [0, R + 1]. If the loss function is not convex, the theorem does
not hold, but note that we can still bound the average risk of the hypotheses in the ensemble.
Theorem 2. Let k be a non-negative constant and let ℓ be a convex loss function such that
ℓ(h; (x, y)) ∈ [0, C]. An online algorithm is given m ≥ 4 independent samples from D and
constructs the online hypothesis sequence (H_i)_{i=0}^m. Define B_i and H_k* as above, let
L_i = B_{i−1} ℓ(H_{i−1}; (X_i, Y_i)) for all i, and let L̄ = ( Σ_i B_i )^{−1} Σ_i L_i. For any δ ∈ (0, 1),
with probability at least 1 − δ, it holds that

    ℓ(H_k*; D)  <  L̄ + √( 2C ln(m/δ) L̄ / Σ_i B_i ) + 7C ln(m/δ) / Σ_i B_i.
To prove the theorem, we require the following tail bound, which is a corollary of Freedman's tail
bound for martingales [6], similar to [3, Proposition 2].
Lemma 1. Let (L_i)_{i=1}^m be a sequence of real-valued random variables and let (Z_i)_{i=1}^m be a sequence
of arbitrary random variables such that L_i = E[L_i | (Z_j)_{j=1}^i] and L_i ∈ [0, C] for all i. Define
U_i = E[L_i | (Z_j)_{j=1}^{i−1}] for all i, and define L̄_t = Σ_{i=1}^t L_i and Ū_t = Σ_{i=1}^t U_i for all t. For any
m ≥ 4 and for any δ ∈ (0, 1), with probability at least 1 − δ, it holds that

    ∀ t ∈ {1, . . . , m}:   Ū_t < L̄_t + √( 2C ln(m/δ) L̄_t ) + 7C ln(m/δ).
Due to space constraints, the proof of Lemma 1 is given in [1]. It can also be reverse-engineered
from [3, Proposition 2]. Equipped with Lemma 1, we now prove Thm. 2.
Proof of Thm. 2. Define U_i = E[L_i | ((X_j, Y_j))_{j=1}^{i−1}] for all i ∈ {1, . . . , m}, and define
Ū = Σ_{i=1}^m U_i. Using Lemma 1 (with t = m), we have that, with probability at least 1 − δ,

    Ū < Σ_{i=1}^m L_i + √( 2C ln(m/δ) Σ_{i=1}^m L_i ) + 7C ln(m/δ).

Now notice that, by definition,

    U_i = E[ B_{i−1} ℓ(H_{i−1}; (X_i, Y_i)) | ((X_j, Y_j))_{j=1}^{i−1} ].

Since B_{i−1} is deterministically defined by ((X_j, Y_j))_{j=1}^{i−1}, it can be taken outside of the conditional
expectation above. Using the observation made in Eq. (2), we have U_i = B_{i−1} ℓ(H_{i−1}; D). Overall,
we have shown that

    Σ_{i=1}^m B_{i−1} ℓ(H_{i−1}; D) < Σ_{i=1}^m L_i + √( 2C ln(m/δ) Σ_{i=1}^m L_i ) + 7C ln(m/δ).

Using Jensen's inequality, the left-hand side above is at least ( Σ_{i=1}^m B_{i−1} ) ℓ(H_k*; D); dividing
both sides by Σ_{i=1}^m B_{i−1} yields the theorem.
We can now complete the definition of the cutoff averaging technique. Note that by replacing δ
with δ/m in Thm. 2 and by using the union bound, we can ensure that Thm. 2 holds uniformly for
all k ∈ {0, . . . , m − 1} with probability at least 1 − δ. The cutoff averaging technique sets the
output hypothesis H* to be the hypothesis in {H_0*, . . . , H_{m−1}*} for which Thm. 2 gives the smallest
bound. In other words, k is chosen automatically so as to balance the trade-off between the benefits
of averaging and those of good empirical performance. If a small number of online hypotheses
stand out with significantly long survival times, then our technique will favor a large k and a sparse
ensemble. On the other hand, if most of the online hypotheses have medium/short survival times,
then our technique will favor small values of k and a dense ensemble. Even if ℓ is not convex,
minimizing the bound in Thm. 2 implicitly minimizes the average risk of the ensemble hypotheses.
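Selecting k by minimizing the union-bounded version of Thm. 2 could look like this; `sum_B[k]` and `avg_loss[k]` are assumed to have been accumulated during training, and all names are ours.

```python
import numpy as np

def best_cutoff(sum_B, avg_loss, m, C, delta):
    """Pick k minimizing the Thm. 2 bound, with delta replaced by delta/m for the union bound.

    sum_B[k]   : number of weighted ensemble rounds for cutoff k (assumed positive)
    avg_loss[k]: the corresponding average loss L-bar
    """
    ln_term = np.log(m ** 2 / delta)   # ln(m / (delta / m))
    bound = avg_loss + np.sqrt(2 * C * ln_term * avg_loss / sum_B) + 7 * C * ln_term / sum_B
    return int(np.argmin(bound))
```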
If the online algorithm being converted has a regret bound, then the data dependent risk bound
given by Thm. 2 can be turned into a data independent risk bound. A detailed derivation of such a
bound exceeds the scope of this paper, and we just sketch the proof in the case of the margin-based
Perceptron. It trivially holds that the risk of H ? is upper-bounded by the bound given in Thm. 2 for
? simply becomes the average loss suffered by the
k = 0. When Thm. 2 is applied with k = 0, L
P
?
online algorithm over the entire training set and
Bi = m. We can now use Thm. 1 to bound L
m
?
?
by the average loss of any h ? H on the sequence (Xi , Yi ) i=1 . Particularly, we can choose h to
? = arg minh?H ?(h; D). The final step is
be the hypothesis with the smallest risk in H, namely, h
P ?
1
? D), which can be done using any tail
to bound the difference between m
?(h; (Xi , Yi )) and ?(h;
bound for sums of independent bounded random variables, such as Hoeffding?s bound or Bernstein?s
bound. The result is that, with high probability, ?(H ? ; D) ? minh?H ?(h; D) + O(m?1/2 ). Similar
derivations appear in [2, 3].
As mentioned in the introduction, our approach is similar to the suffix averaging conversion technique of [5], which also interpolates between an ensemble approach and a single hypothesis approach. However, the suffix conversion requires
Θ(m) space, which is problematic when m is large.
In contrast, cutoff averaging requires only O(√m) space. Our technique cannot choose the optimal
value of k before the entire dataset has been processed, but nevertheless, it does not need to store
the entire hypothesis sequence. Instead, it can group the online hypotheses based on their survival
times, and stores only the average hypothesis in each group and the total loss in each group. By
the time the entire dataset is processed, most of the work has already been done and calculating the
optimal k and the output hypothesis is straightforward. Using simple combinatorics, the maximal
number of distinct survival times in a sequence of m hypotheses is O(√m).
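A sketch of the on-the-fly bookkeeping, assuming linear hypotheses stored as NumPy arrays: hypotheses are grouped by survival time, so only one summed vector, a count, and a total loss are kept per group. Class and method names are ours.

```python
import numpy as np
from collections import defaultdict

class CutoffAccumulator:
    """Group online hypotheses by survival time; O(sqrt(m)) groups in the worst case."""
    def __init__(self, dim):
        self.sum_h = defaultdict(lambda: np.zeros(dim))  # summed hypotheses per survival time
        self.count = defaultdict(int)                    # hypotheses with that survival time
        self.loss = defaultdict(float)                   # accumulated online loss per group

    def add(self, h, survival, loss):
        self.sum_h[survival] += h
        self.count[survival] += 1
        self.loss[survival] += loss

    def ensemble(self, k):
        """Cutoff average for a given k, as in Eq. (4)."""
        den = sum((s - k) * c for s, c in self.count.items() if s > k)
        num = sum((s - k) * v for s, v in self.sum_h.items() if s > k)
        return num / den
```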
Finally, note that Lemma 1 is a Kolmogorov-type bound, namely, it holds uniformly for every prefix
of the sequence of random variables. Therefore, Thm. 2 actually holds simultaneously for every
prefix of the training set. Since our conversion is mostly calculated on-the-fly, in parallel with the
online rounds, we can easily construct intermediate output hypotheses, before the online algorithm
has a chance to process the entire dataset. Thanks to the Kolmogorov-type bound, the risk bounds
for all of these hypotheses hold simultaneously. We can monitor how the risk bound changes
as the number of examples increases, and perhaps even use the bound to define an early stopping
criterion for the training algorithm. Specifically, we could stop processing examples when the risk
bound becomes lower than a predefined threshold.
[Figure 2 appears here: ten panels (CCAT vs. MCAT, CCAT vs. GCAT, CCAT vs. ECAT, CCAT vs. OTHER, GCAT vs. MCAT, GCAT vs. ECAT, GCAT vs. OTHER, MCAT vs. ECAT, MCAT vs. OTHER, ECAT vs. OTHER) plotting test error against training set size for the cutoff and last-hypothesis conversions.]
Figure 2: Test error (zero-one-loss) of last-hypothesis and cutoff averaging, each applied to the standard Perceptron, on ten binary classification problems from RCV1. The x-axis represents training
set size, and is given in log-scale. Each plot represents the average over 10 random train-test splits.
4 Experiments and Conclusions
We conducted experiments using Reuters Corpus Vol. 1 (RCV1), a collection of over 800K news
articles collected from the Reuters news wire. An average article in the corpus contains 240 words,
and the entire corpus contains over half a million distinct tokens (not including numbers and dates).
Each article in the corpus is associated with one or more high-level categories, which are: Corporate/Industrial (CCAT), Economics (ECAT), Government/Social (GCAT), Markets (MCAT), and
Other (OTHER). About 20% of the articles in the corpus are associated with more than one highlevel category. After discarding this 20%, we are left with over 600K documents, each with a single
high-level label. Each pair of high-level labels defines the binary classification problem of distinguishing between articles of the two categories, for a total of ten different problems. Each problem
has different characteristics, due to the different number of articles and the varying degree of homogeneity in each category.
Each article was mapped to a feature vector using a logarithmic bag-of-words representation.
Namely, the length of each vector equals the number of distinct tokens in the corpus, and each
coordinate in the vector represents one of these tokens. If a token appears s times in a given article,
the respective coordinate in the feature vector equals log2 (1 + s).
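The feature map could be implemented as follows; the toy vocabulary is purely illustrative.

```python
import numpy as np
from collections import Counter

def log_bow(tokens, vocab):
    """Logarithmic bag-of-words vector: coordinate = log2(1 + term count)."""
    v = np.zeros(len(vocab))
    for tok, cnt in Counter(tokens).items():
        if tok in vocab:
            v[vocab[tok]] = np.log2(1.0 + cnt)
    return v

vocab = {"market": 0, "government": 1, "profit": 2}   # toy vocabulary (illustration only)
x = log_bow("profit profit market".split(), vocab)
```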
We applied the cutoff averaging technique to the classic Perceptron and to the margin-based Perceptron. We repeated each of our experiments ten times, each time taking a new random split of
the data into a training set (80%) and a test set (20%), and randomly ordering the training set. We
trained each algorithm on each dataset in an incremental manner, namely, we started by training the
algorithm using a short prefix of the training sequence, and gradually increased the training set size.
We paused training at regular intervals, computed the output hypothesis so far, and calculated its test
loss. This gives us an idea of what would happen on smaller training sets.
Fig. 2 shows the test zero-one loss attained when our technique is applied to the classic Perceptron
algorithm. It also shows the test zero-one loss of the last-hypothesis conversion technique. Clearly,
the test loss of the last hypothesis is very unstable, even after averaging over 10 repetitions. In some
cases, adding training data actually deteriorates the performance of the last hypothesis. If we decide
to use the last hypothesis technique, our training set size could happen to be such that we end up with
a bad output hypothesis. On the other hand, the cutoff averaging hypothesis is accurate, stable and
consistent. The performance of the simple averaging conversion technique is not plotted in Fig. 2,
but we note that it was only slightly worse than the performance of cutoff averaging. When using
the classic Perceptron, any form of averaging is beneficial, and our technique successfully identifies
this.
Fig. 3 shows the test hinge loss of cutoff averaging, last-hypothesis, and simple averaging, when
applied to the margin-based Perceptron. In this case, the last hypothesis performs remarkably well
[Figure 3 appears here: ten panels (one per category pair, as in Figure 2) plotting test hinge-loss against training set size for the cutoff, average, and last-hypothesis conversions.]
Figure 3: Test hinge-loss of last-hypothesis, averaging, and cutoff averaging, each applied to the
finite-horizon margin-based Perceptron, on ten binary classification problems from RCV1. The x-axis represents training set size and each plot represents the average over 10 random train-test splits.
and the simple averaging conversion technique is significantly inferior for all training set sizes.
Within 1000 online rounds (0.1% of the data), the cutoff averaging technique catches up to the last
hypothesis and performs comparably well from then on. Our technique's poor performance on the
first 0.1% of the data is expected, since the tail bounds we rely on are meaningless with so few
examples. Once the tail bounds become tight enough, our technique essentially identifies that there
is no benefit in constructing a diverse ensemble, and assigns all of the weight to a short suffix of the
online hypothesis sequence.
We conclude that there are cases where the single-hypothesis approach is called for and there are
cases where an ensemble approach should be used. If we are fortunate enough to know which case
applies, we can simply choose the right approach. However, if we are after a generic solution that
performs well in both cases, we need a conversion technique that automatically balances the tradeoff between these two extremes. Suffix averaging [5] and cutoff averaging are two such techniques,
with cutoff averaging having a significant computational advantage.
References
[1] Anonymous. Technical appendix submitted with this manuscript, 2008.
[2] N. Cesa-Bianchi, A. Conconi, and C. Gentile. On the generalization ability of online learning algorithms. IEEE Transactions on Information Theory, 50(9):2050-2057, September 2004.
[3] N. Cesa-Bianchi and C. Gentile. Improved risk bounds for online algorithms. NIPS 19, 2006.
[4] O. Dekel, S. Shalev-Shwartz, and Y. Singer. The Forgetron: A kernel-based perceptron on a budget. SIAM Journal on Computing, 37:1342-1372, 2008.
[5] O. Dekel and Y. Singer. Data-driven online to batch conversions. NIPS 18, 2006.
[6] D. A. Freedman. On tail probabilities for martingales. Annals of Probability, 3(1):100-118, 1975.
[7] Y. Freund and R. E. Schapire. Large margin classification using the perceptron algorithm. Machine Learning, 37(3):277-296, 1999.
[8] S. I. Gallant. Optimal linear discriminants. Proc. of ICPR 8, pages 849-852. IEEE, 1986.
[9] R. Khardon and G. Wachman. Noise tolerant variants of the perceptron algorithm. Journal of Machine Learning Research, 8:227-248, 2007.
[10] Y. Li. Selective voting for perceptron-like learning. Proc. of ICML 17, pages 559-566, 2000.
[11] Y. Li, H. Zaragoza, R. Herbrich, J. Shawe-Taylor, and J. Kandola. The perceptron algorithm with uneven margins. Proc. of ICML 19, pages 379-386, 2002.
[12] N. Littlestone. From online to batch learning. Proc. of COLT 2, pages 269-284, 1989.
[13] F. Rosenblatt. The perceptron: A probabilistic model for information storage and organization in the brain. Psychological Review, 65:386-407, 1958.
[14] T. Zhang. Solving large scale linear prediction problems using stochastic gradient descent algorithms. Proc. of ICML 21, 2004.
Transfer Learning by Distribution Matching
for Targeted Advertising
Steffen Bickel, Christoph Sawade, and Tobias Scheffer
University of Potsdam, Germany
{bickel, sawade, scheffer}@cs.uni-potsdam.de
Abstract
We address the problem of learning classifiers for several related tasks that may
differ in their joint distribution of input and output variables. For each task, small (possibly even empty) labeled samples and large unlabeled samples are available.
While the unlabeled samples reflect the target distribution, the labeled samples
may be biased. This setting is motivated by the problem of predicting sociodemographic features for users of web portals, based on the content which they have
accessed. Here, questionnaires offered to a portion of each portal's users produce
biased samples. We derive a transfer learning procedure that produces resampling
weights which match the pool of all examples to the target distribution of any
given task. Transfer learning enables us to make predictions even for new portals
with few or no training data and improves the overall prediction accuracy.
1 Introduction
We study a problem setting of transfer learning in which classifiers for multiple tasks have to be
learned from biased samples. Some of the multiple tasks will likely relate to one another, but one
cannot assume that the tasks share a joint conditional distribution of the class label given the input
variables. The challenge of multi-task learning is to come to a good generalization across tasks: each
task should benefit from the wealth of data available for the entirety of tasks, but the optimization
criterion needs to remain tied to the individual task at hand.
A common method for learning under covariate shift (marginal shift) is to weight the biased training examples by the test-to-training ratio p_test(x)/p_train(x) to match the marginal distribution of the test data [1]. Instead of separately estimating the two potentially high-dimensional densities, one can directly estimate the density ratio, by kernel mean matching [2], by minimization of the KL-divergence between test and weighted training data [3], or by discrimination of training against test data with a probabilistic classifier [4].
Hierarchical Bayesian models are a standard statistical approach to multi-task learning [5, 6, 7].
Here, a common prior on model parameters across tasks captures the task dependencies. Similar to
the idea of learning under marginal shift by weighting the training examples, [8] devise a method
for learning under joint shift of covariates and labels over multiple tasks that is based on instance-specific rescaling factors. We generalize this idea to a setting where not only the joint distributions
between tasks may differ but also the training and test distribution within each task.
Our work is motivated by the targeted advertising problem for which the goal is to predict sociodemographic features (such as gender, age, or marital status) of web users, based on their surfing
history. Many types of products are specifically targeted at clearly defined market segments, and
marketing organizations seek to disseminate their message under minimal costs per delivery to a
targeted individual. When sociodemographic attributes can be identified, delivering advertisements
to users outside the target segment can be avoided. For some campaigns, clicks and resulting on-
line purchases constitute an ultimate success criterion. However, for many campaigns, including campaigns for products that are not typically purchased on the web, the sole goal is to deliver the
advertisement to customers in the target segment.
The paper is structured as follows. Section 2 defines the problem setting. In Section 3, we devise our
transfer learning model. We empirically study transfer learning for targeted advertising in Section 4
and Section 5 concludes.
2 Problem Setting
We consider the following multi-task learning scenario. Each of several tasks z is characterized by an unknown joint distribution p_test(x, y|z) = p_test(x|z) p(y|x, z) over features x and labels y given the task z. The joint distributions of different tasks may differ arbitrarily but usually some tasks have similar distributions. An unlabeled test sample T = <(x_1, z_1), ..., (x_m, z_m)> with examples from different tasks is available. For each test example, attributes x_i and the originating task z_i are known. The test data for task z are governed by p_test(x|z).
A labeled training set L = <(x_{m+1}, y_{m+1}, z_{m+1}), ..., (x_{m+n}, y_{m+n}, z_{m+n})> collects examples from several tasks. In addition to x_i and z_i, the label y_i is known for each example. The training data for task z is drawn from a joint distribution p_train(x, y|z) = p_train(x|z) p(y|x, z) that may differ from the test distribution in terms of the marginal distribution p_train(x|z). The training and test marginals may differ arbitrarily, as long as each x with positive p_test(x|z) also has a positive p_train(x|z). This guarantees that the training distribution covers the entire support of the test distribution for each task. The conditional distribution p(y|x, z) of test and training data is identical for a given task z, but conditionals can differ arbitrarily between tasks. The entire training set over all tasks is governed by the mixed density p_train(z) p_train(x, y|z). The prior p_train(z) specifies the task proportions. There may be tasks with only a few or no labeled data.
The goal is to learn a hypothesis f_z : x -> y for each task z. This hypothesis f_z(x) should correctly predict the true label y of unseen examples drawn from p(x|z) for all z. That is, it should minimize the expected loss

E_{(x,y) ~ p_test(x,y|z)} [ l(f_z(x), y) ]

with respect to the unknown distribution p_test(x, y|z) for each individual z.
This abstract problem setting models the targeted advertising application as follows. The feature
vector x encodes the web surfing behavior of a user of web portal z (task). For a small number of
users the sociodemographic target label y (e.g., gender of user) is collected through web surveys. For
new portals the number of such labeled training instances is initially small. The sociodemographic
labels for all users of all portals are to be predicted. The joint distribution p_test(x, y|z) can be different between portals since they attract specific populations of users. The training distribution differs from the test distribution because the response to the web surveys is not uniform with respect to the test distribution. Since the completion of surveys cannot be enforced, it is intrinsically impossible to obtain labeled samples that are governed by the test distribution. Therefore, a possible difference between the conditionals p_test(y|x, z) and p_train(y|x, z) cannot be reflected in the model.
One reference strategy is to learn individual models for each target task z by minimizing an appropriate loss function on the portion L_z = {(x_i, y_i, z_i) in L : z_i = z}. This procedure does not exploit data of related tasks. In addition, it minimizes the loss with respect to p_train(x, y|z); the minimum of this optimization problem will not generally coincide with the minimal loss on p_test(x, y|z). The other extreme is a single one-size-fits-all model trained on the pooled training sample L. The training sample may deviate arbitrarily from the target distribution p_test(x, y|z).
In order to describe the following model accurately, we introduce a selector variable s which distinguishes training (s = 1) from test distributions (s = -1). Symbol p_train(x, y|z) is a shorthand for p(x, y|z, s = 1); likewise, p_test(x, y|z) = p(x, y|z, s = -1).
3 Transfer Learning by Distribution Matching
In learning a classifier f_t(x) for target task t, we seek to minimize the loss function with respect to p_test(x, y|t) = p(x, y|t, s = -1). Both t and z are values of the random variable task; value t identifies the current target task. Simply pooling the available data for all tasks would create a sample governed by sum_z p(z|s=1) p(x, y|z, s=1). Our approach is to create a task-specific resampling weight r_t(x, y) for each element of the pool of examples. The resampling weights match the pool distribution to the target distribution p(x, y|t, s = -1). The resampled pool is governed by the correct target distribution, but is larger than the labeled sample of the target task. Instead of sampling from the pool, one can weight the loss incurred by each instance by the resampling weight.
The expected weighted loss with respect to the mixture distribution that governs the pool equals the loss with respect to the target distribution p(x, y|t, s = -1). Equation 1 defines the condition that the resampling weights have to satisfy:

E_{(x,y) ~ p(x,y|t,s=-1)} [ l(f(x,t), y) ] = E_{(x,y) ~ sum_z p(z|s=1) p(x,y|z,s=1)} [ r_t(x, y) l(f(x,t), y) ].   (1)
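This identity can be verified numerically on a toy problem. The following Python sketch uses small, assumed two-task discrete distributions (all probability values are illustrative, not from the paper) and checks that the weighted pool expectation equals the target expectation, anticipating the weight formula derived below as Equation 2.

# Toy check of Equation 1: two tasks z in {0, 1}, binary x and y.
p_z = {0: 0.5, 1: 0.5}                                   # p(z | s=1)
p_xy = {0: {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4},
        1: {(0, 0): 0.1, (0, 1): 0.4, (1, 0): 0.4, (1, 1): 0.1}}  # p(x,y | z, s=1)
t = 0                                                    # target task
p_x_test = {0: 0.8, 1: 0.2}                              # p(x | t, s=-1), covariate-shifted
p_x_train = {x: sum(p for (xx, _), p in p_xy[t].items() if xx == x) for x in (0, 1)}

def pool(x, y):  # mixture density governing the pooled sample
    return sum(p_z[z] * p_xy[z][(x, y)] for z in p_z)

def r(x, y):     # resampling weights in the form derived below (Equation 2)
    return p_xy[t][(x, y)] / pool(x, y) * p_x_test[x] / p_x_train[x]

loss = lambda x, y: float(y != x)  # stand-in loss of some fixed hypothesis

# Target expectation: p(x,y|t,s=-1) = p(x|t,s=-1) p(y|x,t).
lhs = sum(p_x_test[x] * p_xy[t][(x, y)] / p_x_train[x] * loss(x, y)
          for (x, y) in p_xy[t])
# Weighted pool expectation (right-hand side of Equation 1).
rhs = sum(pool(x, y) * r(x, y) * loss(x, y) for (x, y) in p_xy[t])
print(abs(lhs - rhs) < 1e-12)  # True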
In the following, we will show that

r_t(x, y) = [ p(x, y|t, s=1) / sum_z p(z|s=1) p(x, y|z, s=1) ] * [ p(x|t, s=-1) / p(x|t, s=1) ]   (2)
satisfies Equation 1. Equation 3 expands the expectation and introduces two fractions that equal one. We can factorize p(x, y|t, s = -1) and expand the sum over z in the numerator to run over the entire expression because the integral over (x, y) is independent of z (Equation 4). Equation 5 rearranges some terms and Equation 6 is the expected loss over the distribution of all tasks weighted by r_t(x, y).
E_{(x,y) ~ p(x,y|t,s=-1)} [ l(f(x,t), y) ]

= Int [ sum_z p(z|s=1) p(x,y|z,s=1) / sum_{z'} p(z'|s=1) p(x,y|z',s=1) ] [ p(x|t,s=1) / p(x|t,s=1) ] p(x,y|t,s=-1) l(f(x,t), y) dx dy   (3)

= Int sum_z [ p(z|s=1) p(x,y|z,s=1) / sum_{z'} p(z'|s=1) p(x,y|z',s=1) ] [ p(x|t,s=1) / p(x|t,s=1) ] p(x|t,s=-1) p(y|x,t) l(f(x,t), y) dx dy   (4)

= Int sum_z p(z|s=1) p(x,y|z,s=1) [ p(x|t,s=1) p(y|x,t) / sum_{z'} p(z'|s=1) p(x,y|z',s=1) ] [ p(x|t,s=-1) / p(x|t,s=1) ] l(f(x,t), y) dx dy   (5)

= E_{(x,y) ~ sum_z p(z|s=1) p(x,y|z,s=1)} [ ( p(x,y|t,s=1) / sum_{z'} p(z'|s=1) p(x,y|z',s=1) ) ( p(x|t,s=-1) / p(x|t,s=1) ) l(f(x,t), y) ]   (6)
Equation 5 signifies that we can train a hypothesis for task t by minimizing the expected loss over the distribution of all tasks weighted by r_t(x, y). This amounts to minimizing the expected loss with respect to the target distribution p(x, y|t, s = -1). The resampling weights of Equation 2 have an intuitive interpretation: the first fraction accounts for the difference in the joint distributions across tasks, and the second fraction accounts for the covariate shift within the target task.
Equation 5 leaves us with the problem of estimating the product of two density ratios, r_t(x, y) = [ p(x, y|t, s=1) / sum_z p(z|s=1) p(x, y|z, s=1) ] * [ p(x|t, s=-1) / p(x|t, s=1) ]. One might be tempted to train four separate density estimators, one for each of the two numerators and the two denominators. However, obtaining estimators for potentially high-dimensional densities is unnecessarily difficult because ultimately only a scalar weight is required for each example.
3.1 Discriminative Density Ratio Models
In this section, we derive a discriminative model that directly estimates the two factors r_t^1(x, y) = p(x, y|t, s=1) / sum_z p(z|s=1) p(x, y|z, s=1) and r_t^2(x) = p(x|t, s=-1) / p(x|t, s=1) of the resampling weights r_t(x, y) = r_t^1(x, y) r_t^2(x), without estimating the individual densities.
We reformulate the first density ratio r_t^1(x, y) = p(x, y|t, s=1) / sum_z p(z|s=1) p(x, y|z, s=1) in terms of a conditional model p(t|x, y, s = 1). This conditional has the following intuitive meaning: given that an instance (x, y) has been drawn at random from the pool distribution sum_z p(z|s=1) p(x, y|z, s=1) over all tasks (including target task t), the probability that (x, y) originates from p(x, y|t, s = 1) is p(t|x, y, s = 1). The following equations assume that the prior on the size of the target sample is greater than zero, p(t|s = 1) > 0. In Equation 7, Bayes' rule is applied to the numerator and z is summed out in the denominator. Equation 8 follows by dropping the normalization factor p(t|s = 1) and by canceling p(x, y|s = 1):

r_t^1(x, y) = p(x, y|t, s=1) / sum_z p(z|s=1) p(x, y|z, s=1)
            = [ p(t|x, y, s=1) p(x, y|s=1) ] / [ p(t|s=1) p(x, y|s=1) ]   (7)
            prop. to p(t|x, y, s=1).   (8)
The significance of Equation 8 is that it shows how the first factor of the resampling weights, r_t^1(x, y), can be determined without knowledge of any of the task densities p(x, y|z, s = 1). The right-hand side of Equation 8 can be evaluated based on a model p(t|x, y, s = 1) that discriminates labeled instances of the target task against labeled instances of the pool of examples for all non-target tasks.
Similar to the first density ratio, the second density ratio r_t^2(x) = p(x|t, s=-1) / p(x|t, s=1) can be expressed using a conditional model p(s = 1|x, t). In Equation 9, Bayes' rule is applied twice. The two terms of p(x|t) cancel each other out, p(s = 1|t)/p(s = -1|t) is just a normalization factor, and since p(s = -1|x, t) = 1 - p(s = 1|x, t), Equation 10 follows:

r_t^2(x) = p(x|t, s=-1) / p(x|t, s=1)
         = [ p(s=1|t) / p(s=-1|t) ] [ p(s=-1|x,t) p(x|t) ] / [ p(s=1|x,t) p(x|t) ]   (9)
         prop. to 1 / p(s=1|x,t) - 1.   (10)
The significance of the above derivations is that instead of the four potentially high-dimensional densities in r_t(x, y), only two conditional distributions with binary variables (Equations 8 and 10) need to be estimated. One can apply any probabilistic classifier to this estimation.
3.2 Estimation of Discriminative Density Ratios
For estimation of r_t^1(x, y) we model p(t|x, y, s = 1) of Equation 8 with a logistic regression model

p(t|x, y, s = 1, u_t) = 1 / (1 + exp(-u_t^T Phi(x, y)))

over model parameters u_t, using a problem-specific feature mapping Phi(x, y). We define this mapping for binary labels as Phi(x, y) = [ delta(y, +1) phi(x) ; delta(y, -1) phi(x) ], where delta is the Kronecker delta. In the absence of prior knowledge about the similarity of classes, input features x of examples with different class labels y are mapped to disjoint subsets of the feature vector. With this feature mapping the models for positive and negative examples do not interact and can be trained independently. Any suitable mapping phi(x) can be applied. In [8], p(t|x, y, s = 1) is modeled for all tasks jointly in a single optimization problem with a soft-max model. Empirically, we observe that a separate binary logistic regression model (as described above) for each task yields more accurate results, with the drawback of a slightly increased overall training time.

Optimization Problem 1 For task t: over parameters u_t, maximize

sum_{(x,y) in L_t} log p(t|x, y, s=1, u_t) + sum_{(x,y) in L\L_t} log(1 - p(t|x, y, s=1, u_t)) - u_t^T u_t / (2 sigma_u^2).

The solution of Optimization Problem 1 is a MAP estimate of the logistic regression using a Gaussian prior on u_t. The estimated vector u_t leads to the first part of the weighting factor, r^hat_t^1(x, y) prop. to p(t|x, y, s = 1, u_t), according to Equation 8. r^hat_t^1(x, y) is normalized so that the weighted empirical distribution over the pool L sums to one, (1/|L|) sum_{(x,y) in L} r^hat_t^1(x, y) = 1.

According to Equation 10, the density ratio r_t^2(x) = p(x|t, s=-1) / p(x|t, s=1) prop. to 1/p(s=1|x,t) - 1 can be inferred from p(s = 1|x, t), which is the likelihood that a given x for task t originates from the training distribution, as opposed to the test distribution. A model of p(s = 1|x, t) can be obtained by discriminating a sample governed by p(x|t, s = 1) against a sample governed by p(x|t, s = -1) using a probabilistic classifier. Unlabeled test data T_t is governed by p(x|t, s = -1). The labeled pool L over all training examples, weighted by r_t^1(x, y), can serve as a sample governed by p(x|t, s = 1); the labels y can be ignored for this step. Empirically, we find that using the weighted pool L instead of just L_t (as used by [4]) achieves better results because the former sample is larger. We model p(s = 1|x, v_t) of Equation 10 with a regularized logistic regression on target variable s with parameters v_t (Optimization Problem 2). Labeled examples L are weighted by the estimated first factor r^hat_t^1(x, y) using the outcome of Optimization Problem 1.

Optimization Problem 2 For task t: over parameters v_t, maximize

sum_{(x,y) in L} r^hat_t^1(x, y) log p(s=1|x, v_t) + sum_{x in T_t} log p(s=-1|x, v_t) - v_t^T v_t / (2 sigma_v^2).

With the result of Optimization Problem 2, the estimate for the second factor is r^hat_t^2(x) prop. to 1/p(s=1|x, v_t) - 1, according to Equation 10. r^hat_t^2(x) is normalized so that the final weighted empirical distribution over the pool sums to one, (1/|L|) sum_{(x,y) in L} r^hat_t^1(x, y) r^hat_t^2(x) = 1.
3.3 Weighted Empirical Loss and Target Model
The learning procedure first determines resampling weights r^hat_t(x, y) = r^hat_t^1(x, y) r^hat_t^2(x) by solving Optimization Problems 1 and 2. These weights can now be used to reweight the labeled pool over all tasks and train the target model for task t. Using the weights we can evaluate the expected loss over the weighted training data as displayed in Equation 11. It is the regularized empirical counterpart of Equation 6:

E^hat_{(x,y) ~ L} [ r^hat_t^1(x, y) r^hat_t^2(x) l(f(x,t), y) ] + w_t^T w_t / (2 sigma_w^2).   (11)

Optimization Problem 3 minimizes Equation 11, the weighted regularized loss over the training data, using a standard Gaussian log-prior with variance sigma_w^2 on the parameters w_t. Each example is weighted by the two discriminatively estimated density fractions from Equations 8 and 10 using the solution of Optimization Problems 1 and 2.

Optimization Problem 3 For task t: over parameters w_t, minimize

(1/|L|) sum_{(x,y) in L} r^hat_t^1(x, y) r^hat_t^2(x) l(f(x, w_t), y) + w_t^T w_t / (2 sigma_w^2).
In order to train target models for all tasks, instances of Optimization Problems 1 to 3 are solved for
each task.
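To make the three optimization problems concrete, here is a minimal end-to-end sketch in Python. It uses scikit-learn's LogisticRegression (whose default L2 penalty corresponds to the Gaussian prior) as a stand-in for the paper's trust-region Newton solver; the function names, array layout, and regularization constant C are our assumptions, so this is a sketch of the procedure, not the authors' implementation.

import numpy as np
from sklearn.linear_model import LogisticRegression

def stacked_features(X, y):
    # Phi(x, y): positive and negative examples occupy disjoint blocks.
    d = X.shape[1]
    Phi = np.zeros((X.shape[0], 2 * d))
    Phi[y == +1, :d] = X[y == +1]
    Phi[y == -1, d:] = X[y == -1]
    return Phi

def estimate_r1(X, y, task, t, C=1.0):
    # Optimization Problem 1: model p(t | x, y, s=1) by discriminating
    # target-task examples against the pooled examples of all other tasks.
    Phi = stacked_features(X, y)
    clf = LogisticRegression(C=C).fit(Phi, (task == t).astype(int))
    r1 = clf.predict_proba(Phi)[:, 1]
    return r1 * len(r1) / r1.sum()          # normalize: (1/|L|) sum r1 = 1

def estimate_r2(X, r1, X_test_t, C=1.0):
    # Optimization Problem 2: discriminate the r1-weighted pool (s=1)
    # against the unlabeled test data of the target task (s=-1).
    Xs = np.vstack([X, X_test_t])
    s = np.concatenate([np.ones(len(X)), -np.ones(len(X_test_t))])
    w = np.concatenate([r1, np.ones(len(X_test_t))])
    clf = LogisticRegression(C=C).fit(Xs, s, sample_weight=w)
    p_s1 = clf.predict_proba(X)[:, 1]       # classes_ is [-1, 1]; column 1 is s=1
    r2 = 1.0 / p_s1 - 1.0                   # Equation 10, up to normalization
    return r2 * len(r2) / (r1 * r2).sum()   # normalize: (1/|L|) sum r1*r2 = 1

def train_target_model(X, y, task, t, X_test_t, C=1.0):
    # Optimization Problem 3: logistic loss of each pooled example
    # weighted by r1 * r2.
    r1 = estimate_r1(X, y, task, t, C)
    r2 = estimate_r2(X, r1, X_test_t, C)
    return LogisticRegression(C=C).fit(X, y, sample_weight=r1 * r2)

In the experiments below, such a procedure would be run once per target portal t; kernelized solvers could replace the linear models.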
4 Targeted Advertising
We study the benefit of distribution matching and other reference methods on targeted advertising for four web portals. The portals play the role of tasks. We manually assign topic labels, out of a fixed set of 373 topics, to all web pages on all portals. For each user the topics of the surfed pages are tracked and the topic counts are stored in cookies of the user's web browser. The average number of surfed topics per user over all portals is 17. The feature vector x of a specific surfer is the normalized 373-dimensional vector of topic counts.
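As an illustration, a minimal sketch of this feature construction (the dict-based input and the L1 normalization are assumptions; the paper states only that the 373-dimensional count vector is normalized):

import numpy as np

N_TOPICS = 373

def topic_features(topic_counts):
    # Map a user's surfed-topic counts to the normalized feature vector x.
    x = np.zeros(N_TOPICS)
    for topic_id, count in topic_counts.items():
        x[topic_id] = count
    return x / max(x.sum(), 1.0)

print(topic_features({3: 5, 42: 2}).sum())  # 1.0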
A small proportion of users is asked to fill out a web questionnaire that collects sociodemographic user profiles. About 25% of the questionnaires get completely filled out (accepted), and in 75% of the cases the user declines to fill out the questionnaire. The accepted questionnaires constitute the training
data L. The completion of the questionnaire cannot be enforced and it is therefore not possible to
obtain labeled data that is governed by the test distribution of all users that surf the target portal. In
order to evaluate the model, we approximate the distribution of users who reject the questionnaire
as follows. We take users who have answered the very first survey question (gender) but have
then discontinued the survey as an approximation of the reject set. We add the correct proportion
(25%) of users who have taken the survey, and thereby construct a sample that is governed by an
approximation of the test distribution. Consequently, in our experiments we use the binary target label y in {male, female}. Table 1 gives an overview of the data set.
Table 1: Portal statistics: number of accepted, partially rejected, and test examples (mix of all partial reject (=75%) and 25% accept); ratio of male users in training (accept) and test set.

portal        # accept   # partial reject   # test   % male training   % male test
family          8073           2035          2713         53.8%           46.6%
TV channel      8848           1192          1589         50.5%           50.1%
news 1          3051            149           199         79.4%           76.7%
news 2          2247            143           191         73.0%           76.0%
We compare distribution matching on labeled and unlabeled data (Optimization Problems 1 to 3) and distribution matching only on labeled data (setting r^hat_t^2(x) = 1 in Optimization Problem 3) to the following reference models. The first baseline is a one-size-fits-all model that directly trains a logistic regression on L (setting r^hat_t^1(x, y) r^hat_t^2(x) = 1 in Optimization Problem 3). The second baseline
is a logistic regression trained only on Lt , the training examples of the target task. Training only on
the reweighted target task data and correcting for marginal shift with respect to the unlabeled test
data is the third baseline [4].
The last reference method is a hierarchical Bayesian model. Evgeniou and Pontil [6] describe a feature mapping for regularized regression models that corresponds to hierarchical Bayes with Gaussian prior on the regression parameters of the tasks. Training a logistic regression with their feature
mapping over training examples from all tasks is equivalent to a joint MAP estimation of all model
parameters and the mean of the Gaussian prior.
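For concreteness, a sketch of an Evgeniou-Pontil-style feature mapping (the function name, the zero-based task index z, and the scaling constant mu are our assumptions; [6] gives the general construction):

import numpy as np

def ep_features(x, z, n_tasks, mu=1.0):
    # Shared block plus one block per task; a regularized linear model on
    # these features couples the tasks through the shared weights, which
    # play the role of the Gaussian prior mean.
    d = len(x)
    phi = np.zeros(d * (n_tasks + 1))
    phi[:d] = mu * x                     # shared block
    phi[d * (z + 1): d * (z + 2)] = x    # task-specific block
    return phi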
We evaluate the methods using all training examples from non-target tasks and different numbers
of training examples of the target task. From all available accept examples of the target task we
randomly select a certain number (0-1600) of training examples. From the remaining accept examples of the target task we randomly select an appropriate number and add them to all partial reject
examples of the target task so that the evaluation set has the right proportions as described above.
We repeat this process ten times and report the average accuracies of all methods.
We use a logistic loss as the target loss of distribution matching in Optimization Problem 3 and all
reference methods. We compare kernelized variants of Optimization Problems 1 to 3 with RBF,
polynomial, and linear kernels and find the linear kernel to achieve the best performance on our data
set. All reported results are based on models with linear kernels. For the optimization of the logistic
regression models we use trust region Newton descent [9].
We tune parameters sigma_u, sigma_v, and sigma_w with grid search by executing the following steps.
1. sigma_u is tuned by nested ten-fold cross-validation. The outer loop is a cross-validation on L_t. In each loop Optimization Problem 1 is solved on L\L_t merged with the current training folds of L_t.
   - The inner loop temporarily tunes sigma_w by cross-validation on rescaled L\L_t merged with the rescaled current training folds of L_t. At this point sigma_w cannot be finally tuned because sigma_v has not been tuned yet. In each loop Optimization Problem 3 is solved with fixed r^hat_t^2(x) = 1. The temporary sigma_w is chosen to maximize the accuracy on the tuning folds.
   Optimization Problem 3 is solved for each outer loop with the temporary sigma_w and with r^hat_t^2(x) = 1. The final sigma_u is chosen to maximize the accuracy on the tuning folds of L_t over all outer loops.
2. sigma_v is tuned by likelihood cross-validation on T_t together with L. The labels of the labeled data are ignored for this step. Test data T_t of the target task as well as the weighted pool L (weighted by r^hat_t^1(x, y), based on the previously tuned sigma_u) are split into ten folds. With the nine training folds of the test data and the nine training folds of the weighted pool L, Optimization Problem 2 is solved.
[Figure 1 panels: accuracy vs. number of training examples for the target portal (0 to 1600), one panel per portal (family, TV channel, news 1, news 2); curves: distribution matching on labeled and unlabeled data, distribution matching on labeled data, hierarchical Bayes, one-size-fits-all on pool of labeled data, training only on labeled data of target task, training on labeled and unlabeled data of target task.]
Figure 1: Accuracy over different numbers of training examples for the target portal. Error bars indicate the standard error of the differences to distribution matching on labeled data.
Parameter sigma_v is chosen to maximize the log-likelihood

sum_{(x,y) in L_tune} r^hat_t^1(x, y) log p(s=1|x, v_t) + sum_{x in T_t,tune} log p(s=-1|x, v_t)

on the tuning folds of test data and weighted pool (denoted by L_tune and T_t,tune) over all ten cross-validation loops.
   Applying non-uniform weights to labeled data (some of which may even be zero) reduces the effective sample size. This leads to a bias-variance trade-off [1]: training on unweighted data causes a bias, applying non-uniform weights reduces the sample size and increases the variance of the estimator. We follow [1] and smooth the estimated weights by r^hat_t^2(x)^lambda before including them into Optimization Problem 3 (see the sketch after this list). The smoothing parameter lambda biases the weights towards uniformity and thereby controls the trade-off. Without looking at the test data of the target task we tune lambda on the non-target tasks so that the accuracy of the distribution matching method is maximized. This procedure usually results in lambda values around 0.3.
3. Finally, sigma_w is tuned by cross-validation on L rescaled by r^hat_t^1(x, y) r^hat_t^2(x) (based on the previously tuned parameters sigma_u and sigma_v). In each cross-validation loop Optimization Problem 3 is solved.
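A minimal sketch of the smoothing step referenced above (the parameter name lam and the numpy interface are ours; only the exponent-and-renormalize scheme comes from the text):

import numpy as np

def smooth_weights(r2, lam=0.3):
    # Flatten the covariate-shift weights towards uniformity:
    # lam = 0 gives uniform weights, lam = 1 the raw ratio.
    w = np.asarray(r2) ** lam
    return w * len(w) / w.sum()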
Figure 1 displays the accuracies over different numbers of labeled data for the four different target
portals. The error bars are the standard errors of the differences to the distribution matching method
on labeled data (solid blue line).
For the 'family' and 'TV channel' portals the distribution matching method on labeled and unlabeled data outperforms all other methods in almost all cases. The distribution matching method on
labeled data outperforms the baselines trained only on the data of the target task for all portals and
all data set sizes and it is at least as good as the one-size-fits-all model in almost all cases. The
hierarchical Bayesian method yields low accuracies for smaller numbers of training examples but
becomes comparable to the distribution matching method when training set sizes of the target portal
increase. The simple covariate shift model that trains only on labeled and unlabeled data of the target
task does not improve over the iid model that only trains on the labeled data of the target task. This
indicates that the marginal shift between training and test distributions is small, or could indicate that
the approximation of the reject distribution which we use in our experimentation is not sufficiently
close. Either reason also explains why accounting for the marginal shift in the distribution matching
method does not always improve over distribution matching using only labeled data.
Transfer learning by distribution matching passes all examples for all tasks to the underlying logistic
regressions. This is computationally more expensive than the reference methods. For example, the
single task baseline trains only one logistic regression on the examples of the target task. Empirically, we observe that all methods scale linearly in the number training examples.
5 Conclusion
We derived a multi-task learning method that is based on the insight that the expected loss with
respect to the unbiased test distribution of the target task is equivalent to the expected loss over
the biased training examples of all tasks weighted by a task specific resampling weight. This led
to an algorithm that discriminatively estimates these resampling weights by training two simple
conditional models. After weighting the pooled examples over all tasks the target model for a
specific task can be trained.
In our empirical study on targeted advertising, we found that distribution matching using labeled
data outperforms all reference methods in almost all cases; the differences are particularly large for
small sample sizes. Distribution matching with labeled and unlabeled data outperforms the reference
methods and distribution matching with only labeled data in two out of four portals. Even with no
labeled data of the target task the performance of the distribution matching method is comparable to
training on 1600 examples of the target task for all portals.
Acknowledgments
We gratefully acknowledge support by nugg.ad AG and the German Science Foundation DFG. We
wish to thank Stephan Noller and the nugg.ad team for their valuable contributions.
References
[1] H. Shimodaira. Improving predictive inference under covariate shift by weighting the log-likelihood function. Journal of Statistical Planning and Inference, 90:227-244, 2000.
[2] J. Huang, A. Smola, A. Gretton, K. Borgwardt, and B. Schölkopf. Correcting sample selection bias by unlabeled data. In Advances in Neural Information Processing Systems, 2007.
[3] M. Sugiyama, S. Nakajima, H. Kashima, P. von Bünau, and M. Kawanabe. Direct importance estimation with model selection and its application to covariate shift adaptation. In Advances in Neural Information Processing Systems, 2008.
[4] S. Bickel, M. Brückner, and T. Scheffer. Discriminative learning for differing training and test distributions. In Proceedings of the International Conference on Machine Learning, 2007.
[5] A. Schwaighofer, V. Tresp, and K. Yu. Learning Gaussian process kernels via hierarchical Bayes. In Advances in Neural Information Processing Systems, 2005.
[6] T. Evgeniou and M. Pontil. Regularized multi-task learning. Proceedings of the International Conference on Knowledge Discovery and Data Mining, pages 109-117, 2004.
[7] Y. Xue, X. Liao, L. Carin, and B. Krishnapuram. Multi-task learning for classification with Dirichlet process priors. Journal of Machine Learning Research, 8:35-63, 2007.
[8] S. Bickel, J. Bogojeska, T. Lengauer, and T. Scheffer. Multi-task learning for HIV therapy screening. In Proceedings of the International Conference on Machine Learning, 2008.
[9] C. Lin, R. Weng, and S. Keerthi. Trust region Newton method for large-scale logistic regression. Journal of Machine Learning Research, 9:627-650, 2008.
Load and Attentional Bayes
Peter Dayan
Gatsby Computational Neuroscience Unit, UCL
London, England, WC1N 3AR
[email protected]
Abstract
Selective attention is a most intensively studied psychological phenomenon, rife
with theoretical suggestions and schisms. A critical idea is that of limited capacity,
the allocation of which has produced continual conflict about such phenomena
as early and late selection. An influential resolution of this debate is based on
the notion of perceptual load (Lavie, 2005), which suggests that low-load, easy
tasks, because they underuse the total capacity of attention, mandatorily lead to
the processing of stimuli that are irrelevant to the current attentional set; whereas
high-load, difficult tasks grab all resources for themselves, leaving distractors high
and dry. We argue that this theory presents a challenge to Bayesian theories of
attention, and suggest an alternative, statistical, account of key supporting data.
1 Introduction
It was some fifty years after James (1950)'s famously poetic description of our capacities for attention that more analytically-directed experiments began, based originally on dichotic listening (Cherry, 1953). There are three obvious dichotic tasks: (i) being able to interpret fully two separate streams
of information coming into the two ears; (ii) the less ambitious version of this of being able to interpret fully one of the streams, specified top-down, without interference from the other one; and
(iii) being able to combine information from the two ears appropriately, perhaps into a single percept. Various forms, interpretations and conflicts about these three tasks have permeated the field of
attention ever since (Driver, 2001; Pashler, 1998), driven by different notions of the computational
tasks and constraints at hand.
The experiments in dichotic listening coincided with the quickly burgeoning realization that mathematical concepts from Shannonian information theory would be very helpful for understanding
biological information processing. One central concept in information theory is that of a limited capacity channel, and Broadbent (1958) adopted this as a formal basis for understanding the necessity
for, and hence the nature of, selection. Broadbent (1958)?s theory critically involves early selection,
in that following a first, automatic, parallel stage of low-level perceptual processing (itself the subject of important studies of bottom-up influences on selection, Zhaoping, 2006), a relevant stream
should be selected for subsequent higher-level, semantic, processing, leaving any irrelevant streams
in the cold. However, evidence that information in unattended streams is actually processed semantically (eg being able to bias the perception of ambiguous words in the attended stream; Mackay,
1973), led to alternative theories, either late selection (influentially, Deutsch and Deutsch, 1963;
Duncan, 1980), in which both streams are fully processed, but with the irrelevant stream being prevented by a selective process at the last step from entering memory or awareness, or weaker forms of
this, such as the notion that elements from the irrelevant stream might be attenuated, only sometimes
progressing through to higher levels of processing (Treisman, 1960, 1969). Many hypotheses in the
field depend on this collection of metaphors, nicely exemplified by the zoom-lens theory of Eriksen and St. James (1986) (based on influential experiments on distractor processing such as Eriksen
and Eriksen, 1974), which suggests that the smaller the attentional focus, the more intense it can
somehow be, given that the limited capacity is 'spread' over a smaller area.
However, of course, late selection makes little sense from a limited capacity viewpoint; and short
of a theory of what controls the degree of attenuation of irrelevant stimuli, Treisman (1960)'s idea
is hard to falsify. Here, we consider the seminal sharp operationalization of Lavie and Tsal (1994);
Lavie (2005), who suggested that attenuation is a function of load, such that in easy tasks, irrelevant
data is always processed, even at the cost of worse performance on the relevant information, whereas
in difficult tasks, no capacity remains, and so distractors are more effectively removed. To reiterate,
the attentional load hypothesis, although an attractive formalization of attenuation, suggests that the
brain is unable on easy tasks to exclude information that is known to be irrelevant. It therefore
involves an arguably infelicitous combination of sophisticated attentional shaping (as to what can be
attended in high-load situations) with inept control.
Although the Bayesian revolution in cognitive science has had a huge impact over modern views of
sensory processing (see, for instance, Rao et al., 2002, and references therein), having the ability to
resolve many issues in the field as a whole, there are few recent attempts to build probabilistic models
for selective attention (see Shaw, 1982; Palmer, 1994; Dayan and Zemel, 1999; Navalpakkam and
Itti, 2006; Mozer and Baldwin, 2008; Yu and Dayan, 2005; Yu et al., 2008). This is despite the
many other computational models of attention (see Itti and Koch, 2001; Zhaoping, 2006). Indeed,
Whiteley and Sahani (2008) have suggested that this lacuna arises from a focus on optimal Bayesian
inference in the face of small numbers of objects in the focus of attention, rather than the necessity
of using approximate methods in the light of realistic, cluttered, complex scenes.
Some of the existing probabilistic models are aimed at variants of search (Navalpakkam and Itti,
2006; Mozer and Baldwin, 2008); however others, including Palmer (1994); Dayan and Zemel
(1999), and one of the two models in Yu et al. (2008), are more similar to the account here. They
acknowledge that there is a critical limited resource coming from the existence of neurons with large
receptive fields into which experimenters slot multiple sensory objects, some relevant, some irrelevant. Probabilistically-correct inference should then implement selection, when data that is known
to be irrelevant is excluded to the advantage of the relevant information (eg Dayan and Zemel, 1999;
Palmer, 1994). However, in other circumstances, it will be appropriate to take advantage of the
information about the target that is available in the neurons with large fields, even if this means
allowing some influence on the final decisions from distractors.
Here, we build a Bayesian-inspired account of key data used to argue for the attentional load hypothesis (based on an extension of Yu et al. (2008)'s model of Eriksen and Eriksen (1974)). Section 2
describes the key data; section 3 the model and results; and section 4 discusses the implications.
2 Attentional Load
Figure 1 shows the central experiment and results from Lavie and de Fockert (2003) that we set out
to capture. Subjects had to report the identity of a target letter that was either an 'X' or an 'N' (here,
the former) presented in one of eight locations arranged in a circle around the fixation point. The
reaction times and accuracies of their selections were measured. There was also a distractor letter
in the further periphery (the larger 'N') which was either compatible (ie the same as the target),
incompatible (as here, the opposite of the target), or, in so-called neutral trials, a different letter
altogether.
Figure 1A-C show the three key conditions. Figure 1A is a high-load condition, in that there are
irrelevant non-targets in the remaining 7 positions around the circle. Figure 1B is a low-load condition, since there is no non-target. Figure 1C is a critical control, called the degraded low-load
condition, and was actually the main topic of Lavie and de Fockert (2003). In this, the difficulty of
the sensory processing was increased (by making the target smaller and dimmer) without changing
the attentional (ie selectional) load.
Figure 1D shows the mean reaction times (RTs) for these conditions for the three sorts of distractor
(RTs suffice here, since there was no speed accuracy tradeoff at work in the different conditions;
data not shown). There are three key results:
1. The central finding about attentional load is that the distractor exerted a significant effect
over target processing only in the low load case; that is, an incompatible distractor slowed
down the RTs compared with a neutral distractor for the low load case but not the high load
case.
Figure 1: The attentional load task, from Lavie and de Fockert (2003). Subjects had to judge whether
a target letter in the central circle around fixation was ?N? or ?X? in the face of a compatible, incompatible (shown) or neutral distractor. A) high-load condition with non-target letters occupying the
other positions in the circle. B) low-load condition with no non-target letters. C) degraded low-load
condition with no non-targets but a smaller (not shown) and darker target. D) reaction times (RTs)
for the conditions, averaging only over correct choices.
2. Since, in the degraded low-load case the RTs were slower but the influence of the distractor
was if anything greater, this could not just be a function of the processing time or difficulty.
Indeed, Lavie and de Fockert (2003) noted the distinction made by Norman and Bobrow
(1975) between data- and resource-limited processing, with excess resources (putatively
ample, given the low load) unable to make up for the poor quality sensory data, and so
predicted this greater distractor impact.
3. It is apparent that compatible distractors were of almost no help in any case, whereas incompatible distractors were harmful.
3 The Bayesian model
The data in figure 1 pose the question for normative modeling as to why the distractor would corrupt
processing of the target in the easy, low-load, case, but not the difficult, high-load case. No normative
account could simply assume that extra data 'leak' through in the low-load condition (which is the
attentional load hypothesis) if the subjects have the ability to fashion attention far more finely in
other cases, such as that of high load.
We argue that these results stem from the simple observation that the visual system has available
receptive fields with a range of sizes, including smaller, spatially precise ones, which can be nicely
confined to the target; and larger, spatially extended ones, which may include both target and distractor. In this case, normative processing will combine information from all the receptive fields,
with Bayesian inference and marginalization exactly eliminating any substantial impact from those
that are useless or confusing. In the high load case, the proximal non-target stimuli have the effect of
adding so much extra noise to the units with large receptive fields compared with their signal about
the target, that only the smallest receptive fields will be substantially useful. This implies that the
distractor will exert little influence. In the low load case, large receptive fields that also include the
distractor will be usefully informative about the target, and so the distractor will exert an influence.
Note that this happens automatically through inference: indeed, to make this point starkly, there is no explicit attentional control signal in our model whatsoever, only inference and marginalization.[1]
[1] Note that Lavie and de Fockert (2003) chose the conditions in the experiment at random, so many forms of top-down selection would not be possible.
          neutral              incompatible           compatible
load    n    t    n    d     n    t    n    d     n    t    n    d
low     0   +c    0    0     0   +c    0   -1     0   +c    0   +1
high   +1   +c   -1    0    +1   +c   -1   -1    +1   +c   -1   +1
Table 1: Our version of the task. This table shows 6 out of the 18 conditions. Each display consists of four stimulus positions labelled n for the non-targets; t for the target (shown boxed in the original table, though not in the display); and d for the distractor, which is relatively far from the target. The target takes the values +/-c, where c acts like a contrast; subjects have to report its sign. The distractor can be 0 (neutral) or +/-1, and is compatible if it has the same sign as the target (and conversely, incompatible). Load is increased by having non-zero non-targets which are spatially balanced, with mean 0, so providing no net information about the sign of the target, but only noise. The 18 conditions come from using c = +/-1 and c = +/-0.3, with the degraded condition (|c| = 0.3) only being run for the case of low load, as in figure 1D.
Lavie and de Fockert (2003)'s experiment is rather complicated. Table 1 shows our simplification
of it, to a form which is slightly closer to a version of an Eriksen task (Eriksen and Eriksen, 1974)
with two optional flankers in known positions on either side of the target (the non-targets) and a
farther-flung distractor (the input layer of figure 2A cartoons the spatial arrangement). The target
takes the value ±c; subjects have to report its sign. The distractor can be neutral (0) or have the same
sign as (compatible) or a different sign from (incompatible) the target. In the low load condition,
the non-target units are 0; in the high load, one is +1; the other is -1, making them balanced, but
confusing, because they lead to excess noise.
The generative model
Table 1 indicates the values determining the various conditions from the perspective of the experimenter. We assume that the subject performs inference about the sign of the target based on noisy
observations created by a generative model. In the generative model, the values in table 1 amount to
hidden structure, which, as in Yu et al. (2008), is mapped and mixed through various receptive fields
to provide the noisy input to a Bayesian recognition model. The job of the recognition model is to
calculate the posterior probability of the various hidden settings given data, and, by marginalizing
(summing) out all the hidden settings apart from the state of the target, report on its sign.
Figure 2A shows the generative model, indicating the receptive fields (RFs) associated with this
mixing. We consider 8 topographically-mapped units, 4 with small RFs covering only a single input
(the generative weights are just the identity map); and 4 with large RFs (in which the inputs are
mixed together more holistically). Since the distractor is relatively far from the target and non-target
stimuli, the weights associated with its hidden values are lower for the three large RFs mapped to the
target and non-target hidden units; the target and non-target hidden units have smaller weights to the
generated input associated with the distractor. For simplicity, we treat the distractor as equidistant
from the target and non-target input, partially modeling the fact that it can be in different locations.
We assume a crude form of signal-dependent noise; it is this that makes the non-target stimuli so
devastating.
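To make the generative process concrete, the following Python sketch samples one time step of the 8 input units for a given condition. It is a minimal illustration rather than the authors' code: the specific weight values, the baseline noise level, and the multiplicative form of the signal-dependent noise (base_sd, noise_gain) are assumptions chosen only to mimic the qualitative pattern of Figure 2; the paper states just that the weights are positive with a maximum of 0.3 and that the noise is a crude, signal-dependent form.

```python
import numpy as np

# Hidden stimulus values for one condition (low load, incompatible, c = +1);
# positions are [non-target, target, non-target, distractor], as in Table 1.
hidden = np.array([0.0, 1.0, 0.0, -1.0])

# Generative weights (assumed values). Units 1-4: small RFs (identity map).
# Units 5-8: large RFs mixing the central cluster, weakly tied to the far distractor.
W = np.vstack([
    0.3 * np.eye(4),
    [[0.3, 0.2, 0.2, 0.1],    # large RF centred on the first non-target
     [0.2, 0.3, 0.2, 0.1],    # large RF centred on the target
     [0.2, 0.2, 0.3, 0.1],    # large RF centred on the second non-target
     [0.1, 0.1, 0.1, 0.3]],   # large RF centred on the distractor
])

def sample_inputs(hidden, rng, base_sd=0.05, noise_gain=0.5):
    """One time step of noisy input: mean = W @ hidden, with a crude
    signal-dependent noise whose sd grows with the total absolute drive."""
    mean = W @ hidden
    sd = base_sd + noise_gain * (np.abs(W) @ np.abs(hidden))
    return rng.normal(mean, sd)

rng = np.random.default_rng(0)
print(sample_inputs(hidden, rng))   # 8 noisy observations, one per unit
```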
Figure 2B shows the means and standard deviations arising from the generative model for the 8
units (one per column) for the six conditions in table 1 (rows, from top to bottom: low load neutral,
incompatible, compatible; then high load neutral, incompatible, compatible). For this figure, c =
+1. The means associated with the small and large RF target units show the lack of bias from
the non-targets in the high-load condition; and for the large RF case, the bias associated with the
distractor.
The standard deviations play the most critical role in the model, defining what it means for the non-target stimuli, when present, to make inference difficult. They therefore constitute a key modeling
assumption. In the high load case, the units with the large RFs are assumed to have very high standard deviations, coming from a crude form of signal-dependent noise. This captures the relative
uselessness of these large RFs in the high load condition. However, and importantly, their mean
values are unaffected by the non-target stimuli, since the non-targets are balanced between positive
and negative values, preferring neither sign of target.
[Figure 2 graphic: see caption below. Panel A shows the input units (n, t, d), Hinton diagrams of the generative weights, and unit labels 1-8 grouped into small RFs (1-4) and large RFs (5-8). Panel B plots per-unit means and stds for the neutral, incompatible and compatible conditions under low and high attentional load, grouped by RF size.]
Figure 2: The generative model. A) In the model, the four input units, representing non-targets,
the target and the distractor, are assumed to generate 8 input units which fall into two groups, with
small and large receptive fields (RFs). The Hinton diagrams of the weights indicate how the RFs are
represented (all weights are positive; the maximum value is 0.3). B) These plots show the means
and standard deviations in the generative model associated with the 8 input units for the low and
high load cases shown in table 1 (in raster scan order). The means for the large RFs (based on the
weights in A) are unaffected by the load; the standard deviations for the units with large receptive
fields are much higher in the high load condition. Standard deviations are affected by a coarse form
of signal-dependent noise.
In all cases, a new sample from the generative model is provided at each time step; the noise corrupting each of the observed units is assumed to be Gaussian, and independent across units and over
time.
The recognition model
We build a recognition model based on this generative model. The recognition model is quite similar
to a sequential probability ratio test (SPRT; Wald, 1947), except that, as in Yu and Dayan (2005); Yu
et al. (2008), it is necessary to perform inference over all the possible values of the hidden variables
(all the possible values of the hidden structure²), then marginalizing out all the variables apart from
the target itself. We accumulate evidence until a threshold of 0.9 is reached on the probability that
the target is either positive or negative (reporting whichever one is more likely). However, to take
account of the possibility of erroneous, early, responses, there is also a probability of 0.01 per step of
stopping the accumulation and reporting whichever sign of target has a higher probability (guessing
randomly if this probability is 0.5). This factor played a critical role in Yu et al. (2008) in generating
early responses.
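A minimal sketch of this recognition loop is given below, reusing the assumed generative parameters from the sampling sketch above. The enumeration of candidate hidden settings and the flat prior over them are simplifying assumptions; the 0.9 threshold on the marginal posterior and the 0.01-per-step early-stopping probability are the values stated in the text.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed generative parameters (same as the sampling sketch above).
W = np.vstack([0.3 * np.eye(4),
               [[0.3, 0.2, 0.2, 0.1], [0.2, 0.3, 0.2, 0.1],
                [0.2, 0.2, 0.3, 0.1], [0.1, 0.1, 0.1, 0.3]]])

def moments(hidden, base_sd=0.05, noise_gain=0.5):
    return W @ hidden, base_sd + noise_gain * (np.abs(W) @ np.abs(hidden))

def log_lik(obs, hidden):
    mean, sd = moments(hidden)
    return float(np.sum(-0.5 * ((obs - mean) / sd) ** 2 - np.log(sd)))

# Candidate hidden settings: target value x load x distractor (a simplified
# enumeration standing in for the full set of conditions).
settings = [np.array([n, s, -n, d])
            for s in (+1.0, -1.0, +0.3, -0.3)   # target value, incl. degraded
            for n in (0.0, 1.0)                 # low / high load non-targets
            for d in (0.0, +1.0, -1.0)]         # distractor value

target_pos = np.array([h[1] > 0 for h in settings])

def run_trial(true_hidden, threshold=0.9, p_stop=0.01, max_steps=1000):
    log_post = np.zeros(len(settings))          # flat prior over settings
    for step in range(1, max_steps + 1):
        mean, sd = moments(true_hidden)
        obs = rng.normal(mean, sd)              # fresh sample every time step
        log_post += np.array([log_lik(obs, h) for h in settings])
        post = np.exp(log_post - log_post.max())
        post /= post.sum()
        p_pos = post[target_pos].sum()          # marginalize all but target sign
        if p_pos > threshold or p_pos < 1 - threshold or rng.random() < p_stop:
            return step, bool(p_pos > 0.5)
    return max_steps, bool(p_pos > 0.5)

print(run_trial(np.array([0.0, 1.0, 0.0, -1.0])))   # low load, incompatible
```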
Results
Figure 3 shows the results of inference based on the model. For each of the conditions, figure 3A
shows the reaction times in the form of the mean number of steps to a choice. Here, as in the data
in Lavie and de Fockert (2003), the RTs are averaged only over cases in which the model got the
answer correct. However, figure 3B shows the percentage correct answers in each condition; the
errors are relatively rare, and so the RT plots look identical. The datapoints are averages over more
than 35,000 samples (depending on the actual error rates) and so the errorbars are too small to see.
Comparing figure 3A with the data in figure 1D, it is apparent that the main trends in the data
are closely captured. This general pattern of results is robust to many different parameter values;
though it is possible (by reducing c) to make inference take very much longer still in the degraded
low load condition whilst maintaining and boosting the effect of high load. The error probabilities
in figure 3B indicate that the pattern of RTs is not accounted for by a tradeoff between speed and
accuracy.
The three characteristics of these data described above are explained in the model as:
1. In the low load case, the lack of non-targets means that the inputs based on the large RFs
are usefully informative about the target, and therefore automatically play a key role in
posterior inference. Since these inputs are also influenced by the distractor, there is an RT
² In fact, also including the possibility of a degraded high-load case.
[Figure 3 graphic: see caption below. Panel A plots RT (steps, roughly 5-30) and panel B error rate (0-0.5) for incompatible, neutral and compatible distractors under low load, high load and degraded low load.]
Figure 3: Results. A) Mean RTs (steps of inference) for correct choices in each of the 9 cases (since
the target is equally often positive and negative, we averaged over these cases). Here, the threshold
on the (marginalized) probability was 0.9, and there was a probability of 0.01 per step that inference
would terminate early with whichever response was more probable. B) Error probabilities for the
same conditions showing the lack of a speed-accuracy trade-off. All points are averages over more
than 35,000 points, and so errorbars would be too small to see.
cost in the face of incompatibility. However, in the high load case, the non-target stimuli
are closer to the target and exert substantial influence over the noise corrupting the large RF
units associated with it (and no net signal). This makes these large RF units relatively poor
sources of information about the target. Thus the smaller RF units are relied upon instead,
which are not affected by the distractor.
2. Rather as suggested in Norman and Bobrow (1975); Lavie and de Fockert (2003): in the
data-poor case of the degraded input, it is particularly important to take advantage of information from the large RFs, to make inferences about the target; therefore the distractor
exerts a large influence over target processing.
3. The compatible distractor is helpful to a lesser extent than the incompatible one is harmful,
for a couple of reasons. First, there is a ceiling effect for the former coming from the
non-linearity of an effective sigmoid function that arises in turning log likelihood ratios
into probabilities. Second, compared with a neutral distractor, the compatible distractor
increases the (signal-dependent) noise associated with the units with large RFs, reducing
their informativeness about the target.
4 Discussion
In this paper, we have shown how to account for key results used to argue for an attentional load hypothesis. Our model involves simple Bayesian inference based on a generative process recognizing
the existence of small and large receptive fields. The attentional load hypothesis suggests that when
little attention is required to solve the set task, inputs associated with distractor stimuli leak through
with little attenuation, and so cause disruption; when the task is difficult, attention is totally occupied with the set task, leaving nothing left over. By contrast, we have suggested that an inferential
model taking advantage of all the information in the input will show exactly the same characteristic,
with the key issue being whether the units with large RFs, which include the distractor, are rendered
useless by the non-target stimuli that make for the high load in the first place. The advantage of this
version of an attenuation theory (Treisman, 1960, 1969) is that it obviates the requirement to appeal
to an inexplicable inefficiency, over and above the existence of units with large RFs, and indeed
relates this set of selective attentional tasks to the wide range of other accounts of probabilistically-correct sensory inference.
One key characteristic of this model (shared with, among others, Yu et al., 2008) is that the form of
selection it considers is an output of inference rather than an input into it. That is, the model does
not employ an explicit attentional mechanism in inference which has the capacity to downplay some
input units over others. The model does know the location of the target, and focuses all its resources
on it; but there is no further way of boosting or suppressing some RFs compared with others. Most
of the substantial results on the neuroscience of selective attention (e.g., Moran and Desimone, 1985;
Desimone and Duncan, 1995; Reynolds and Chelazzi, 2004) study the focusing process, rather than
the post-focus information integration that we have looked at; the forms of attention at play in the
load-related tasks we have discussed are somewhat orthogonal. It would be interesting to design
neurophysiological experiments to probe the form of online selection at work in the attentional load
tasks.
The difference between the present model and the spatial version of Yu et al. (2008) is that the model
here includes RFs of different sizes, whereas in that model, the distractors were always close to the
target. Further, the two neutral conditions here (no distractor, and low load) were not modeled in the
earlier study. Yu et al. (2008) suggested that the anterior cingulate might monitor conflict between
the cases of compatible and incompatible distractors as part of an approximate inference strategy.
That seems most unlikely here, since the conflict would have to be between the multidimensional
collection of hidden nuisance variables (notably the cross product between the states of the nontargets and the state of the distractor), which seems implausibly complicated.
The assumptions of large RFs and their high standard deviations in the high load condition are certainly rather simplistic. However, (a) RFs in inferotemporal cortex are indeed very large, allowing
for the possibility of distractor interference in the low load condition; and (b) even under the attentional load hypothesis, the only reason that an unattenuated distractor stimulus would interfere with
target processing is that there is something in common about them, since it is known that there is
more to the effects of distractors than just competition at the stage of the actual responses (Driver,
2001). Further, the assumption that the inputs with large RFs have high standard deviations in the
high load condition is a most straightforward way to capture the essential effect of the non-target
stimuli in disrupting target processing in a way that forces a more stringent attentional effect associated with the use of the small RFs.
The attentional load theory has been applied to many tasks (including the regular Eriksen task,
Eriksen and Eriksen, 1974) as well as the one here. However, it would be good to extend the current
model to match the experimental circumstances in Lavie and de Fockert (2003) more faithfully.
Perhaps the most significant lacuna is that, as in the Eriksen task, we assumed that the subjects knew
the location of the target in the stimulus array, whereas in the real experiment, this had to be inferred
from the letters in the circle of targets close to fixation (figure 1A). Modeling this would effectively
require a more complex collection of letter-based RFs, together with a confusion matrix associated
with the perceptual similarities of letters. This induces a search problem, more like the one studied
by Mozer and Baldwin (2008), except, again, multiple sizes of RFs would play a critical role. It
would also be worth extending the current model to the much wider range of other tasks used to
explore the effects of attentional load (such as Forster and Lavie, 2008).
In conclusion, we have suggested a particular rationale for an attenuation theory of attention, which
puts together the three tasks suggested at the outset for dichotic listening. Inputs should automatically be attenuated to the extent that they do not bear on (or, worse, are confusing with respect to)
a task. The key resource limitation is the restricted number, and therefore, the necessarily broad
tuning of RFs; the normative response to this makes attenuation and combination kissing cousins.
Acknowledgements
I am most grateful to Louise Whiteley for helpful comments and to her and Nilli Lavie for discussions. Funding came from the Gatsby Charitable Foundation.
References
Broadbent, D. (1958). Perception and communication. OUP, Oxford, England.
Cherry, E. (1953). Some experiments on the recognition of speech with one and with two ears. Journal of the Acoustical Society of America, 25:975-979.
Dayan, P. and Zemel, R. (1999). Statistical models and sensory attention. In ICANN 1999, volume 2. IEE.
Desimone, R. and Duncan, J. (1995). Neural mechanisms of selective visual attention. Annu Rev Neurosci, 18:193-222.
Deutsch, J. A. and Deutsch, D. (1963). Attention: Some theoretical considerations. Psychol Rev, 70:80-90.
Driver, J. (2001). A selective review of selective attention research from the past century. Br J Psychol, 92 Part 1:53-78.
Duncan, J. (1980). The locus of interference in the perception of simultaneous stimuli. Psychol Rev, 87(3):272-300.
Eriksen, B. and Eriksen, C. (1974). Effects of noise-letters on identification of a target letter in a nonsearch task. Perception & Psychophysics, 16:143-149.
Eriksen, C. W. and St. James, J. D. (1986). Visual attention within and around the field of focal attention: a zoom lens model. Percept Psychophys, 40(4):225-240.
Forster, S. and Lavie, N. (2008). Failures to ignore entirely irrelevant distractors: the role of load. J Exp Psychol Appl, 14(1):73-83.
Itti, L. and Koch, C. (2001). Computational modelling of visual attention. Nat Rev Neurosci, 2(3):194-203.
James, W. (1890/1950). The Principles of Psychology. Dover, New York, NY.
Lavie, N. (2005). Distracted and confused?: selective attention under load. Trends Cogn Sci, 9(2):75-82.
Lavie, N. and de Fockert, J. W. (2003). Contrasting effects of sensory limits and capacity limits in visual selective attention. Percept Psychophys, 65(2):202-212.
Lavie, N. and Tsal, Y. (1994). Perceptual load as a major determinant of the locus of selection in visual attention. Percept Psychophys, 56(2):183-197.
Mackay, D. (1973). Aspects of the theory of comprehension, memory and attention. Quarterly Journal of Experimental Psychology, 25:22-40.
Moran, J. and Desimone, R. (1985). Selective attention gates visual processing in the extrastriate cortex. Science, 229(4715):782-784.
Mozer, M. and Baldwin, D. (2008). Experience-guided search: A theory of attentional control. In Platt, J., Koller, D., Singer, Y., and Roweis, S., editors, Advances in Neural Information Processing Systems 20, pages 1033-1040. MIT Press, Cambridge, MA.
Navalpakkam, V. and Itti, L. (2006). Optimal cue selection strategy. In Weiss, Y., Schölkopf, B., and Platt, J., editors, Advances in Neural Information Processing Systems 18, pages 987-994. MIT Press, Cambridge, MA.
Norman, D. and Bobrow, D. (1975). On Data-limited and Resource-limited Processes. Cognitive Psychology, 7(1):44-64.
Palmer, J. (1994). Set-size effects in visual search: the effect of attention is independent of the stimulus for simple tasks. Vision Res, 34(13):1703-1721.
Pashler, H. (1998). The Psychology of Attention. MIT Press, Cambridge, MA.
Rao, R. P. N., Olshausen, B. A., and Lewicki, M. S., editors (2002). Probabilistic Models of the Brain. MIT Press, Cambridge, MA.
Reynolds, J. H. and Chelazzi, L. (2004). Attentional modulation of visual processing. Annu Rev Neurosci, 27:611-647.
Shaw, M. (1982). Attending to multiple sources of information. Cognitive Psychology, 14:353-409.
Treisman, A. M. (1960). Contextual cues in selective listening. Quarterly Journal of Experimental Psychology, 12:242-248.
Treisman, A. M. (1969). Strategies and models of selective attention. Psychol Rev, 76(3):282-299.
Wald, A. (1947). Sequential Analysis. Wiley, New York.
Whiteley, L. and Sahani, M. (2008). Attention resolves the effects of a computational bottleneck: modelling binding, precueing, and task-driven bias. In COSYNE 2008, pages I-98.
Yu, A., Dayan, P., and Cohen, J. (2008). Bayesian account of attentional control. Journal of Experimental Psychology: Human Perception and Performance, in press.
Yu, A. J. and Dayan, P. (2005). Inference, attention, and decision in a Bayesian neural architecture. In Saul, L. K., Weiss, Y., and Bottou, L., editors, Advances in Neural Information Processing Systems 17, pages 1577-1584. MIT Press, Cambridge, MA.
Zhaoping, L. (2006). Theoretical understanding of the early visual processes by data compression and data selection. Network, 17(4):301-334.
Temporal Difference Based Actor Critic Learning - Convergence and Neural Implementation
Dotan Di Castro, Dmitry Volkinshtein and Ron Meir
Department of Electrical Engineering
Technion, Haifa 32000, Israel
{dot@tx},{dmitryv@tx},{rmeir@ee}.technion.ac.il
Abstract
Actor-critic algorithms for reinforcement learning are achieving renewed popularity due to their good convergence properties in situations where other approaches
often fail (e.g., when function approximation is involved). Interestingly, there is
growing evidence that actor-critic approaches based on phasic dopamine signals
play a key role in biological learning through cortical and basal ganglia loops.
We derive a temporal difference based actor critic learning algorithm, for which
convergence can be proved without assuming widely separated time scales for the
actor and the critic. The approach is demonstrated by applying it to networks
of spiking neurons. The established relation between phasic dopamine and the
temporal difference signal lends support to the biological relevance of such algorithms.
1 Introduction
Actor-critic (AC) algorithms [22] were probably among the first algorithmic approaches to reinforcement learning (RL). In recent years much work focused on state, or state-action, value functions as a
basis for learning. These methods, while possessing desirable convergence attributes in the context
of table lookup representation, led to convergence problems when function approximation was involved. A more recent line of research is based on directly (and usually parametrically) representing
the policy, and performing stochastic gradient ascent on the expected reward, estimated through trying out various actions and sampling trajectories [3, 15, 23]. However, such direct policy methods
often lead to very slow convergence due to large estimation variance. One approach suggested in
recent years to remedy this problem is the utilization of AC approaches, where the value function
is estimated by a critic, and passed to an actor which selects an appropriate action, based on the
approximated value function. The first convergence result for a policy gradient AC algorithm based
on function approximation was established in [13], and extended recently in [5, 6]. At this stage
it seems that AC based algorithms provide a solid foundation for provably effective approaches to
RL based on function approximation. Whether these methods will yield useful solutions to practical
problems remains to be seen.
RL has also been playing an increasingly important role in neuroscience, and experimentalists have
directly recorded the activities of neurons while animals perform learning tasks [20], and used imaging techniques to characterize human brain activities [17, 24] during learning. It was suggested long
ago that the basal ganglia, a set of ancient sub-cortical brain nuclei, are implicated in RL. Moreover,
these nuclei are naturally divided into two components, based on the separation of the striatum (the
main input channel to the basal ganglia) into the ventral and dorsal components. Several imaging
studies [17, 24] have suggested that the ventral stream is associated with value estimation by a so
called critic, while the dorsal stream has been implicated in motor output, action selection, and
learning by a so called actor. Two further experimental findings support the view taken in this work.
First, it has been observed [20] that the short latency phasic response of the dopamine neurons in
the midbrain strongly resembles the temporal difference (TD) signal introduced in the theory of TD-learning [22], which can be used by AC algorithms for both the actor and the critic. Since mid-brain
dopaminergic neurons project diffusively to both the ventral and dorsal components of the striatum,
these results are consistent with a TD-based AC learning interpretation of the basal ganglia. Second,
recent results suggest that synaptic plasticity occurring at the cortico-striatal synapses is strongly
modulated by dopamine [18]. Based on these observations it has been suggested that the basal ganglia take part in TD based RL, with the (global) phasic dopamine signal serving as the TD signal
[16] modulating synaptic plasticity.
Some recent work has been devoted to implementing RL in networks of spiking neurons (e.g., [1,
9, 12]). Such an approach may lead to specific and experimentally verifiable hypotheses regarding
the interaction of known synaptic plasticity rules and RL. In fact, one tantalizing possibility is to
test these derived rules in the context of ex-vivo cultured neural networks (e.g., [19]), which are
connected to the environment through input (sensory) and output (motor) channels. We then envision
dopamine serving as a biological substrate for implementing the TD signal in such a system. The
work cited above is mostly based on direct policy gradient algorithms (e.g., [3]), leading to non-AC approaches. Moreover, these algorithms were based directly on the reward, rather than on the
biologically better motivated TD signal, which provides more information than the reward itself,
and is expected to lead to improved convergence.
2 A Temporal Difference Based Actor-Critic Algorithm
The TD-based AC algorithm developed in this section is related to the one presented in [5, 6]. While
the derivation of the present algorithm differs from the latter work (which also stressed the issue of
the natural gradient), the essential novel theoretical feature here is the establishment of convergence¹
without the restriction to two time scales which was used in [5, 6, 13]. This result is also important
in a biological context, where, as far as we are aware, there is no evidence for such a time scale
separation.
2.1 Problem Formulation
We consider a finite Markov Decision Process (MDP) in discrete time with a finite state set X of size |X| and a finite action set U. The MDP models the environment in which the agent acts. Each selected action u ∈ U determines a stochastic matrix P(u) = [P(y|x,u)]_{x,y∈X}, where P(y|x,u) is the transition probability from a state x ∈ X to a state y ∈ X given the control u. A parameterized policy is described by a conditional probability function, denoted by μ(u|x,θ), which maps an observation x ∈ X into a control u ∈ U given a parameter θ ∈ R^K. For each state x ∈ X the agent receives a corresponding reward r(x). The agent's goal is to adjust the parameter θ in order to attain maximum average reward over time.
For each θ ∈ R^K, we have a Markov Chain (MC) induced by P(y|x,u) and μ(u|x,θ). The state transitions of the MC are obtained by first generating an action u according to μ(u|x,θ), and then generating the next state according to [P(y|x,u)]_{x,y∈X}. Thus, the MC has a transition matrix P(θ) = [P(y|x,θ)]_{x,y∈X} which is given by P(y|x,θ) = ∫_U P(y|x,u) dμ(u|x,θ). We denote the set of these transition probabilities by P = {P(θ) | θ ∈ R^K}, and its closure by P̄. We denote by P(x,u,y) the stationary probability to be in state x, choose action u, and go to state y. Several technical assumptions are required in the proofs below.
Assumption 2.1. (i) Each MC P(θ), P(θ) ∈ P̄, is aperiodic, recurrent, and contains a single equivalence class. (ii) The function μ(u|x,θ) is twice differentiable. Moreover, there exist positive constants B_r and B_μ, such that for all x ∈ X, u ∈ U, θ ∈ R^K and 1 ≤ k_1, k_2 ≤ K, we have |r(x)| ≤ B_r, |∂μ(u|x,θ)/∂θ_k| ≤ B_μ, and |∂²μ(u|x,θ)/∂θ_{k_1}∂θ_{k_2}| ≤ B_μ.
As a result of assumption 2.1(i), we have the following lemma regarding the stationary distribution
(Theorem 3.1 in [8]).
¹ Throughout this paper convergence refers to convergence to a small ball around a stationary point; see Theorem 2.6 for a precise definition.
Lemma 2.1. Under Assumption 2.1(i), each MC P(θ) ∈ P̄ has a unique stationary distribution, denoted by π(θ), satisfying π(θ)′P(θ) = π(θ)′, where x′ is the transpose of vector x.
Next, we define a measure for performance of an agent in an environment. The average reward per
stage of a MC starting from an initial state x_0 ∈ X is defined by

  J(x|θ) ≜ lim_{T→∞} E_θ[ (1/T) Σ_{n=0}^{T} r(x_n) | x_0 = x ],

where E_θ[·] denotes the expectation under the probability measure P(θ), and x_n is the state at time n. The agent's goal is to find θ ∈ R^K which maximizes J(x|θ). The following lemma shows that under Assumption 2.1, the average reward per stage does not depend on the initial states (see Theorem 4.7 in [10]).
Lemma 2.2. Under Assumption 2.1 and Lemma 2.1, the average reward per stage, J(x|θ), is independent of the starting state, is denoted by η(θ), and satisfies η(θ) = π(θ)′r.
Based on Lemma 2.2, the agent's goal is to find a parameter vector θ which maximizes the average reward per stage η(θ). Performing the maximization directly on η(θ) is hard. In the sequel we show how this maximization can be performed by optimizing η(θ) using ∇η(θ). A consequence of Assumption 2.1 and the definition of η(θ) is the following lemma (see Lemma 1 in [15]).
Lemma 2.3. For each x, y ∈ X and for each θ ∈ R^K, the functions P(y|x,θ), π(x|θ), and η(θ) are bounded, twice differentiable, and have bounded first and second derivatives.
Next, we define the differential value function of a state x ∈ X, which represents the average reward the agent receives upon starting from a state x_0 and reaching a recurrent state x* for the first time.
Mathematically,

  h(x|θ) ≜ E_θ[ Σ_{n=0}^{T} (r(x_n) − η(θ)) | x_0 = x ],        (1)

where T ≜ min{k > 0 | x_k = x*}. We define h(θ) ≜ (h(x_1|θ), ..., h(x_{|X|}|θ)) ∈ R^{|X|}. For each θ ∈ R^K and x ∈ X, h(x|θ), r(x), and η(θ) satisfy Poisson's equation (see Theorem 7.4.1 in [4]),

  h(x|θ) = r(x) − η(θ) + Σ_{y∈X} P(y|x,θ) h(y|θ).        (2)

Based on the differential value definition we define the temporal difference (TD) between the states x ∈ X and y ∈ X. Formally,

  d(x,y) ≜ r(x) − η(θ) + h(y|θ) − h(x|θ).        (3)
The TD measures the difference between the differential value estimate following the receipt of
reward r(x) and a move to a new state y, and the estimate of the current differential state value at
state x.
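The quantities η(θ) and h(·|θ), and Poisson's equation (2), can be checked numerically for any small chain. The sketch below uses an arbitrary random 4-state transition matrix (an assumed toy example, not from the paper), computes the stationary distribution of Lemma 2.1, the average reward of Lemma 2.2, and the differential values, and verifies that the residual of (2) vanishes.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy 4-state chain under a fixed policy (assumed example, not from the paper).
n = 4
P = rng.random((n, n))
P /= P.sum(axis=1, keepdims=True)       # row-stochastic transition matrix P(theta)
r = rng.random(n)                       # per-state rewards r(x)

# Stationary distribution pi(theta): left eigenvector of P for eigenvalue 1.
evals, evecs = np.linalg.eig(P.T)
pi = np.real(evecs[:, np.argmax(np.real(evals))])
pi /= pi.sum()

eta = pi @ r                            # average reward per stage (Lemma 2.2)

# Poisson's equation (2): (I - P) h = r - eta * 1. Since h is determined only
# up to an additive constant, pin h[0] = 0 and solve the reduced system.
A = np.eye(n) - P
A[0] = 0.0
A[0, 0] = 1.0
b = r - eta
b[0] = 0.0
h = np.linalg.solve(A, b)

# The residual of equation (2) should vanish for all states.
print(np.max(np.abs(h - (r - eta + P @ h))))   # ~1e-15
```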
2.2 Algorithmic details and single time scale convergence
We start with a definition of the likelihood ratio derivative, ψ(x,u|θ) ≜ ∇μ(u|x,θ)/μ(u|x,θ), which we assume to be bounded.
Assumption 2.2. For all x ∈ X, u ∈ U, and θ ∈ R^K, there exists a positive constant B_ψ such that |ψ(x,u|θ)| ≤ B_ψ < ∞.
In order to improve the agent's performance, we need to follow the gradient direction. The following
theorem shows how the gradient of the average reward per stage can be calculated by the TD signal.
Similar variants of the theorem were proved using the Q-value [23] or state value [15] instead of the
TD-signal.
Theorem 2.4. The gradient of the average reward per stage for θ ∈ R^K can be expressed by

  ∇η(θ) = Σ_{x,y∈X, u∈U} P(x,u,y) ψ(x,u|θ) (d(x,y) + f(x))       (f(x) arbitrary).        (4)
The theorem was proved using an advantage function argument in [6]. We provide a direct proof
in section A of the supplementary material. The flexibility resulting from the function f (x) allows
us to encode the TD signal using biologically realistic positive values only, without influencing the
convergence proof. In this paper, for simplicity, we use f (x) = 0.
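Theorem 2.4 with f(x) = 0 can be verified numerically: on a small MDP one can evaluate the right-hand side of (4) exactly from the stationary distribution and compare it with a finite-difference estimate of ∇η(θ). The two-state, two-action MDP and the softmax policy parameterization below are assumptions made purely for this check.

```python
import numpy as np

# Toy 2-state, 2-action MDP (assumed example). P_u[u] is the transition
# matrix under action u; the softmax policy mu(u|x,theta) has one parameter
# per (action, state) pair.
P_u = np.array([[[0.9, 0.1], [0.2, 0.8]],
                [[0.3, 0.7], [0.6, 0.4]]])     # P_u[u, x, y]
r = np.array([1.0, 0.0])

def policy(theta):                              # mu[u, x]
    e = np.exp(theta)
    return e / e.sum(axis=0)

def chain_quantities(theta):
    mu = policy(theta)
    P = np.einsum('ux,uxy->xy', mu, P_u)        # P(y|x,theta)
    evals, evecs = np.linalg.eig(P.T)
    pi = np.real(evecs[:, np.argmax(np.real(evals))]); pi /= pi.sum()
    eta = pi @ r
    A = np.eye(2) - P; A[0] = [1.0, 0.0]        # pin h[0] = 0 (Poisson eq. (2))
    b = r - eta; b[0] = 0.0
    return mu, pi, eta, np.linalg.solve(A, b)

theta = np.array([[0.5, -0.3], [-0.2, 0.4]])
mu, pi, eta, h = chain_quantities(theta)

# Right-hand side of (4) with f = 0: sum over (x,u,y) of P(x,u,y) psi d(x,y).
grad = np.zeros_like(theta)
for x in range(2):
    for u in range(2):
        psi = np.zeros_like(theta)              # psi = grad log mu(u|x,theta)
        psi[u, x] += 1.0
        psi[:, x] -= mu[:, x]
        for y in range(2):
            d = r[x] - eta + h[y] - h[x]        # TD signal, equation (3)
            grad += pi[x] * mu[u, x] * P_u[u, x, y] * d * psi

# Finite-difference check of grad eta(theta); the two should agree closely.
eps = 1e-6
fd = np.zeros_like(theta)
for idx in np.ndindex(*theta.shape):
    tp = theta.copy(); tp[idx] += eps
    fd[idx] = (chain_quantities(tp)[2] - eta) / eps
print(np.max(np.abs(grad - fd)))                # small, e.g. ~1e-6
```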
Based on Theorem 2.4, we suggest a TD-based AC algorithm. This algorithm is motivated by [15], where an actor-only algorithm was proposed. In [15] the differential value function was re-estimated afresh for each regenerative cycle, leading to a large estimation variance. Using the continuity of the actor's policy function in θ, the difference between the estimates between regenerative cycles is small. Thus, the critic has a good initial estimate at the beginning of each cycle, which is used here in order to reduce the variance. A related AC algorithm was proposed in [5, 6], where two time scales were assumed in order to use Borkar's two time scales convergence theorem [7]. In our proposed algorithm, and associated convergence theorem, we do not assume different time scales for the actor and for the critic.
We present batch mode update equations² in Algorithm 1 for the actor and the critic. The algorithm is based on some recurrent state x*; the visit times to this state are denoted by t_0, t_1, .... Updates occur only at these times (batch mode). We define a cycle of the algorithm by the time indices which satisfy t_m ≤ n < t_{m+1}. The variables d̂, ĥ(x), and η̃ are the critic's estimates for d, h(x|θ), and η(θ), respectively.
Algorithm 1 Temporal Difference Based Actor Critic Algorithm
1: Given:
   - An MDP with finite set X of states and a recurrent state x*, satisfying Assumption 2.1(i).
   - Hitting times t_0 < t_1 < t_2 < ... for the state x*.
   - Step coefficients γ_m such that Σ_{m=1}^∞ γ_m = ∞ and Σ_{m=1}^∞ γ_m² < ∞.
   - A parameterized policy μ(u|x,θ), θ ∈ R^K, which satisfies Assumption 2.1(ii).
   - A set H, constants B_ĥ and B_θ, and an operator Γ_H according to Assumption B.1.
   - Step parameters Γ_η and Γ_h satisfying Theorem 2.6.
2: Initiate the critic's variables:
   - η̃_0 = 0 (the estimate of the average reward per stage)
   - ĥ_0(x) = 0, ∀x ∈ X (the estimate of the differential value function)
3: Initiate the actor: θ_0 = 0 and choose f(x) (see (4))
4: for each state x_{t_{m+1}} visited do
5:   Critic: For all x ∈ X, N_m(x) ≜ min{t_m < k < t_{m+1} | x_k = x} (min(∅) = ∞),
        d̂(x_n, x_{n+1}) = r(x_n) − η̃_m + ĥ_m(x_{n+1}) − ĥ_m(x_n),
        ĥ_{m+1}(x) = ĥ_m(x) + γ_m Γ_h Σ_{n=N_m(x)}^{t_{m+1}−1} d̂(x_n, x_{n+1}),   ∀x ∈ X,
        η̃_{m+1} = η̃_m + γ_m Γ_η Σ_{n=t_m}^{t_{m+1}−1} (r(x_n) − η̃_m).
6:   Actor: θ_{m+1} = θ_m + γ_m Σ_{n=t_m}^{t_{m+1}−1} ψ(x_n, u_n|θ_m) (d̂(x_n, x_{n+1}) + f(x_n)).
7:   Project each component of ĥ_{m+1} and θ_{m+1} onto H (see Assumption B.1).
8: end for
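The following is a minimal, self-contained Python sketch of Algorithm 1 on a toy two-state MDP. The MDP, the step-size schedule γ_m, the step parameters Γ_h and Γ_η, and the crude clipping used in place of the projection of step 7 are all illustrative assumptions; the per-cycle critic and actor updates follow steps 5-6, with ĥ(x*) left at its initial value since N_m(x*) is empty within a cycle.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-state, 2-action MDP (assumed example); recurrent state x* = 0.
P_u = np.array([[[0.9, 0.1], [0.2, 0.8]],
                [[0.3, 0.7], [0.6, 0.4]]])      # P_u[u, x, y]
r = np.array([1.0, 0.0])
x_star = 0

def policy(theta, x):                            # softmax policy mu(.|x,theta)
    e = np.exp(theta[:, x])
    return e / e.sum()

theta = np.zeros((2, 2))                         # actor parameters
h_hat = np.zeros(2)                              # critic estimate of h
eta_hat = 0.0                                    # critic estimate of eta
G_h, G_eta = 1.0, 1.0                            # assumed step parameters Gamma

x = x_star
for m in range(20000):
    gamma = 10.0 / (1000.0 + m)                  # assumed gamma_m: sum = inf, sum sq < inf
    # Simulate one cycle: from x* until the first return to x*.
    traj = []
    while True:
        p = policy(theta, x)
        u = rng.choice(2, p=p)
        y = rng.choice(2, p=P_u[u, x])
        traj.append((x, u, y))
        x = y
        if x == x_star:
            break
    # Batch updates over the cycle (steps 5-6 of Algorithm 1).
    d_list, dtheta, deta = [], np.zeros_like(theta), 0.0
    for (xn, un, yn) in traj:
        d = r[xn] - eta_hat + h_hat[yn] - h_hat[xn]     # TD signal
        d_list.append(d)
        deta += r[xn] - eta_hat
        psi = -policy(theta, xn); psi[un] += 1.0        # grad log mu
        dtheta[:, xn] += psi * d
    # Critic h-update: sum the TDs from each state's first in-cycle visit on.
    tail = np.cumsum(d_list[::-1])[::-1]
    seen = set()
    for i, (xn, _, _) in enumerate(traj):
        if i > 0 and xn not in seen:                    # N_m(x*) is empty
            h_hat[xn] += gamma * G_h * tail[i]
            seen.add(xn)
    eta_hat += gamma * G_eta * deta
    theta += gamma * dtheta
    theta = np.clip(theta, -10.0, 10.0)          # crude stand-in for step 7

# eta_hat should drift toward the optimal average reward for this toy chain.
print(eta_hat, policy(theta, 0), policy(theta, 1))
```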
In order to prove the convergence of Algorithm 1, we establish two basic results. The first shows that
the algorithm converges to the set of ordinary differential equations (5), and the second establishes
conditions under which the differential equations converge locally.
² In order to prove convergence certain boundedness conditions need to be imposed, which appear as step 7 in the algorithm. For lack of space, the precise definition of the set H is given in Assumption B.1 of the supplementary material.
Theorem 2.5. Under Assumptions 2.1 and B.1, Algorithm 1 converges to the following set of ODEs:

  dθ/dt = T(θ) ∇η(θ) + C(θ) (η(θ) − η̃) + Σ_{x∈X} D^{(x)}(θ) (h(x|θ) − h̃(x)),
  dh̃(x)/dt = Γ_h (h(x|θ) − h̃(x)) + Γ_h T(θ) (η(θ) − η̃),   x ∈ X,                (5)
  dη̃/dt = Γ_η T(θ) (η(θ) − η̃),
with probability 1, where
  T = min{k > 0 | x_0 = x*, x_k = x*},     T(θ) = E_θ[T],
  C(θ) = E_θ[ Σ_{n=0}^{T−1} ψ(x_n, u_n|θ) | x_0 = x* ],
  D^{(x)}(θ) = E_θ[ Σ_{n=0}^{T−1} 1{x_{n+1} = x} ψ(x_n, u_n|θ) | x_0 = x* ] − E_θ[ Σ_{n=0}^{T−1} 1{x_n = x} ψ(x_n, u_n|θ) | x_0 = x* ],

and where T(θ), C(θ), and D^{(x)}(θ) are continuous with respect to θ.
Theorem 2.5 is proved in section B of the supplementary material, based on the theory of stochastic approximation, and more specifically, on Theorem 5.2.1 in [14]. An advantage of the proof
technique is that it does not need to assume two time scales.
The second theorem, proved in section C of the supplementary material, states the conditions for
which η(θ_t) converges to a ball around the local optimum.
Theorem 2.6. If we choose Γ_η ≥ B²_{∇η}/λ_η and Γ_h ≥ B²_{h′}/λ_h, for some positive constants λ_h and λ_η, then lim sup_{t→∞} ‖∇η(θ(t))‖ ≤ ε, where ε ≜ B_C λ_η + |X| B_D λ_h. The constants B_{∇η} and B_{h′} are defined in Section C of the supplementary material.
3 A Neural Algorithm for the Actor Using McCulloch-Pitts Neurons
In this section we apply the previously developed algorithm to the case of neural networks. We start
with the classic binary valued McCulloch-Pitts neuron, and then consider a more realistic spiking
neuron model. While the algorithm presented in Section 2 was derived and proved to converge in
batch mode, we apply it here in an online fashion. The derivation of an online learning algorithm
from the batch version is immediate (e.g., [15]), and a proof of convergence in this setting is currently
underway.
A McCulloch-Pitts actor network
The dynamics of the binary valued neurons, given at time n by {u_i(n)}_{i=1}^N, u_i(n) ∈ {0,1}, is assumed to be based on stochastic discrete time parallel updates, given by

  Pr(u_i(n) = 1) = σ(v_i(n)),    where   v_i(n) = Σ_{j=1}^N w_{ij} u_j(n−1)    (i = 1, 2, ..., N).

Here σ(v) = 1/(1 + exp(−v)), and the parameters θ in Algorithm 1 are given by {w_{ij}}, where w_{ij}(n) is the j → i synaptic weight at time n. Each neuron's stochastic output u_i is viewed as an action.
Applying the actor update from Algorithm 1 we obtain the following online learning rule
  w_{ij}(n+1) = w_{ij}(n) + γ d(x(n), x(n+1)) (u_i(n) − σ(v_i(n))) u_j(n−1),        (6)

where d(x(n), x(n+1)) is the TD signal.
The update (6) can be interpreted as an error-driven Hebbian-like learning rule modulated by the
TD signal. It resembles the direct policy update rule presented in [2], except that in this rule the
reward signal is replaced by the TD signal (computed by the critic). Moreover, the eligibility trace
formalism in [2] differs from our formulation.
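A compact sketch of the online rule (6) for a single layer of McCulloch-Pitts units is shown below. The layer sizes and learning rate are placeholders, and the TD signal d is assumed to be supplied externally by a critic, as in the architecture described next.

```python
import numpy as np

rng = np.random.default_rng(0)
N_in, N_out, lr = 12, 4, 0.05          # placeholder sizes and learning rate

W = 0.1 * rng.standard_normal((N_out, N_in))

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def step(u_prev, td):
    """One parallel update of the stochastic binary units followed by the
    TD-modulated, error-driven Hebbian update of rule (6)."""
    v = W @ u_prev                      # membrane drive v_i(n)
    p = sigmoid(v)                      # firing probabilities
    u = (rng.random(N_out) < p).astype(float)
    # w_ij += lr * d * (u_i - sigma(v_i)) * u_j(n-1)
    W[:] += lr * td * np.outer(u - p, u_prev)
    return u

u_prev = rng.integers(0, 2, N_in).astype(float)
u = step(u_prev, td=0.3)               # td would come from the critic
print(u)
```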
We describe a simulation experiment conducted using a one-layered feed-forward artificial neural network which functions as an actor, combined with a non-biologically motivated critic. The
purpose of the experiment is to examine a simple neuronal model, using different actor and critic
architectures. The actor network consists of a single-layered feed-forward network of McCulloch-Pitts neurons, and TD-modulated synapses as described above, where the TD signal is calculated by
a critic. The environment is a maze with barriers consisting of 36 states, see Figure 1(b), where a
reward of value 1 is provided at the top right corner, and is zero elsewhere. Every time the agent
receives a reward, it is transferred randomly to a different location in the maze. At each time step,
the agent is given an input vector which represents the state. The output layer consists of 4 output
neurons where each neuron represents an action from the action set U = {up, down, left, right}. We
used two different input representations for the actor, consisting either of 12 or 36 neurons (note that
the minimum number of input neurons to represent 36 states is 6, and the maximum number is 36).
The architecture with 36 input neurons represents each maze state with one exclusive neuron, thus,
there is no overlap between input vectors. The architecture with 12 input neurons uses a representation where each state is represented by two neurons, leading to overlaps between the input vectors.
We tested two types of critic: a table-based critic which performs the iterates of Algorithm 1,
and an exact TD which provides the TD of the optimal policy. The results are shown in Figure 1(c),
averaged over 25 runs, and demonstrate the importance of good input representations and precise
value estimates.
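For concreteness, the two input encodings can be written down directly: 36 one-hot units with no overlap, or 12 units with each state activating two of them. The row/column pairing in the 12-unit sketch below is one of many possible overlapping schemes and is an assumption; the text does not specify which pairing was used.

```python
import numpy as np

N_STATES = 36

def one_hot(state):
    """36-unit representation: one exclusive neuron per maze state."""
    v = np.zeros(N_STATES)
    v[state] = 1.0
    return v

def two_hot(state):
    """12-unit representation: each state activates two neurons (here, 6 row
    units plus 6 column units of the 6x6 maze; an assumed pairing)."""
    v = np.zeros(12)
    v[state // 6] = 1.0          # row unit
    v[6 + state % 6] = 1.0       # column unit
    return v

print(one_hot(7).sum(), two_hot(7))
```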
[Figure 1 graphic: see caption below. Panels (a) network illustration, (b) maze diagram, (c) average reward per stage (0-0.12) vs. number of steps (0-15×10^5).]
Figure 1: (a) An illustration of the McCulloch-Pitts network. (b) A diagram of the maze where the agent needs
to reach the reward at the upper right corner. (c) The average reward per stage in four different cases: an actor
consisting of 12 input neurons and a table based critic (blue crosses), an actor consisting of 36 input neurons
and a table based critic (green stars), an actor consisting of 12 input neurons and exact critic (red circles), and
an actor consisting of 36 input neurons and an exact TD (black crosses). The optimal average reward per stage
is denoted by the dotted line, while a random agent achieves a reward of 0.005.
A spiking neuron actor
Actual neurons function in continuous time producing action potentials. In extension of [1, 9], we
developed an update rule which is based on the Spike Response Model (SRM) [11]. For each neuron
we define a state variable vi (t) which represents the membrane potential. The dynamics of vi (t) is
given by

  v_i(t) = η_i(t − t̂_i) + Σ_{j=1}^N w_{ij}(t) Σ_{t_j^f} ε_{ij}(t − t̂_i, t − t_j^f),        (7)

where w_{ij}(t) is the synaptic efficacy, t̂_i is the last spike time of neuron i prior to t, η_i(t) is the refractory response, t_j^f are the times of the presynaptic spikes emitted prior to time t, and ε_{ij}(t − t̂_i, t − t_j^f) is the response induced by neuron j at neuron i. The second summation in (7) is over all spike times of neuron j emitted prior to time t. The neuron model is assumed to have a noisy
threshold, which we model by an escape noise model [11]. According to this model, the neuron fires
in the time interval [t, t + Δt) with probability u_i(t)Δt = ρ_i(v_i(t) − v_th)Δt, where v_th is the firing threshold and ρ_i(·) is a monotonically increasing function. When the neuron reaches the threshold it is assumed to fire and the membrane potential is reset to v_r.
We consider a network of continuous time neurons and synapses. Based on Algorithm 1, using a
small time step Δt, we find

  w_{ij}(t + Δt) = w_{ij}(t) + γ d(t) ψ_{ij}(t).        (8)

We define the output of the neuron (interpreted as an action) at time t by u_i(t). We note that the neuron's output is discrete and that at each time t, a neuron can fire, u_i(t) = 1, or be quiescent, u_i(t) = 0. Using the definition of ψ from Section 2.2 yields (similar to [9])

  ψ_{ij}(t) = (ρ′_i(t)/ρ_i(t)) Σ_{t_j^f ∈ H_{t_j}} ε_{ij}(t − t̂_i, t − t_j^f)                      if u_i(t) = 1,
  ψ_{ij}(t) = −(Δt ρ′_i(t)/(1 − Δt ρ_i(t))) Σ_{t_j^f ∈ H_{t_j}} ε_{ij}(t − t̂_i, t − t_j^f)         if u_i(t) = 0.
Taking the limit Δt → 0 yields the following continuous time update rule

  dw_{ij}(t)/dt = γ d(t) [ (1/ρ_i(t)) Σ_{t_i^f ∈ H_{t_i}} δ(t − t_i^f) − 1 ] ρ′_i(t) Σ_{t_j^f ∈ H_{t_j}} ε_{ij}(t − t̂_i, t − t_j^f),        (9)

where the first bracketed factor (together with ρ′_i) is the postsynaptic term F_post({t_i^f}) and the final sum is the presynaptic term F_pre({t_j^f}).
Similarly to [1, 9] we interpret the update rule (9) as a TD modulated spike time dependent plasticity
rule. A detailed discussion and interpretation of this update in a more biological context will be left
to the full paper.
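The discrete-time form (8), with ψ_ij evaluated as in the piecewise expression above, can be sketched as follows. The exponential escape rate ρ, the exponentially decaying ε kernel, the omission of the refractory term η_i, and all constants are illustrative assumptions rather than the paper's choices.

```python
import numpy as np

rng = np.random.default_rng(0)
dt, tau_eps, v_th = 1e-3, 0.02, 1.0     # assumed time step, kernel tau, threshold

def rho(v, rho0=20.0, dv=0.2):
    """Assumed exponential escape rate (instantaneous firing rate given v)."""
    return rho0 * np.exp((v - v_th) / dv)

def drho(v, rho0=20.0, dv=0.2):
    return rho(v, rho0, dv) / dv        # derivative rho'(v) of the escape rate

N_pre, N_post = 8, 2
w = 0.1 * np.ones((N_post, N_pre))      # synaptic efficacies w_ij
eps_trace = np.zeros(N_pre)             # running sum of eps kernels per input

for t in range(2000):
    pre = rng.random(N_pre) < 5.0 * dt            # ~5 Hz Poisson presynaptic spikes
    eps_trace = eps_trace * np.exp(-dt / tau_eps) + pre
    v = w @ eps_trace                             # membrane drive, eq. (7) sans refractoriness
    fired = rng.random(N_post) < (1.0 - np.exp(-rho(v) * dt))
    # psi_ij per the piecewise expression: sign and scale depend on firing.
    g = np.where(fired, drho(v) / rho(v),
                 -dt * drho(v) / (1.0 - dt * rho(v)))
    td = 0.1                                      # placeholder TD signal from a critic
    w += 0.01 * td * np.outer(g, eps_trace)       # update (8); learning rate assumed

print(w.mean())
```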
We applied the update rule (9) to an actor network consisting of spiking neurons based on (7). The
network's goal was to reach a circle at the center of a 2D plane, where the agent can move, using
Newtonian dynamics, in the four principal directions. The actor is composed of an input layer and
a single layer of modifiable weights. The input layer consists of 'sensory' neurons which fire according to the agent's location in the environment. The synaptic dynamics of the actor is determined
by (9). The critic receives the same inputs as the actor, but uses a linear function approximation
architecture rather than the table lookup used in Algorithm 1. A standard parameter update rule
appropriate for this architecture (e.g., ch. 8 in [22]) was used to update the critic's parameters³. The
output layer of the actor consists of four neuronal groups, representing the directions in which the
agent can move, coded based on a firing rate model using Gaussian tuning curves. The TD signal
is calculated according to (3). Whenever it reaches the centered circle, it receives a reward, and is
transferred randomly to a new position in the environment.
Results of such a simulation are presented in Figure 2. Figure 2(a) displays the agent's typical random-walk-like behavior prior to learning. Figure 2(b) depicts four typical trajectories representing the agent's actions after a learning phase. Finally, Figure 2(c) demonstrates the increase of the average reward per stage, η, vs. time.
[Figure 2 graphic: see caption below. Panels (a) and (b) show agent trajectories in a 20×20 arena; panel (c) plots η (0-0.02) against time (0-600 s).]
Figure 2: (a) Typical agent tracks prior to learning. (b) Agent trajectories following learning. (c) Average
reward per stage plotted against time.
4 Discussion
We have presented a temporal difference based actor critic learning algorithm for reinforcement
learning. The algorithm was derived from first principles based on following a noisy gradient of the
³ Algorithm 1 relies on a table lookup critic, while in this example we used a function approximation based critic, due to the large (continuous) state space.
average reward, and a convergence proof was presented without relying on the widely used two time
scale separation for the actor and the critic. The derived algorithm was applied to neural networks,
demonstrating their effective operation in maze problems. The motivation for the proposed algorithm was biological, providing a coherent computational explanation for several recently observed
phenomena: actor critic architectures in the basal ganglia, the relation of phasic dopaminergic neuromodulators to the TD signal, and the modulation of the spike time dependent plasticity rules by
dopamine. While a great deal of further work needs to be done on both the theoretical and biological components of the framework, we hope that these results provide a tentative step in the (noisy!)
direction of explaining biological RL.
References
[1] D. Baras and R. Meir. Reinforcement learning, spike time dependent plasticity and the BCM rule. Neural Comput., 19(8):2245-2279, 2007.
[2] J. Baxter and P.L. Bartlett. Hebbian synaptic modifications in spiking neurons that learn. (Technical rep.). Canberra: Research School of Information Sciences and Engineering, Australian National University, 1999.
[3] J. Baxter and P.L. Bartlett. Infinite-Horizon Policy-Gradient Estimation. J. of Artificial Intelligence Research, 15:319-350, 2001.
[4] D.P. Bertsekas. Dynamic Programming and Optimal Control, Vol I., 3rd Ed. Athena Scientific, 2006.
[5] S. Bhatnagar, R. Sutton, M. Ghavamzadeh, and M. Lee. Incremental natural actor-critic algorithms. In J.C. Platt, D. Koller, Y. Singer, and S. Roweis, editors, Advances in Neural Information Processing Systems 20, pages 105-112. MIT Press, Cambridge, MA, 2008.
[6] S. Bhatnagar, R.S. Sutton, M. Ghavamzadeh, and M. Lee. Natural actor-critic algorithms. Automatica, to appear, 2008.
[7] V.S. Borkar. Stochastic approximation with two time scales. Syst. Control Lett., 29(5):291-294, 1997.
[8] P. Bremaud. Markov Chains: Gibbs Fields, Monte Carlo Simulation, and Queues. Springer, 1999.
[9] R.V. Florian. Reinforcement learning through modulation of spike-timing-dependent synaptic plasticity. Neural Computation, 19:1468-1502, 2007.
[10] R.G. Gallager. Discrete Stochastic Processes. Kluwer Academic Publishers, 1995.
[11] W. Gerstner and W.M. Kistler. Spiking Neuron Models. Cambridge University Press, Cambridge, 2002.
[12] E.M. Izhikevich. Solving the Distal Reward Problem through Linkage of STDP and Dopamine Signaling. Cerebral Cortex, 17(10):2443-2452, 2007.
[13] V.R. Konda and J. Tsitsiklis. On actor critic algorithms. SIAM J. Control Optim., 42(4):1143-1166, 2003.
[14] H.J. Kushner and G.G. Yin. Stochastic Approximation Algorithms and Applications. Springer, 1997.
[15] P. Marbach and J. Tsitsiklis. Simulation-Based Optimization of Markov Reward Processes. IEEE Trans. Auto. Cont., 46:191-209, 1998.
[16] P.R. Montague, P. Dayan, and T.J. Sejnowski. A framework for mesencephalic dopamine systems based on predictive hebbian learning. Journal of Neuroscience, 16:1936-1947, 1996.
[17] J. O'Doherty, P. Dayan, J. Schultz, R. Deichmann, K. Friston, and R.J. Dolan. Dissociable roles of ventral and dorsal striatum in instrumental conditioning. Science, 304:452-454, 2004.
[18] J.N.J. Reynolds and J.R. Wickens. Dopamine-dependent plasticity of corticostriatal synapses. Neural Networks, 15(4-6):507-521, 2002.
[19] S. Marom and G. Shahaf. Development, learning and memory in large random networks of cortical neurons: lessons beyond anatomy. Quarterly Reviews of Biophysics, 35:63-87, 2002.
[20] W. Schultz. Multiple reward signals in the brain. Nature Reviews Neuroscience, 1:199-207, Dec. 2000.
[21] S. Singh and P. Dayan. Analytical mean squared error curves for temporal difference learning. Machine Learning, 32:5-40, 1998.
[22] R. S. Sutton and A. G. Barto. Reinforcement Learning. MIT Press, 1998.
[23] R. Sutton, D. McAllester, S. Singh and Y. Mansour. Policy-Gradient Methods for Reinforcement Learning with Function Approximation. Advances in Neural Information Processing Systems, 12:1057-1063, 2000.
[24] E.M. Tricomi, M.R. Delgado, and J.A. Fiez. Modulation of caudate activity by action contingency. Neuron, 41(2):281-292, 2004.
2,777 | 3,518 | The Infinite Factorial Hidden Markov Model
Jurgen Van Gael?
Department of Engineering
University of Cambridge, UK
[email protected]
Yee Whye Teh
Gatsby Unit
University College London, UK
[email protected]
Zoubin Ghahramani
Department of Engineering
University of Cambridge, UK
[email protected]
Abstract
We introduce a new probability distribution over a potentially infinite number of
binary Markov chains which we call the Markov Indian buffet process. This process extends the IBP to allow temporal dependencies in the hidden variables. We
use this stochastic process to build a nonparametric extension of the factorial hidden Markov model. After constructing an inference scheme which combines slice
sampling and dynamic programming we demonstrate how the infinite factorial
hidden Markov model can be used for blind source separation.
1 Introduction
When modeling discrete time series data, the hidden Markov model [1] (HMM) is one of the most
widely used and successful tools. The HMM defines a probability distribution over observations
$y_1, y_2, \cdots, y_T$ using the following generative model: it assumes there is a hidden Markov chain
$s_1, s_2, \cdots, s_T$ with $s_t \in \{1, \cdots, K\}$ whose dynamics is governed by a $K$ by $K$ stochastic transition
matrix $\pi$. At each timestep $t$, the Markov chain generates an output $y_t$ using some likelihood
model $F$ parametrized by a state-dependent parameter $\theta_{s_t}$. We can write the probability distribution
induced by the HMM as follows¹

$$p(y_{1:T}, s_{1:T}) = \prod_{t=1}^{T} p(s_t \mid s_{t-1})\, p(y_t \mid s_t) = \prod_{t=1}^{T} \pi_{s_{t-1}, s_t}\, F(y_t; \theta_{s_t}). \qquad (1)$$
Figure 1 shows the graphical model for the HMM.
One shortcoming of the hidden Markov model is the limited representational power of the latent
variables. One way to look at the distribution defined by the HMM is to write down the marginal
distribution of $y_t$ given the previous latent state $s_{t-1}$:

$$p(y_t \mid s_{t-1}) = \sum_{s_t} p(s_t \mid s_{t-1})\, p(y_t \mid s_t) = \sum_{s_t} \pi_{s_{t-1}, s_t}\, F(y_t; \theta_{s_t}). \qquad (2)$$
Equation (2) illustrates that the observations are generated from a dynamic mixture model. The
factorial hidden Markov model (FHMM), developed in [2], addresses the limited representational
power of the hidden Markov model. The FHMM extends the HMM by representing the hidden state
¹ To make the notation more convenient, we assume w.l.o.g. that for all our models, all latent chains start in
a dummy state that is in the 0 state. E.g. for the HMM $s_0 = 0$, for the FHMM $s_0^{(m)} = 0$ for all $m$.
1
Figure 2: The Factorial Hidden Markov Model
Figure 1: The Hidden Markov Model
in a factored form. This way, information from the past is propagated in a distributed manner through
a set of parallel Markov chains. The parallel chains can be viewed as latent features which evolve
over time according to Markov dynamics. Formally, the FHMM defines a probability distribution
over observations $y_1, y_2, \cdots, y_T$ as follows: $M$ latent chains $s^{(1)}, s^{(2)}, \cdots, s^{(M)}$ evolve according
to Markov dynamics and at each timestep $t$, the Markov chains generate an output $y_t$ using some
likelihood model $F$ parameterized by a joint state-dependent parameter $\theta_{s_t^{(1:M)}}$. The graphical model
in figure 2 shows how the FHMM is a special case of a dynamic Bayesian network. The FHMM has
been successfully applied in vision [3], audio processing [4] and natural language processing [5].
Unfortunately, the dimensionality M of our factorial representation or equivalently, the number of
parallel Markov chains, is a new free parameter for the FHMM which we would prefer learning
from data rather than specifying it beforehand.
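To make the factorial representation concrete, here is a minimal sketch (not from the paper) of forward-sampling an FHMM with $M$ binary chains; the linear-Gaussian output and all parameter choices are illustrative assumptions:

```python
import numpy as np

def sample_fhmm(T, M, W, theta, sigma_y, rng=np.random.default_rng(0)):
    """Forward-sample an FHMM with M binary chains.

    W[m] is the 2x2 transition matrix of chain m; the output is a simple
    linear-Gaussian combination y_t = theta . s_t + noise (an assumption
    made here for illustration, not the paper's likelihood).
    """
    S, y = np.zeros((T, M), dtype=int), np.zeros(T)
    s = np.zeros(M, dtype=int)                    # dummy all-zero start state
    for t in range(T):
        for m in range(M):
            s[m] = rng.random() < W[m, s[m], 1]   # prob. of moving to state 1
        S[t] = s
        y[t] = theta @ s + sigma_y * rng.standard_normal()
    return S, y
```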
Recently, [6] introduced the basic building block for nonparametric Bayesian factor models called
the Indian Buffet Process (IBP). The IBP defines a distribution over infinite binary matrices Z where
element znk denotes whether datapoint n has feature k or not. The IBP can be combined with
distributions over real numbers or integers to make the features useful for practical problems.
In this work, we derive the basic building block for nonparametric Bayesian factor models for time
series which we call the Markov Indian Buffet Process (mIBP). Using this distribution we build a
nonparametric extension of the FHMM which we call the Infinite Factorial Hidden Markov Model
(iFHMM). This construction allows us to learn a factorial representation for time series.
In the next section, we develop the novel and generic nonparametric mIBP distribution. Section 3
describes how to use the mIBP do build the iFHMM. Which in turn can be used to perform independent component analysis on time series data. Section 4 shows results of our application of the
iFHMM to a blind source separation problem. Finally, we conclude with a discussion in section 5.
2 The Markov Indian Buffet Process
Similar to the IBP, we define a distribution over binary matrices to model whether a feature at time
t is on or off. In this representation rows correspond to timesteps and the columns to features or
Markov chains. We want the distribution over matrices to satisfy the following two properties: (1)
the potential number of columns (representing latent features) should be able to be arbitrarily large;
(2) the rows (representing timesteps) should evolve according to a Markov process.
Below, we will formally derive the mIBP distribution in two steps: first, we describe a distribution
over binary matrices with a finite number of columns. We choose the hyperparameters carefully so
we can easily integrate out the parameters of the model. In a second phase, we take the limit as the
number of features goes to infinity in a manner analogous to [7]'s derivation of infinite mixtures.

2.1 A finite model
Let S represent a binary matrix with T rows (datapoints) and M columns (features). stm represents
the hidden state at time t for Markov chain m. Each Markov chain evolves according to the transition
matrix

$$W^{(m)} = \begin{pmatrix} 1 - a_m & a_m \\ 1 - b_m & b_m \end{pmatrix}, \qquad (3)$$
where $W^{(m)}_{ij} = p(s_{t+1,m} = j \mid s_{tm} = i)$. We give the parameters of $W^{(m)}$ distributions $a_m \sim \mathrm{Beta}(\alpha/M, 1)$ and $b_m \sim \mathrm{Beta}(\gamma, \delta)$. Each chain starts with a dummy zero state $s_{0m} = 0$. The
hidden state sequence for chain $m$ is generated by sampling $T$ steps from a Markov chain with
transition matrix $W^{(m)}$. Summarizing, the generative specification for this process is

$$\forall m \in \{1, 2, \cdots, M\}: \quad a_m \sim \mathrm{Beta}\left(\frac{\alpha}{M}, 1\right), \quad b_m \sim \mathrm{Beta}(\gamma, \delta), \qquad (4)$$
$$s_{0m} = 0, \quad s_{tm} \sim \mathrm{Bernoulli}\left(a_m^{1 - s_{t-1,m}}\, b_m^{s_{t-1,m}}\right).$$
Next, we evaluate the probability of the state matrix $S$ with the transition matrix parameters $W^{(m)}$
marginalized out. We introduce the following notation: let $c^{00}_m, c^{01}_m, c^{10}_m, c^{11}_m$ be the number of $0 \to 0$, $0 \to 1$, $1 \to 0$ and $1 \to 1$ transitions respectively, in binary chain $m$ (including the transition from
the dummy state to the first state). We can then write

$$p(S \mid a, b) = \prod_{m=1}^{M} (1 - a_m)^{c^{00}_m}\, a_m^{c^{01}_m}\, (1 - b_m)^{c^{10}_m}\, b_m^{c^{11}_m}. \qquad (5)$$
We integrate out a and b with respect to the conjugate priors defined in equation (4) and find
$$p(S \mid \alpha, \gamma, \delta) = \prod_{m=1}^{M} \frac{\frac{\alpha}{M}\,\Gamma(\frac{\alpha}{M} + c^{01}_m)\,\Gamma(c^{00}_m + 1)\,\Gamma(\gamma + \delta)\,\Gamma(\delta + c^{10}_m)\,\Gamma(\gamma + c^{11}_m)}{\Gamma(\frac{\alpha}{M} + c^{00}_m + c^{01}_m + 1)\,\Gamma(\gamma)\,\Gamma(\delta)\,\Gamma(\gamma + \delta + c^{10}_m + c^{11}_m)}, \qquad (6)$$

where $\Gamma(x)$ is the Gamma function.
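As a concrete illustration of the generative specification in equation (4), here is a minimal sampler sketch; the function name and the use of numpy are assumptions for illustration:

```python
import numpy as np

def sample_finite_mibp(T, M, alpha, gamma, delta, rng=np.random.default_rng(0)):
    """Sample a T x M binary matrix from the finite model of equation (4)."""
    a = rng.beta(alpha / M, 1.0, size=M)   # P(0 -> 1) per chain
    b = rng.beta(gamma, delta, size=M)     # P(1 -> 1) per chain
    S = np.zeros((T, M), dtype=int)
    prev = np.zeros(M, dtype=int)          # dummy zero state s_{0m} = 0
    for t in range(T):
        p_on = np.where(prev == 1, b, a)   # Bernoulli rate a^{1-s} b^{s}
        S[t] = rng.random(M) < p_on
        prev = S[t]
    return S
```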
2.2 Taking the infinite limit
Analogous to the IBP, we compute the limit for $M \to \infty$ of the finite model in equation (6). The
probability of a single matrix in the limit as $M \to \infty$ is zero. This is not a problem since we
are only interested in the probability of a whole class of matrices, namely those matrices that can
be transformed into each other through column permutations. In other words, our factorial model is
exchangeable in the columns as we don't care about the ordering of the features. Hence, we compute
the infinite limit for left-ordered form (lof)-equivalence classes [6].
The left-ordered form of a binary $S$ matrix can be defined as follows: we interpret one column of
length $T$ as encoding a binary number: column $m$ encodes the number $2^{T-1} s_{1m} + 2^{T-2} s_{2m} + \cdots + s_{Tm}$. We call the number which a feature encodes the history of the column. Then, we denote with
$M_h$ the number of columns in the matrix $S$ that have the same history. We say a matrix is a lof-matrix if its columns are sorted in decreasing history values. Let $S$ be a lof-matrix, then we denote
with $[S]$ the set of all matrices that can be transformed into $S$ using only column permutations; we
call $[S]$ the lof-equivalence class. One can check that the number of elements in the lof-equivalence
class of $S$ is equal to $\frac{M!}{\prod_{h=0}^{2^T - 1} M_h!}$. We thus find the probability of the equivalence class of $S$ to be

$$p([S]) = \sum_{S \in [S]} p(S \mid \alpha, \gamma, \delta) \qquad (7)$$
$$= \frac{M!}{\prod_{h=0}^{2^T - 1} M_h!} \prod_{m=1}^{M} \frac{\frac{\alpha}{M}\,\Gamma(\frac{\alpha}{M} + c^{01}_m)\,\Gamma(c^{00}_m + 1)\,\Gamma(\gamma + \delta)\,\Gamma(\delta + c^{10}_m)\,\Gamma(\gamma + c^{11}_m)}{\Gamma(\frac{\alpha}{M} + c^{00}_m + c^{01}_m + 1)\,\Gamma(\gamma)\,\Gamma(\delta)\,\Gamma(\gamma + \delta + c^{10}_m + c^{11}_m)}. \qquad (8)$$
This form allows us to compute a meaningful limit as $M \to \infty$. A writeup on the technical details
of this computation can be found on the author's website. The end result has the following form

$$\lim_{M \to \infty} p([S]) = \frac{\alpha^{M_+}}{\prod_{h=0}^{2^T - 1} M_h!} \exp\{-\alpha H_T\} \prod_{m=1}^{M_+} \frac{(c^{01}_m - 1)!\, c^{00}_m!\, \Gamma(\gamma + \delta)\,\Gamma(\delta + c^{10}_m)\,\Gamma(\gamma + c^{11}_m)}{(c^{00}_m + c^{01}_m)!\, \Gamma(\gamma)\,\Gamma(\delta)\,\Gamma(\gamma + \delta + c^{10}_m + c^{11}_m)}, \qquad (9)$$

where $H_T$ denotes the $T$'th Harmonic number and $M_+$ denotes the number of Markov chains that
switch on at least once between $0$ and $T$, i.e. $M_+$ is the effective dimension of our model.
2.3 Properties of the distribution
First of all, it is interesting to note from equation (9) that our model is exchangeable in the columns
and Markov exchangeable² in the rows.
Next, we derive the distribution in equation (9) through a stochastic process that is analogous to
the Indian Buffet Process but slightly more complicated for the actors involved. In this stochastic
process, T customers enter an Indian restaurant with an infinitely long buffet of dishes organized in
a line. The first customer enters the restaurant and takes a serving from each dish, starting at the left
of the buffet and stopping after a Poisson($\alpha$) number of dishes as his plate becomes overburdened.
A waiter stands near the buffet and takes notes as to how many people have eaten which dishes. The
$t$'th customer enters the restaurant and starts at the left of the buffet. At dish $m$, he looks at the
customer in front of him to see whether he has served himself that dish.

- If so, he asks the waiter how many people have previously served themselves dish $m$ when
  the person in front of them did (the waiter replies to him the number $c^{11}_m$) and how many
  people didn't serve themselves dish $m$ when the person in front of them did (the waiter
  replies to him the number $c^{10}_m$). The customer then serves himself dish $m$ with probability
  $(c^{11}_m + \gamma)/(\gamma + \delta + c^{10}_m + c^{11}_m)$.
- Otherwise, he asks the waiter how many people have previously served themselves dish $m$
  when the person in front of them did not (the waiter replies to him the number $c^{01}_m$) and
  how many people didn't serve themselves dish $m$ when the person in front of them did not
  either (the waiter replies to him the number $c^{00}_m$). The customer then serves himself dish $m$
  with probability $c^{01}_m/(c^{00}_m + c^{01}_m + 1)$.
The customer then moves on to the next dish and does exactly the same. After the customer has
passed all dishes people have previously served themselves from, he tries Poisson($\alpha/t$) new dishes.
If we denote with $M_1^{(t)}$ the number of new dishes tried by the $t$'th customer, the probability of any
particular matrix being produced by this process is

$$p(S) = \frac{\alpha^{M_+}}{\prod_{t=1}^{T} M_1^{(t)}!} \exp\{-\alpha H_T\} \prod_{m=1}^{M_+} \frac{(c^{01}_m - 1)!\, c^{00}_m!\, \Gamma(\gamma + \delta)\,\Gamma(\delta + c^{10}_m)\,\Gamma(\gamma + c^{11}_m)}{(c^{00}_m + c^{01}_m)!\, \Gamma(\gamma)\,\Gamma(\delta)\,\Gamma(\gamma + \delta + c^{10}_m + c^{11}_m)}. \qquad (10)$$
We can recover equation (9) by summing over all possible matrices that can be generated using
the Markov Indian Buffet process that are in the same lof-equivalence class. It is straightforward
to check that there are exactly $\frac{\prod_{t=1}^{T} M_1^{(t)}!}{\prod_{h=0}^{2^T - 1} M_h!}$ of these. Multiplying this by equation (10) we recover
equation (9). This construction shows that the effective dimension of the model ($M_+$) follows a
Poisson($\alpha H_T$) distribution.
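The stochastic process above can be simulated directly. The sketch below is one possible reading of the construction: the predictive probabilities for existing dishes and the bookkeeping for newly created dishes follow the derivation in this section, but the code and its interfaces are not from the paper:

```python
import numpy as np

def sample_mibp(T, alpha, gamma, delta, rng=np.random.default_rng(0)):
    """Simulate the Markov Indian buffet process for T customers.

    Existing dishes: taken with prob (c11 + gamma)/(gamma + delta + c10 + c11)
    if the person in front took it, else with prob c01/(c00 + c01 + 1).
    New dishes: Poisson(alpha / t) per customer.  Counts include the implicit
    all-zero history of a dish before its creation.
    """
    rows, counts = [], []          # counts[m][i][j]: observed i -> j transitions
    for t in range(1, T + 1):
        row = []
        for m, c in enumerate(counts):
            prev = rows[-1][m] if t > 1 else 0
            if prev == 1:
                p = (c[1][1] + gamma) / (gamma + delta + c[1][0] + c[1][1])
            else:
                p = c[0][1] / (c[0][0] + c[0][1] + 1.0)
            x = int(rng.random() < p)
            c[prev][x] += 1
            row.append(x)
        for _ in range(rng.poisson(alpha / t)):
            counts.append([[t - 1, 1], [0, 0]])   # t-1 implicit 0->0s, one 0->1
            row.append(1)
        rows.append(row)
    M = len(counts)
    return np.array([r + [0] * (M - len(r)) for r in rows], dtype=int)
```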
2.4 A stick breaking representation
Although the representation above is convenient for theoretical analysis, it is not very practical for
inference. Interestingly, we can adapt the stick breaking construction for the IBP [8] to the mIBP.
This will be very important for the iFHMM as it will allow us to use a combination of slice sampling
and dynamic programming to do inference.
The first step in the stick breaking construction is to find the distribution of $a_{(1)} > a_{(2)} > \cdots$,
the order statistics of the parameters $a$. Since the distribution on the variables $a_m$ in our model are
identical to the distribution of the feature parameters in the IBP model, we can use the result in [8]
that these variables have the following distribution

$$a_{(1)} \sim \mathrm{Beta}(\alpha, 1), \qquad (11)$$
$$p(a_{(m)} \mid a_{(m-1)}) = \alpha\, a_{(m-1)}^{-\alpha}\, a_{(m)}^{\alpha - 1}\, \mathbb{I}(0 \le a_{(m)} \le a_{(m-1)}). \qquad (12)$$
The variables $b_m$ are all independent draws from a $\mathrm{Beta}(\gamma, \delta)$ distribution which is independent of
$M$. Hence if we denote with $b_{(m)}$ the $b$ variable corresponding to the $m$'th largest $a$ value (in other
words: the $b$ value corresponding to $a_{(m)}$) then it follows that $b_{(m)} \sim \mathrm{Beta}(\gamma, \delta)$.
² A sequence is Markov exchangeable if its distribution is invariant under permutations of the transitions.
Figure 3: The Infinite Factorial Hidden Markov Model
3 The Infinite Factorial Hidden Markov Model
In this section, we explain how to use the mIBP as a building block in a full blown probabilistic
model. The mIBP provides us with a matrix S which we interpret as an arbitrarily large set of parallel Markov chains. First we augment our binary representation with a more expressive component
which can describe feature specific properties. We do this by introducing a base distribution H from
which we sample a parameter $\theta_m \sim H$ for each Markov chain. This is a rather flexible setup as
the base distribution can introduce a parameter for every chain and every timestep, which we will
illustrate in section 3.1.
Now that we have a model with a more expressive latent structure, we want to add a likelihood
model F which describes the distribution over the observations conditional on the latent structure.
Formally, F (yt | , st ) describes the probability of generating yt given the model parameters
and the current latent feature state st . We note that there are two important conditions which
the likelihood must satisfy in order for the limit M to be valid: (1) the likelihood must be
invariant to permutations of the features, (2) the likelihood cannot depend on m if stm = 0. Figure 3
shows the graphical model for our construction which we call the Infinite Factorial Hidden Markov
Model (iFHMM). In the following section, we describe one particular choice of base distribution
and likelihood model which performs Independent Component Analysis on time series.
3.1 The Independent Component Analysis iFHMM
Independent Component Analysis [9] (ICA) means different things to different people. Originally
invented as an algorithm to unmix a signal into a set of independent signals, it will be more insightful
for our purpose to think of ICA in terms of the probabilistic model which we describe below. As we
explain in detail in section 4, we are interested in ICA to solve the blind source separation problem.
Assume that M signals are represented through the vectors xm ; grouping them we can represent
the signals using the matrix $X = [x_1\, x_2 \cdots x_M]$. Next, we linearly combine the signals using a
mixing matrix $W$ to generate the observed signal $Y = XW$. Additionally, we will assume IID
Normal($0, \sigma_Y^2$) noise $\epsilon$ added: $Y = XW + \epsilon$.
A variety of fast algorithms exist which unmix the observations Y and recover the signal X. However, crucial to these algorithms is that the number of signals is known in advance. [10] used the
IBP to design the Infinite Independent Component Analysis (iICA) model which learns an appropriate number of signals from exchangeable data. Our ICA iFHMM model extends the iICA for time
series.
The ICA iFHMM generative model can be described as follows: we sample $S \sim$ mIBP and pointwise multiply (denoted by $\odot$) it with a signal matrix $X$. Each entry in $X$ is an IID sample from a
Laplace(0, 1) distribution. One could choose many other distributions for $X$, but since in section 4
we will model speech data, which is known to be heavy tailed, the Laplace distribution is a convenient choice. Speakers will be speaking infrequently so pointwise multiplying a heavy tailed distribution with a sparse binary matrix achieves our goal of producing a sparse heavy tailed distribution.
Next, we introduce a mixing matrix $W$ which has a row for each signal in $S \odot X$ and a column
for each observed dimension in $Y$. The entries for $W$ are sampled IID from a Normal($0, \sigma_W^2$)
distribution. Finally, we combine the signal and mixing matrices as in the finite case to form the
observation matrix $Y$: $Y = (S \odot X)W + \epsilon$ where $\epsilon$ is Normal($0, \sigma_Y^2$) IID noise for each element.
In terms of the general iFHMM model defined in the previous section, the base distribution H is
a joint distribution over columns of X and rows of W . The likelihood F performs the pointwise
multiplication, mixes the signals and adds the noise. It can be checked that our likelihood satisfies
the two technical conditions for proper iFHMM likelihoods described in section 3.
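Under these assumptions, the ICA iFHMM generative step is only a few lines. This sketch presumes a binary matrix S (e.g. from the mIBP sampler sketched earlier); the function name and parameters are illustrative:

```python
import numpy as np

def sample_ica_ifhmm(S, D, sigma_w, sigma_y, rng=np.random.default_rng(0)):
    """Generate observations for the ICA iFHMM given a binary matrix S (T x M).

    X: Laplace(0, 1) signals, W: Normal(0, sigma_w^2) mixing,
    Y = (S * X) W + Normal(0, sigma_y^2) noise.
    """
    T, M = S.shape
    X = rng.laplace(0.0, 1.0, size=(T, M))        # heavy-tailed source signals
    W = sigma_w * rng.standard_normal((M, D))     # mixing matrix
    Y = (S * X) @ W + sigma_y * rng.standard_normal((T, D))
    return X, W, Y
```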
3.2 Inference
Inference for nonparametric models requires special treatment as the potentially unbounded dimensionality of the model makes it hard to use exact inference schemes. Traditionally, in nonparametric
factor models inference is done using Gibbs sampling, sometimes augmented with Metropolis Hastings steps to improve performance. However, it is commonly known that naive Gibbs sampling in
a time series model is notoriously slow due to potentially strong couplings between successive time
steps [11]. In the context of the infinite hidden Markov model, a solution was recently proposed
in [12], where a slice sampler adaptively truncates the infinite dimensional model after which dynamic programming performs exact inference. Since a stick breaking construction for the iFHMM
is readily available, we can use a very similar approach for the iFHMM. The central idea is the
following: we introduce an auxiliary slice variable $\mu$ with the following distribution

$$\mu \sim \mathrm{Uniform}\left(0, \min_{m:\, \exists t,\, s_{tm} = 1} a_m\right). \qquad (13)$$
It is not essential that we sample from the uniform distribution, in fact for some of our experiments
we use the more flexible Beta distribution. The resulting joint distribution is
$$p(\mu, a, b, S) = p(\mu \mid a, S)\, p(a, b, S). \qquad (14)$$

It is clear from the equation above that one recovers the original mIBP distribution when we integrate
out $\mu$. However, when we condition the joint distribution on $\mu$ we find

$$p(S \mid Y, \mu, a, b) \propto p(S \mid Y, a, b)\, \frac{\mathbb{I}(0 \le \mu \le \min_{m:\, \exists t,\, s_{tm}=1} a_m)}{\min_{m:\, \exists t,\, s_{tm}=1} a_m} \qquad (15)$$

which forces all columns of $S$ for which $a_m < \mu$ to be in the all zero state. Since there can only be
a finite number of $a_m > \mu$, this effectively implies that we need only resample a finite number of
columns of $S$.
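A minimal sketch of this truncation step, assuming the stick-breaking representation of equations (11)-(12), at least one active chain, and sticks sorted in decreasing order (the function name and interfaces are hypothetical):

```python
import numpy as np

def slice_truncate(a, S, alpha, rng=np.random.default_rng(0)):
    """One slice step: sample mu (eq. 13) and extend the representation.

    a is assumed sorted in decreasing stick-breaking order, one entry per
    column of S.  New sticks follow eq. (12): a_(m) = a_(m-1) * Beta(alpha, 1).
    Returns the extended sticks, mu, and the column indices to resample.
    """
    a = list(a)
    used = [am for am, col in zip(a, S.T) if col.any()]
    mu = rng.uniform(0.0, min(used))
    while a[-1] > mu:                       # extend until below truncation level
        a.append(a[-1] * rng.beta(alpha, 1.0))
    active = [m for m, am in enumerate(a) if am > mu]
    return np.array(a), mu, active
```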
We now describe our algorithm in the context of the ICA iFHMM: we start with an initial S matrix
and sample a, b. Next, conditional on our initial S and the data Y , we sample the ICA parameters
X and W . We then start an iterative sampling scheme which involves the following steps:
1. We sample the auxiliary slice variable $\mu$. This might involve extending the representation
of S, X and W ,
2. For all the represented features, we sample S, X and W ,
3. We resample the hyperparameters ($\sigma_Y, \sigma_W, \alpha, \gamma, \delta$) of our model,
4. We compact our representation by removing all unused features.
We experimented with 3 different algorithms for step 2. The first, a naive Gibbs sampler, did not
perform well, as we expected. The second algorithm, which we used for our experiments, is a blocked
Gibbs sampler which fixes all but one column of S and runs a forward-filtering backward-sampling
sweep on the remaining column. This allows us to analytically integrate out one column of X in
the dynamic program and resample it from the posterior afterwards. W can be sampled exactly
conditional on X, S and Y . A third algorithm runs dynamic programming on multiple chains at
once. We originally designed this algorithm as it has the potential to merge two features in one
sweep. However, we found that because we cannot integrate out X and W in this setting, the
inference was not faster than our second algorithm. Note that because the bulk of the computation
is used for estimating X and W , the dynamic programming based algorithms are effectively as fast
as the naive Gibbs sampler. A prototype implementation of the iFHMM sampler in Matlab or .NET
can be obtained from the first author.
Figure 4: Blind speech separation experiment; figures represent which speaker is speaking at a certain point in time: columns are speakers, rows are white if the speaker is talking and black otherwise.
The left figure (a) is ground truth, the next two figures (b: ICA iFHMM, c: iICA) are for the 10 microphone experiment, the right
two figures (d: ICA iFHMM, e: iICA) are for the 3 microphone experiment.
4 Experiments
To test our model and inference algorithms, we address a blind speech separation task, also known
as the cocktail party problem. More specifically, we record multiple people who are simultaneously speaking, using a set of microphones. Given the mixed speech signals, the goal is to separate
out the individual speech signals. Key to our presentation is that we want to illustrate that using
nonparametric methods, we can learn the number of speakers from a small amount of data. Our
first experiment learns to recover the signals in a setting with more microphones then speakers, our
second experiment uses less microphones then speakers.
The experimental setup was the following: we downloaded data from 5 speakers from the Speech
Separation Challenge website³. The data for each speaker consists of 4 sentences which we appended with random pauses in between each sentence. Figure 4(a) illustrates which person is talking
at what point in time. Next, we artificially mix the data 10 times. Each mixture is a linear combination of each of the 5 speakers using Uniform(0, 1) mixing weights. We centered the data to have
zero mean and unit variance and added IID Normal($0, \sigma_Y^2$) noise with $\sigma_Y = 0.3$.

³ http://www.dcs.shef.ac.uk/ martin/SpeechSeparationChallenge.htm
In our first experiment we compared the ICA iFHMM with the iICA model using all 10 microphones.
We subsample the data so we learn from 245 datapoints. We initialized the samplers for both models
with an initial S matrix with 10 features, 5% random entries on. We use a Gamma(1.0, 4.0) prior on
$\alpha$. In both models, we use an InverseGamma(2.0, 1.0) prior for $\sigma_Y$ and $\sigma_W$. Finally, for the iFHMM,
we chose a Gamma(10.0, 1.0) prior on $\gamma$ and a Gamma(1.0, 1.0) prior on $\delta$ to encode our belief that
people speak for larger stretches of time, say the time to pronounce a sentence. We ran the samplers
for 5000 iterations and then gathered 20 samples every 20 iterations.
For both the ICA iFHMM and iICA models, we average the 20 samples and rearrange the features
to have maximal overlap with the ground truth features. Figure 4(b) shows that the ICA iFHMM
model recognizes that the data was generated from 5 speakers. Visual inspection of the recovered S
matrix also shows that the model discovers who is speaking at what time. 4(c) illustrated the results
of the iICA model on the same data. Although the model discovers some structure in the data, it fails
to find the right number of speakers (it finds 9) and does a poor job in discovering which speaker is
active at which time. We computed the average mutual information between the 5 columns of the
true S matrix and the first 5 columns of the recovered S matrices. We find that the iFHMM has an
average mutual information of 0.296 compared to 0.068 for the iICA model. The difference between
the two models is strictly limited to the difference between using the IBP versus mIBP. We want to
emphasize that although one could come up with ad-hoc heuristics to smooth the iICA results, the
ICA iFHMM is a principled probabilistic model that does a good job at comparable computational
cost.
In a second experiment, we chose to perform blind speech separation using only the first 3 microphones. We subsampled a noiseless version of the data to get 489 datapoints. We ran both the ICA
iFHMM and iICA inference algorithms using exactly the same settings as in the previous experiment. Figure 4(d) and 4(e) show the average of 20 samples, rearranged to match the ground truth. In
this setting both methods fail to identify the number of speakers although the ICA iFHMM clearly
performs better. The ICA iFHMM finds one signal too many: the spurious signal is very similar
to the third signal which suggests that the error is a problem of the inference algorithm and not so
much of the model itself. The iICA on the other hand performs poorly: it is very hard to find any
structure in the recovered Z matrix. We compared the mutual information as described above and
find that the iFHMM has a mutual information of 0.091 compared to 0.028 for the iICA model.
5 Discussion
The success of the Hidden Markov Model set off a wealth of extensions to adapt it to particular
situations. [2] introduced a factorial hidden Markov model which explicitly models dynamic latent
features while in [13] a nonparametric version of the Hidden Markov Model was presented.
In this paper we "complete the square" by presenting a nonparametric Factorial Hidden Markov
Model. We introduced a new stochastic process for latent feature representation of time series
called the Markov Indian Buffet Process. We showed how this stochastic process can be used to
build a nonparametric extension of the FHMM which we call the iFHMM. Another issue which
deserves further exploration is inference: in [2] it was found that a structured variational method
provides a good balance between accuracy and computational effort. An interesting open problem
is whether we can adapt the structured variational method to the iFHMM. Finally, analogous to the
two-parameter IBP [14] we would like to add one more degree of flexibility to control the $0 \to 1$
transition probability more finely. Although the derivation of the mIBP with this extra parameter is
straightforward, we as yet lack a stick breaking construction for this model which is crucial for our
inference scheme.
Acknowledgments
We kindly acknowledge David Knowles for discussing the generalized Amari error and A. Taylan
Cemgil for his suggestions on blind source separation. Jurgen Van Gael is supported by a Microsoft
Research PhD scholarship; Zoubin Ghahramani is also in the Machine Learning department, CMU.
References
[1] L. R. Rabiner, "A tutorial on hidden Markov models and selected applications in speech recognition," Proceedings of the IEEE, vol. 77, pp. 257-286, 1989.
[2] Z. Ghahramani and M. I. Jordan, "Factorial hidden Markov models," Machine Learning, vol. 29, pp. 245-273, 1997.
[3] P. Wang and Q. Ji, "Multi-view face tracking with factorial and switching HMM," in Proceedings of the Seventh IEEE Workshops on Application of Computer Vision, pp. 401-406, IEEE Computer Society, 2005.
[4] B. Logan and P. Moreno, "Factorial HMMs for acoustic modeling," 1998.
[5] K. Duh, "Joint labeling of multiple sequences: A factorial HMM approach," in 43rd Annual Meeting of the Association of Computational Linguistics (ACL) - Student Research Workshop, 2005.
[6] T. L. Griffiths and Z. Ghahramani, "Infinite latent feature models and the Indian buffet process," Advances in Neural Information Processing Systems, vol. 18, pp. 475-482, 2006.
[7] R. M. Neal, "Bayesian mixture modeling," Maximum Entropy and Bayesian Methods, 1992.
[8] Y. W. Teh, D. Görür, and Z. Ghahramani, "Stick-breaking construction for the Indian buffet process," Proceedings of the International Conference on Artificial Intelligence and Statistics, vol. 11, 2007.
[9] A. Hyvarinen and E. Oja, "Independent component analysis: Algorithms and applications," Neural Networks, vol. 13, pp. 411-430, 2000.
[10] D. Knowles and Z. Ghahramani, "Infinite sparse factor analysis and infinite independent components analysis," Lecture Notes in Computer Science, vol. 4666, p. 381, 2007.
[11] S. L. Scott, "Bayesian methods for hidden Markov models: Recursive computing in the 21st century," Journal of the American Statistical Association, vol. 97, pp. 337-351, Mar. 2002.
[12] J. Van Gael, Y. Saatci, Y. W. Teh, and Z. Ghahramani, "Beam sampling for the infinite hidden Markov model," in The 25th International Conference on Machine Learning, vol. 25, (Helsinki), 2008.
[13] M. J. Beal, Z. Ghahramani, and C. E. Rasmussen, "The infinite hidden Markov model," Advances in Neural Information Processing Systems, vol. 14, pp. 577-584, 2002.
[14] Z. Ghahramani, T. L. Griffiths, and P. Sollich, "Bayesian nonparametric latent feature models," Bayesian Statistics, vol. 8, 2007.
2,778 | 3,519 | Sequential effects: Superstition or rational behavior?
Angela J. Yu
Department of Cognitive Science
University of California, San Diego
[email protected]
Jonathan D. Cohen
Department of Psychology
Princeton University
[email protected]
Abstract
In a variety of behavioral tasks, subjects exhibit an automatic and apparently suboptimal sequential effect: they respond more rapidly and accurately to a stimulus
if it reinforces a local pattern in stimulus history, such as a string of repetitions or
alternations, compared to when it violates such a pattern. This is often the case
even if the local trends arise by chance in the context of a randomized design, such
that stimulus history has no real predictive power. In this work, we use a normative
Bayesian framework to examine the hypothesis that such idiosyncrasies may reflect the inadvertent engagement of mechanisms critical for adapting to a changing
environment. We show that prior belief in non-stationarity can induce experimentally observed sequential effects in an otherwise Bayes-optimal algorithm. The
Bayesian algorithm is shown to be well approximated by linear-exponential filtering of past observations, a feature also apparent in the behavioral data. We derive
an explicit relationship between the parameters and computations of the exact
Bayesian algorithm and those of the approximate linear-exponential filter. Since
the latter is equivalent to a leaky-integration process, a commonly used model
of neuronal dynamics underlying perceptual decision-making and trial-to-trial dependencies, our model provides a principled account of why such dynamics are
useful. We also show that parameter-tuning of the leaky-integration process is
possible, using stochastic gradient descent based only on the noisy binary inputs.
This is a proof of concept that not only can neurons implement near-optimal prediction based on standard neuronal dynamics, but that they can also learn to tune
the processing parameters without explicitly representing probabilities.
1 Introduction
One common error human subjects make in statistical inference is that they detect hidden patterns
and causes in what are genuinely random data. Superstitious behavior, or the inappropriate linking
of stimuli or actions with consequences, can often arise in such situations, something also observed
in non-human subjects [1, 2]. One common example in psychology experiments is that despite a
randomized experimental design, which deliberately de-correlate stimuli from trial to trial, subjects
pick up transient patterns such as runs of repetitions and alternations, and their responses are facilitated when a stimulus continues to follow a local pattern, and impeded when such a pattern is
violated [3]. It has been observed in numerous experiments [3?5], that subjects respond more accurately and rapidly if a trial is consistent with the recent pattern (e.g. AAAA followed by A, BABA
followed by B), than if it is inconsistent (e.g. AAAA followed by B, BABA followed by A). This
sequential effect is more prominent when the preceding run has lasted longer. Figure 1a shows reaction time (RT) data from one such experiment [5]. Error rates follow a similar pattern, reflecting
a true expectancy-based effect, rather than a shift in RT-accuracy trade-off.
A natural interpretation of these results is that local patterns lead subjects to expect a stimulus,
whether explicitly or implicitly. They readily respond when a subsequent stimulus extends the local
pattern, and are ?surprised? and respond less rapidly and accurately when a subsequent stimulus
violates the pattern. When such local patterns persist longer, the subjects have greater confidence in
[Figure 1 appears here: four panels (a-d) plotting RT (ms) or $1 - P(x_t \mid \mathbf{x}_{t-1})$ against the sixteen four-trial sequences (RARARARA... through RRRRRRRR/AAAAAAAA), for 1st and 2nd halves; panel (c) inset shows the prior $p_0(\gamma)$.]
Figure 1: Bayesian modeling of sequential effects. (a) Median reaction time (RT) from Cho et al
(2002) affected by recent history of stimuli, in which subjects are required to discriminate a small "o"
from a large "O" using button-presses. Along the abscissa are all possible four-trial sub-sequences,
in terms of repetitions (R) and alternations (A). Each sequence, read from top to bottom, proceeds
from the earliest stimulus progressively toward the present stimulus. As the effects were symmetric
across the two stimulus types, A and B, each bin contains data from a pair of conditions (e.g. RRAR
can be AAABB or BBBAA). RT was fastest when a pattern is reinforced (RRR followed by R,
or AAA followed by A); it is slowest when an "established" pattern is violated (RRR followed by
A, or AAA followed by R). (b) Assuming RT decreases with predicted stimulus probability (i.e.
RT increases with $1 - P(x_t \mid \mathbf{x}_{t-1})$, where $x_t$ is the actual stimulus seen), then FBM would predict
much weaker sequential effects in the second half (blue: 720 simulated trials) than in the first half
(red: 840 trials). (c) DBM predicts persistently strong sequential effects in both the first half (red:
840 trials) and second half (blue: 720 trials). Inset shows prior over $\gamma$ used; the same prior was also
used for the FBM in (b). $\alpha = .77$. (d) Sequential effects in behavioral data were equally strong in
the first half (red: 7 blocks of 120 trials each) and the second half (blue: 6 blocks of 120 trials each).
Green dashed line shows a linear transformation from the DBM prediction in probability space of
(c) into the RT space. The fit is very good given the errorbars (SEM) in the data.
the pattern, and are therefore more surprised and more strongly affected when the pattern is violated.
While such a strategy seems plausible, it is also sub-optimal. The experimental design consists of
randomized stimuli, thus all runs of repetitions or alternations are spurious, and any behavioral tendencies driven by such patterns are useless. However, compared to artificial experimental settings,
truly random sequential events may be rare in the natural environment, where the laws of physics and
biology dictate that both external entities and the observer's viewpoint undergo continuous transformations for the most part, leading to statistical regularities that persist over time on characteristic
timescales. The brain may be primed to extract such statistical regularities, leading to what appears
to be superstitious behavior in an artificially randomized experimental setting.
In section 2, we use Bayesian probability theory to build formally rigorous models for predicting
stimuli based on previous observations, and compare differentially complex models to subjects?
actual behavior. Our analyses imply that subjects assume statistical contingencies in the task to
persist over several trials but to be non-stationary on a longer time-scale, as opposed to being unknown
but fixed throughout the experiment. We are also interested in understanding how the computations
necessary for prediction and learning can be implemented by the neural hardware. In section 3, we
show that the Bayes-optimal learning and prediction algorithm is well approximated by a linear filter
that weighs past observations exponentially, a computationally simpler algorithm that also seems to
fit human behavior. Such an exponential linear filter can be implemented by standard models of
neuronal dynamics. We derive an explicit relationship between the assumed rate of change in the
world and the time constant of the optimal exponential linear filter. Finally, in section 4, we will
show that meta-learning about the rate of change in the world can be implemented by stochastic
gradient descent, and compare this algorithm with exact Bayesian learning.
2 Bayesian prediction in fixed and changing worlds
One simple internal model that subjects may have about the nature of the stimulus sequence in a
2-alternative forced choice (2AFC) task is that the statistical contingencies in the task remain fixed
throughout the experiment. Specifically, they may believe that the experiment is designed such that
there is a fixed probability $\gamma$, throughout the experiment, of encountering a repetition ($x_t = 1$) on
any given trial $t$ (thus probability $1 - \gamma$ of seeing an alternation $x_t = 0$). What they would then learn
[Figure 2 appears here: panels (a) FBM graphical model, (b) DBM graphical model, (c) posterior $p(\gamma \mid \mathbf{x}_t)$ over trials for the FBM, (d) posterior $p(\gamma_t \mid \mathbf{x}_t)$ over trials for the DBM.]
Figure 2: Bayesian inference assuming fixed and changing Bernoulli parameters. (a) Graphical
model for the FBM. $\gamma \in [0, 1]$, $x_t \in \{0, 1\}$. The numbers in circles show example values for the
variables. (b) Graphical model for the DBM. $p(\gamma_t \mid \gamma_{t-1}) = \alpha\,\delta(\gamma_t - \gamma_{t-1}) + (1 - \alpha)\,p_0(\gamma_t)$, where we assume the prior $p_0$ to be a Beta distribution. The numbers in circles show example values for the
variables. (c) Grayscale shows the evolution of posterior probability mass over $\gamma$ for FBM (darker
color indicates concentration of mass), given the sequence of truly random ($P(x_t) = .5$) binary
data (blue dots). The mean of the distribution, in cyan, is also the predicted stimulus probability:
$P(x_t = 1 \mid \mathbf{x}_{t-1}) = \langle \gamma \mid \mathbf{x}_{t-1} \rangle$. (d) Evolution of posterior probability mass for the DBM (grayscale)
and predictive probability $P(x_t = 1 \mid \mathbf{x}_{t-1})$ (cyan); they perpetually fluctuate with transient runs of
repetitions or alternations.
about the task over the time course of the experiment is the appropriate value of $\gamma$. We call this the
Fixed Belief Model (FBM). Bayes' Rule tells us how to compute the posterior:

$$p(\gamma \mid \mathbf{x}_t) \propto P(\mathbf{x}_t \mid \gamma)\, p(\gamma) = \gamma^{r_t + a - 1} (1 - \gamma)^{t - r_t + b - 1}$$

where $r_t$ denotes the number of repetitions observed so far (up to $t$), $\mathbf{x}_t$ is the set of binary
observations $(x_1, \dots, x_t)$, and the prior distribution $p(\gamma)$ is assumed to be a beta distribution:
$p(\gamma) = p_0(\gamma) = \mathrm{Beta}(a, b)$. The predicted probability of seeing a repetition on the next trial is
the mean of this posterior distribution: $P(x_{t+1} = 1 \mid \mathbf{x}_t) = \int \gamma\, p(\gamma \mid \mathbf{x}_t)\, d\gamma = \langle \gamma \mid \mathbf{x}_t \rangle$.
A more complex internal model that subjects may entertain is that the relative frequency of repetition (versus alternation) can undergo discrete changes at unsignaled times during the experimental
session, such that repetitions are more prominent at times, and alternation more prominent at other
times. We call this the Dynamic Belief Model (DBM), in which $\gamma_t$ has a Markovian dependence
on $\gamma_{t-1}$, so that with probability $\alpha$, $\gamma_t = \gamma_{t-1}$, and probability $1 - \alpha$, $\gamma_t$ is redrawn from a fixed
distribution $p_0(\gamma_t)$ (same Beta distribution as for the prior). The observation $x_t$ is still assumed to
be drawn from a Bernoulli process with rate parameter $\gamma_t$. Stimulus predictive probability is now
the mean of the iterative prior, $P(x_t = 1 \mid \mathbf{x}_{t-1}) = \langle \gamma_t \mid \mathbf{x}_{t-1} \rangle$, where

$$p(\gamma_t = \gamma \mid \mathbf{x}_{t-1}) = \alpha\, p(\gamma_{t-1} = \gamma \mid \mathbf{x}_{t-1}) + (1 - \alpha)\, p_0(\gamma_t = \gamma)$$
$$p(\gamma_t \mid \mathbf{x}_t) \propto P(x_t \mid \gamma_t)\, p(\gamma_t \mid \mathbf{x}_{t-1})$$
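Both models can be implemented with a simple discretization of $\gamma$. The sketch below (a grid approximation with a uniform $p_0$, not the authors' code) computes the predictive probability for the DBM, with the FBM recovered as the special case $\alpha = 1$:

```python
import numpy as np

def predictive_probs(x, alpha, grid=np.linspace(0.005, 0.995, 100)):
    """Predicted P(x_t = 1 | past) for the DBM (alpha < 1) or FBM (alpha = 1),
    using a discretized grid over gamma and a uniform p0 (a sketch)."""
    p0 = np.ones_like(grid) / len(grid)
    belief = p0.copy()
    preds = []
    for xt in x:
        prior_t = alpha * belief + (1 - alpha) * p0     # DBM mixing step
        preds.append(prior_t @ grid)                    # P(x_t = 1 | x_{t-1})
        like = grid if xt == 1 else (1 - grid)
        belief = like * prior_t                         # Bayes update
        belief /= belief.sum()
    return np.array(preds)
```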
Figures 2a;b illustrate the two graphical models. Figures 2c;d demonstrate how the two models respond differently to the exact same sequence of truly random binary observations ($\gamma = .5$). While
inference in FBM leads to less variable and more accurate estimate of the underlying bias as the
number of samples increases, inference in DBM is perpetually driven by local transients. Relating back to the experimental data, we plot the probability of not observing the current stimulus for
each type of 5-stimulus sequences in Figure 1 for (b) FBM and (c) DBM, since RT is known to
lengthen with reduced stimulus expectancy. Comparing the first half of a simulated experimental
session (red) with the second half (blue), matched to the number of trials for each subject, we see
that sequential effects significantly diminish in the FBM, but persist in the DBM. A re-analysis
of the experimental data (Figure 1d) shows that sequential effects also persist in human behavior,
confirming that Bayesian prediction based on a (Markovian) changeable world can account for behavioral data, while that based on a fixed world cannot. In Figure 1d, the green dashed line shows
that a linear transformation of the DBM sequential effect (from Figure 1c) is quite a good fit of the
behavioral data. It is also worth noting that in the behavioral data there is a slight overall preference
(shorter RT) for repetition trials. This is easily captured by the DBM by assuming $p_0(\gamma_t)$ to be
skewed toward repetitions (see Figure 1c inset). The same skewed prior cannot produce a bias in the
FBM, however, because the prior only figures into Bayesian inference once at the outset, and is very
quickly overwhelmed by the accumulating observations.
[Figure 3 appears here: panels (a) regression coefficients over trials with exponential fit, (b) coefficients from regressing the Bayesian $P_t$ with exponential fit, (c) optimal decay $\beta$ as a function of $\alpha$, (d) reconstructed vs. true $P(x_t = 1 \mid \mathbf{x}_{t-1})$, (e) RT (ms) vs. predicted repetition probability for repetition and alternation trials (Bayes and Exp), with inset $\log b/(1-b)$ vs. $b$.]
Figure 3: Exponential discounting a good descriptive and normative model. (a) For each of the
six subjects, we regressed RR on repetition trials against past observations, RR $\approx C + b_1 x_{t-1} + b_2 x_{t-2} + \dots$, where $x_\tau$ is assigned 0 if it was a repetition, and 1 if an alternation, the idea being that
recent repetition trials should increase expectation of repetition and decrease RR, and recent alternation should decrease expectation of repetition and increase RR on a repetition trial. Separately we
also regressed RR's on alternation trials against past observations (assigning 0 to alternation trials,
and 1 to repetitions). The two sets of coefficients did not differ significantly and were averaged
together (red: average across subjects, error bars: SEM). Blue line shows the best exponential fit to
these coefficients. (b) We regressed $P_t$ obtained from exact Bayesian DBM inference, against past
observations, and obtained a set of average coefficients (red); blue is the best exponential fit. (c) For
different values of $\alpha$, we repeat the process in (b) and obtain the best exponential decay parameter
$\beta$ (blue). Optimal $\beta$ closely tracks the 2/3 rule for a large range of values of $\alpha$. $\beta$ is .57 in (a),
so $\alpha = .77$ was used to generate (b). (d) Both the optimal exponential fit (red) and the 2/3 rule
(blue) approximate the true Bayesian $P_t$ well (green dashed line shows perfect match). $\alpha = .77$. For
smaller values of $\alpha$, the fit is even better; for larger $\alpha$, the exponential approximation deteriorates
(not shown). (e) For repetition trials, the greater the predicted probability of seeing a repetition
($x_t = 1$), the faster the RT, whether trials are categorized by Bayesian predictive probabilities (red:
$\alpha = .77$, $p_0 = \mathrm{Beta}(1.6, 1.3)$), or by linear exponential filtering (blue). For alternation trials, RTs
increase with increasing predicted probability of seeing a repetition. Inset: for the biases $b \in [.2, .8]$,
the log prior ratio (shift in the initial starting point, and therefore change in the distance to decision
boundary) is approximately linear.
3 Exponential filtering both normative and descriptive
While Bayes? Rule tells us in theory what the computations ought to be, the neural hardware may
only implement a simpler approximation. One potential approximation is suggested by related work
showing that monkeys? choices, when tracking reward contingencies that change at unsignaled
times, depend linearly on previous observations that are discounted approximately exponentially
into the past [6]. This task explicitly examines subjects? ability to track unsignaled statistical regularities, much like the kind we hypothesize to be engaged inadvertently in sequential effects.
First, we regressed the subjects? reward rate (RR) against past observations and saw that the linear
coefficients decay approximately exponentially into the past (Figure 3a). We define reward rate as
mean accuracy/mean RT, averaged across subjects; we thus take into account both effects in RT and
accuracy as a function of past experiences. We next examined whether there is also an element of
exponential discounting embedded in the DBM inference algorithm. Linear regression of the predictive probability $P_t \triangleq P(x_t = 1 \mid \mathbf{x}_{t-1})$, which should correlate positively with RR (since it correlates positively with accuracy and negatively with RT) against previous observations $x_{t-1}, x_{t-2}, \dots$
yields coefficients that also decay exponentially into the past (Figure 3b): $P_t \approx C + \eta \sum_{\tau=1}^{t-1} \beta^\tau x_{t-\tau}$.
Linear exponential filtering thus appears to be both a good descriptive model of behavior, and a good
normative model approximating Bayesian inference.
An obvious question is how this linear exponential filter relates to exact Bayesian inference, in
particular how the rate of decay relates to the assumed rate of change in the world (parameterized
by $\alpha$). We first note that the linear exponential filter has an equivalent iterative form:

$$P_t \triangleq P(x_t = 1 \mid \mathbf{x}_{t-1}) = C + \eta \sum_{\tau=1}^{t-1} \beta^\tau x_{t-\tau} = C(1 - \beta) + \eta\beta\, x_{t-1} + \beta P_{t-1}.$$
We then note that the nonlinear Bayesian update rule can also be written as:

$$P_{t+1} = \frac{1}{2}(1-\alpha) + x_t\,\alpha\,\frac{K_t - P_t^2}{P_t - P_t^2} + \alpha\,\frac{P_t - K_t}{1 - P_t} \;\approx\; \frac{1}{2}(1-\alpha) + \frac{1}{3}\alpha\, x_t + \frac{2}{3}\alpha\, P_t \qquad (1)$$
where $K_t \triangleq \langle \gamma_t^2 \mid \mathbf{x}_{t-1} \rangle$, and we approximate $P_t$ by its mean value $\langle P_t \rangle = 1/2$, and $K_t$ by its mean
value $\langle K_t \rangle = 1/3$. These expected values are obtained by expanding $P_t$ and $K_t$ in their iterative
forms and assuming $\langle P_t \rangle = \langle P_{t-1} \rangle$ and $\langle K_t \rangle = \langle K_{t-1} \rangle$, and also assuming that $p_0$ is the uniform
distribution. We verified numerically (data not shown) that this mean approximation is quite good for
a large range of $\alpha$ (though it gets progressively worse when $\alpha \to 1$, probably because the equilibrium
assumptions deviate farther from reality as changes become increasingly rare).
Notably, our calculations imply $\beta \approx \frac{2}{3}\alpha$, which makes intuitive sense, since slower changes should
result in longer integration time window, whereas faster changes should result in shorter memory.
Figure 3c shows that the best numerically obtained $\beta$ (by fitting an exponential to the linear regression coefficients) for different values of $\alpha$ (blue) is well approximated by the 2/3 rule (black dashed
line). For the behavioral data in Figure 3a, $\beta$ was found to be .57, which implies $\alpha = .77$; the simulated data in Figure 3b are in fact obtained by assuming $\alpha = .77$, hence the remarkably good fit
between data and model. Figure 3d shows that reconstructed Pt based on the numerically optimal
linear exponential filter (red) and the 2/3 rule (blue) both track the true Bayesian Pt very well.
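To make this comparison concrete, the sketch below runs both the exact DBM update (on a discretized grid over γ) and the 2/3-rule linear filter on the same random stimulus stream. It is an illustration only: the grid size, the uniform p0 (the behavioral fit above uses Beta(1.6, 1.3)), and the session length are assumptions, not the exact simulation settings used for the figures.

```python
import numpy as np

def dbm_predictive(x, alpha, n_grid=100):
    """Exact DBM inference on a grid over gamma; returns P_t = P(x_t=1 | x_{1..t-1})."""
    grid = np.linspace(0.005, 0.995, n_grid)
    p0 = np.full(n_grid, 1.0 / n_grid)             # uniform prior over gamma (an assumption)
    belief = p0.copy()
    P = np.empty(len(x))
    for t, xt in enumerate(x):
        prior = alpha * belief + (1 - alpha) * p0  # gamma redrawn from p0 w.p. 1 - alpha
        P[t] = prior @ grid                        # predictive probability of x_t = 1
        belief = prior * (grid if xt == 1 else 1 - grid)
        belief /= belief.sum()
    return P

def exp_filter(x, alpha):
    """2/3-rule linear filter: P_{t+1} = (1/2)(1-a) + (a/3) x_t + (2a/3) P_t."""
    P, Pt = np.empty(len(x)), 0.5
    for t, xt in enumerate(x):
        P[t] = Pt
        Pt = 0.5 * (1 - alpha) + alpha / 3 * xt + 2 * alpha / 3 * Pt
    return P

rng = np.random.default_rng(0)
x = (rng.random(5000) < 0.5).astype(int)           # fully randomized stimulus sequence
diff = np.abs(dbm_predictive(x, 0.77) - exp_filter(x, 0.77))
print("mean |exact - filter| =", diff.mean())      # small, as in Figure 3d
```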
In the previous section, we saw that exact Bayesian inference for the DBM is a good model of behavioral data. In this section, we saw that linear exponential filtering also seems to capture the data
well. To compare which of the two better explains the data, we need a more detailed account of how
stimulus history-dependent probabilities translate into reaction times. A growing body of psychological [7] and physiological data [8] supports the notion that some form of evidence integration up
to a fixed threshold underlies binary perceptual decision making, which both optimizes an accuracy-RT trade-off [9] and seems to be implemented in some form by cortical neurons [8]. The idealized,
continuous-time version of this, the drift-diffusion model (DDM), has a well characterized mean
stopping time [10], T_d = (z/A) tanh(Az/c²), where A and c are the mean and standard deviation of
unit-time fluctuation, and z is the distance between the starting point and decision boundary. The vertical
axis for the DDM is in units of log posterior ratio, log[P(s0 | x_t)/P(s1 | x_t)]. An unbiased (uniform) prior over s
implies a stochastic trajectory that begins at 0 and drifts until it hits one of the two boundaries ±z.
When the prior is biased at b ≠ .5, it has an additive effect in the log posterior ratio space and moves
the starting point to log(b/(1 − b)). For the relevant range of b (.2 to .8), the shift in starting point
is approximately linear in b (Figure 3e inset), so that the new distance to the boundary is approximately z + kb. Thus, the new mean decision time is ((z + kb)/A) tanh((Az + Akb)/c²). Typically in DDM models
of decision-making, the signal-to-noise ratio is small, i.e. A ≪ c, such that tanh is highly linear in
the relevant range. We therefore have T_d(b) ≈ z²/c² + (2zk/c²) b, implying that the change in mean decision
time is linear in the bias b, in units of probability.
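A small numerical check of this linearity claim, under assumed illustrative parameters (the values of A, c, z, and k below are not taken from the paper):

```python
import numpy as np

def ddm_mean_rt(b, A=0.05, c=1.0, z=1.0, k=0.2):
    """Mean DDM decision time when a prior bias b shifts the start point by ~k*b."""
    zb = z + k * b                                # linearized distance to the boundary
    return (zb / A) * np.tanh(A * zb / c**2)

for b in [0.2, 0.35, 0.5, 0.65, 0.8]:
    print(b, round(ddm_mean_rt(b), 4))
# With A << c, tanh(u) ~ u, so T_d(b) ~ z^2/c^2 + (2*z*k/c^2)*b: approximately linear in b.
```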
This linear relationship between RT and b was already borne out by the good fit between sequential
effects in behavioral data and for the DBM in Figure 1d. To examine this more closely, we run the
exact Bayesian DBM algorithm and the linear exponential filter on the actual sequences of stimuli
observed by the subjects, and plot median RT against predicted stimulus probabilities. In Figure 3e,
we see that for both the exact Bayesian (red) and exponential (blue) algorithms, RT's decrease on repetition stimuli when the predicted probability of a repetition increases; conversely, RT's increase on alternation trials when the predicted probability of a repetition increases (and therefore the predicted probability
of an alternation decreases). For both Bayesian inference and linear exponential filtering, the relationship between RT and stimulus probability is approximately linear. The linear fit in fact appears
better for the exponential algorithm than for exact Bayesian inference, which, conditioned on the DDM
being an appropriate model for binary decision making, implies that the former may be a better
model of sequential adaptation than exact Bayesian inference. Further experimentation is underway
to examine this prediction more carefully.
Another implication of the SPRT or DDM formulation of perceptual decision-making is that an incorrect prior bias, such as that due to sequential effects in a randomized stimulus sequence, induces a net
cost in accuracy (even though the RT effects wash out due to the linear dependence on prior bias).
The error rate with a bias x_0 in the starting point is 1/(1 + e^{2az}) − (1 − e^{−2ax_0})/(e^{2az} − e^{−2az}) [10], implying that the error rate rises
monotonically with bias in either direction. This is a quantitative characterization of our claim that
extraneous prior bias, such as that due to sequential effects, induces suboptimality in decision-making.
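The sketch below evaluates that error-rate expression, averaging over the two stimulus types (a bias toward one boundary helps congruent trials and hurts incongruent ones more); the parameter values a = z = 1 are illustrative assumptions:

```python
import numpy as np

def ddm_error_rate(x0, a=1.0, z=1.0):
    """DDM error rate with starting point x0 (formula from Bogacz et al. [10])."""
    return 1 / (1 + np.exp(2*a*z)) - (1 - np.exp(-2*a*x0)) / (np.exp(2*a*z) - np.exp(-2*a*z))

for x0 in [0.0, 0.1, 0.2, 0.3]:
    # average over the two stimulus types: the bias helps one and hurts the other
    avg = 0.5 * (ddm_error_rate(x0) + ddm_error_rate(-x0))
    print(x0, round(float(avg), 4))   # rises monotonically with |x0|
```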
[Figure 4 appears here; see caption below. Panels plot probability and the estimate of α against timesteps (0 to 5000), with legend α = 0, .4, .5, .6 and marginal posteriors p(α|x_t) and p(γ_t|x_t).]
Figure 4: Meta-learning about the rate of change. (a) Graphical model for exact Bayesian learning.
Numbers are example values for the variables. (b) Mean of posterior p(α|x_t) as a function of
timesteps, averaged over 30 sessions of simulated data, each set generated from different true values
of α (see legend; color-coded dashed lines indicate true α). Inset shows prior over α, p(α) =
Beta(17, 3). Time-course of learning is not especially sensitive to the exact form of the prior (not
shown). (c) Stochastic gradient descent with a learning rate of .01 produces estimates of α (thick
lines, width denotes SEM) that converge to the true values of α (dashed lines). The initial estimate of α,
before seeing any data, is .9. Learning is based on 50 sessions of 5000 trials for each value of α. (d)
Marginal posterior distributions over α (top panel) and γ_t (bottom panel) on a sample run, where
probability mass is color-coded: brighter color is more mass.
4 Neural implementation and learning
So far, we have seen that exponential discounting of the past not only approximates exact Bayesian
inference, but fits human behavioral data. We now note that it has the additional appealing property
of being equivalent to standard models of neuronal dynamics. This is because the iterative form
of the linear exponential filter in Equation 1 has a similar form to a large class of leaky integration
neuronal models, which have been used extensively to model perceptual decision-making on a relatively fast time-scale [8, 11-15], as well as trial-to-trial interactions on a slower time-scale [16-20].
It is also related to the concept of eligibility trace in reinforcement learning [21], which is important
for the temporal credit assignment problem of relating outcomes to states or actions that were responsible for them. Here, we provided the computational rationale for this exponential discounting
of the past: it approximates Bayesian inference under DBM-like assumptions.
Viewed as a leaky-integrating neuronal process, the parameters of Equation 1 have the following
semantics: (1/2)(1 − α) can be thought of as a constant bias, (1/3)α x_{t−1} as the feed-forward input, and
(2/3)α P_{t−1} as the leaky recurrent term. Equation 1 suggests that neurons utilizing a standard form
of integration dynamics can implement near-optimal Bayesian prediction under the non-stationary
assumption, as long as the relative contributions of the different terms are set appropriately. A natural
question to ask next is how neurons can learn to set the weights appropriately. We first note that x_t
is a sample from the distribution P(x_t | x_{t−1}). Since P(x_t | x_{t−1}) has the approximate linear form in
Equation 1, with dependence on a single parameter α, learning about near-optimal predictions can
potentially be achieved by estimating the value of α via the stochastic samples x_1, x_2, . . ..
We implement a stochastic gradient descent algorithm, in which α̂ is adjusted incrementally on each
trial in the direction of the gradient, which should bring α̂ closer to the true α:

α̂_t = α̂_{t−1} + ε (x_t − P̂_t) dP̂_t/dα̂,

where α̂_t is the estimate of α after observing x_t, and P̂_t is the estimate of P_t using the estimate
α̂_{t−1} (before seeing x_t). Figure 4c shows that learning via the binary samples is indeed possible: for
different true values of α (dashed lines) that generated different data sets, stochastic gradient descent
produced estimates α̂ that converge to the true values, or close to them (thick lines; widths denote
SEM estimated from 50 sessions of learning). A key challenge for future work is to clarify whether
and how the gradient, dP̂_t/dα̂, can be computed by neural machinery (perhaps approximately).
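A minimal sketch of this learner, propagating both P̂_t and its derivative dP̂_t/dα̂ through the 2/3-rule recursion; parameterizing the filter directly by α in this way is our assumption, since the text leaves the neural parameterization open:

```python
import numpy as np

rng = np.random.default_rng(1)

def dbm_sequence(alpha, T=5000):
    """Generate one session from the DBM: gamma is redrawn from U(0,1) w.p. 1 - alpha."""
    g, x = rng.random(), np.empty(T, dtype=int)
    for t in range(T):
        if rng.random() > alpha:
            g = rng.random()
        x[t] = rng.random() < g
    return x

def sgd_alpha(x, eps=0.01, alpha0=0.9):
    """Delta-rule estimate of alpha via the binary samples x_1, x_2, ..."""
    a, P, g = alpha0, 0.5, 0.0
    for xt in x:
        a = min(max(a + eps * (xt - P) * g, 0.0), 1.0)     # a_t = a_{t-1} + eps (x_t - P_t) dP_t/da
        g = -0.5 + xt / 3 + (2 / 3) * P + (2 / 3) * a * g  # dP_{t+1}/da from the recursion
        P = 0.5 * (1 - a) + a / 3 * xt + 2 * a / 3 * P     # P_{t+1}
    return a

print(round(sgd_alpha(dbm_sequence(0.77)), 3))  # expected to settle near the true alpha = .77
```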
For comparison, we also implement the exact Bayesian learning algorithm, which augments the
DBM architecture by representing α as a hidden variable instead of a fixed known parameter:

p(α, γ_t | x_t) ∝ p(α | x_{t−1}) P(x_t | γ_t) p(γ_t | α, x_{t−1}).

Figure 4a illustrates this augmented model graphically. Figure 4b shows the evolution of the mean
of the posterior distribution over α, or ⟨α | x_t⟩. Based on sets of 30 sessions of 5000 trials, generated
from each of four different true values of α, the mean value of α under the posterior distribution
tends toward the true α over time. The prior we assume for α is a beta distribution (Beta(17, 3),
shown in the inset of Figure 4b).
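A grid-based sketch of this augmented model: for each candidate α we propagate p(γ_t | α, x_{1:t−1}) and weight α by the per-trial evidence. The grid resolutions and the test sequence in the usage lines are assumptions; the Beta(17, 3) prior follows the caption of Figure 4b.

```python
import numpy as np

def alpha_posterior(x, n_a=49, n_g=50, prior_a=(17, 3)):
    """Joint inference p(alpha, gamma_t | x_{1..t}) on a grid, returning p(alpha | data)."""
    alphas = np.linspace(0.01, 0.99, n_a)
    gammas = np.linspace(0.005, 0.995, n_g)
    w = alphas ** (prior_a[0] - 1) * (1 - alphas) ** (prior_a[1] - 1)
    w /= w.sum()                                  # p(alpha) ~ Beta(17, 3) on the grid
    p0 = np.full(n_g, 1.0 / n_g)                  # reset distribution over gamma
    G = np.tile(p0, (n_a, 1))                     # p(gamma | alpha, data so far)
    for xt in x:
        G = alphas[:, None] * G + (1 - alphas[:, None]) * p0   # transition
        G *= gammas if xt == 1 else 1 - gammas    # likelihood of x_t
        ev = G.sum(axis=1)                        # per-alpha evidence P(x_t | alpha, past)
        G /= ev[:, None]
        w *= ev
        w /= w.sum()
    return alphas, w

x = (np.random.default_rng(2).random(2000) < 0.5).astype(int)
a, w = alpha_posterior(x)
print("posterior mean of alpha:", round(float(a @ w), 3))
```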
Compared to exact Bayesian learning, stochastic gradient descent has a similar learning rate. But
larger values of α (e.g. α = .6) tend to be under-estimated, possibly because the analytical
approximation for β is itself an under-estimate for larger α. For data that were generated from a fixed
Bernoulli process with rate .5, an equally appropriate model is the DBM with α = 0: stochastic
gradient descent produced estimates of α (thick red line) that converge to 0 on the order of 50000
trials (details not shown). Figure 4d shows that the posterior inference about α and γ_t undergoes
distinct phases when the true α = 0 and there is no correlation between one timestep and the next.
There is an initial phase where marginal posterior mass for α tends toward high values of α, while
marginal posterior mass for γ_t fluctuates around .5. Note that this combination is an alternative,
equally valid generative model for a completely randomized sequence of inputs. However, this joint
state is somehow unstable, and α tends toward 0 while γ_t becomes broad and fluctuates wildly. This
is because as the inferred α gets smaller, there is almost no information about γ_t from past observations;
thus the marginal posterior over γ_t tends to be broad (high uncertainty) and fluctuates along with
each data point. α can only decrease slowly because so little information about the hidden variables
is obtained from each data point. For instance, it is very difficult to infer from what is believed
to be an essentially random sequence whether the underlying Bernoulli rate really tends to change
once every 1.15 trials or 1.16 trials. This may explain why subjects show no diminished sequential
effects over the course of a few hundred trials (Figure 1d). While the stochastic gradient results
demonstrate that, in principle, the correct values of α can be learned via the sequence of binary
observations x_1, x_2, . . . , further work is required to demonstrate whether and how neurons could
implement the stochastic gradient algorithm or an alternative learning algorithm.
5 Discussion
Humans and other animals constantly have to adapt their behavioral strategies in response to changing environments: growth or shrinkage in food supplies, development of new threats and opportunities, gross changes in weather patterns, etc. Accurate tracking of such changes allows the animals to
adapt their behavior in a timely fashion. Subjects have been observed to readily alter their behavioral
strategy in response to recent trends of stimulus statistics, even when such trends are spurious. While
such behavior is sub-optimal for certain behavioral experiments, which interleave stimuli randomly
or pseudo-randomly, it is appropriate for environments in which changes do take place on a slow
timescale. It has been observed, in tasks where statistical contingencies undergo occasional and
unsignaled changes, that monkeys weigh past observations linearly but with decaying coefficients
(into the past) in choosing between options [6]. We showed that human subjects behave very similarly in 2AFC tasks with randomized design, and that such discounting gives rise to the frequently
observed sequential effects found in such tasks [5]. We showed that such exponential discounting approximates optimal Bayesian inference under assumptions of statistical non-stationarity, and derived
an analytical, approximate relationship between the parameters of the optimal linear exponential filter and the statistical assumptions about the environment. We also showed how such computations
can be implemented by leaky integrating neuronal dynamics, and how the optimal tuning of the
leaky integration process can be achieved without explicit representation of probabilities.
Our work provides a normative account of why exponential discounting is observed in both stationary and non-stationary environments, and how it may be implemented neurally. The relevant
neural mechanisms seem to be engaged both in tasks when the environmental contingencies are
truly changing at unsignaled times, and also in tasks in which the underlying statistics are stationary but chance patterns masquerade as changing statistics (as seen in sequential effects). This work
bridges and generalizes previous descriptive accounts of behavioral choice under non-stationary task
conditions [6], as well as mechanistic models of how neuronal dynamics give rise to trial-to-trial interactions such as priming or sequential effects [5, 13, 18-20]. Based on the relationship we derived
between the rate of behavioral discounting and the subjects' implicit assumptions about the rate of
environmental changes, we were able to "reverse-engineer" the subjects' internal assumptions. Subjects appear to assume α = .77, or changing about once every four trials. This may have implications
for understanding why working memory has the observed capacity of 4-7 items.
In a recent human fMRI study [22], subjects appeared to have different learning rates in two phases
of slower and faster changes, but notably the first phase contained no changes, while the second
phase contained frequent ones. This is a potential confound, as it has been observed that adaptive
responses change significantly upon the first switch but then settle into a more stable regime [23]. It
is also worth noting that different levels of sequential effects/adaptive response appear to take place
at different time-scales [4, 23], and different neural areas seem to be engaged in processing different
types of temporal patterns [24]. In the context of our model, it may imply that there is sequential
adaptation happening at different levels of processing (e.g. sensory, cognitive, motor), and their
different time-scales may reflect different characteristic rates of change at these different levels.
A related issue is that the brain need not have an explicit representation of the rate of environmental
changes, which is implicitly encoded in the "leakiness" of neuronal integration over time. This is
consistent with the observation of sequential effects even when subjects are explicitly told that the
stimuli are random [4]. An alternative explanation is that subjects do not have complete faith in the
experimenter's instructions [25]. Further work is needed to clarify these issues.
We used both a computationally optimal Bayesian learning algorithm, and a simpler stochastic gradient descent algorithm, to learn the rate of change (1 − α). Both algorithms were especially slow at
learning the case when α = 0, which corresponds to truly randomized inputs. This implies that completely random statistics are difficult to internalize, when the observer is searching over a much larger
hypothesis space that contains many possible models of statistical regularity, which can change over
time. This is consistent with previous work [26] showing that discerning "randomness" from binary
observations may require surprisingly many samples, when statistical regularities are presumed to
observations may require surprisingly many samples, when statistical regularities are presumed to
change over time. Although this earlier work used a different model for what kind of statistical
regularities are allowed, and how they change over time (temporally causal and Markovian in ours,
an acausal correlation function in theirs), as well as the nature of the inference task (on-line in our
setting, and off-line in theirs), the underlying principles and conclusions are similar: it is very difficult to discriminate a truly randomized sequence, which by chance would contain runs of repetitions
and alternations, from one that has changing biases for repetitions and alternations over time.
References
[1] Skinner, B. F. (1948). J. Exp. Psychol. 38: 168-72.
[2] Ecott, C. L. & Critchfield, T. S. (2004). J. App. Beh. Analysis 37: 249-65.
[3] Laming, D. R. J. (1968). Information Theory of Choice-Reaction Times. Academic Press, London.
[4] Soetens, E., Boer, L. C., & Hueting, J. E. (1985). JEP: HPP 11: 598-616.
[5] Cho, R., et al. (2002). Cognitive, Affective, & Behavioral Neurosci. 2: 283-99.
[6] Sugrue, L. P., Corrado, G. S., & Newsome, W. T. (2004). Science 304: 1782-7.
[7] Smith, P. L. & Ratcliff, R. Trends Neurosci. 27: 161-8.
[8] Gold, J. I. & Shadlen, M. N. (2002). Neuron 36: 299-308.
[9] Wald, A. & Wolfowitz, J. (1948). Ann. Math. Statist. 19: 326-39.
[10] Bogacz, R., et al. (2006). Psychological Review 113: 700-65.
[11] Cook, E. P. & Maunsell, J. H. R. (2002). Nat. Neurosci. 5: 985-94.
[12] Grice, G. R. (1972). Perception & Psychophysics 12: 103-7.
[13] McClelland, J. L. Attention & Performance XIV: 655-88. MIT Press.
[14] Smith, P. L. (1995). Psychol. Rev. 10: 567-93.
[15] Yu, A. J. (2007). Adv. in Neur. Info. Proc. Systems 19: 1545-52.
[16] Dayan, P. & Yu, A. J. (2003). IETE J. Research 49: 171-81.
[17] Kim, C. & Myung, I. J. (1995). 17th Ann. Meeting of Cog. Sci. Soc.: 472-7.
[18] Mozer, M. C., Colagrosso, M. D., & Huber, D. E. (2002). Adv. in Neur. Info. Proc. Systems 14: 51-57.
[19] Mozer, M. C., Kinoshita, S., & Shettel, M. (2007). Integrated Models of Cog. Sys.: 180-93.
[20] Simen, P., Cohen, J. D., & Holmes, P. (2006). Neur. Netw. 19: 1013-26.
[21] Sutton, R. S. & Barto, A. G. (1998). Reinforcement Learning: An Introduction. MIT Press.
[22] Behrens, T. E. J., Woolrich, M. W., Walton, M. E., & Rushworth, M. F. S. (2007). Nat. Neurosci. 10: 1214-21.
[23] Kording, K. P., Tenenbaum, J. B., & Shadmehr, R. (2007). Nat. Neurosci. 10: 779-86.
[24] Huettel, S. A., Mack, P. B., & McCarthy, G. (2002). Nat. Neurosci. 5: 485-90.
[25] Hertwig, R. & Ortmann, A. (2001). Behavioral & Brain Sciences 24: 383-403.
[26] Bialek, W. (2005). Preprint q-bio.NC/0508044, Princeton University.
Kohonen Networks and Clustering: Comparative
Performance in Color Clustering
Wesley Snyder
Department of Radiology
Bowman Gray School of
Medicine
Wake Forest University
Winston-Salem, NC 27103
Daniel Nissman, David Van den Bout,
and Griff Bilbro
Center for Communications and Signal Processing
North Carolina State University
Raleigh, NC 27695
Abstract
The problem of color clustering is defined and shown to be a problem of
assigning a large number (hundreds of thousands) of 3-vectors to a
small number (256) of clusters. Finding those clusters in such a way that
they best represent a full color image using only 256 distinct colors is a
burdensome computational problem. In this paper, the problem is solved
using "classical" techniques -- k-means clustering, vector quantization
(which turns out to be the same thing in this application), competitive
learning, and Kohonen self-organizing feature maps. Quality of the
result is judged subjectively by how much the pseudo-color result
resembles the true color image, by RMS quantization error, and by run
time. The Kohonen map provides the best solution.
1 INTRODUCTION
"Clusteringn , "vector quantization", and "unsupervised learning" are all words which
descn'be the same process: assigning a few exemplars to represent a large set of samples.
Perfonning that process is the subject of a substantial body of literature. In this paper, we
are concerned with the comparison of various clustering techniques to a particular, practical application: color clustering.
The color clustering problem is as follows: an image is recorded in full color -- that is,
three components, RED, GREEN, and BLUE, each of which has been measured to 8 bits
of precision. Thus, each pixel is a 24 bit quantity. We must find a representation in which
256³ possible colors are represented by only 8 bits per pixel. That is, for a problem with
256000 variables (512 x 512) variables, assign each variable to one of only 256 classes.
The color clustering problem is currently of major economic interest since millions of display systems are sold each year which can only store 8 bits per pixel, but on which users
would like to be able to display "true" color (or at least as near true color as possible).
In this study, we have approached the problem using the standard techniques from the literature (including k-means -- ISODATA clustering [1,3,6], LBG [4]), competitive learning
(referred to as CL herein) [2], and Kohonen feature maps [5,7,9]. The Kohonen feature map
(referred to as KFM herein) was found to win "hands down", providing both the best quality image
(subjectively) and objectively (based on quantization error), as well as the fastest run times.
2 BACKGROUND -- METHODS TESTED
In almost all clustering algorithms, we begin with some (usually ad-hoc) determination of initial
cluster centers. The number of such centers generally remains the same, although some algorithms
(e.g. ISODATA [10]) allow the number to evolve through the running of the algorithm. In this work,
we know that we want to find 256 distinct clusters. The basic idea behind most of these methods is
to update the cluster closest to the current data point by moving it some small increment toward
that data point. After the data has been presented to the algorithm sufficiently often, the clusters
should converge to the real cluster means. Typically, one has to cycle through the training set several times (sometimes a large number of times) to get an acceptable solution. Each run through the
training set is termed an epoch.
2.1 K-MEANS
The well-known [6] k-means algorithm for clustering is as follows (see [10] for a tutorial explanation); a minimal sketch in code follows the numbered steps.
1. Begin with an arbitrary assignment of samples to clusters or begin with an arbitrary set of cluster centers and assign samples to nearest centers.
2. Compute the sample mean of each cluster.
3. Reassign each sample to the cluster with the nearest mean.
4. If the classification of all samples has not changed, stop; else go to step 2.
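Here is a minimal sketch of steps 1-4; the random initialization and fixed iteration cap are two of several choices the text allows:

```python
import numpy as np

def kmeans(X, k, iters=100, seed=0):
    """Basic k-means: assign samples to nearest centers, recompute means, repeat."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].astype(float)  # step 1
    labels = np.full(len(X), -1)
    for _ in range(iters):
        d = ((X[:, None, :].astype(float) - centers[None, :, :]) ** 2).sum(-1)
        new_labels = d.argmin(axis=1)              # step 3: nearest mean
        if np.array_equal(new_labels, labels):
            break                                  # step 4: stop when unchanged
        labels = new_labels
        for j in range(k):                         # step 2: sample mean of each cluster
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return centers, labels
```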
2.2 LBG VECTOR QUANTIZATION
In this method, 256 colors are picked randomly from the scene. These are referred to as the "codebook". Each pixel in the image is then assigned to the "nearest" entry in the codebook. After
assignment of all pixels, the mean of each bin¹ is calculated. If the difference between the codebook entry and the mean of the corresponding bin is below threshold for all entries, the "optimal"
codebook has been located. In [4], the algorithm is shown to work for a large variety of distance
functions; however, for applications (such as this one) where the Euclidean metric is most appropriate, the algorithm becomes identical to k-means. In [8], results similar to those we found are
reported in the color clustering problem.
2.3 KOHONEN MAPS AND COMPETITIVE LEARNING
In competitive learning algorithms, data examples are presented sequentially to the system. The
cluster center most similar to the data example is determined, and that center is moved slightly
toward the example.
¹ That is, all the pixels assigned to that entry in the codebook.
The update rule for competitive learning can be described as follows:

w_i ← w_i + η (x − w_i),    (EQ 1)

where w_i is the weight vector (or mean) corresponding to cluster i, x is the current data example, and η is the learning parameter
(typically on the order of 0.01).
In the case of Kohonen maps, however, the algorithm is slightly more complicated. All clusters are
connected to each other according to a topological map. When the closest cluster to a data point
(the primary cluster) is updated, so are its immediate neighbors (the proximity clusters) in terms of
the topological map. In feature space, it is possible, initially, for the neighbors of the primary cluster to not be its topological neighbors. By the nature of the update rule, the neighbors of the primary
cluster in topological space will become its neighbors in feature space after some period of time.
This is very desirable for applications in which a minimum distance between related clusters is
desired (the Traveling Salesman Problem, for example).
Often, it is the case that a single cluster is chosen much of the time, if not all of the time, because of
the order in which data is presented and the manner in which the clusters are initialized. In order to
make clustering work in a practical context, one needs to include a term in the distance calculation
which reduces the probability of updating an often-used cluster. Such a term is called the conscience [2]. Its effect is to increase the effective distance of a cluster from a data point. An alternative approach to the use of a conscience is to increment a counter for each cluster which has been
passed over for updating and then subtract some multiple of this counter from the calculated distance. We call this the loneliness term, and used it because the implementation turned out to be
more convenient, and the performance similar to that of conscience.
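A sketch of EQ 1 with the loneliness counter; the multiplier lam and the initialization here are assumptions, not values from the text:

```python
import numpy as np

def competitive_learning(X, k, eta=0.01, lam=1e-3, epochs=5, seed=0):
    """EQ 1 plus the loneliness term: clusters passed over for updating accumulate
    a counter that is subtracted (times lam) from their calculated distance."""
    rng = np.random.default_rng(seed)
    W = X[rng.choice(len(X), size=k, replace=False)].astype(float)
    lonely = np.zeros(k)
    for _ in range(epochs):
        for x in X[rng.permutation(len(X))].astype(float):
            d = ((W - x) ** 2).sum(axis=1) - lam * lonely  # biased distance
            i = int(d.argmin())
            W[i] += eta * (x - W[i])                       # EQ 1
            lonely += 1
            lonely[i] = 0                                  # the winner's counter resets
    return W
```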
For KFM, the primary cluster is updated as indicated in Eqn. 1. The proximity clusters are updated
in a similar fashion:

w_j ← w_j + F(η, d_ij) (x − w_j),    (EQ 2)

where w_j is the weight vector corresponding to the proximity cluster j, d_ij is the topological distance
between clusters i and j, and F(η, d_ij) is some decreasing function of the distance between i and j
with a maximum at η.
3 Application to Color Clustering
Making no assumptions concerning the input image, we chose an appropriate topology for the
KFM algorithm which would easily lend itself to describing a uniform distribution of colors in
RGB space. Such a distribution is a rectangular solid in the 3-D color space. We chose the dimensions of this block to be 6x7x6 -- corresponding to 252 clusters mther than the 256 allowable -under the assumption that the omission of those four clusters would not make a perceptible difference. The clusters were initialized as a small block positioned at the center of RGB space with the
long axis in the green direction. This orientation was chosen because human eyes are most sensitive
to green wavelengths and, hence, more resolution may be required along this axis. The exact initial
orientation does not matter in the final solution, but was chosen to aid in speed of convergence.
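A sketch of the map just described: a 6x7x6 grid of RGB centers trained with the η values reported in Section 4 (0.05 for the primary cluster, 0.00625 for its six immediate topological neighbors). The initialization scaling is an assumption; only its shape (a small central block with its long axis on green) follows the text.

```python
import numpy as np

def kfm_color_map(pixels, dims=(6, 7, 6), eta=0.05, eta_prox=0.00625, epochs=10, seed=0):
    """Kohonen feature map over RGB: EQ 1 for the winner, EQ 2 (constant F = eta_prox
    at topological distance 1) for its six immediate neighbors."""
    rng = np.random.default_rng(seed)
    grid = np.array([(i, j, l) for i in range(dims[0])
                     for j in range(dims[1]) for l in range(dims[2])], dtype=float)
    W = 128.0 + (grid - np.array(dims) / 2.0) * np.array([2.0, 4.0, 2.0])
    for _ in range(epochs):
        for x in pixels[rng.permutation(len(pixels))].astype(float):
            i = int(((W - x) ** 2).sum(axis=1).argmin())    # primary cluster
            W[i] += eta * (x - W[i])
            nbrs = np.abs(grid - grid[i]).sum(axis=1) == 1  # topological distance 1
            W[nbrs] += eta_prox * (x - W[nbrs])
    return W

# usage sketch: W = kfm_color_map(image.reshape(-1, 3))
```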
In an attempt to significantly speed up training, each data point was assigned to one of the eight
subcubes of RGB space, and then only a specified subset of clusters was searched for an appropriate candidate for updating. The clusters were subdivided, roughly, into eight subcubes as well. The
effect of this is to decrease training time by approximately a factor of eight. Also, in the interest of
processing time, only the six most immediate topological neighbors (those with a topological distance of one from the primary cluster) were updated. This same heuristic was applied for both CL
and KFM experiments.
4 RESULTS
We applied all the techniques discussed, in various implementations, to actual color images, including, in particular, pictures of faces. Although also tested on larger images, all times given in this
report are against a baseline case of a 128x128 image: three bands of input (red, green, blue -- 8 bits
each), and one band (8 bits) of output, plus a lookup table output indicating what 24 bit color each
of the 8 bit pattern represented. Given sufficient training, all the techniques produced pseudo-color
images which were extremely lifelike. Comparing the images closely on a CRT, a trained observer
will note variations in the color rendering, particularly in sparse colors (e.g. blue eyes in a facial
scene), and will also observe color contouring. However, these details are subtle, and are not easily
reproducible in a conference proceedings. Map files and corresponding images were generated for
5, 10, and 15 epochs using η = 0.05 and proximity η = 0.00625. Direct comparisons were made
between Kohonen feature maps, competitive learning, and the results from k-means (and the LBG
formulation of k-means). For the training runs using competitive learning, all clusters were initialized to random values within the unit sphere located in the center of RGB space. The conscience
concept was used here.
In this section all timing comparisons are done on a Microvax 2000, although we have also run
many of the same programs on a Decstation. The Decstation typically runs 10-15 times as fast as
the Microvax. In order to compare techniques fairly, all timing is reported for the same image.
4.1 K-MEANS AND LBG EXPERIMENTS
The performance of k-means and LBG algorithms were strongly dependent on how long they were
allowed to run. After approximately 90 minutes of execution of k-means, the results were as good
(subjectively) as from Kohonen maps. In different experiments, k-means was started from the following initial configurations:
1. 256 points randomly (uniformly) distributed over RGB space
2. The 256 points on the main diagonal of color space (red=green=blue)
3. A uniform (3D) grid spread over RGB
4. Uniformly distributed over the surface of the color cube
5. Randomly distributed near the origin
987
988
Snyder, Nissman, Vcm den Bout, and Bilbro
Choice 2 gave the best overall performance, where "best" is determined by the time required to
converge to a point where the resulting image looked "equally good" subjectively. K-means
required 87 minutes to reach this standard quality, although it took 9 hours to completely converge
(until no cluster center moved more than .5 units in one iteration).
4.2 EXPERIMENTS ON KOHONEN AND COMPETITIVE LEARNING
KFM gave an excellent rendering of color images. In particular, blue eyes were rendered extremely
well in images of faces. Depending on the value of the conscience parameter, the competitive learning algorithm tended to render blue eyes as brown, since the dominant skin tones in facial images
are shades of brown.
Speed comparisons (all of these runs were done on MicroVAXen):

Algorithm     Total time    Time/epoch
Kohonen       15:42         1:34
CompLearn     8:38          :52

Converting the image took 1:34 for Kohonen and 4:16 for competitive learning.
The subjective judgments of picture quality were made using the 10 epoch case of Kohonen maps
as a reference. To quantitatively compare the performance of Kohonen maps and competitive learning, we computed the RMS color error:

E = Σ_i ‖v_i − c_i‖²,    (EQ 3)

where v_i is the actual color 3-vector at pixel i, and c_i is the color represented by the mean of the
cluster to which pixel i is currently assigned. Plotting E vs. epoch number for both Kohonen and
competitive learning, we find the results in the figure below.
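In code, EQ 3 looks as follows; note the exact normalization is not recoverable from the original scan, so the summed form below is an assumption chosen to match the magnitudes plotted in the next figure:

```python
import numpy as np

def quantization_error(pixels, centers, labels):
    """E of EQ 3: summed squared distance between each pixel's color v_i and the
    center c_i of its assigned cluster."""
    diff = pixels.astype(float) - centers[labels]
    return float((diff ** 2).sum())
```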
[Figure: quantization error versus training epochs (0 to 50), on a scale from 1e+06 to 1e+07. The competitive network's error levels off near the top of the range, while the Kohonen network's error drops toward the bottom.]
It is clear from this figure that the KFM network converges more rapidly to a stable solution with
much lower error than does the competitive network. Such figures can be deceiving in image processing, however, since RMS error is a notoriously bad quality measure (small regions may have
very large errors in order to make the overall average error low). In this case, however, the
Kohonen map preserves the accuracy of color rendering in small regions quite well.
To evaluate the sensitivity to initial cluster center choices, both competitive learning and KFM
were applied with different choices of centers. We found that competitive learning often converged
to undesirable renderings, whereas KFM always yielded a good solution, even when the initial centers were all at (0,0,0).
5 DISCUSSION
The quality of rendering attained by these algorithms is due to the nature of facial images. There is
a great deal of density in the flesh colored region and a comparatively smaller, but nonetheless sizable, amount in the background colors. The competitive learning algorithm found these high density
examples of blue to be trained on and hence the algorithm was swai11ped by the high density
regions. If one let the competitive learning algorithm run for a large nwnber of epochs, it eventually
found the blue cluster. The assignment of clusters to subdivisions of feature space guarantees that
no region of the image was particularly emphasized, therefore allowing clusters that were solely
influenced by less represented colors. However, this can also "waste" clusters in regions where
there are few examples.
Furthermore, the topological structure of the Kohonen map allows one to make certain assumptions
to speed up the algorithm.
Despite a minor penalty in computational speed per epoch, the Kohonen algorithm produces the
image with the least error in the least amount of time. With appropriate choice of parameters, the
clustered image becomes indistinguishable from the original in less than ten epochs, for essentially
arbitrary initial conditions (as opposed to competitive learning). The other clustering techniques
require significantly longer times.
6 REFERENCES
1. G. H. Ball and D. J. Hall, "ISODATA, A Novel Method of Data Analysis and Pattern Classification" SRI Technical Report (NTIS AD699616), Stanford, CA, 1965
2. D. DeSieno, "Adding a Conscience to Competitive Learning", International Conference On
Neural Networks, Vol. 1, pp. 117-124, 1988
3. K. Fukunaga, Introduction to Pattern Recognition, Academic Press, Orlando FL, 1972
4. Y. Linde, A. Buzo, and R. Gray, "An Algorithm for Vector Quantizer Design", IEEE Trans.
Com., Vol. COM-28, No. 1, pp. 84-95, Jan. 1980
5. T. Kohonen, "Self-Organized Formation of Topologically Correct Feature Maps", Biological
Cybernetics, 43:56-69,1982
6. J. Mac Queen "Some Methods for Classification and Analysis of Multivariate Observations",
Proc. 5th Berkeley Symposium, 1, pp. 281-297,1967
7. N. Nasrabadi and Y. Feng, "Vector Quantization of Images Based upon Kohonen Self-organizing Feature Maps", IEEE International Conference on Neural Networks, Vol. 1, pp. 101-108,
1986
8. H. Potlapalli, M. Jaisimha, H. Barad, A. Martinez, M. Lohrenz, J. Ryan, and J. Pollard, "Classification Techniques for Digital Map Compression", 21st Southeastern Symposium on System
Theory, pp. 268-272, Tallahassee, FL, 1989
9. H. Ritter and K. Schulten, "Kohonen Self-organizing Maps: Exploring their Computational
Capabilities" IEEE International Conference on Neural Networks, Vol. 1, pp. 109-116,1988
10. C. W. Therrien, Design, Estimation, and Classification, Wiley, NY, 1989
An Extended Level Method for
Efficient Multiple Kernel Learning
Zenglin Xu†    Rong Jin‡    Irwin King†    Michael R. Lyu†
† Dept. of Computer Science & Engineering, The Chinese University of Hong Kong, Shatin, N.T., Hong Kong; {zlxu, king, lyu}@cse.cuhk.edu.hk
‡ Dept. of Computer Science & Engineering, Michigan State University, East Lansing, MI, 48824; [email protected]
Abstract
We consider the problem of multiple kernel learning (MKL), which can be formulated as a convex-concave problem. In the past, two efficient methods, i.e.,
Semi-Infinite Linear Programming (SILP) and Subgradient Descent (SD), have
been proposed for large-scale multiple kernel learning. Despite their success, both
methods have their own shortcomings: (a) the SD method utilizes the gradient of
only the current solution, and (b) the SILP method does not regularize the approximate solution obtained from the cutting plane model. In this work, we extend
the level method, which was originally designed for optimizing non-smooth objective functions, to convex-concave optimization, and apply it to multiple kernel
learning. The extended level method overcomes the drawbacks of SILP and SD
by exploiting all the gradients computed in past iterations and by regularizing the
solution via a projection to a level set. Empirical study with eight UCI datasets
shows that the extended level method can significantly improve efficiency by saving on average 91.9% of computational time over the SILP method and 70.3%
over the SD method.
1 Introduction
Kernel learning [5, 9, 7] has received a lot of attention in recent studies of machine learning. This
is due to the importance of kernel methods in that kernel functions define a generalized similarity
measure among data. A generic approach to learning a kernel function is known as multiple kernel
learning (MKL) [5]: given a list of base kernel functions/matrices, MKL searches for the linear combination of base kernel functions which maximizes a generalized performance measure. Previous
studies [5, 14, 13, 4, 1] have shown that MKL is usually able to identify appropriate combination of
kernel functions, and as a result to improve the performance.
A variety of methods have been used to create base kernels. For instance, base kernels can be created
by using different kernel functions; they can also be created by using a single kernel function but
with different subsets of features. As for the performance measures needed to find the optimal kernel function, several measures have been studied for multiple kernel learning, including maximum
margin classification errors [5], kernel-target alignment [4], and Fisher discriminative analysis [13].
The multiple kernel learning problem was first formulated as a semi-definite programming (SDP)
problem by [5]. An SMO-like algorithm was proposed in [2] in order to solve medium-scale problems. More recently, a Semi-Infinite Linear Programming (SILP) approach was developed for
MKL [12]. SILP is an iterative algorithm that alternates between the optimization of kernel weights
and the optimization of the SVM classifier. In each step, given the current solution of kernel weights,
it solves a classical SVM with the combined kernel; it then constructs a cutting plane model for the
objective function and updates the kernel weights by solving a corresponding linear programming
problem. Although the SILP approach can be employed for large scale MKL problems, it often suffers from slow convergence. One shortcoming of the SILP method is that it updates kernel weights
solely based on the cutting plane model. Given that a cutting plane model usually differs significantly from the original objective function when the solution is far away from the points where
the cutting plane model is constructed, the optimal solution to the cutting plane model could be
significantly off target. In [10], the authors addressed the MKL problems by a simple Subgradient
Descent (SD) method. However, since the SD method is memoryless, it does not utilize the gradients computed in previous iterations, which could be very useful in boosting the efficiency of the
search.
To further improve the computational efficiency of MKL, we extended the level method [6], which
was originally designed for optimizing non-smooth functions, to the optimization of convex-concave
problems. In particular, we regard the MKL problem as a saddle point problem. In the present work,
similar to the SILP method, we construct in each iteration a cutting plane model for the target
objective function using the solutions to the intermediate SVM problems. A new solution for kernel
weights is obtained by solving the cutting plane model. We furthermore adjust the new solution via a
projection to a level set. This adjustment is critical in that it ensures on one hand the new solution is
sufficiently close to the current solution, and on the other hand the new solution significantly reduces
the objective function. We show that the extended level method has a convergence rate of O(1/ε²)
for an ε-accurate solution. Although this is similar to that of the SD method, the extended level
method is advantageous in that it utilizes all the gradients that have been computed so far. Empirical
results with eight UCI datasets show that the extended level method is able to greatly improve the
efficiency of multiple kernel learning in comparison with the SILP method and the SD method.
The rest of this paper is organized as follows. In section 2, we review the efficient algorithms
that have been designed for multiple kernel learning. In section 3, we describe the details of the
extended level method for MKL, including a study of its convergence rate. In section 4, we present
experimental results by comparing both the effectiveness and the efficiency of the extended level
method with the corresponding measures of SILP and SD. We conclude this work in section 5.
2 Related Work
Let X = (x_1, . . . , x_n) ∈ R^{n×d} denote the collection of n training samples that are in a d-dimensional space. We further denote by y = (y_1, y_2, . . . , y_n) ∈ {−1, +1}^n the binary class labels
for the data points in X. We employ the maximum margin classification error, an objective used
in SVM, as the generalized performance measure. Following [5], the problem of multiple kernel
learning for classification in the primal form is defined as follows:

min_{p∈P} max_{α∈Q} f(p, α) = α^T e − (1/2) (α ∘ y)^T ( Σ_{i=1}^m p_i K_i ) (α ∘ y),    (1)

where P = {p ∈ R^m : p^T e = 1, 0 ≤ p ≤ 1} and Q = {α ∈ R^n : α^T y = 0, 0 ≤ α ≤ C}
are two solid convex regions, denoting the set of kernel weights and the set of SVM dual variables,
respectively. Here, e is a vector of all ones, C is the trade-off parameter in SVM, {K_i}_{i=1}^m is a group
of base kernel matrices, and ∘ denotes the element-wise product between two vectors. It is easy to
verify that f(p, α) is convex in p and concave in α. Thus the above optimization problem is indeed
a convex-concave problem. It is important to note that the block-minimization formulation of MKL
presented in [10, 2] is equivalent to (1).
A straightforward approach toward solving the convex-concave problem in (1) is to transform it
into a Semi-definite Programming (SDP) or a Quadratically Constrained Quadratic Programming
(QCQP) [5, 2]. However, given their computational complexity, they cannot be applied to large-scale MKL problems. Recently, Semi-infinite Linear Programming (SILP) [12] and Subgradient
Descent (SD) [10] have been applied to handle large-scale MKL problems. We summarize them
into a unified framework in Algorithm 1. Note that a superscript is used to indicate the index of
iteration, a convention that is used throughout this paper. We use [x]t to denote x to the power of t
in the case of ambiguity.
As indicated in Algorithm 1, both methods divide the MKL problem into two cycles: the inner cycle
solves a standard SVM problem to update α, and the outer cycle updates the kernel weight vector p. A minimal skeleton of this loop in code follows the algorithm box.
Algorithm 1 A general framework for solving MKL
1: Initialize p^0 = e/m and i = 0
2: repeat
3:   Solve the dual of SVM with kernel K = Σ_{j=1}^m p_j^i K_j and obtain optimal solution α^i
4:   Update kernel weights by p^{i+1} = arg min {φ_i(p; α) : p ∈ P}
5:   Update i = i + 1 and calculate stopping criterion ε_i
6: until ε_i ≤ ε
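The skeleton below mirrors the steps above; `solve_svm_dual` and `update_p` are placeholders (our assumption) for the inner SVM solver and the step-4 rule (a cutting-plane LP for SILP, a projected gradient step for SD):

```python
import numpy as np

def mkl_framework(Ks, y, solve_svm_dual, update_p, eps=1e-4, max_iter=100):
    """Skeleton of Algorithm 1. `solve_svm_dual(K, y)` must return the SVM dual
    solution alpha; `update_p(p, history, Ks, y)` must return the new p and the
    stopping criterion for step 4."""
    m = len(Ks)
    p = np.full(m, 1.0 / m)                  # step 1: p^0 = e/m
    history = []                             # (p^j, alpha^j) pairs feed the cutting planes
    alpha = None
    for _ in range(max_iter):
        K = sum(pj * Kj for pj, Kj in zip(p, Ks))
        alpha = solve_svm_dual(K, y)         # step 3: inner SVM
        history.append((p.copy(), alpha))
        p, crit = update_p(p, history, Ks, y)  # step 4
        if crit <= eps:                      # steps 5-6
            break
    return p, alpha
```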
They differ in the 4th step of Algorithm 1: the SILP method updates p by solving a cutting
plane model, while the SD method updates p using the subgradient of the current solution. More
specifically, φ_i(p; α) for SILP and SD are defined as follows:

φ_i^SILP(p; α) = min{θ : θ ≥ f(p, α^j), j = 0, . . . , i},    (2)
φ_i^SD(p; α) = (1/2) ‖p − p^i‖₂² + γ_i (p − p^i)^T ∂_p f(p^i, α^i),    (3)

where γ_i is the step size that needs to be decided dynamically (e.g., by a line search), and ∂_p f(p^i, α^i) = −(1/2) [(α^i ∘ y)^T K_1 (α^i ∘ y), . . . , (α^i ∘ y)^T K_m (α^i ∘ y)]^T denotes the subgradient of f(·, ·) with respect to p at (p^i, α^i).
to p at (pi , ?i ). Comparing the two methods, we observe
? In SILP, the cutting plane model ?SILP (p) utilizes all the {?j }ij=1 obtained in past iterations. In contrast, SD only utilizes ?i of the current solution pi .
? SILP updates the solution for p based on the cutting plane model ?SILP (p). Since the
cutting plane model is usually inaccurate when p is far away from {pj }ij=1 , the updated
solution p could be significantly off target [3]. In contrast, a regularization term kp ?
pi k22 /2 is introduced in SD to prevent the new solution being far from the current one, pi .
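To make the contrast concrete, here is a hypothetical Python sketch of the two updates (ours, using cvxpy; not the SimpleMKL code used in the experiments below). Since f(p, α^j) is affine in p for a fixed α^j, the SILP master problem is a small LP, and minimizing (3) over P amounts to a projected gradient step:

    import numpy as np
    import cvxpy as cp

    def affine_cut(alpha, y, kernels):
        """For fixed alpha, f(p, alpha) = const + lin @ p is affine in p."""
        ay = alpha * y
        lin = -0.5 * np.array([ay @ K @ ay for K in kernels])
        return alpha.sum(), lin

    def silp_update(alphas, y, kernels):
        """SILP step: minimize the cutting plane model max_j f(p, alpha^j) over P."""
        m = len(kernels)
        p, nu = cp.Variable(m), cp.Variable()
        cons = [cp.sum(p) == 1, p >= 0, p <= 1]
        for a in alphas:                       # one linear cut per past dual solution
            c, lin = affine_cut(a, y, kernels)
            cons.append(nu >= c + lin @ p)
        cp.Problem(cp.Minimize(nu), cons).solve()
        return p.value

    def sd_update(p_i, alpha_i, y, kernels, step):
        """SD step: minimizing Eq. (3) over P is a gradient step projected onto P."""
        _, grad = affine_cut(alpha_i, y, kernels)
        p = cp.Variable(len(kernels))
        cons = [cp.sum(p) == 1, p >= 0, p <= 1]
        cp.Problem(cp.Minimize(cp.sum_squares(p - (p_i - step * grad))), cons).solve()
        return p.value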
The proposed level method combines the strengths of both methods. Similar to SILP, it utilizes the
gradient information of all the iterations; similar to SD, a regularization scheme is introduced to
prevent the updated solution from being too far from the current solution.
3 Extended Level Method for MKL
We first introduce the basic steps of the level method, followed by the extension of the level method
to convex-concave problems and its application to MKL.
3.1 Introduction to the Level Method
The level method [6] is from the family of bundle methods, which have recently been employed to
efficiently solve regularized risk minimization problems [11]. It is an iterative approach designed
for optimizing a non-smooth objective function. Let f(x) denote the convex objective function to be minimized over a convex domain G. In the ith iteration, the level method first constructs a lower bound for f(x) by a cutting plane model, denoted by g^i(x). The solution that minimizes the cutting plane model g^i(x), denoted by x̂^i, is then computed, and an upper bound f̄^i and a lower bound f^i for the optimal value of the target optimization problem are computed based on x̂^i. Next, a level set for the cutting plane model g^i(x) is constructed as L^i = {x ∈ G : g^i(x) ≤ λ f̄^i + (1 − λ) f^i}, where λ ∈ (0, 1) is a tradeoff constant. Finally, a new solution x^{i+1} is computed by projecting x^i onto the level set L^i. It is important to note that the projection step, serving a similar purpose to the regularization term in SD, prevents the new solution x^{i+1} from being too far away from the old one x^i. To demonstrate this point, consider a simple example: min_x {f(x) = x² : x ∈ [−4, 4]}. Assume x^0 = −3 is the initial solution. The cutting plane model at x^0 is g^0(x) = 9 − 6(x + 3), and the solution minimizing g^0(x) over [−4, 4] is x̂^1 = 4. If we directly took x̂^1 as the new solution, as SILP does, it would be significantly worse than x^0 in terms of x². The level method alleviates this problem by projecting x^0 = −3 onto the level set L^0 = {x : g^0(x) ≤ 0.9 [x^0]² + 0.1 g^0(x̂^1), −4 ≤ x ≤ 4}, where λ = 0.9. It is easy to verify that the projection of x^0 onto L^0 is x^1 = −2.3, which significantly reduces the objective function f(x) compared with x^0.
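The numbers in this example are easy to check; a few lines of Python (purely illustrative) reproduce them:

    # f(x) = x^2 on [-4, 4], starting at x0 = -3 (the example above)
    f = lambda x: x ** 2
    x0 = -3.0
    g0 = lambda x: f(x0) + 2 * x0 * (x - x0)     # cutting plane at x0: 9 - 6(x + 3)

    x_hat = 4.0                                  # minimizer of g0 on [-4, 4]
    level = 0.9 * f(x0) + 0.1 * g0(x_hat)        # 0.9 * 9 + 0.1 * (-33) = 4.8

    # L0 = {x in [-4, 4] : g0(x) <= 4.8} = {x >= -2.3}; project x0 onto it
    boundary = x0 + (level - f(x0)) / (2 * x0)   # solve g0(x) = level  ->  x = -2.3
    x1 = max(x0, boundary)
    print(x1, f(x1))                             # -2.3 and 5.29, down from f(x0) = 9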
3.2 Extension of the Level Method to MKL
We now extend the level method, which was originally designed for optimizing non-smooth functions, to convex-concave optimization. First, since f(p, α) is convex in p and concave in α, by the von Neumann minimax lemma, for any optimal (saddle-point) solution (p*, α*) we have

    max_{α∈Q} f(p, α) ≥ f(p*, α*) ≥ min_{p∈P} f(p, α)  for any p ∈ P and α ∈ Q.    (4)

This observation motivates us to design an MKL algorithm which iteratively updates both the lower and the upper bounds for f(p, α) in order to find the saddle point. To apply the level method, we first construct the cutting plane model. Let {p^j}_{j=1}^i denote the solutions for p obtained in the last i iterations, and let α^j = arg max_{α∈Q} f(p^j, α) denote the optimal solution that maximizes f(p^j, α). We construct a cutting plane model g^i(p) as follows:

    g^i(p) = max_{1≤j≤i} f(p, α^j).    (5)

We have the following proposition for the cutting plane model g^i(p).
Proposition 1. For any p ∈ P, we have (a) g^{i+1}(p) ≥ g^i(p), and (b) g^i(p) ≤ max_{α∈Q} f(p, α).
Next, we construct both the lower and the upper bounds for the optimal value f(p*, α*). We define two quantities f_i and f̄_i as follows:

    f_i = min_{p∈P} g^i(p)   and   f̄_i = min_{1≤j≤i} f(p^j, α^j).    (6)

The following theorem shows that {f_j}_{j=1}^i and {f̄_j}_{j=1}^i provide a series of increasingly tight bounds for f(p*, α*).
Theorem 1. We have the following properties for {f_j}_{j=1}^i and {f̄_j}_{j=1}^i: (a) f_i ≤ f(p*, α*) ≤ f̄_i; (b) f̄_1 ≥ f̄_2 ≥ . . . ≥ f̄_i; and (c) f_1 ≤ f_2 ≤ . . . ≤ f_i.
Proof. First, since g^i(p) ≤ max_{α∈Q} f(p, α) for any p ∈ P, we have

    f_i = min_{p∈P} g^i(p) ≤ min_{p∈P} max_{α∈Q} f(p, α).

Second, since f(p^j, α^j) = max_{α∈Q} f(p^j, α), we have

    f̄_i = min_{1≤j≤i} f(p^j, α^j) = min_{p∈{p^1,...,p^i}} max_{α∈Q} f(p, α) ≥ min_{p∈P} max_{α∈Q} f(p, α) = f(p*, α*).

Combining the above results, we have (a) in the theorem. It is easy to verify (b) and (c).
We furthermore define the gap Δ_i as

    Δ_i = f̄_i − f_i.

The following corollary indicates that the gap Δ_i can be used to measure the sub-optimality of the solutions p^i and α^i.
Corollary 2. (a) Δ_j ≥ 0, j = 1, . . . , i; (b) Δ_1 ≥ Δ_2 ≥ . . . ≥ Δ_i; (c) |f(p^j, α^j) − f(p*, α*)| ≤ Δ_i.
It is easy to verify these three properties of Δ_i in the above corollary using the results of Theorem 1.
In the third step, we construct the level set L_i using the estimated bounds f̄_i and f_i as follows:

    L_i = { p ∈ P : g^i(p) ≤ ℓ_i = λ f̄_i + (1 − λ) f_i },    (7)

where λ ∈ (0, 1) is a predefined constant. The new solution, denoted by p^{i+1}, is computed as the projection of p^i onto the level set L_i, which is equivalent to solving the following optimization problem:

    p^{i+1} = arg min_p { ‖p − p^i‖_2² : p ∈ P, f(p, α^j) ≤ ℓ_i, j = 1, . . . , i }.    (8)
Although the projection is a quadratic programming (QP) problem, it can often be solved efficiently because its solution is likely to be the projection onto one of the hyperplanes of the polyhedron L_i. In other words, only very few linear constraints of L_i are active; most of them are inactive. This sparse structure usually leads to a significant speedup of the QP, similar to the SVM solver. As we argued in the last subsection, by means of the projection we ensure, on the one hand, that p^{i+1} is not very far from p^i and, on the other hand, that significant progress is made in terms of g^i(p) when the solution is updated from p^i to p^{i+1}. Note that the projection step in the level method saves the effort of searching for the optimal step size in SD, which is computationally expensive as will be revealed
later. We summarize the steps of the extended level method in Algorithm 2.
Algorithm 2 The Level Method for Multiple Kernel Learning
1: Initialize p^0 = e/m and i = 0
2: repeat
3:    Solve the dual problem of SVM with K = Σ_{j=1}^m p_j^i K_j to obtain the optimal solution α^i
4:    Construct the cutting plane model g^i(p) in (5)
5:    Calculate the lower bound f_i and the upper bound f̄_i in (6), and the gap Δ_i defined in Section 3.2
6:    Compute the projection of p^i onto the level set L_i by solving the optimization problem in (8)
7:    Update i = i + 1
8: until Δ_i ≤ ε
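To make Algorithm 2 concrete, the sketch below is our illustration only: the function names are ours, scikit-learn's precomputed-kernel SVC stands in for the paper's SVM solver, and cvxpy stands in for MOSEK. Since each f(p, α^j) is affine in p, steps 4-6 reduce to a small LP and a small QP:

    import numpy as np
    import cvxpy as cp
    from sklearn.svm import SVC

    def solve_svm_dual(K, y, C):
        """SVM dual with a precomputed kernel; returns alpha padded with zeros."""
        clf = SVC(C=C, kernel="precomputed").fit(K, y)
        alpha = np.zeros(len(y))
        alpha[clf.support_] = np.abs(clf.dual_coef_.ravel())
        return alpha

    def affine_cut(alpha, y, kernels):
        """For fixed alpha, f(p, alpha) = const + lin @ p."""
        ay = alpha * y
        return alpha.sum(), -0.5 * np.array([ay @ K @ ay for K in kernels])

    def level_method_mkl(kernels, y, C=100.0, lam=0.9, eps=1e-2, max_iter=500):
        m = len(kernels)
        p = np.ones(m) / m                              # step 1: p^0 = e/m
        cuts, upper = [], np.inf
        for _ in range(max_iter):
            K = sum(w * Ki for w, Ki in zip(p, kernels))
            alpha = solve_svm_dual(K, y, C)             # step 3
            c, lin = affine_cut(alpha, y, kernels)
            cuts.append((c, lin))                       # step 4: extend g^i(p)
            upper = min(upper, c + lin @ p)             # upper bound of (6)

            var = cp.Variable(m)
            in_P = [cp.sum(var) == 1, var >= 0, var <= 1]
            model = cp.max(cp.hstack([cj + lj @ var for cj, lj in cuts]))
            lower = cp.Problem(cp.Minimize(model), in_P).solve()   # lower bound of (6)
            if upper - lower <= eps:                    # gap, step 8
                break

            lev = lam * upper + (1 - lam) * lower       # level in (7)
            cons = in_P + [cj + lj @ var <= lev for cj, lj in cuts]
            cp.Problem(cp.Minimize(cp.sum_squares(var - p)), cons).solve()
            p = var.value                               # step 6: projection (8)
        return p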
Finally, we discuss the convergence behavior of the level method. In general, convergence is guaranteed because the gap Δ_i, which bounds the absolute difference between f(p*, α*) and f(p^i, α^i), monotonically decreases through iterations. The following theorem shows the convergence rate of the level method when applied to multiple kernel learning.
Theorem 3. To obtain a solution p that satisfies the stopping criterion, i.e., |max_{α∈Q} f(p, α) − f(p*, α*)| ≤ ε, the maximum number of iterations N that the level method requires is bounded as follows:

    N ≤ 2c(λ) L² / ε²,  where  c(λ) = 1 / ((1 − λ)² λ (2 − λ))  and  L = (1/2) √(mn) C² max_{1≤i≤m} λ_max(K_i).

The operator λ_max(M) computes the maximum eigenvalue of matrix M.
Due to space limitations, the proof of Theorem 3 can be found in the long version of this paper. Theorem 3 tells us that the convergence rate of the level method is O(1/ε²). It is important to note that, according to Information Based Complexity (IBC) theory, given a function family F(L) with a fixed Lipschitz constant L, O(1/ε²) is almost the optimal convergence rate achievable by any optimization method based on the black-box first-order oracle. In other words, no matter which optimization method is used, there always exists a function f(·) ∈ F(L) such that the convergence rate is O(1/ε²), as long as the optimization method is based on a black-box first-order oracle. More details can be found in [8, 6].
4 Experiments
We conduct experiments to evaluate the efficiency of the proposed algorithm for MKL in contrast
with SILP and SD, the two state-of-the-art algorithms for MKL.
4.1 Experimental Setup
We follow the settings in [10] to construct the base kernel matrices, i.e.,
• Gaussian kernels with 10 different widths ({2^{−3}, 2^{−2}, . . . , 2^6}) on all features and on each single feature;
• Polynomial kernels of degree 1 to 3 on all features and on each single feature.
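A sketch of this construction (ours; the exact mapping from a "width" to scikit-learn's RBF parameter, and the polynomial-kernel scaling, are assumptions the paper does not spell out):

    import numpy as np
    from sklearn.metrics.pairwise import rbf_kernel, polynomial_kernel

    def base_kernels(X):
        """13 kernels (10 Gaussian widths + 3 polynomial degrees) on all features
        and on each single feature, each normalized to unit trace."""
        kernels = []
        feature_sets = [list(range(X.shape[1]))] + [[j] for j in range(X.shape[1])]
        for cols in feature_sets:
            Xs = X[:, cols]
            for width in 2.0 ** np.arange(-3, 7):       # widths 2^-3, ..., 2^6
                kernels.append(rbf_kernel(Xs, gamma=1.0 / (2.0 * width ** 2)))
            for degree in (1, 2, 3):
                kernels.append(polynomial_kernel(Xs, degree=degree))
        return [K / np.trace(K) for K in kernels]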
Table 1: The performance comparison of three MKL algorithms. Here n and m denote the size of
training samples and the number of kernels, respectively.
                   SD              SILP             Level
Iono (n = 175, m = 442)
  Time (s)       33.5 ±11.6      1161.0 ±344.2    7.1 ±4.3
  Accuracy (%)   92.1 ±2.0       92.0 ±1.9        92.1 ±1.9
  #Kernel        26.9 ±4.0       24.4 ±3.4        25.4 ±3.9
Pima (n = 384, m = 117)
  Time (s)       39.4 ±8.8       62.0 ±15.2       9.1 ±1.6
  Accuracy (%)   76.9 ±1.9       76.9 ±2.1        76.9 ±2.1
  #Kernel        16.6 ±2.2       12.0 ±1.8        17.6 ±2.6
Wpbc (n = 198, m = 442)
  Time (s)       7.8 ±2.4        142.0 ±122.3     5.3 ±1.3
  Accuracy (%)   77.0 ±2.9       76.9 ±2.8        76.9 ±2.9
  #Kernel        19.5 ±2.8       17.2 ±2.2        20.3 ±2.6
Vote (n = 218, m = 205)
  Time (s)       23.7 ±9.7       26.3 ±12.4       4.1 ±1.3
  Accuracy (%)   95.7 ±1.0       95.7 ±1.0        95.7 ±1.0
  #Kernel        14.0 ±3.6       10.6 ±2.6        13.8 ±2.6
Breast (n = 342, m = 117)
  Time (s)       47.4 ±8.9       54.2 ±9.4        4.6 ±1.0
  Accuracy (%)   96.6 ±0.9       96.6 ±0.8        96.6 ±0.8
  #Kernel        13.1 ±1.7       10.6 ±1.1        13.3 ±1.5
Sonar (n = 104, m = 793)
  Time (s)       60.1 ±29.6      1964.3 ±68.4     24.9 ±10.6
  Accuracy (%)   79.1 ±4.5       79.3 ±4.2        79.0 ±4.7
  #Kernel        39.8 ±3.9       34.2 ±2.6        38.6 ±4.1
Heart (n = 135, m = 182)
  Time (s)       4.7 ±2.8        79.2 ±38.1       2.1 ±0.4
  Accuracy (%)   82.2 ±2.2       82.2 ±2.0        82.2 ±2.1
  #Kernel        17.5 ±1.8       15.2 ±1.5        18.6 ±1.9
Wdbc (n = 285, m = 403)
  Time (s)       122.9 ±38.2     146.3 ±48.3      15.5 ±7.5
  Accuracy (%)   96.7 ±0.8       96.5 ±0.9        96.7 ±0.8
  #Kernel        16.6 ±3.2       12.9 ±2.3        15.6 ±3.0
Each base kernel matrix is normalized to unit trace. The experiments are conducted on a PC with
3.2GHz CPU and 2GB memory. According to the above scheme of constructing base kernel matrices, we select a batch of UCI data sets, with the cardinality and dimension allowed by the memory
limit of the PC, from the UCI repository for evaluation. We repeat all the algorithms 20 times for
each data set. In each run, 50% of the examples are randomly selected as the training data and the
remaining data are used for testing. The training data are normalized to have zero mean and unit
variance, and the test data are then normalized using the mean and variance of the training data. The
regularization parameter C in SVM is set to 100 as our focus is to evaluate the computational time, as
justified in [10]. For a fair comparison among the MKL algorithms, we adopt the same stopping criterion for all three algorithms under comparison: the duality gap criterion used in [10], i.e.,

    max_{1≤i≤m} (α ∘ y)^T K_i (α ∘ y) − (α ∘ y)^T ( Σ_{j=1}^m p_j K_j ) (α ∘ y),

and we stop the algorithm when this criterion is less than 0.01 or the number of iterations exceeds 500. We empirically initialize the parameter λ to 0.9 and increase it to 0.99 when the relative gap Δ_i/f̄_i falls below 0.01, for all experiments, since a larger λ accelerates the projection when the solution is close to the optimal one. We use the SimpleMKL
toolbox [10] to implement the SILP and SD methods. The linear programming in the SILP method
and the auxiliary subproblems in the level method are solved using a general optimization toolbox
MOSEK (http://www.mosek.com). The toolbox for the level method can be downloaded from
http://www.cse.cuhk.edu.hk/~zlxu/toolbox/level_mkl.html.
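For reference, this stopping criterion is a one-liner given the quantities of Section 2 (a minimal sketch, under our reading of the formula above):

    import numpy as np

    def duality_gap(p, alpha, y, kernels):
        """max_i (a o y)^T K_i (a o y) - (a o y)^T (sum_j p_j K_j) (a o y)."""
        ay = alpha * y
        quad = np.array([ay @ K @ ay for K in kernels])
        return quad.max() - quad @ p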
4.2 Experimental Results
We report the following performance measures: prediction accuracy, training time, and the averaged
number of kernels selected. From Table 1, we observe that all algorithms achieve almost the same
prediction accuracy under the same stopping criterion. This is not surprising because all algorithms
are essentially trying to solve the same optimization problem. Regarding the computational efficiency, we observe that the time cost of the SILP approach is the highest among all the three MKL
algorithms. For datasets "Iono" and "Sonar", the SILP method consumes more than 30 times the computational cycles of the other two methods for MKL. We also observe that the level method is the most efficient among the three methods in comparison. To obtain a better picture of the computational
efficiency of the proposed level method, we compute the time-saving ratio, as shown in Table 2. We
observe that the level method saves 91.9% of computational time on average when compared with
the SILP method, and 70.3% of computational time when compared with the SD method.
In order to see more details of each optimization algorithm, we plot the logarithm values of the
MKL objective function to base 10 against time in Figure 1. Due to space limitation, we randomly
choose only three datasets, "Iono", "Breast", and "Pima", as examples. It is interesting to find that
the level method converges overwhelmingly faster than the other two methods. The efficiency of the
level method arises from two aspects: (a) the cutting plane model utilizes the computational results
of all iterations and therefore boosts the search efficiency, and (b) the projection to the level sets
ensures the stability of the new solution. A detailed analysis of the SD method reveals that a large
number of function evaluations are consumed in order to compute the optimal stepsize via a line
search. Note that in convex-concave optimization, every function evaluation in the line search of SD
requires solving an SVM problem. As an example, we found that for dataset ?Iono?, although SD
and the level method require similar numbers of iterations, SD calls the SVM solver 1231 times on
average, while the level method only calls it 47 times. For the SILP method, the high computational
cost is mainly due to the oscillation of solutions. This instability leads to very slow convergence
when the solution is close to the optimal one, as indicated by the long tail of SILP in Figure 1. The
instability of SILP is further confirmed by the examination of kernel weights, as shown below.
To understand the evolution of kernel weights (i.e., p), we plot the evolution curves of the five largest
kernel weights for datasets "Iono", "Breast", and "Pima" in Figure 2. We observe that the values
of p computed by the SILP method are the most unstable due to oscillation of the solutions to the
cutting plane models. Although the unstable-solution problem is to some degree improved by the
SD method, we still clearly observe that p fluctuates significantly through iterations. In contrast,
for the proposed level method, the values of p change smoothly through iterations. We believe that
the stability of the level method is mainly due to the accurate estimation of bounds as well as the
regularization of the projection to the level sets. This observation also sheds light on why the level
method can be more efficient than the SILP and the SD methods.
Table 2: Time-saving ratio of the level method over the SILP and the SD method

                  Iono   Breast   Pima   Sonar   Wpbc   Heart   Wdbc   Vote   Average
Level/SD (%)      78.9   90.4     77.0   58.7    32.5   54.7    87.4   82.8   70.3
Level/SILP (%)    99.4   91.6     85.4   98.7    88.7   97.3    89.4   84.5   91.9

[Figure 1: three panels, (a) Iono, (b) Breast, (c) Pima, each plotting the log of the objective against time (s) for SD, SILP, and Level.]
Figure 1: Evolution of objective values over time (seconds) for datasets "Iono", "Breast", and "Pima". The objective values are plotted on a logarithm scale (base 10) for better comparison. Only parts of the evolution curves are plotted for SILP due to their long tails.
5 Conclusion and Future Work
In this paper, we propose an extended level method to efficiently solve the multiple kernel learning
problem. In particular, the level method overcomes the drawbacks of both the SILP method and
the SD method for MKL. Unlike the SD method that only utilizes the gradient information of the
current solution, the level method utilizes the gradients of all the solutions that are obtained in past
iterations; meanwhile, unlike the SILP method that updates the solution only based on the cutting
plane model, the level method introduces a projection step to regularize the updated solution. It
is the employment of the projection step that guarantees finding an updated solution that, on the
one hand, is close to the existing one, and, on the other hand, significantly reduces the objective
function. Our experimental results have shown that the level method is able to greatly reduce the
computational time of MKL over both the SD method and the SILP method. For future work, we
plan to find a scheme to adaptively set the value of ? in the level method and apply the level method
to other tasks, such as one-class classification, multi-class classification, and regression.
Acknowledgement
The work was supported by the National Science Foundation (IIS-0643494), National Institute of Health
(1R01GM079688-01) and Research Grants Council of Hong Kong (CUHK4150/07E and CUHK4125/07).
References
[1] F. R. Bach. Consistency of the group Lasso and multiple kernel learning. Journal of Machine Learning
Research, 9:1179–1225, 2008.
[Figure 2: nine panels of the five largest kernel weights (p values) plotted against iteration: (a) Iono/SD, (b) Iono/SILP, (c) Iono/Level, (d) Breast/SD, (e) Breast/SILP, (f) Breast/Level, (g) Pima/SD, (h) Pima/SILP, (i) Pima/Level.]
Figure 2: The evolution curves of the five largest kernel weights for datasets "Iono", "Breast" and "Pima" computed by the three MKL algorithms.
[2] F. R. Bach, G. R. G. Lanckriet, and M. I. Jordan. Multiple kernel learning, conic duality, and the SMO algorithm. In ICML, 2004.
[3] J. Bonnans, J. Gilbert, C. Lemaréchal, and C. Sagastizábal. Numerical Optimization, Theoretical and Practical Aspects. Springer-Verlag, Berlin, 2nd ed., 2006.
[4] N. Cristianini, J. Shawe-Taylor, A. Elisseeff, and J. S. Kandola. On kernel-target alignment. In NIPS 13, pages 367–373, 2001.
[5] G. R. G. Lanckriet, N. Cristianini, P. Bartlett, L. E. Ghaoui, and M. I. Jordan. Learning the kernel matrix with semidefinite programming. Journal of Machine Learning Research, 5, 2004.
[6] C. Lemaréchal, A. Nemirovski, and Y. Nesterov. New variants of bundle methods. Mathematical Programming, 69(1), 1995.
[7] C. A. Micchelli and M. Pontil. Learning the kernel function via regularization. Journal of Machine Learning Research, 6, 2005.
[8] A. Nemirovski and D. Yudin. Problem Complexity and Method Efficiency in Optimization. John Wiley and Sons Ltd, 1983.
[9] C. S. Ong, A. J. Smola, and R. C. Williamson. Learning the kernel with hyperkernels. Journal of Machine Learning Research, 6, 2005.
[10] A. Rakotomamonjy, F. R. Bach, S. Canu, and Y. Grandvalet. SimpleMKL. Technical Report HAL-00218338, INRIA, 2008.
[11] A. Smola, S. V. N. Vishwanathan, and Q. Le. Bundle methods for machine learning. In NIPS 20, pages 1377–1384, 2007.
[12] S. Sonnenburg, G. Rätsch, C. Schäfer, and B. Schölkopf. Large scale multiple kernel learning. Journal of Machine Learning Research, 7, 2006.
[13] J. Ye, J. Chen, and S. Ji. Discriminant kernel and regularization parameter learning via semidefinite programming. In ICML, 2007.
[14] A. Zien and C. S. Ong. Multiclass multiple kernel learning. In ICML, 2007.
Reducing statistical dependencies in natural signals
using radial Gaussianization
Siwei Lyu
Computer Science Department
University at Albany, SUNY
Albany, NY 12222
[email protected]
Eero P. Simoncelli
Center for Neural Science
New York University
New York, NY 10003
[email protected]
Abstract
We consider the problem of transforming a signal to a representation in which
the components are statistically independent. When the signal is generated as a
linear transformation of independent Gaussian or non-Gaussian sources, the solution may be computed using a linear transformation (PCA or ICA, respectively).
Here, we consider a complementary case, in which the source is non-Gaussian
but elliptically symmetric. Such a source cannot be decomposed into independent components using a linear transform, but we show that a simple nonlinear
transformation, which we call radial Gaussianization (RG), is able to remove all
dependencies. We apply this methodology to natural signals, demonstrating that
the joint distributions of nearby bandpass filter responses, for both sounds and images, are closer to being elliptically symmetric than linearly transformed factorial
sources. Consistent with this, we demonstrate that the reduction in dependency
achieved by applying RG to either pairs or blocks of bandpass filter responses is
significantly greater than that achieved by PCA or ICA.
1 Introduction
Signals may be manipulated, transmitted or stored more efficiently if they are transformed to a representation in which there is no statistical redundancy between the individual components. In the
context of biological sensory systems, the efficient coding hypothesis [1, 2] proposes that the principle of reducing redundancies in natural signals can be used to explain various properties of biological
perceptual systems. Given a source model, the problem of deriving an appropriate transformation
to remove statistical dependencies, based on the statistics of observed samples, has been studied for
more than a century. The most well-known example is principal components analysis (PCA), a linear transformation derived from the second-order signal statistics (i.e., the covariance structure), that
can fully eliminate dependencies for Gaussian sources. Over the past two decades, a more general
method, known as independent component analysis (ICA), has been developed to handle the case
when the signal is sampled from a linearly transformed factorial source. ICA and related methods
have shown success in many applications, especially in deriving optimal representations for natural
signals [3, 4, 5, 6].
Although PCA and ICA bases may be computed for nearly any source, they are only guaranteed to
eliminate dependencies when the assumed source model is correct. And even in cases where these
methodologies seem to produce an interesting solution, the components of the resulting representation may be far from independent. A case in point is that of natural images, for which derived ICA
transformations consist of localized oriented basis functions that appear similar to the receptive field
descriptions of neurons in mammalian visual cortex [3, 5, 4]. Although dependency between the
responses of such linear basis functions is reduced compared to that of the original pixels, this reduction is only slightly more than that achieved with PCA or other bandpass filters [7, 8]. Furthermore, the responses of ICA and related filters still exhibit striking higher-order dependencies [9, 10, 11].
[Figure 1: Venn diagram with regions labeled "Linearly transformed factorial", "Factorial", "Elliptical", "Spherical", and "Gaussian".]
Fig. 1. Venn diagram of the relationship between density models. The two circles represent the linearly transformed factorial densities as assumed by the ICA methods, and elliptically symmetric densities (ESDs). The intersection of these two classes is the set of all Gaussian densities. The factorial densities form a subset of the linearly transformed factorial densities and the spherically symmetric densities form a subset of the ESDs.
Here, we consider the dependency elimination problem for the class of source models known as
elliptically symmetric densities (ESDs) [12]. For ESDs, linear transforms have no effect on the
dependencies beyond second-order, and thus ICA decompositions offer no advantage over PCA. We
introduce an alternative nonlinear procedure, which we call radial Gaussianization (RG). In RG,
the norms of whitened signal vectors are nonlinearly adjusted to ensure that the resulting output
density is a spherical Gaussian, whose components are statistically independent. We first show that
the joint statistics of proximal bandpass filter responses for natural signals (sounds and images) are
better described as an ESD than linearly transformed factorial sources. Consistent with this, we
demonstrate that the reduction in dependency achieved by applying RG to such data is significantly
greater than that achieved by PCA or ICA. A preliminary version of portions of this work was
described in [13].
2 Elliptically Symmetric Densities
The density of a random vector x ∈ R^d with zero mean is elliptically symmetric if it is of the form:

    p(x) = (1 / (α |Σ|^{1/2})) f( −(1/2) x^T Σ^{−1} x ),    (1)

where Σ is a positive definite matrix, f(·) is the generating function satisfying f(·) ≥ 0 and ∫₀^∞ f(−r²/2) r^{d−1} dr < ∞, and the normalizing constant α is chosen so that the density integrates to one [12]. The definitive characteristic of an ESD is that the level sets of constant probability are
ellipsoids determined by Σ. In the special case when Σ is a multiple of the identity matrix, the level sets of p(x) are hyper-spheres and the density is known as a spherically symmetric density (SSD). Assuming x has finite second-order statistics, Σ is a multiple of the covariance matrix, which implies that any ESD can be transformed into an SSD by a PCA/whitening operation.
When the generating function is an exponential, the resulting ESD is a zero-mean multivariate Gaussian with covariance matrix Σ. In this case, x can also be regarded as a linear transformation of a vector s containing independent unit-variance Gaussian components, as x = Σ^{1/2} s. In fact, the
Gaussian is the only density that is both elliptically symmetric and linearly decomposable into independent components [14]. In other words, the Gaussian densities correspond to the intersection of
the class of ESDs and the class assumed by the ICA methods. As a special case, a spherical Gaussian
is the only spherically symmetric density that is also factorial (i.e., has independent components).
These relationships are illustrated in a Venn diagram in Fig. 1.
Apart from the special case of Gaussian densities, a linear transformation such as PCA or ICA cannot
completely eliminate dependencies in the ESDs. In particular, PCA and whitening can transform
an ESD variable to a spherically symmetric variable, xwht , but the resulting density will not be
factorial unless it is Gaussian. And ICA would apply an additional rotation (i.e., an orthogonal
matrix) to transform x_wht to a new set of coordinates maximizing a higher-order contrast function (e.g., kurtosis). However, for spherically symmetric x_wht, p(x_wht) is invariant to rotation, and thus unaffected by orthogonal transformations.
[Figure 2: panels (a)–(f); the radial map g(r) takes input radii r_in, with radial density p_in(r), to output radii r_out, with radial density p_out(r).]
Fig. 2. Radial Gaussianization procedure for 2D data. (a,e): 2D joint densities of a spherical Gaussian and a non-Gaussian SSD, respectively. The plots are arranged such that a spherical Gaussian has equal-spaced contours. (b,f): radial marginal densities of the spherical Gaussian in (a) and the SSD in (e), respectively. Shaded regions correspond to shaded annuli in (a) and (e). (c): the nonlinear mapping that transforms the radii of the source to those of the spherical Gaussian. (d): log marginal densities of the Gaussian in (a) and the SSD in (e), as red dashed line and green solid line, respectively.
3 Radial Gaussianization
Given that linear transforms are ineffective in removing dependencies from a spherically symmetric
variable xwht (and hence the original ESD variable x), we need to consider non-linear mappings. As
described previously, a spherical Gaussian is the only SSD with independent components. Thus, a
natural solution for eliminating the dependencies in a non-Gaussian spherically symmetric xwht is to
transform it to a spherical Gaussian.
Selecting such a non-linear mapping without any further constraint is a highly ill-posed problem.
It is natural to restrict to nonlinear mappings that act radially, preserving the spherical symmetry. Specifically, one can show that the generating function of p(x_wht) is completely determined by its radial marginal distribution: p_r(r) = (r^{d−1}/β) f(−r²/2), where r = ‖x_wht‖, Γ(·) is the standard Gamma function, and β is the normalizing constant that ensures that the density integrates to one. In the special case of a spherical Gaussian of unit variance, the radial marginal is a chi-density with d degrees of freedom: p_χ(r) = r^{d−1} exp(−r²/2) / (2^{d/2−1} Γ(d/2)). We define the radial Gaussianization (RG) transformation as x_rg = g(‖x_wht‖) · x_wht/‖x_wht‖, where the nonlinear function g(·) is selected to map the radial marginal density of x_wht to the chi-density. Solving for a monotonic g(·) is a standard one-dimensional density-mapping problem, and the unique solution is the composition of the inverse cumulative density function (CDF) of p_χ with the CDF of p_r: g(r) = F_χ^{−1}(F_r(r)). An illustration of the procedure is provided in Fig. 2. In practice, we can estimate F_r(r) from a histogram computed from training data, and use this to construct a numerical approximation (i.e., a look-up table) of the continuous function g(r). Note that the accuracy of the estimated RG transformation will depend on
the number of data samples, but is independent of the dimensionality of the data vectors.
In summary, a non-Gaussian ESD signal can be radially Gaussianized by first applying PCA and
whitening operations to remove second-order dependency (yielding an SSD), followed by a nonlinear transformation that maps the radial marginal to a chi-density.
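Putting the two steps together, here is a compact numpy/scipy sketch (ours, not the authors' implementation; it assumes a full-rank covariance and uses a simple interpolated look-up table for g):

    import numpy as np
    from scipy.stats import chi

    def radial_gaussianize(X, n_knots=1000):
        """Map rows of X (samples from an approximately elliptical density)
        toward a spherical Gaussian: whiten, then match the radial marginal
        to a chi density with d degrees of freedom via g = F_chi^{-1} o F_r."""
        X = X - X.mean(axis=0)
        d = X.shape[1]

        # step 1: PCA/whitening, x_wht = Sigma^{-1/2} x
        evals, evecs = np.linalg.eigh(np.cov(X, rowvar=False))
        X_wht = (X @ evecs) / np.sqrt(evals)

        # step 2: empirical CDF of the radii r = ||x_wht||
        r = np.linalg.norm(X_wht, axis=1)
        F_grid = np.linspace(0.0, 1.0, n_knots)
        r_grid = np.quantile(r, F_grid)                 # F_r(r_grid) ~ F_grid

        # g(r) = F_chi^{-1}(F_r(r)), tabulated and applied by interpolation
        g_grid = chi.ppf(np.clip(F_grid, 1e-6, 1.0 - 1e-6), df=d)
        g_r = np.interp(r, r_grid, g_grid)
        return X_wht * (g_r / np.maximum(r, 1e-12))[:, None]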
4 Application to Natural Signals
An understanding of the statistical behaviors of source signals is beneficial for many problems in
signal processing, and can also provide insights into the design and functionality of biological sensory systems. Gaussian signal models are widely used, because they are easily characterized and
often lead to clean and efficient solutions. But many naturally occurring signals exhibit striking
non-Gaussian statistics, and much recent literature focuses on the problem of characterizing and
exploiting these behaviors. Specifically, ICA methodologies have been used to derive linear representations for natural sound and image signals whose coefficients are maximally sparse or independent [3, 5, 6]. These analyses generally produced basis sets containing bandpass filters resembling
those used to model the early transformations of biological auditory and visual systems.
Despite the success of ICA methods in providing a fundamental motivation for sensory receptive
fields, there are a number of simple observations that indicate inconsistencies in this interpretation. First, the responses of ICA or other bandpass filters exhibit striking dependencies, in which
the variance of one filter response can be predicted from the amplitude of another nearby filter response [10, 15]. This suggests that although the marginal density of the bandpass filter responses are
heavy-tailed, their joint density is not consistent with the linearly transformed factorial source model
assumed by ICA. Furthermore, the marginal distributions of a wide variety of bandpass filters (even
a "filter" with randomly selected zero-mean weights) are all highly kurtotic [7]. This would not be
expected for the ICA source model: projecting the local data onto a random direction should result
in a density that becomes more Gaussian as the neighborhood size increases, in accordance with a
generalized version of the central limit theorem [16]. A recent quantitative study [8] further showed
that the oriented bandpass filters obtained through ICA optimization on images lead to a surprisingly
small improvement in reducing dependency relative to decorrelation methods such as PCA. Taken
together, all of these observations suggest that the filters obtained through ICA optimization represent a "shallow" optimum, and are perhaps not as uniquely suited for image or sound representation
as initially believed. Consistent with this, recently developed models for local image statistics model
local groups of image bandpass filter responses with non-Gaussian ESDs [e.g., 17, 18, 11, 19, 20].
These all suggest that RG might provide an appropriate means of eliminating dependencies in natural signals. Below, we test this empirically.
4.1 Dependency Reduction in Natural Sounds
We first apply RG to natural sounds. We used sound clips from commercial CDs, which have a
sampling frequency of 44100 Hz and typical length of 15–20 seconds, and contents including
animal vocalization and recordings in natural environments. These sound clips were filtered with a
bandpass gammatone filter, which is commonly used to model the peripheral auditory system [21].
In our experiments, analysis was based on a filter with center frequency of 3078 Hz.
Shown in the top row of column (a) in Fig.3 are contour plots of the joint histograms obtained
from pairs of coefficients of a bandpass-filtered natural sound, separated with different time intervals. Similar to the empirical observations for natural images [17, 11], the joint densities are non-Gaussian, and have roughly elliptically symmetric contours for temporally proximal pairs. Shown
in the top row of column (b) in Fig.3 are the conditional histograms corresponding to the same pair
of signals. The "bow-tie" shaped conditional distribution, which has also been observed in natural
images [10, 11, 15], indicates that the conditional variance of one signal depends on the value of the
other. This is a highly non-Gaussian behavior, since the conditional variances of a jointly Gaussian
density are always constant, independent of the value of the conditioning variable. For pairs that
are distant, both the second-order correlation and the higher-order dependency become weaker. As
a result, the corresponding joint histograms show more resemblance to the factorial product of two
one-dimensional super-Gaussian densities (bottom row of column (a) in Fig.3), and the shape of the
corresponding conditional histograms (column (b)) is more constant, all as would be expected for
two independent random variables.
As described in previous sections, the statistical dependencies in an elliptically symmetric random
variable can be effectively removed by a linear whitening operation followed by a nonlinear radial
Gaussianization, the latter being implemented as a histogram transform of the radial marginal density of the whitened signal. Shown in columns (c) and (d) in Fig.3 are the joint and conditional
histograms of the transformed data. First, note that when the two signals are nearby, RG is highly
effective, as suggested by the roughly Gaussian joint density (equally spaced circular contours), and
by the consistent vertical cross-sections of the conditional histogram. However, as the temporal separation between the two signals increases, the effects of RG become weaker (middle row, Fig. 3).
When the two signals are distant (bottom row, Fig.3), they are nearly independent, and applying RG
can actually increase dependency, as suggested by the irregular shape of the conditional densities
(bottom row, column (d)).
[Figure 3: an array of panels in four columns (a)–(d); rows correspond to pair separations of 0.1 msec (4 samples), 1.5 msec (63 samples), and 3.5 msec (154 samples).]
Fig. 3. Radial Gaussianization of natural sounds. (a): Contour plots of joint histograms of pairs
of band-pass filter responses of a natural sound clip. Each row corresponds to pairs with different
temporal separation, and levels are chosen so that a spherical Gaussian density will have equally spaced
contours. (c) Joint histograms after whitening and RG transformation. (b,d): Conditional histograms
of the same data shown in (a,c), computed by independently normalizing each column of the joint
histogram. Histogram intensities are proportional to probability, except that each column of pixels is
independently rescaled so that the largest probability value is displayed as white.
To quantify more precisely the dependency reduction achieved by RG, we measure the statistical
dependency of our multivariate sources using the multi-information (MI) [22], which is defined as
the Kullback–Leibler divergence [23] between the joint distribution and the product of its marginals:

    I(x) = D_KL( p(x) ‖ Π_k p(x_k) ) = Σ_{k=1}^d H(x_k) − H(x),

where H(x) = −∫ p(x) log(p(x)) dx is the differential entropy of x, and H(x_k) denotes the differential entropy of the kth component of x. As
a measure of statistical dependency among the elements of x, MI is non-negative, and is zero if
and only if the components of x are mutually independent. Furthermore, MI is invariant to any
transformation on individual components of x (e.g., element-wise rescaling).
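The experiments below use the bin-less order-statistics estimator of [24]; as a rough, biased stand-in that conveys the definition for pairs, a plug-in histogram estimate can be written as follows (ours, for illustration only):

    import numpy as np

    def mutual_information_2d(x, y, bins=64):
        """Plug-in estimate of I(x; y) = H(x) + H(y) - H(x, y), in bits."""
        joint, _, _ = np.histogram2d(x, y, bins=bins)
        p_xy = joint / joint.sum()
        p_x, p_y = p_xy.sum(axis=1), p_xy.sum(axis=0)
        nz = p_xy > 0
        return float(np.sum(p_xy[nz] * np.log2(p_xy[nz] / np.outer(p_x, p_y)[nz])))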
To compare the effect of different dependency reduction methods, we estimated the MI of pairs of
bandpass filter responses with different temporal separations. This is achieved with a non-parametric
"bin-less" method based on the order statistics [24], which alleviates the strong bias and variance intrinsic to the more traditional binning (i.e., "plug-in") estimators. It is especially effective in this
case, where the data dimensionality is two. We computed the MI for each pair of raw signals, as well
as pairs of the PCA, ICA and RG transformed signals. The ICA transformation was obtained using
RADICAL [25], an algorithm that directly optimizes the MI using a smoothed grid search over a
non-parametric estimate of entropy.
The results, averaged over all 10 sounds, are plotted in Fig. 4. First, we note that PCA produces a
relatively modest reduction in MI: roughly 20% for small separations, decreasing gradually as the
separation increases. We also see that ICA offers very little additional reduction over PCA for small
separations. In contrast, the nonlinear RG transformation achieves an impressive reduction (nearly
100%) in MI for pairs separated by less than 0.5 msec. This can be understood by considering the
joint and conditional histograms in Fig. 3. Since the joint density of nearby pairs is approximately
elliptically symmetric, ICA cannot provide much improvement beyond what is obtained with PCA,
while RG is expected to perform well. On the other hand, the joint densities of more distant pairs
(beyond 2.5 msec) are roughly factorial, as seen in the bottom row of Fig. 3. In this case, neither
PCA nor ICA is effective in further reducing dependency, as is seen in the plots of Fig. 4, but RG
makes the pairs more dependent, as indicated by an increase in MI above that of the original pairs
for separation over 2.5 msec. This is a direct result of the fact that the data do not adhere to the
elliptically symmetric source model assumptions underlying the RG procedure. For intermediate
separations (0.2 to 2 msec), there is a transition of the joint densities from elliptically symmetric
to factorial (second row in Fig. 3), and ICA is seen to offer a modest improvement over PCA. We
[Figure 4: two panels of MI (bits/coeff) versus separation, in msec (left, sounds) and in samples (right, images), for raw, pca/ica, and rg.]
Fig. 4. Left: Multi-information (in bits/coefficient) for pairs of bandpass filter responses of natural
audio signals, as a function of temporal separation. Shown are the MI of the raw filter response pairs,
as well as the MI of the pairs transformed with PCA, ICA, and RG. Results are averaged over 10
natural sound signals. Right: Same analysis for pairs of bandpass filter responses averaged over 8
natural images.
[Figure 5: three scatter plots, one per block size: 3×3, 7×7, and 15×15.]
Fig. 5. Reduction of MI (bits/pixel) achieved with ICA and RG transforms, compared to that achieved with PCA, for pixel blocks of various sizes. The x-axis corresponds to ΔI_pca. Pluses denote ΔI_rg, and circles denote ΔI_ica. Each plotted symbol corresponds to the result from one image in our test set.
found qualitatively similar behaviors (right column in Fig. 4) when analyzing pairs of bandpass filter
responses of natural images using the data sets described in the next section.
4.2 Dependency Reduction in Natural Images
We have also examined the ability of RG to reduce dependencies of image pixel blocks with local mean removed. We examined eight images of natural woodland scenes from the van Hateren
database [26]. We extracted the central 1024 × 1024 region from each, computed the log of the intensity values, and then subtracted the local mean [8] by convolving with an isotropic bandpass filter that captures an annulus of frequencies in the Fourier domain ranging from π/4 to π radians/pixel.
We denote blocks taken from these bandpass filtered images as xraw . These blocks were then transformed with PCA (denoted xpca ), ICA (denoted xica ) and RG (denoted xrg ). These block data are
of significantly higher dimension than the filter response pairs examined in the previous section.
For this reason, we switched our ICA computations from RADICAL to the more efficient FastICA
algorithm [27], with a contrast function g(u) = 1 − exp(−u²) and using the symmetric approach for
optimization.
We would like to compare the dependency reduction performance of each of these methods using
multi-information. However, direct estimation of MI becomes difficult and less accurate with higher
data dimensionality. Instead, as in [8], we can avoid direct estimation of MI by evaluating and
comparing the differences in MI of the various transformed blocks relative to x_raw. Specifically, we use ΔI_pca = I(x_raw) − I(x_pca) as a reference value, and compare this with ΔI_ica = I(x_raw) − I(x_ica) and ΔI_rg = I(x_raw) − I(x_rg). Full details of this computation are described in [13].
Shown in Fig. 5 are scatter plots of ΔI_pca versus ΔI_ica (red circles) and ΔI_rg (blue pluses) for various block sizes. Each point corresponds to MI computation over blocks from one of the eight bandpass-filtered test images. As the figure shows, RG achieves a significant reduction in MI for most images, and this holds over a range of block sizes, whereas ICA shows only a very small improvement relative to PCA¹. We again conclude that ICA does not offer much advantage over second-order decorrelation algorithms such as PCA, while RG offers significant improvements. These results may be attributed to the fact that the joint densities of local pixel blocks tend to be close to elliptically symmetric [17, 11].
5 Conclusion
We have introduced a new signal transformation known as radial Gaussianization (RG), which can
eliminate dependencies of sources with elliptically symmetric densities. Empirically, we have shown
that the RG transform is highly effective at removing dependencies between pairs of samples in bandpass filtered sounds and images, and within local blocks of bandpass filtered images.
One important issue underlying our development of this methodology is the intimate relation between source models and dependency reduction methods. The class of elliptically symmetric densities represents a generalization of the Gaussian family that is complementary to the class of linearly
transformed factorial densities (see Fig. 1). The three dependency reduction methods we have discussed (PCA, ICA and RG) are each associated with one of these classes, and are each guaranteed
to produce independent responses when applied to signals drawn from a density belonging to the
corresponding class. But applying one of these methods to a signal with an incompatible source
model may not achieve the expected reduction in dependency (e.g., applying ICA to an ESD), and
in some cases can even increase dependencies (e.g., applying RG to a factorial density).
Several recently published methods are related to RG. An iterative Gaussianization scheme transforms any source model to a spherical Gaussian by alternating between linear ICA transformations
and nonlinear histogram matching to map marginal densities to Gaussians [28]. However, in general, the overall transformation of iterative Gaussianization is an alternating concatenation of many
linear/nonlinear transformations, and results in a substantial distortion of the original source space.
For the special case of ESDs, RG provides a simple one-step procedure with minimal distortion.
Another nonlinear transform that has also been shown to be able to reduce higher-order dependencies in natural signals is divisive normalization [15]. In the extended version of this paper [13], we
show that there is no ESD source model whose dependencies can be completely eliminated by
divisive normalization. On the other hand, divisive normalization provides a rough approximation
to RG, which suggests that RG might provide a more principled justification for normalization-like
nonlinear behaviors seen in biological sensory systems.
There are a number of extensions of RG that are worth considering in the context of signal representation. First, we are interested in specific sub-families of ESD for which the nonlinear mapping
of signal amplitudes in RG may be expressed in closed form. Second, the RG methodology provides a solution to the efficient coding problem for ESD signals in the noise-free case, and it is
worthwhile to consider how the solution would be affected by the presence of sensor and/or channel noise. Third, we have shown that RG substantially reduces dependency for nearby samples of
bandpass filtered image/sound, but that performance worsens as the coefficients become more separated, where their joint densities are closer to factorial. Recent models of natural images [29, 30]
have used Markov random fields based on local elliptically symmetric models, and these are seen to
provide a natural transition of pairwise joint densities from elliptically symmetric to factorial. We
are currently exploring extensions of the RG methodology to such global models. And finally, we
are currently examining the statistics of signals after local RG transformations, with the expectation
that remaining statistical regularities (e.g., orientation and phase dependencies in images) can be
studied, modeled and removed with additional transformations.
References
[1] F Attneave. Some informational aspects of visual perception. Psych. Rev., 61:183–193, 1954.
¹Similar results for the comparison of ICA to PCA were obtained with a slightly different method of removing the mean values of each block [8].
[2] H B Barlow. Possible principles underlying the transformation of sensory messages. In W A Rosenblith, editor, Sensory Communication, pages 217–234. MIT Press, Cambridge, MA, 1961.
[3] B A Olshausen and D J Field. Emergence of simple-cell receptive field properties by learning a sparse code for natural images. Nature, 381:607–609, 1996.
[4] A van der Schaaf and J H van Hateren. Modelling the power spectra of natural images: Statistics and information. Vision Research, 28(17):2759–2770, 1996.
[5] A J Bell and T J Sejnowski. The "independent components" of natural scenes are edge filters. Vision Research, 37(23):3327–3338, 1997.
[6] M S Lewicki. Efficient coding of natural sounds. Nature Neuroscience, 5(4):356–363, 2002.
[7] R. Baddeley. Searching for filters with "interesting" output distributions: an uninteresting direction to explore. Network, 7:409–421, 1996.
[8] Matthias Bethge. Factorial coding of natural images: how effective are linear models in removing higher-order dependencies? J. Opt. Soc. Am. A, 23(6):1253–1268, 2006.
[9] B Wegmann and C Zetzsche. Statistical dependence between orientation filter outputs used in a human vision based image code. In Proc Visual Comm. and Image Processing, volume 1360, pages 909–922, Lausanne, Switzerland, 1990.
[10] E P Simoncelli. Statistical models for images: Compression, restoration and synthesis. In Proc 31st Asilomar Conf on Signals, Systems and Computers, volume 1, pages 673–678, Pacific Grove, CA, November 2-5 1997. IEEE Computer Society.
[11] M J Wainwright and E P Simoncelli. Scale mixtures of Gaussians and the statistics of natural images. In S. A. Solla, T. K. Leen, and K.-R. Müller, editors, Adv. Neural Information Processing Systems (NIPS*99), volume 12, pages 855–861, Cambridge, MA, May 2000. MIT Press.
[12] K.T. Fang, S. Kotz, and K.W. Ng. Symmetric Multivariate and Related Distributions. Chapman and Hall, London, 1990.
[13] S. Lyu and E. P. Simoncelli. Nonlinear extraction of "independent components" of elliptically symmetric densities using radial Gaussianization. Technical Report TR2008-911, Computer Science Technical Report, Courant Inst. of Mathematical Sciences, New York University, April 2008.
[14] D. Nash and M. S. Klamkin. A spherical characterization of the normal distribution. Journal of Multivariate Analysis, 55:56–158, 1976.
[15] O Schwartz and E P Simoncelli. Natural signal statistics and sensory gain control. Nature Neuroscience, 4(8):819–825, August 2001.
[16] William Feller. An Introduction to Probability Theory and Its Applications, volume 1. Wiley, January 1968.
[17] C Zetzsche and G Krieger. The atoms of vision: Cartesian or polar? J. Opt. Soc. Am. A, 16(7), July 1999.
[18] J. Huang and D. Mumford. Statistics of natural images and models. In IEEE International Conference on Computer Vision and Pattern Recognition (CVPR), 1999.
[19] A Srivastava, X Liu, and U Grenander. Universal analytical forms for modeling image probability. IEEE Pat. Anal. Mach. Intell., 24(9):1200–1214, Sep 2002.
[20] Y. Teh, M. Welling, and S. Osindero. Energy-based models for sparse overcomplete representations. Journal of Machine Learning Research, 4:1235–1260, 2003.
[21] P I M Johannesma. The pre-response stimulus ensemble of neurons in the cochlear nucleus. In Symposium on Hearing Theory (IPO), pages 58–69, Eindhoven, Holland, 1972.
[22] M. Studeny and J. Vejnarova. The multiinformation function as a tool for measuring stochastic dependence. In M. I. Jordan, editor, Learning in Graphical Models, pages 261–297. Dordrecht: Kluwer, 1998.
[23] T. Cover and J. Thomas. Elements of Information Theory. Wiley-Interscience, 2nd edition, 2006.
[24] A. Kraskov, H. Stögbauer, and P. Grassberger. Estimating mutual information. Phys. Rev. E, 69(6):66–82, Jun 2004.
[25] E. G. Learned-Miller and J. W. Fisher. ICA using spacings estimates of entropy. Journal of Machine Learning Research, 4(1):1271–1295, 2000.
[26] J H van Hateren and A van der Schaaf. Independent component filters of natural images compared with simple cells in primary visual cortex. Proc. R. Soc. Lond. B, 265:359–366, 1998.
[27] A. Hyvärinen. Fast and robust fixed-point algorithms for independent component analysis. IEEE Transactions on Neural Networks, 10(3):626–634, 1999.
[28] Scott Saobing Chen and Ramesh A. Gopinath. Gaussianization. In Advances in Neural Computation Systems (NIPS), pages 423–429, 2000.
[29] S. Roth and M. Black. Fields of experts: A framework for learning image priors. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), volume 2, pages 860–867, 2005.
[30] S Lyu and E P Simoncelli. Statistical modeling of images with fields of Gaussian scale mixtures. In B Schölkopf, J Platt, and T Hofmann, editors, Adv. Neural Information Processing Systems 19, volume 19, Cambridge, MA, May 2007. MIT Press.
8
| 3521 |@word worsens:1 middle:1 version:3 eliminating:2 seems:1 norm:1 compression:1 nd:1 hyv:1 covariance:3 decomposition:1 solid:1 reduction:15 liu:1 selecting:1 past:1 elliptical:1 comparing:1 scatter:1 dx:1 grassberger:1 numerical:1 distant:3 shape:2 hofmann:1 remove:3 plot:5 selected:2 xk:3 isotropic:1 filtered:6 provides:3 characterization:1 mathematical:1 direct:3 become:3 differential:2 symposium:1 interscience:1 introduce:1 pairwise:1 expected:4 ica:39 roughly:4 behavior:5 nor:1 multi:3 chi:3 informational:1 decomposed:1 spherical:14 decreasing:1 little:1 considering:2 becomes:2 provided:1 estimating:1 underlying:3 what:1 substantially:1 psych:1 developed:2 transformation:24 lsw:1 temporal:4 quantitative:1 act:1 tie:1 schwartz:1 control:1 unit:2 platt:1 appear:1 positive:1 understood:1 local:9 accordance:1 limit:1 despite:1 mach:1 analyzing:1 approximately:1 might:2 plus:2 black:1 studied:2 examined:3 suggests:2 shaded:2 lausanne:1 multiinformation:1 range:1 statistically:2 averaged:3 unique:1 practice:1 block:13 definite:1 x3:1 procedure:5 empirical:1 universal:1 bell:1 significantly:3 johannesma:1 matching:1 word:1 radial:16 pre:1 suggest:2 cannot:3 onto:1 close:1 context:2 applying:7 map:3 center:2 maximizing:1 resembling:1 roth:1 independently:2 decomposable:1 insight:1 estimator:1 regarded:1 deriving:2 fang:1 century:1 handle:1 searching:1 coordinate:1 justification:1 commercial:1 hypothesis:1 element:3 satisfying:1 recognition:2 mammalian:1 database:1 binning:1 observed:2 bottom:4 capture:1 ipo:1 region:2 ensures:1 adv:2 solla:1 removed:3 rescaled:1 substantial:1 principled:1 transforming:1 environment:1 comm:1 nash:1 feller:1 depend:1 solving:1 rin:1 completely:3 basis:3 easily:1 joint:20 sep:1 various:4 separated:3 fast:1 effective:5 london:1 sejnowski:1 hyper:1 neighborhood:1 dordrecht:1 whose:3 irg:3 posed:1 widely:1 cvpr:2 distortion:2 ability:1 statistic:12 transform:7 jointly:1 emergence:1 vocalization:1 advantage:2 grenander:1 kurtosis:1 matthias:1 analytical:1 product:2 fr:2 bow:1 alleviates:1 wht:1 achieve:1 gammatone:1 description:1 olkopf:1 kulback:1 exploiting:1 regularity:1 optimum:1 produce:3 generating:3 derive:1 radical:2 strong:1 soc:3 implemented:1 c:1 predicted:1 implies:1 indicate:1 quantify:1 direction:2 switzerland:1 radius:1 gaussianization:13 correct:1 filter:29 stochastic:1 functionality:1 human:1 elimination:1 bin:1 arinen:1 generalization:1 preliminary:1 opt:2 biological:5 eindhoven:1 adjusted:1 extension:2 exploring:1 hold:1 hall:1 normal:1 exp:2 lyu:3 mapping:6 achieves:2 early:1 estimation:2 albany:3 integrates:2 proc:3 polar:1 currently:2 largest:1 tool:1 uller:1 rough:1 mit:3 sensor:1 gaussian:37 always:1 super:1 esds:8 avoid:1 derived:2 focus:1 improvement:5 modelling:1 indicates:1 contrast:3 am:2 inst:1 dependent:1 wegmann:1 eliminate:4 initially:1 relation:1 transformed:15 interested:1 pixel:7 issue:1 among:1 ill:1 overall:1 denoted:3 orientation:2 proposes:1 animal:1 development:1 special:5 schaaf:2 marginal:10 field:7 construct:1 mutual:1 shaped:1 ng:1 sampling:1 eliminated:1 chapman:1 represents:1 extraction:1 look:1 atom:1 nearly:3 patten:1 report:2 stimulus:1 oriented:2 randomly:1 manipulated:1 gamma:1 divergence:1 intell:1 individual:2 phase:1 cns:1 william:1 freedom:1 message:1 highly:5 circular:1 mixture:2 yielding:1 zetzsche:2 accurate:1 grove:1 edge:1 closer:2 orthogonal:2 unless:1 modest:2 circle:3 plotted:2 overcomplete:1 minimal:1 column:9 modeling:2 kurtotic:1 cover:1 measuring:1 restoration:1 hearing:1 subset:2 pout:1 
uninteresting:1 fastica:1 examining:1 osindero:1 stored:1 dependency:41 proximal:2 st:2 density:52 fundamental:1 international:1 together:1 bethge:1 synthesis:1 nongaussian:1 again:1 central:2 containing:2 huang:1 dr:1 conf:1 convolving:1 expert:1 rescaling:1 coding:4 coefficient:4 depends:1 tion:1 closed:1 portion:1 red:2 accuracy:1 variance:6 characteristic:1 efficiently:1 ensemble:1 correspond:2 spaced:2 miller:1 raw:4 produced:1 studeny:1 annulus:2 worth:1 unaffected:1 published:1 iica:3 explain:1 siwei:1 phys:1 rosenblith:1 energy:1 frequency:3 attneave:1 naturally:1 associated:1 mi:19 attributed:1 radian:1 sampled:1 auditory:2 gain:1 radially:2 dimensionality:3 amplitude:2 actually:1 higher:6 courant:1 methodology:6 response:19 maximally:1 april:1 arranged:1 leen:1 furthermore:3 correlation:1 hand:2 nonlinear:14 perhaps:1 resemblance:1 indicated:1 olshausen:1 effect:3 barlow:1 hence:1 alternating:2 symmetric:27 spherically:7 leibler:1 illustrated:1 white:1 uniquely:1 generalized:1 demonstrate:2 image:36 wise:1 ranging:1 recently:2 rotation:2 empirically:2 blk:3 conditioning:1 volume:6 discussed:1 interpretation:1 kluwer:1 onedimensional:1 marginals:1 significant:2 composition:1 cambridge:3 rd:2 grid:1 bandpassfiltered:1 ssd:7 cortex:2 impressive:1 whitening:5 base:1 multivariate:4 recent:3 showed:1 optimizes:1 apart:1 success:2 inconsistency:1 der:2 transmitted:1 preserving:1 greater:2 additional:3 seen:5 ogbauer:1 signal:39 dashed:1 july:1 multiple:2 simoncelli:6 sound:16 full:1 reduces:1 technical:2 characterized:1 esd:11 offer:5 sphere:1 believed:1 cross:1 plug:1 equally:2 dkl:1 whitened:2 vision:6 expectation:1 histogram:15 represent:2 normalization:4 achieved:9 cell:2 irregular:1 whereas:1 spacing:1 interval:1 diagram:2 adhere:1 source:23 sch:1 ineffective:1 hz:2 recording:1 tend:1 jordan:1 call:2 presence:1 kraskov:1 intermediate:1 variety:1 audio:1 restrict:1 reduce:2 pca:30 york:3 elliptically:18 generally:1 woodland:1 factorial:19 transforms:5 band:1 clip:3 vejnarova:1 reduced:1 estimated:2 neuroscience:2 blue:1 affected:1 group:1 redundancy:2 demonstrating:1 suny:1 drawn:1 neither:1 clean:1 rout:1 inverse:1 striking:3 family:2 kotz:1 separation:11 incompatible:1 coeff:2 bit:4 guaranteed:2 followed:2 constraint:1 precisely:1 scene:2 nearby:5 x7:1 fourier:1 aspect:1 lond:1 relatively:1 department:1 pacific:1 peripheral:1 belonging:1 beneficial:1 slightly:2 shallow:1 rev:2 projecting:1 invariant:2 pr:2 gradually:1 taken:2 asilomar:1 mutually:1 previously:1 pin:1 operation:3 gaussians:2 apply:3 eight:2 worthwhile:1 appropriate:2 subtracted:1 alternative:1 original:4 thomas:1 top:2 denotes:3 ensure:1 remaining:1 graphical:1 especially:2 society:1 mumford:1 receptive:3 parametric:2 dependence:2 primary:1 traditional:1 exhibit:3 kth:1 higherorder:1 concatenation:1 cochlear:1 reason:1 assuming:1 length:1 code:2 modeled:1 relationship:2 ellipsoid:1 illustration:1 providing:1 difficult:1 negative:1 design:1 anal:1 perform:1 teh:1 vertical:1 neuron:2 observation:3 markov:1 finite:1 ramesh:1 november:1 displayed:1 january:1 pat:1 extended:1 communication:1 smoothed:1 august:1 intensity:2 introduced:1 pair:22 nonlinearly:1 learned:1 nip:2 able:2 beyond:3 suggested:2 below:1 perception:1 pattern:1 scott:1 green:1 including:1 wainwright:1 power:1 decorrelation:2 natural:37 scheme:1 temporally:1 axis:1 jun:1 prior:1 understanding:1 literature:1 relative:3 fully:1 interesting:2 proportional:1 versus:1 localized:1 gopinath:1 switched:1 nucleus:1 degree:1 consistent:5 principle:2 editor:4 
heavy:1 cd:1 row:9 summary:1 surprisingly:1 free:1 bias:1 weaker:2 wide:1 characterizing:1 sparse:3 venn:2 van:5 dimension:1 transition:2 cumulative:1 contour:6 evaluating:1 sensory:7 commonly:1 qualitatively:1 far:1 welling:1 transaction:1 global:1 assumed:4 eero:2 conclude:1 spectrum:1 continuous:1 search:1 iterative:2 decade:1 tailed:1 table:1 gaussianized:1 channel:1 nature:3 robust:1 ca:1 symmetry:1 domain:1 linearly:9 motivation:1 definitive:1 noise:2 edition:1 complementary:2 fig:21 ny:2 wiley:2 sub:1 msec:8 bandpass:21 exponential:1 perceptual:1 intimate:1 third:1 x15:1 removing:4 theorem:1 specific:1 symbol:1 nyu:1 r2:2 dk:1 normalizing:3 consist:1 intrinsic:1 effectively:1 occurring:1 krieger:1 cartesian:1 chen:1 suited:1 rg:41 intersection:2 entropy:4 explore:1 visual:5 expressed:1 lewicki:1 u2:1 holland:1 monotonic:1 corresponds:4 extracted:1 cdf:2 ma:3 conditional:10 identity:1 fisher:1 content:1 determined:2 specifically:3 reducing:4 typical:1 except:1 principal:1 pas:1 divisive:3 latter:1 hateren:3 baddeley:1 srivastava:1 |
Extracting State Transition Dynamics from Multiple
Spike Trains with Correlated Poisson HMM
Kentaro Katahira (1,2), Jun Nishikawa (2), Kazuo Okanoya (2), and Masato Okada (1,2)
(1) Graduate School of Frontier Sciences, The University of Tokyo, Kashiwa, Chiba 277-8561, Japan
(2) RIKEN Brain Science Institute, Wako, Saitama 351-0198, Japan
[email protected]
Abstract
Neural activity is non-stationary and varies across time. Hidden Markov Models (HMMs) have been used to track the transitions among quasi-stationary discrete neural states. Within this context, independent Poisson models have been used for the output distribution of HMMs; hence, such models are incapable of tracking changes in correlation that occur without a change in firing rate. To achieve this, we applied a multivariate Poisson distribution with correlation terms as the output distribution of HMMs. We formulated a Variational Bayes (VB) inference for the model. The VB can automatically determine the appropriate number of hidden states and correlation types while avoiding the overlearning problem. We developed an efficient algorithm for computing posteriors using the recursive relationship of a multivariate Poisson distribution. We demonstrate the performance of our method on synthetic data and on a real spike train recorded from a songbird.
1 Introduction
Neural activities are highly non-stationary and vary from time to time according to stimuli and
internal state changes. Hidden Markov Models (HMMs) have been used for segmenting spike trains
into quasi-stationary states, in which the spike train is regarded as stationary, hence the statistics
(e.g., cross-correlation and inter-spike interval) can be calculated [1, 2, 3]. We can also calculate
these statistics by using time-binned count data (e.g., the Peri-Stimulus Time Histogram or PSTH).
However, we need a large trial set to obtain good estimates for all bins, which can be problematic
in neurophysiological experiments. HMMs enlarge the effective amount of data for estimating the
statistics. Moreover, the PSTH approach cannot be applied to cases where we cannot align spike
data to stimuli or the behaviors of animals. HMMs are suitable for such situations.
Previous studies using HMMs have assumed that all neural activities were independent of one another given the hidden states; hence, the models could not discriminate states whose firing rates
were almost the same but whose correlations among neurons were different. However, there have been reports showing that the correlation between neurons can change within a fraction of a second without any modulation of the firing rate (e.g., [4]). We developed a method that enables us to segment spike trains based on differences in neuronal correlation as well as in firing rate. Treating neuronal correlations (including higher-order, and not only pairwise, correlations) among multiple spike trains has been one of the central challenges in computational neuroscience. There have been approaches to calculating correlations by binarizing spike trains with small bin sizes [5, 6]. These approaches are limited to treating correlations at short bin lengths that include at most one spike. Here, we
introduce a multivariate Poisson distribution with a higher-order correlation structure (simply abbreviated as a correlated Poisson distribution) as the output distribution for HMMs. The correlated
Poisson distribution can incorporate correlation at arbitrary time intervals.
To construct an optimal model from limited neurophysiological data, it is crucial to select a model of appropriate complexity and to avoid over-fitting. In our model, the model complexity corresponds to the number of hidden states and the types of correlations (we have a choice as to whether to include pairwise correlation, third-order correlation, or higher-order correlation). The maximum
likelihood approach adopted in previous studies [1, 7, 8] cannot be used for this purpose since the
likelihood criterion simply increases as the number of model parameters increases. A number of
model-selection criteria used with the maximum likelihood approach, i.e., Akaike's information criterion (AIC), the minimum description length (MDL), and the Bayesian information criterion (BIC), are based on an asymptotic assumption that holds only when a large amount of data is available. Furthermore,
asymptotic normality, which is assumed in these criteria, does not hold in non-identifiable models
including HMMs [9].
In this study, we applied the variational Bayes (VB) method [10, 11] to HMMs whose output distribution is a correlated Poisson distribution. VB is one of the approximations of the Bayes method
and can avoid over-fitting even when the sample size is small. An optimal model structure can be determined based on the tractable variational free energy, which is an upper bound on the negative marginal log-likelihood. Since the variational free energy does not rely on asymptotic assumptions,
VB works well even when the sample size is small in practice [12]. The computation of posteriors for a correlated Poisson distribution imposes serious computational burdens. We developed an
efficient algorithm to calculate these by using the recurrence relationship of a multivariate Poisson
distribution [13]. To the best of our knowledge, this is the first report to introduce a VB method for a correlated Poisson distribution. Although Markov chain Monte Carlo (MCMC) methods have been applied to inferring posteriors for a correlated Poisson distribution [14], MCMC schemes are
computationally demanding.
We demonstrate the performance of the method both on a synthesized spike train and on real spike data recorded from the forebrain nucleus for vocal control (HVC) of an anesthetized songbird.
2 Method

2.1 HMM with multivariate Poisson distribution
Suppose that we obtain spike trains of $C$ neurons by using simultaneous recordings. As preprocessing, we first discretize the spike trains with a non-overlapping window of length $\Delta$ to obtain spike-count data. The number of spikes of neuron $c$ in the $t$th window of the $n$th trial is denoted by $x^{n,t}_c$. The spike-count data are summarized as $X^{n,t} = \{x^{n,t}_c\}_{c=1}^{C}$ and $X = \{X^{n,t}\}_{n=1,t=1}^{N,T}$. Let us assume the spike-count data set $X$ is produced by a $K$-valued discrete hidden state, $Y = \{y^{n,t}\}_{n=1,t=1}^{N,T}$, and that the sequences of hidden states are generated by a first-order Markov process whose state transition matrix is $a = \{a_{ij}\}_{i=1,j=1}^{K,K}$: $a_{ij} = p(y^{n,t} = j \mid y^{n,t-1} = i)$, $\forall n,t$, and whose initial state probability is $\pi = \{\pi_i\}$: $\pi_i = p(y^{n,1} = i)$, $\forall n$, where $\sum_{i=1}^{K} \pi_i = 1$, $\sum_{j=1}^{K} a_{ij} = 1$, and $a_{ij} \geq 0$, $\forall i,j$. Hidden states are represented by a binary variable $y_k^{n,t}$ such that $y_k^{n,t} = 1$ if the hidden state at the $t$th window of the $n$th trial is $k$, and $y_k^{n,t} = 0$ otherwise. At state $k$, the spike count is assumed to be generated according to $p(x^{n,t}_c \mid \theta_k)$, whose specific form is given in the following.
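As a concrete illustration of this preprocessing step, the following minimal Python sketch (with variable names of our choosing; `delta` plays the role of $\Delta$) bins spike times into the count array $x^{t}_c$ for a single trial.

```python
import numpy as np

def bin_spike_trains(spike_times, t_total, delta):
    """Bin spike times (seconds) of C neurons into spike counts.

    spike_times : list of C arrays, one array of spike times per neuron.
    t_total     : duration of the trial in seconds.
    delta       : window length in seconds (e.g., 0.1 s).
    Returns an array of shape (T, C), where T = floor(t_total / delta).
    """
    T = int(t_total / delta)
    edges = np.arange(T + 1) * delta
    counts = np.stack(
        [np.histogram(st, bins=edges)[0] for st in spike_times], axis=1
    )
    return counts  # counts[t, c] corresponds to x_c^t

# Example: 3 neurons, a 10 s trial, 100 ms windows.
rng = np.random.default_rng(0)
spikes = [np.sort(rng.uniform(0, 10, size=rng.poisson(50))) for _ in range(3)]
X = bin_spike_trains(spikes, t_total=10.0, delta=0.1)  # shape (100, 3)
```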
Next, we introduce the correlated Poisson distribution. For brevity, we omit the superscript $n,t$ for the moment. As an example, let us first consider the trivariate Poisson model ($C = 3$) with second- and third-order correlations. We introduce auxiliary hidden variables $s_l$, $l \in \Omega \equiv \{1, 2, 3, 12, 13, 23, 123\}$, which satisfy
$$x_1 = s_1 + s_{12} + s_{13} + s_{123}, \quad x_2 = s_2 + s_{12} + s_{23} + s_{123}, \quad x_3 = s_3 + s_{13} + s_{23} + s_{123}.$$
Each $s_l$ obeys $P(s_l \mid \lambda_l)$, where $P(x \mid \lambda)$ denotes a univariate Poisson distribution: $P(x \mid \lambda) = \frac{\lambda^x}{x!} e^{-\lambda}$. Due to the reproducing property of the Poisson distribution, each $x_i$ also marginally follows a Poisson distribution with parameter $\lambda_i + \lambda_{ij} + \lambda_{ik} + \lambda_{ijk}$, $i,j,k \in \{1,2,3\}$, $i \neq j \neq k$. The mean vector of this distribution is $(\lambda_1 + \lambda_{12} + \lambda_{13} + \lambda_{123},\ \lambda_2 + \lambda_{12} + \lambda_{23} + \lambda_{123},\ \lambda_3 + \lambda_{13} + \lambda_{23} + \lambda_{123})^T$
($T$ denotes transposition) and its variance-covariance matrix is given by
$$\begin{pmatrix}
\lambda_1 + \lambda_{12} + \lambda_{13} + \lambda_{123} & \lambda_{12} + \lambda_{123} & \lambda_{13} + \lambda_{123} \\
\lambda_{12} + \lambda_{123} & \lambda_2 + \lambda_{12} + \lambda_{23} + \lambda_{123} & \lambda_{23} + \lambda_{123} \\
\lambda_{13} + \lambda_{123} & \lambda_{23} + \lambda_{123} & \lambda_3 + \lambda_{13} + \lambda_{23} + \lambda_{123}
\end{pmatrix}.$$
The general definition of the multivariate Poisson distribution is given using the vector $S = (s_1, s_2, \ldots, s_L)^T$ and a $C \times L$ matrix $B = [B_1, B_2, \ldots, B_J]$ with 0/1 elements, where each $B_j$, $j = 1, \ldots, J$, is a sub-matrix of dimensions $C \times \binom{C}{j}$, and $\binom{C}{j}$ is the number of combinations of choosing $j$ from $C$ elements. The vector $x = (x_1, x_2, \ldots, x_C)^T$ defined as $x = BS$ follows a multivariate Poisson distribution. In the above trivariate example, $S = (s_1, s_2, s_3, s_{12}, s_{13}, s_{23}, s_{123})^T$ and $B = [B_1, B_2, B_3]$, where
$$B_1 = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}, \quad B_2 = \begin{pmatrix} 1 & 1 & 0 \\ 1 & 0 & 1 \\ 0 & 1 & 1 \end{pmatrix}, \quad B_3 = \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix}. \qquad (1)$$
We can also consider a model with only second-order correlations by setting $B = [B_1, B_2]$ and $S = (s_1, s_2, s_3, s_{12}, s_{13}, s_{23})^T$, or a model with only the third-order correlation by setting $B = [B_1, B_3]$ and $S = (s_1, s_2, s_3, s_{123})^T$. The probability mass function of $x$ is given by
$$p(x \mid \theta_k) = \sum_{S \in G(x)} \prod_{l \in \Omega} P(s_l \mid \lambda_{k,l}), \qquad (2)$$
where $G(x)$ denotes the set of $S$ such that $x = BS$. The calculation of this probability can be computationally expensive, since the summation over possible $S$ might be exhaustive, especially when there is a large number of spikes per window. However, the computational burden can be alleviated by using the recurrence relations of the multivariate Poisson distribution [13]. For further details on the computation, see the Appendix. We call the HMM with this output distribution the Correlated Poisson HMM (CP-HMM). When we assume that the spike counts of all neurons are independent (i.e., $B = B_1$, $S = (s_1, s_2, s_3)^T$), the output distribution reduces to
$$p(x \mid \theta_k) = \prod_{c=1}^{C} P(x_c \mid \lambda_{k,c}). \qquad (3)$$
We call the HMM with this distribution the independent Poisson HMM (IP-HMM). IP-HMM is a special case of CP-HMM.
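To make Eqs. (2)-(3) concrete, the sketch below evaluates the trivariate mass function of Eq. (2) by explicitly enumerating $G(x)$ for the full model $\Omega = \{1, 2, 3, 12, 13, 23, 123\}$. This brute-force version only defines the quantity; the recurrence relations of the Appendix are what make the computation practical. The parameter values are arbitrary.

```python
import itertools
import numpy as np
from scipy.stats import poisson

# Columns of B = [B1, B2, B3] for the full trivariate model, indexed by Omega.
OMEGA = ["1", "2", "3", "12", "13", "23", "123"]
B = np.array([[1, 0, 0, 1, 1, 0, 1],
              [0, 1, 0, 1, 0, 1, 1],
              [0, 0, 1, 0, 1, 1, 1]])

def correlated_poisson_pmf(x, lam):
    """p(x | lam) = sum over S in G(x) of prod_l Poisson(s_l | lam_l), x = B S."""
    x = np.asarray(x)
    total = 0.0
    # Each s_l is at most max(x) because every column of B has a 1 somewhere.
    ranges = [range(int(x.max()) + 1) for _ in OMEGA]
    for s in itertools.product(*ranges):
        if np.array_equal(B @ np.array(s), x):
            total += np.prod(poisson.pmf(s, lam))
    return total

lam = np.array([0.5, 0.5, 0.5, 0.1, 0.1, 0.1, 1.0])  # (lam_1, ..., lam_123)
print(correlated_poisson_pmf([1, 1, 1], lam))
```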
The complete log-likelihood for the CP-HMM is
$$\log p(X, Y, S \mid \theta) = \sum_{n=1}^{N} \Big[ \sum_{k=1}^{K} y_k^{n,1} \log \pi_k + \sum_{t=2}^{T} \sum_{k=1}^{K} \sum_{k'=1}^{K} y_k^{n,t-1} y_{k'}^{n,t} \log a_{kk'} + \sum_{t=1}^{T} \sum_{k=1}^{K} y_k^{n,t} \log \Big\{ 1_{G(X^{n,t})}[S^{n,t}] \prod_{l \in \Omega} P(s_l^{n,t} \mid \lambda_{k,l}) \Big\} \Big], \qquad (4)$$
where $\theta = (\pi, a, \lambda)$ and $1_A[x]$ is an indicator function that equals 1 if $x \in A$ and 0 otherwise.
2.2 Variational Bayes
Here, we derive the VB method for CP-HMMs. We use conjugate prior distributions for all parameters of CP-HMMs, which enables the posterior distributions to have the same form as the prior distributions. The prior distribution for the initial probability distribution $\pi$ and for the state transition matrix $a$ is the Dirichlet distribution:
$$\phi(\pi) = \mathcal{D}(\{\pi_k\}_{k=1}^{K} \mid \{u_k^{(\pi)}\}_{k=1}^{K}), \qquad \phi(a) = \prod_{i=1}^{K} \mathcal{D}(\{a_{ik}\}_{k=1}^{K} \mid \{u_k^{(A)}\}_{k=1}^{K}), \qquad (5)$$
where $\mathcal{D}(\cdot)$ is defined as $\mathcal{D}(\{a_k\}_{k=1}^{K} \mid \{u_k\}_{k=1}^{K}) = \frac{\Gamma(\sum_{k=1}^{K} u_k)}{\prod_{k=1}^{K} \Gamma(u_k)} \prod_{k=1}^{K} a_k^{u_k - 1}$. The conjugate prior for the Poisson means $\lambda = \{\lambda_{k,l}\}_{k=1,\,l \in \Omega}^{K}$ of the auxiliary hidden variables $\{s_l\}_{l \in \Omega}$ is
$$\phi(\lambda) = \prod_{k=1}^{K} \prod_{l \in \Omega} \mathcal{G}(\lambda_{k,l} \mid \alpha_0, \beta_0), \qquad (6)$$
where $\mathcal{G}(\cdot)$ denotes the Gamma distribution, defined as $\mathcal{G}(\lambda \mid \alpha, \beta) = \frac{\beta^{\alpha} \lambda^{\alpha-1} e^{-\beta\lambda}}{\Gamma(\alpha)}$. In the experiments we discuss in the following, we set the hyperparameters to $u_j^{(\pi)} = u_j^{(A)} = 0.1$, $\forall j$, and $\alpha_0 = 0.1$, $\beta_0 = 0.1$.
The Bayesian method calculates $p(\theta, Z \mid X, M)$, the posterior over the unknown parameters and the hidden variable set $Z = (Y, S)$, given the data and the model structure $M$ (in our case, $M$ indicates the number of hidden states and the correlation structure). However, the calculation of this posterior involves a difficult integral. The VB approach approximates the true posterior $p(\theta, Z \mid X, M)$ by a factored test distribution $r(\theta)Q(Z)$. To make the test distribution close to the true posterior, we need to minimize the Kullback-Leibler (KL) divergence from $r(\theta)Q(Z)$ to $p(\theta, Z \mid X, M)$:
$$\mathrm{KL}(r(\theta)Q(Z) \,\|\, p(\theta, Z \mid X, M)) \equiv \Big\langle \log \frac{r(\theta)Q(Z)}{p(Z, \theta \mid X, M)} \Big\rangle_{r(\theta)Q(Z)} = \log p(X \mid M) - \langle \log p(X, Z, \theta \mid M) \rangle_{r(\theta)Q(Z)} - H_r(\theta) - H_Q(Z), \qquad (7)$$
where $\langle \cdot \rangle_{p(x)}$ denotes the expectation over $p(x)$ and $H_p(x) = -\langle \log p(x) \rangle_{p(x)}$ is the entropy of the distribution $p(x)$. Since the log marginal likelihood $\log p(X \mid M)$ is independent of $r(\theta)$ and $Q(Z)$, minimizing the KL divergence is equivalent to minimizing the variational free energy
$$\mathcal{F} \equiv -\langle \log p(X, Z, \theta \mid M) \rangle_{r(\theta)Q(Z)} - H_r(\theta) - H_Q(Z). \qquad (8)$$
VB alternately minimizes $\mathcal{F}$ with respect to $Q(Z)$ and $r(\theta)$. The minimization with respect to $Q(Z)$ is called the VB-E step, and that with respect to $r(\theta)$ the VB-M step.
VB-E step. By using the Lagrange multiplier method, the VB-E step is derived as
$$Q(Z) = \frac{1}{C_Q} \exp \langle \log p(X, Z \mid \theta) \rangle_{r(\theta)},$$
where $C_Q$ is a normalization constant. More specifically, the following quantities are calculated:
$$\langle y_k^{n,t} \rangle_{Q(Z)} = \frac{\tilde{p}(y_k^{n,t} = 1 \mid X^{n,1:t})\, \tilde{p}(X^{n,t+1:T} \mid y_k^{n,t} = 1)}{\sum_{i=1}^{K} \tilde{p}(y_i^{n,t} = 1 \mid X^{n,1:t})\, \tilde{p}(X^{n,t+1:T} \mid y_i^{n,t} = 1)},$$
$$\langle y_k^{n,t-1} y_{k'}^{n,t} \rangle_{Q(Z)} = \frac{\tilde{p}(y_k^{n,t-1} = 1 \mid X^{n,1:t-1})\, \tilde{a}_{kk'}\, \tilde{p}(X^{n,t} \mid \theta_{k'})\, \tilde{p}(X^{n,t+1:T} \mid y_{k'}^{n,t} = 1)}{\sum_{i=1}^{K} \sum_{j=1}^{K} \tilde{p}(y_i^{n,t-1} = 1 \mid X^{n,1:t-1})\, \tilde{a}_{ij}\, \tilde{p}(X^{n,t} \mid \theta_j)\, \tilde{p}(X^{n,t+1:T} \mid y_j^{n,t} = 1)}.$$
These quantities are obtained by the forward-backward algorithm [11]. The sub-normalized quantity $\tilde{a}_{ij}$ is defined as $\tilde{a}_{ij} = \exp(\langle \log a_{ij} \rangle_{r(a)})$, and $\tilde{p}(X^{n,t} \mid \theta_k)$ is
$$\tilde{p}(X^{n,t} \mid \theta_k) = \sum_{S^{n,t} \in G(X^{n,t})} \prod_{l \in \Omega} \tilde{P}(s_l^{n,t} \mid \lambda_{k,l}), \qquad (9)$$
where $\tilde{P}(s_l \mid \lambda_{k,l})$ is a sub-normalized distribution:
$$\tilde{P}(s_l \mid \lambda_{k,l}) = \exp\big\{ s_l \log \tilde{\lambda}_{k,l} - \log(s_l!) - \bar{\lambda}_{k,l} \big\}, \qquad (10)$$
where
$$\tilde{\lambda}_{k,l} = \exp\{\langle \log \lambda_{k,l} \rangle_{r(\lambda_k)}\}, \qquad \bar{\lambda}_{k,l} = \langle \lambda_{k,l} \rangle_{r(\lambda_k)}.$$
These quantities can be calculated by using the recurrence relations of the multivariate Poisson distribution (see the Appendix). The posterior for $S$ is given by
$$\langle s_{k,l}^{n,t} \rangle_{Q(Z)} = \langle y_k^{n,t} \rangle_{Q(Z)} \cdot \frac{\sum_{S^{n,t} \in G(X^{n,t})} s_{k,l}^{n,t} \prod_{l' \in \Omega} \tilde{P}(s_{k,l'}^{n,t} \mid \lambda_{k,l'})}{\sum_{S^{n,t} \in G(X^{n,t})} \prod_{l' \in \Omega} \tilde{P}(s_{k,l'}^{n,t} \mid \lambda_{k,l'})}. \qquad (11)$$
This, too, is calculated by using the recurrence relations of the multivariate Poisson distribution.
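The VB-E recursions above are ordinary forward-backward recursions run with the sub-normalized quantities $\tilde{a}$ and $\tilde{p}$. A minimal sketch, assuming the likelihood terms $\tilde{p}(X^{n,t} \mid \theta_k)$ for one trial have already been tabulated into a $T \times K$ matrix, could look as follows (function and variable names are ours).

```python
import numpy as np

def vb_e_forward_backward(like, a_tilde, pi_tilde):
    """Forward-backward with sub-normalized HMM quantities.

    like     : (T, K) matrix, like[t, k] = p~(X^t | theta_k).
    a_tilde  : (K, K) sub-normalized transition matrix.
    pi_tilde : (K,) sub-normalized initial probabilities.
    Returns gamma[t, k] = <y_k^t> and xi[t, i, j] = <y_i^{t-1} y_j^t>.
    """
    T, K = like.shape
    alpha = np.zeros((T, K))   # forward messages, rescaled at each step
    beta = np.zeros((T, K))    # backward messages
    c = np.zeros(T)            # per-step scaling factors
    alpha[0] = pi_tilde * like[0]
    c[0] = alpha[0].sum(); alpha[0] /= c[0]
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ a_tilde) * like[t]
        c[t] = alpha[t].sum(); alpha[t] /= c[t]
    beta[-1] = 1.0
    for t in range(T - 2, -1, -1):
        beta[t] = (a_tilde @ (like[t + 1] * beta[t + 1])) / c[t + 1]
    gamma = alpha * beta
    gamma /= gamma.sum(axis=1, keepdims=True)
    xi = np.zeros((T, K, K))
    for t in range(1, T):
        w = alpha[t - 1][:, None] * a_tilde * (like[t] * beta[t])[None, :]
        xi[t] = w / w.sum()
    return gamma, xi
```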
VB-M step. By again using the Lagrange multiplier method, the VB-M step is derived as
$$r(\theta) = \frac{1}{C_r} \phi(\theta) \exp \langle \log p(X, Z \mid \theta) \rangle_{Q(Z)},$$
Figure 1: Typical examples of estimation results for the correlated Poisson-HMM with third-order correlation applied to a simulated spike train. A: From top, 1) the spike trains of three neurons, 2) the probability of state $k$ at window $t$, $\langle y_k^t \rangle_{Q(Z)}$, 3) the spike-count data $x_i^t$, and 4) the posterior means of the hidden variables $s_{k,l}^t$. B: Posterior mean of the Poisson parameter $\lambda_{k,l}$ for all states. C: Variational free energy for all models (independent, 2nd-order, 3rd-order (true model), and full-order) as a function of the number of hidden states.
where $C_r$ is a normalization constant. More specifically, $r(\theta) = r(\pi)\, r(a)\, r(\lambda)$, with
$$r(\pi) = \mathcal{D}(\{\pi_k\}_{k=1}^{K} \mid \{w_k^{\pi}\}_{k=1}^{K}), \qquad r(a) = \prod_{i=1}^{K} \mathcal{D}(\{a_{ik}\}_{k=1}^{K} \mid \{w_{ik}^{a}\}_{k=1}^{K}), \qquad r(\lambda) = \prod_{k=1}^{K} \prod_{l \in \Omega} \mathcal{G}(\lambda_{k,l} \mid w_{k,l}^{\alpha}, w_k^{\beta}),$$
where
$$w_{k,l}^{\alpha} = \alpha_0 + \sum_{n=1}^{N} \sum_{t=1}^{T} \langle s_{k,l}^{n,t} \rangle_{Q(Z)}, \qquad w_k^{\beta} = \beta_0 + \sum_{n=1}^{N} \sum_{t=1}^{T} \langle y_k^{n,t} \rangle_{Q(Z)},$$
$$w_j^{\pi} = u_j^{(\pi)} + \sum_{n=1}^{N} \langle y_j^{n,1} \rangle_{Q(Z)}, \qquad w_{ij}^{a} = u_j^{(A)} + \sum_{n=1}^{N} \sum_{t=2}^{T} \langle y_i^{n,t-1} y_j^{n,t} \rangle_{Q(Z)}.$$
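In code, the VB-M step amounts to one accumulation per hyperparameter once the VB-E expectations are available; a sketch with our own naming conventions:

```python
import numpy as np

def vb_m_step(gamma, xi, s_mean, u_pi, u_a, alpha0, beta0):
    """VB-M hyperparameter updates.

    gamma  : (N, T, K) posterior means <y_k^{n,t}>.
    xi     : (N, T, K, K) posterior means <y_i^{n,t-1} y_j^{n,t}> (xi[:, 0] unused).
    s_mean : (N, T, K, L) posterior means <s_{k,l}^{n,t}>.
    Returns the Dirichlet and Gamma posterior hyperparameters.
    """
    w_pi = u_pi + gamma[:, 0, :].sum(axis=0)        # initial-state counts
    w_a = u_a + xi[:, 1:, :, :].sum(axis=(0, 1))    # expected transition counts
    w_alpha = alpha0 + s_mean.sum(axis=(0, 1))      # (K, L) Gamma shape updates
    w_beta = beta0 + gamma.sum(axis=(0, 1))         # (K,) Gamma rate updates
    return w_pi, w_a, w_alpha, w_beta
```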
The VB algorithm computes the VB-E and VB-M steps alternately until the variational free energy converges to a local minimum. In the experiments we discuss in the following, we started the algorithm from 10 different initializations to avoid poor local minima.
3 Demonstration on a synthetic spike train
Using a synthetic spike train of three neurons, let us first demonstrate how to apply our method. In the case of three neurons, we have four choices for the correlation types: (1) no correlation term, (2) only second-order correlation terms, (3) only a third-order correlation term, and (4) both. Hereafter, we call these IP-HMM, 2CP-HMM, 3CP-HMM, and full-CP-HMM, respectively. We generated spike trains from a multivariate Poisson distribution with only a third-order correlation, whose Poisson means depend on the period as follows: (a) $\lambda_1 = \lambda_2 = \lambda_3 = 0.5$, $\lambda_{123} = 0.0$ for $t \in [1, 10]$; (b) $\lambda_1 = \lambda_2 = \lambda_3 = 1.5$, $\lambda_{123} = 0.0$ for $t \in [11, 50]$; (c) $\lambda_1 = \lambda_2 = \lambda_3 = 0.5$, $\lambda_{123} = 1.0$ for $t \in [51, 90]$; and (d) $\lambda_1 = \lambda_2 = \lambda_3 = 0.5$, $\lambda_{123} = 0.0$ for $t \in [91, 100]$. The periods (b) and (c) have the same mean firing rate (the mean spike count in one window is $\lambda_i + \lambda_{123} = 1.5$, $i \in \{1, 2, 3\}$), but they differ only in the third-order correlation. Therefore, classical Poisson-HMMs that employ an independent Poisson assumption [1, 2, 7] are not able to segment them into distinct states. Figure 1A shows that our method was able to do so.
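For reference, the synthetic data described above can be generated in a few lines by drawing the auxiliary variables $s$ and mapping them through $B = [B_1, B_3]$; the period boundaries follow the text, and the random seed is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
T, N = 100, 10
# lam[t] = (lam_1, lam_2, lam_3, lam_123) for window t.
lam = np.zeros((T, 4))
lam[:10]   = [0.5, 0.5, 0.5, 0.0]   # period (a)
lam[10:50] = [1.5, 1.5, 1.5, 0.0]   # period (b)
lam[50:90] = [0.5, 0.5, 0.5, 1.0]   # period (c)
lam[90:]   = [0.5, 0.5, 0.5, 0.0]   # period (d)

B = np.array([[1, 0, 0, 1],
              [0, 1, 0, 1],
              [0, 0, 1, 1]])        # [B1, B3]: independent + 3rd-order terms
s = rng.poisson(lam[None, :, :].repeat(N, axis=0))  # (N, T, 4) latent counts
X = s @ B.T                                         # (N, T, 3) spike counts
```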
Table 1: Results of model selection for spike trains from HVC.
  Stimulus   K   Correlation Structure
  BOS        4   Independent
  REV        4   3rd-order
  Silent     3   Independent

Table 2: Results of model selection with the time-stationary assumption (K = 1).
  Stimulus   Correlation Structure
  BOS        2nd-order
  REV        2nd-order
  Silent     Full-order

Figure 2: Typical examples of VB estimates for spike trains from HVC with (A) the bird's own song (BOS), (B) its reversed song (REV), and (C) no stimulus presented (Silent); each panel spans 0-5 sec. The model selected by the variational free energy was used for each condition (see Table 1). Each row corresponds to a different trial. The background color indicates the most probable state at each time window. The right panels indicate the posterior mean $\bar{\lambda}_{k,l}$ for all states.
We generated spike trains for 10 trials, but only one trial is shown. The periods (b) and (c) are segmented into states 1 and 2, whose Poisson means are different (Fig. 1B). The bottom four rows in Fig. 1A plot the posterior means of $\{s_{k,l}^{t}\}_{l \in \Omega}$ (here, we omit the trial index $n$). These plots separately visualize the contributions of the independent factors and of the correlation factor to the spike counts $x_c^t$, $c \in \{1, 2, 3\}$. The spike counts in period (b) can be viewed as independent firing: even if spikes fall in the same window, this can be regarded as a coincidence predicted by the assumption of independent firing. In contrast, the spike counts in period (c) can be regarded as having been contributed by the common factor $s_{2,123}^{t}$ as well as by the independent factors $s_{2,i}^{t}$, $i \in \{1, 2, 3\}$. Here, we used a 3CP-HMM with three hidden states. Because periods (a) and (d) have identical statistics, it is clear that a model with three states ($K = 3$) is sufficient for modeling these data. Can we, then, select this model from the data? Figure 1C shows the variational free energy $\mathcal{F}$. The 3CP-HMM with three hidden states yields the lowest $\mathcal{F}$, implying that it is optimal. 3CP-HMMs with fewer hidden states, IP-HMMs, and 2CP-HMMs cannot represent the statistical structure of the data and hence yield higher $\mathcal{F}$. 3CP-HMMs with more hidden states ($K > 3$) and full-CP-HMMs ($K \geq 3$) can include the optimal model but, being penalized by the Bayesian Occam's razor, yield higher $\mathcal{F}$. Thus, we can select the optimal model based on $\mathcal{F}$, at least in this example.
4 Application to spike trains from HVC in a songbird
We applied our method to data collected from the nucleus HVC of a songbird. HVC is an important nucleus that integrates auditory information and the motor information of song sequences [15]. We obtained spike trains of three single units using a silicon probe in one anesthetized Bengalese finch. The bird's own song (BOS) and its reversed song (REV) were each presented 50 times during recording.
Table 3: Log-likelihood on test data (REV).
  Method                                                      Log-likelihood (mean ± s.d.)
  Independent & stationary assumption (K = 1)                 -255.691 (± 2.074)
  Stationary assumption (K = 1, correlation type selected)    -247.640 (± 1.659)
  Independent assumption (IP-HMM, K selected)                 -230.353 (± 0.958)
  CP-HMM (all selected)                                       -229.143 (± 1.242)
  full-CP-HMM (K selected)                                    -230.272 (± 1.244)
Spontaneous activity (Silent) was also recorded, so that we obtained the same amount of data as for the stimulus-presentation conditions. More details on the recordings are described elsewhere [16].
We modeled the spike trains for all stimuli using IP-HMMs and CP-HMMs, varying the number of states $K$ and the correlation structure. We then selected the model that yielded the lowest free energy, using a window length of $\Delta = 100$ ms. The selected models are summarized in Table 1. Figure 2 shows typical examples of spike trains and the segmentation results for the selected models. CP-HMMs were selected only for the spike trains recorded while REV was presented. If we instead assume that the spike statistics do not change over a trial (in our case, this corresponds to the model with only one hidden state, $K = 1$), CP-HMMs were selected under all experimental conditions (Table 2). These results reflect the fact that neurons in anesthetized animals simultaneously transition between high-firing and low-firing states [17], which can be captured by a Poisson distribution with correlation terms. Time-stationary assumptions have often been employed to obtain a sufficient sample size for estimating correlations (e.g., [6]). Our results suggest that we should be careful when interpreting such results: even when spike trains appear correlated, once state transitions are taken into account they may be better captured by an independent Poisson model.

We measured predictive performance on test data to verify how well our model captures the statistical properties of the spike trains. Here, we used the spike trains for REV, for which the 3CP-HMM was selected. We first divided the spike trains into 20 training and 20 test trials. In the training phase, we constructed models using model selection based on the variational free energy under five settings: (1) an independent & stationary assumption ($K = 1$); (2) a stationary assumption ($K = 1$, correlation type selected); (3) IP-HMM ($K$ selected); (4) CP-HMM (no restrictions); and (5) full-CP-HMM ($K$ selected). In the prediction phase, we calculated the log-likelihood of the test data under the posterior mean $\langle \theta \rangle_{r(\theta)}$ of the selected models. The results, averaged over different choices of training and test sets, are summarized in Table 3. The log-likelihood on the test data improved when both the state transitions and the correlation structure were taken into consideration. These results imply that CP-HMMs characterize the spike trains better than classical Poisson-HMMs. The full-CP-HMM includes the 2nd-order CP-HMM, yet shows lower predictive performance than the model in which the correlation type was selected; this is likely due to over-fitting to the training data. The VB approach selected a model of appropriate complexity, avoiding over-fitting.
5 Discussion
We constructed HMMs whose output distribution is a correlated multivariate Poisson distribution, for extracting state-transition dynamics from multiple spike trains. We applied the VB method for inferring the posteriors over the parameters and hidden variables of the models. We have seen that VB can be used to select an appropriate model (the number of hidden states and the correlation structure), which yields better predictions. Our method incorporates the correlated Poisson distribution for treating pairwise and higher-order correlations. Previous approaches have calculated correlations by binarizing spike data with log-linear [5] or maximum-entropy [6] models; these approaches are limited to treating correlations at short bin lengths, which include at most one spike. In contrast, our approach can incorporate correlations in an arbitrary time window, from exact synchrony to firing-rate correlations on a modest time scale. The major disadvantage of our model is that it cannot represent negative correlations. Negative correlations could be incorporated by employing a mixture of multivariate Poisson distributions for the output distribution of the HMM; VB can easily be extended to such models.
Appendix: Calculation of the correlated Poisson distribution in the VB-E step

The sub-normalized distribution (Eq. 9) can be calculated by using the recurrence relations of the multivariate Poisson distribution [13]. Let us consider the trivariate case ($C = 3$) with second-order correlations, where $B = [B_1, B_2]$. Here, the recursive scheme for calculating Eq. 9 is:

- If all elements of $X = (X_1, X_2, X_3)$ are non-zero, then
$$x_1 \tilde{P}(X_1 = x_1, X_2 = x_2, X_3 = x_3 \mid \lambda) = \tilde{\lambda}_1 \tilde{P}(X_1 = x_1 - 1, X_2 = x_2, X_3 = x_3 \mid \lambda) + \tilde{\lambda}_{12} \tilde{P}(X_1 = x_1 - 1, X_2 = x_2 - 1, X_3 = x_3 \mid \lambda) + \tilde{\lambda}_{13} \tilde{P}(X_1 = x_1 - 1, X_2 = x_2, X_3 = x_3 - 1 \mid \lambda).$$

- If at most one element of $X$ is non-zero, then
$$\tilde{P}(X_1 = x_1, X_2 = x_2, X_3 = x_3 \mid \lambda) = \exp\Big\{ -\sum_{i<j} \bar{\lambda}_{ij} \Big\} \prod_{i=1}^{3} \tilde{P}(X_i = x_i \mid \lambda_i), \quad i, j \in \{1, 2, 3\}.$$

- If only one of the $x_i$'s (say, $x_k$) is zero, then
$$\tilde{P}(X_1 = x_1, X_2 = x_2, X_3 = x_3 \mid \lambda) = \exp\{-\bar{\lambda}_{ik} - \bar{\lambda}_{jk}\}\, \tilde{P}(X_i = x_i, X_j = x_j \mid \lambda_i, \lambda_j, \lambda_{ij}).$$

This recursive scheme can be generalized to more than three dimensions. We use the alternative definition of the multivariate Poisson random vector $x$ such that $x = \sum_{l=1}^{L} \beta_l s_l$, where the vector $\beta_l$ denotes the $l$th column of the matrix $B$. Let us define the vector $\tilde{\Phi} = \big(\tilde{\lambda}_1 \tilde{P}(X = x - \beta_1 \mid \lambda), \ldots, \tilde{\lambda}_L \tilde{P}(X = x - \beta_L \mid \lambda)\big)^T$. Then the recurrence relations can be rewritten as
$$x\, \tilde{P}(X = x \mid \lambda) = B \tilde{\Phi}. \qquad (12)$$
By using the quantities obtained in this calculation, $\langle s_{k,l}^{n,t} \rangle_{Q(Z)}$ is calculated as
$$\langle s_{k,l}^{n,t} \rangle_{Q(Z)} = \langle y_k^{n,t} \rangle_{Q(Z)}\, \frac{\tilde{\lambda}_{k,l}\, \tilde{P}(X^{n,t} - \beta_l \mid \lambda_k)}{\tilde{P}(X^{n,t} \mid \lambda_k)}. \qquad (13)$$
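A direct implementation of this recursive scheme, written in terms of the generic relation $x\,\tilde{P}(X = x \mid \lambda) = B\tilde{\Phi}$ of Eq. (12) and specialized here to the trivariate second-order model $B = [B_1, B_2]$, is sketched below; memoization keeps the cost low. The parameter values are arbitrary.

```python
from functools import lru_cache
import math
import numpy as np

# Columns of B = [B1, B2] for C = 3 with pairwise terms (1, 2, 3, 12, 13, 23).
B = np.array([[1, 0, 0, 1, 1, 0],
              [0, 1, 0, 1, 0, 1],
              [0, 0, 1, 0, 1, 1]])
lam = np.array([0.5, 0.4, 0.3, 0.2, 0.1, 0.05])

@lru_cache(maxsize=None)
def pmf(x):
    """P(X = x | lam) via the row-wise recurrence x_i P(x) = sum_l B[i,l] lam_l P(x - B_l)."""
    x = np.array(x)
    if (x < 0).any():
        return 0.0
    if (x == 0).all():
        return math.exp(-lam.sum())   # base case: P(0) = exp(-sum_l lam_l)
    i = int(np.argmax(x))             # pick any coordinate with x_i > 0
    total = 0.0
    for l in range(B.shape[1]):
        if B[i, l]:                   # Phi_l = lam_l * P(x - B[:, l])
            total += lam[l] * pmf(tuple(x - B[:, l]))
    return total / x[i]

print(pmf((2, 1, 0)))
```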
References
[1] M. Abeles, H. Bergman, I. Gat, I. Meilijson, E. Seidemann, N. Tishby, and E. Vaadia. Proc. Natl. Acad. Sci. USA, 92:8616-8620, 1995.
[2] I. Gat, N. Tishby, and M. Abeles. Network: Computation in Neural Systems, 8:297-322, 1997.
[3] L. M. Jones, A. Fontanini, B. F. Sadacca, P. Miller, and D. B. Katz. Proc. Natl. Acad. Sci. USA, 104:18772-18777, 2007.
[4] E. Vaadia, I. Haalman, M. Abeles, H. Bergman, Y. Prut, H. Slovin, and A. Aertsen. Nature, 373:515-518, 1995.
[5] H. Nakahara and S. Amari. Neural Computation, 14:2269-2316, 2002.
[6] E. Schneidman, M. J. Berry, R. Segev, and W. Bialek. Nature, 440:1007-1012, 2006.
[7] G. Radons, J. D. Becker, B. Dülfer, and J. Krüger. Biological Cybernetics, 71:359-373, 1994.
[8] M. Danoczy and R. Hahnloser. Advances in NIPS, 18, 2005.
[9] K. Yamazaki and S. Watanabe. Neurocomputing, 69:62-84, 2005.
[10] H. Attias. In Proc. of the 15th Conference on Uncertainty in Artificial Intelligence, pages 21-30, 1999.
[11] M. J. Beal. Variational Algorithms for Approximate Bayesian Inference. Ph.D. thesis, University College London, 2003.
[12] S. Watanabe, Y. Minami, A. Nakamura, and N. Ueda. Advances in NIPS, 15, 2002.
[13] K. Kano and K. Kawamura. Communications in Statistics, 20:165-178, 1991.
[14] L. Meligkotsidou. Statistics and Computing, 17:93-107, 2007.
[15] A. C. Yu and D. Margoliash. Science, 273:1871-1875, 1996.
[16] J. Nishikawa and K. Okanoya. In preparation.
[17] G. Uchida, M. Fukuda, and M. Tanifuji. Physical Review E, 73:031910, 2006.
Biasing Approximate Dynamic Programming with a
Lower Discount Factor
Marek Petrik
Department of Computer Science
University of Massachusetts Amherst
Amherst, MA 01003
[email protected]
Bruno Scherrer
LORIA Campus Scientifique B.P. 239
54506 Vandoeuvre-les-Nancy, France
[email protected]
Abstract
Most algorithms for solving Markov decision processes rely on a discount factor,
which ensures their convergence. It is generally assumed that using an artificially
low discount factor will improve the convergence rate, while sacrificing the solution quality. We however demonstrate that using an artificially low discount factor
may significantly improve the solution quality, when used in approximate dynamic
programming. We propose two explanations of this phenomenon. The first justification follows directly from the standard approximation error bounds: using
a lower discount factor may decrease the approximation error bounds. However,
we also show that these bounds are loose, thus their decrease does not entirely
justify the improved solution quality. We thus propose another justification: when
the rewards are received only sporadically (as in the case of Tetris), we can derive
tighter bounds, which support a significant improvement in the solution quality
with a decreased discount factor.
1 Introduction
Approximate dynamic programming methods often offer surprisingly good performance in practical problems modeled as Markov decision processes (MDPs) [6, 2]. To achieve this performance, the parameters of the solution algorithms typically need to be carefully tuned. One such important parameter of MDPs is the discount factor $\gamma$. Discount factors are important in infinite-horizon MDPs, in which they determine how the reward is counted. The motivation for the discount factor originally comes from economic models, but it often has no meaning in reinforcement learning problems. Nevertheless, it is commonly used to ensure that the rewards are bounded and that the Bellman operator is a contraction [8]. In this paper, we focus on the quality of the solutions obtained by approximate dynamic programming algorithms. For simplicity, we disregard the computational time, and use performance to refer to the quality of the solutions that are eventually obtained.

In addition to regularizing the rewards, using an artificially low discount factor sometimes has a significant effect on the performance of the approximate algorithms. Specifically, we have observed a significant improvement of approximate value iteration when applied to Tetris, a common reinforcement learning benchmark problem. The natural discount factor in Tetris is 1, since the received rewards have the same importance independently of when they are received. Currently, the best results achieved with approximate dynamic programming algorithms are on average about 6000 lines removed in a single game [4, 3]. Our results, depicted in Figure 1, with approximate value iteration and standard features [1], show that setting the discount factor to $\beta \in (0.84, 0.88)$ gives the best expected total number of removed lines, a bit more than 20000. That is five times the performance obtained with a discount factor of 1 (about 4000). The improved performance for $\beta \in (0.84, 0.88)$ is surprising, since computing a policy for this lower discount factor dramatically improves the return calculated with a discount factor of 1.
Figure 1: Performance of approximate value iteration on Tetris with different discount factors ($\beta$ = 0.8, 0.84, 0.88, 0.92, 0.96, 1.0; the y-axis shows the average over 10 runs of the average score on 100 games, and the x-axis the number of iterations). For each value of $\beta$, we ran the experiments 10 times, recorded the evolution of the score (the evaluation of the policy with discount factor 1) on the 100 games, and averaged over the 10 learning runs.
In this paper, we study why using a lower discount factor improves the quality of the solution with
regard to a higher discount factor. First, in Section 2, we define the framework for our analysis.
In Section 3 we analyze the influence of the discount factor on the standard approximation error
bounds [2]. Then in Section 4 we argue that, in the context of this paper, the existing approximation
error bounds are loose. Though these bounds may be tightened by a lower discount factor, they are
not sufficient to explain the improved performance. Finally, to explain the improved performance,
we identify a specific property of Tetris in Section 5 that enables the improvement. In particular,
the rewards in Tetris are received sparsely, unlike the approximation error, which makes the value
function less sensitive to the discount factor than the approximation error.
2 Framework and Notations
In this section we formalize the problem of adjusting the discount factor in approximate dynamic programming. We assume $\gamma$-discounted infinite-horizon problems, with $\gamma < 1$. Tetris does not directly fit in this class, since its natural discount factor is 1. It has been shown, however, that undiscounted infinite-horizon problems with a finite total reward can be treated as discounted problems [7]. Blackwell optimality implies that there exists $\gamma^* < 1$ such that for all $\gamma > \gamma^*$ the $\gamma$-discounted problem and the undiscounted problem have the same optimal policy. We therefore treat Tetris as a discounted problem with a discount factor $\gamma^* < 1$ near one. The analysis is based on Markov decision processes, defined as follows.

Definition 1. A Markov decision process is a tuple $(S, A, P, r)$, where $S$ is the set of states, $A$ is the set of actions, $P : S \times S \times A \to [0, 1]$ is the transition function ($P(s', s, a)$ is the probability of transiting to state $s'$ from state $s$ given action $a$), and $r : S \times A \to \mathbb{R}^+$ is a (non-negative) reward function.
We assume that the number of states and actions is finite, but possibly very large. For the sake of simplicity, we also assume that the rewards are non-negative; our analysis can be extended to arbitrary rewards in a straightforward way. We write $\|r\|_\infty$ to denote the maximal reward over all actions and states.

Given a Markov decision process $(S, A, P, r)$ and some discount factor $\gamma$, the objective is to find a policy, i.e. a mapping $\pi : S \to A$, with maximal value from every initial state $s$. The value $v^\pi(s)$ of $\pi$ from state $s$ is defined as the $\gamma$-discounted infinite-horizon return:
$$v^\pi(s) := \mathbf{E}\Big[ \sum_{t=0}^{\infty} \gamma^t r(s_t, a_t) \,\Big|\, s_0 = s,\ a_0 = \pi(s_0), \ldots, a_t = \pi(s_t) \Big].$$
It is well known [7, 2] that this problem can be solved by computing the optimal value function $v^*$, which is the fixed point of the Bellman operator $Lv = \max_\pi r_\pi + \gamma P_\pi v$. Here $r_\pi$ is the vector on $S$ with components $r(s, \pi(s))$ and $P_\pi$ is the stochastic matrix associated with policy $\pi$.
We consider in this paper that the MDP is solved with 1) an approximate dynamic programming algorithm and 2) a different discount factor $\beta < \gamma$. In particular, our analysis applies to approximate value and policy iteration with existing error bounds. These methods invariably generate a sequence of approximate value functions, which we denote $\hat{v}_\beta$. Then, $\hat{\pi}_\beta$ is the policy greedy with respect to the approximate value function $\hat{v}_\beta$.

As we have two different discount factors, we use a subscript to denote the discount factor used in calculating a value. Let $\beta$ be a discount factor and $\pi$ any policy. We use $v_\beta^\pi$ to represent the value of policy $\pi$ calculated with discount factor $\beta$; when $\pi$ is the optimal policy corresponding to discount $\beta$, we simply denote its value $v_\beta$. As mentioned above, our objective is to compare, for the discount factor $\gamma$, the value $v_\gamma$ of the optimal policy with the value $v_\gamma^{\hat{\pi}_\beta}$, where $\hat{\pi}_\beta$ is the policy derived from the approximate $\beta$-discounted value. The following shows how this error may be decomposed in order to simplify the analysis. Most of our analysis is in terms of the $L_\infty$ norm, mainly because this is the most common measure used in the existing error bounds; the results could be extended to the $L_2$ norm in a straightforward way without a qualitative difference.

From the optimality of $v_\beta$, we have $v_\beta \geq v_\beta^{\hat{\pi}_\beta}$, and from the non-negativity of the rewards it is easy to show that the value function is monotonous with respect to the discount factor, so that $v_\gamma^{\hat{\pi}_\beta} \geq v_\beta^{\hat{\pi}_\beta}$. Thus $0 \leq v_\gamma - v_\gamma^{\hat{\pi}_\beta} \leq v_\gamma - v_\beta^{\hat{\pi}_\beta}$ and consequently:
$$e(\beta) := \|v_\gamma - v_\gamma^{\hat{\pi}_\beta}\|_\infty \leq \|v_\gamma - v_\beta^{\hat{\pi}_\beta}\|_\infty \leq \|v_\gamma - v_\beta\|_\infty + \|v_\beta - v_\beta^{\hat{\pi}_\beta}\|_\infty = e_d(\beta) + e_a(\beta),$$
where $e_d(\beta) := \|v_\gamma - v_\beta\|_\infty$ denotes the discount error and $e_a(\beta) := \|v_\beta - v_\beta^{\hat{\pi}_\beta}\|_\infty$ the approximation error. In other words, a bound on the loss due to using $\hat{\pi}_\beta$ instead of the optimal policy for discount factor $\gamma$ is the sum of the error on the optimal value function due to the change of discount and the error due to the approximation for discount $\beta$. In the remainder of the paper, we analyze each of these error terms.
3 Error Bounds
In this section, we develop a discount error bound and review the existing approximation error bounds. We also show how these bounds motivate decreasing the discount factor in the majority of MDPs. First, we bound the discount error as follows.

Theorem 2. The discount error due to using a discount factor $\beta$ instead of $\gamma$ satisfies:
$$e_d(\beta) = \|v_\gamma - v_\beta\|_\infty \leq \frac{\gamma - \beta}{(1-\gamma)(1-\beta)} \|r\|_\infty.$$

Proof. Let $L_\gamma$ and $L_\beta$ be the Bellman operators for the corresponding discount factors. We have:
$$\|v_\gamma - v_\beta\|_\infty = \|L_\gamma v_\gamma - L_\beta v_\beta\|_\infty = \|L_\gamma v_\gamma - L_\gamma v_\beta + L_\gamma v_\beta - L_\beta v_\beta\|_\infty \leq \|L_\gamma v_\gamma - L_\gamma v_\beta\|_\infty + \|L_\gamma v_\beta - L_\beta v_\beta\|_\infty \leq \|L_\gamma v_\beta - L_\beta v_\beta\|_\infty + \gamma \|v_\gamma - v_\beta\|_\infty.$$
Let $P_\gamma, r_\gamma$ and $P_\beta, r_\beta$ be the transition matrices and rewards of the policies greedy with respect to $v_\beta$ for the discounts $\gamma$ and $\beta$, respectively. Then we have:
$$L_\gamma v_\beta - L_\beta v_\beta \leq (\gamma P_\gamma v_\beta + r_\gamma) - (\beta P_\gamma v_\beta + r_\gamma) = (\gamma - \beta) P_\gamma v_\beta,$$
$$L_\beta v_\beta - L_\gamma v_\beta \leq (\beta P_\beta v_\beta + r_\beta) - (\gamma P_\beta v_\beta + r_\beta) = (\beta - \gamma) P_\beta v_\beta.$$
Finally, the bound follows from the above as:
$$\|v_\gamma - v_\beta\|_\infty \leq \frac{1}{1-\gamma} \max\big\{ \|(\gamma - \beta) P_\gamma v_\beta\|_\infty,\ \|(\gamma - \beta) P_\beta v_\beta\|_\infty \big\} \leq \frac{\gamma - \beta}{(1-\gamma)(1-\beta)} \|r\|_\infty.$$
Remark 3. This bound is trivially tight; that is, there exists a problem for which the bound holds with equality. It is, however, also straightforward to construct a problem in which the bound is not tight.
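Theorem 2 is easy to verify numerically. The sketch below evaluates both sides on a random MDP by exact value iteration; the sizes, seed, and discount factors are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
S, A, gamma, beta = 20, 4, 0.95, 0.8
P = rng.dirichlet(np.ones(S), size=(S, A))   # P[s, a] is a next-state distribution
r = rng.uniform(0.0, 1.0, size=(S, A))

def optimal_value(disc, iters=5000):
    v = np.zeros(S)
    for _ in range(iters):
        v = (r + disc * P @ v).max(axis=1)   # Bellman optimality update
    return v

err = np.abs(optimal_value(gamma) - optimal_value(beta)).max()
bound = (gamma - beta) / ((1 - gamma) * (1 - beta)) * r.max()
print(err, bound)  # err <= bound always holds
```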
Figure 2: An example of the error bound $e(\beta)$ as a function of $\beta$, in a problem with $\gamma = 0.9$, $\epsilon = 0.01$, and $\|r\|_\infty = 10$.

Figure 3: The dependence of $\epsilon$ on $\gamma$ needed for the improvement in Proposition 6.

3.1 Approximation Error Bound
We now discuss the dependence of the approximation error $e_a(\beta)$ on the discount factor $\beta$. Approximate dynamic programming algorithms like approximate value and policy iteration build a sequence of value functions $(\hat{v}_\beta^k)_{k \geq 0}$, with $\hat{\pi}_\beta^k$ being the policy greedy with respect to $\hat{v}_\beta^k$. These algorithms are approximate because at each iteration the value $\hat{v}_\beta^k$ is an approximation of some target value $v_\beta^k$, which is hard to compute. The analysis of [2] (see Section 6.5.3 and Proposition 6.1 for value iteration, and Proposition 6.2 for policy iteration) bounds the loss of using the policies $\hat{\pi}_\beta^k$ instead of the optimal policy:
$$\limsup_{k \to \infty} \|v_\beta - v_\beta^{\hat{\pi}_\beta^k}\|_\infty \leq \frac{2\beta}{(1-\beta)^2} \sup_k \|\hat{v}_\beta^k - v_\beta^k\|_\infty. \qquad (1)$$
To completely describe how Eq. (1) depends on the discount factor, we need to bound the one-step approximation error $\|\hat{v}_\beta^k - v_\beta^k\|_\infty$ in terms of $\beta$. Though this error depends on the particular approximation framework used and is in general difficult to estimate, we propose to make the following assumption.

Assumption 4. There exists $\epsilon \in (0, 1/2)$ such that for all $k$, the single-step approximation error is bounded by:
$$\|\hat{v}_\beta^k - v_\beta^k\|_\infty \leq \frac{\epsilon}{1-\beta} \|r\|_\infty.$$
We consider only $\epsilon \leq 1/2$ because the assumption then holds with $\epsilon = 1/2$ for the trivial constant approximation $\hat{v}_\beta^k = \|r\|_\infty / (2(1-\beta))$.
Remark 5. As an alternative to Assumption 4, we could assume that the approximation error is constant in the discount factor $\beta$, i.e. $\|\hat{v}_\beta^k - v_\beta^k\|_\infty \leq \epsilon = O(1)$ for some $\epsilon$ and for all $\beta$. We believe that such a bound is unlikely in practice. To show this, consider an MDP with two states $s_0$ and $s_1$ and a single action. The transitions loop from each state to itself, and the rewards are $r(s_0) = 0$ and $r(s_1) = 1$. Assume a linear least-squares approximation with basis $M = [1/\sqrt{2};\ 1/\sqrt{2}]$. The approximation error in terms of $\beta$ is then $1/(2(1-\beta)) = O(1/(1-\beta))$.

If Assumption 4 holds, we see from Eq. (1) that the approximation error $e_a$ is bounded as:
$$e_a(\beta) \leq \frac{2\beta\epsilon}{(1-\beta)^3} \|r\|_\infty.$$
3.2 Global Error Bound
Using the results above, and assuming that Assumption 4 holds, the cumulative error bound when using approximate dynamic programming with a discount factor $\beta < \gamma$ is:
$$e(\beta) = e_a(\beta) + e_d(\beta) \leq \frac{\gamma - \beta}{(1-\gamma)(1-\beta)} \|r\|_\infty + \frac{2\beta\epsilon}{(1-\beta)^3} \|r\|_\infty.$$
An example of this error bound is shown in Figure 2: the bound is minimized for $\beta \approx 0.8 < \gamma$. This is because the approximation error decreases rapidly in comparison with the increase in the discount error. More generally, the following proposition suggests how we should choose $\beta$.

Proposition 6. If the approximation factor $\epsilon$ introduced in Assumption 4 is sufficiently large, precisely if $\epsilon > (1-\gamma)^2 / (2(1+2\gamma))$, then the best error bound $e(\beta)$ is achieved for the discount factor $\beta^* = (2\epsilon + 1) - \sqrt{(2\epsilon+1)^2 + (2\epsilon - 1)} < \gamma$.

Figure 3 shows the approximation error fraction necessary for the improvement. Notice that this fraction decreases rapidly as $\gamma \to 1$.
Proof. The minimum of $\beta \mapsto e(\beta)$ can be derived analytically by computing its derivative:
$$e'(\beta) = -(1-\beta)^{-2} \|r\|_\infty + 2\epsilon(1-\beta)^{-3} \|r\|_\infty + 6\beta\epsilon(1-\beta)^{-4} \|r\|_\infty = \frac{-(1-\beta)^2 + 2\epsilon(1-\beta) + 6\beta\epsilon}{(1-\beta)^4} \|r\|_\infty = \frac{-\beta^2 + 2(2\epsilon+1)\beta + 2\epsilon - 1}{(1-\beta)^4} \|r\|_\infty.$$
So we want to know when $\beta \mapsto -\frac{1}{2}\beta^2 + (2\epsilon+1)\beta + \frac{1}{2}(2\epsilon-1)$ equals 0. The discriminant $\Delta = (2\epsilon+1)^2 + (2\epsilon-1) = 2\epsilon(2\epsilon+3)$ is always positive. Therefore $e'(\beta)$ equals 0 at the points $\beta_- = (2\epsilon+1) - \sqrt{\Delta}$ and $\beta_+ = (2\epsilon+1) + \sqrt{\Delta}$, and it is positive in between and negative outside. This means that $\beta_-$ is a local minimum of $e$ and $\beta_+$ a local maximum.

It is clear that $\beta_+ > 1 > \gamma$. From the definition of $\Delta$ and the fact (cf. Assumption 4) that $\epsilon \leq 1/2$, we see that $\beta_- \geq 0$. Then, the condition $\beta_- < \gamma$ is satisfied if and only if:
$$\beta_- < \gamma \iff (2\epsilon+1) - \sqrt{(2\epsilon+1)^2 + (2\epsilon-1)} < \gamma \iff 1 - \frac{\gamma}{2\epsilon+1} < \sqrt{1 + \frac{2\epsilon-1}{(2\epsilon+1)^2}}$$
$$\iff 1 - \frac{2\gamma}{2\epsilon+1} + \frac{\gamma^2}{(2\epsilon+1)^2} < 1 + \frac{2\epsilon-1}{(2\epsilon+1)^2} \iff -2\gamma(2\epsilon+1) + \gamma^2 < 2\epsilon - 1 \iff \frac{(1-\gamma)^2}{1+2\gamma} < 2\epsilon,$$
where the squaring step is valid since both sides of the inequality are positive.
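Proposition 6 gives the minimizer in closed form, using $\Delta = (2\epsilon+1)^2 + (2\epsilon-1) = 2\epsilon(2\epsilon+3)$ from the proof. The snippet below evaluates $\beta^*$ and the improvement condition, and reproduces the rough location of the minimum in Figure 2 ($\gamma = 0.9$, $\epsilon = 0.01$ gives $\beta^* \approx 0.77$).

```python
import math

def beta_star(eps):
    """Minimizer of the error bound e(beta) from Proposition 6."""
    return (2 * eps + 1) - math.sqrt(2 * eps * (2 * eps + 3))

def improves(eps, gamma):
    """True when the bound is minimized strictly below gamma."""
    return eps > (1 - gamma) ** 2 / (2 * (1 + 2 * gamma))

print(beta_star(0.01), improves(0.01, 0.9))  # ~0.774, True
```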
4 Bound Tightness
We show in this section that the bounds on the approximation error $e_a(\beta)$ are very loose for $\beta \to 1$, and thus the analysis above does not fully explain the improved performance. In particular, there exists a naive bound on the approximation error that is dramatically tighter than the standard bounds when $\beta$ is close to 1.

Lemma 7. There exists a constant $c \in \mathbb{R}^+$ such that for all $\beta$ we have $\|v_\beta - v_\beta^{\hat{\pi}_\beta}\|_\infty \leq c/(1-\beta)$.

Proof. Let $P^*, r^*$ and $\hat{P}, \hat{r}$ be the transition matrices and reward functions of the optimal and approximate policies, respectively. These may depend on the discount factor, but we omit that dependence to simplify the notation. Then the approximation error is:
$$\|v_\beta - v_\beta^{\hat{\pi}_\beta}\|_\infty = \|(I - \beta P^*)^{-1} r^* - (I - \beta \hat{P})^{-1} \hat{r}\|_\infty \leq \frac{1}{1-\beta} \big( \|r^*\|_\infty + \|\hat{r}\|_\infty \big).$$
Thus setting $c = 2 \max_\pi \|r_\pi\|_\infty$ proves the lemma.
Lemma 7 implies that for every MDP, there exists a discount factor $\gamma$ such that Eq. (1) is not tight. Consider even that the single-step approximation error is bounded by a constant, so that $\limsup_{k\to\infty} \|\tilde{v}_k - v_k\|_\infty \le \epsilon$. This is impractical, as discussed in Remark 5, but it tightens the bound. Such a bound implies that $e_a(\gamma) \le 2\epsilon\gamma/(1-\gamma)^2$. From Lemma 7, this bound is loose when $2\epsilon\gamma/(1-\gamma)^2 > c/(1-\gamma)$. Thus there exists $\gamma < 1$ for which the standard approximation error bounds are loose, whenever $\epsilon > 0$. The looseness of the bound will be more apparent in problems with high discount factors. For example, in the MDP formulation of Blackjack [5] the discount factor is $\gamma = 0.999$, in which case the error bound may overestimate the true error by a factor of up to $1/(1-\gamma) = 1000$.
The looseness of the approximation error bounds may seem to contradict Example 6.4 in [2], which shows that Eq. (1) is tight. The discrepancy arises because our analysis assumes that the MDP has fixed rewards and a fixed number of states, while the example in [2] assumes that the reward depends on the discount factor and the number of states is potentially infinite. Another way to put it is to say that Example 6.4 shows that for any discount factor $\gamma$ there exists an MDP (which depends on $\gamma$) for which the bound Eq. (1) is tight. We, on the other hand, show that there does not exist a fixed MDP such that the bound Eq. (1) is tight for all discount factors $\gamma$.
Figure 4 (plot omitted): Looseness of the Bellman error bound.
Figure 5 (plot omitted): Bellman error bound as a function of $\beta$ for a problem with $\gamma = 0.9$.
Figure 6 (plot omitted): The approximation error $\|a - b\|_\infty$ with $a = \tilde{v}_\beta$ and $b = v_\beta$.
Proposition 6 justifies the improved performance with a lower discount factor by a more rapid decrease in $e_a$ with $\beta$ than the increase in $e_d$. The naive bound from Lemma 7, however, shows that $e_a$ may scale with $1/(1-\gamma)$, the same as $e_d$. As a result, while the approximation error will decrease, the decrease may not be sufficient to offset the increase in the discount error.
Some of the standard approximation error bounds may be tightened by using a lower discount factor. For example, consider the standard a-posteriori approximation error bound for the value function $\tilde{v}_\gamma$ [7]:
$$\|v_\gamma - v_\gamma^{\tilde\pi}\|_\infty \le \frac{1}{1-\gamma}\,\|L_\gamma \tilde{v}_\gamma - \tilde{v}_\gamma\|_\infty,$$
where $\tilde\pi$ is greedy with respect to $\tilde{v}_\gamma$. This bound is widely used and known as the Bellman error bound. The following example demonstrates that the Bellman error bound may also be loose for $\gamma$ close to 1:
$$P_1 = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}, \quad r_1 = \begin{bmatrix} 1 \\ 2 \end{bmatrix}, \qquad P_2 = \begin{bmatrix} 0 & 1 \\ 0 & 1 \end{bmatrix}, \quad r_2 = \begin{bmatrix} 2 \\ 2 \end{bmatrix}.$$
Assume that the current value function is the value of a policy with the transition matrix and reward $P_1, r_1$, while the optimal policy has the transition matrix and reward $P_2, r_2$. The looseness of the bound is depicted in Figure 4. The approximation error bound scales with $1/(1-\gamma)^2$, while the true error scales with $1/(1-\gamma)$. As a result, for $\gamma = 0.999$, the bound is 1000 times the true error value in this example. The intuitive reason for the looseness of the bound is that the bound treats each state as recurrent, even when it is transient.
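A short numeric check of this example (a minimal sketch; the matrices follow the reconstruction displayed above, since the original layout of the example was garbled):

import numpy as np

gamma = 0.999
P1 = np.array([[1.0, 0.0], [0.0, 1.0]]); r1 = np.array([1.0, 2.0])  # current policy
P2 = np.array([[0.0, 1.0], [0.0, 1.0]]); r2 = np.array([2.0, 2.0])  # optimal policy

I = np.eye(2)
v      = np.linalg.solve(I - gamma * P1, r1)   # value of the current policy
v_star = np.linalg.solve(I - gamma * P2, r2)   # value of the optimal policy

# One-step Bellman backup of v over the two available (P, r) pairs.
Lv = np.maximum(r1 + gamma * P1 @ v, r2 + gamma * P2 @ v)
true_error    = np.max(np.abs(v_star - v))               # scales as 1/(1-gamma)
bellman_bound = np.max(np.abs(Lv - v)) / (1.0 - gamma)   # scales as 1/(1-gamma)^2
print(true_error, bellman_bound, bellman_bound / true_error)  # ratio = 1000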
The global error bound may also be tightened by using a lower discount factor $\beta$, as follows:
$$\|v_\gamma - v_\gamma^{\tilde\pi_\beta}\|_\infty \le \frac{1}{1-\beta}\,\|L_\beta \tilde{v}_\beta - \tilde{v}_\beta\|_\infty + \frac{\gamma-\beta}{(1-\gamma)(1-\beta)}\,\|r\|_\infty.$$
Finding the discount factor $\beta$ that minimizes this error is difficult, because the function may not be convex or differentiable. Thus the most practical method is a sub-gradient optimization method. The global error bound for the MDP example above is depicted in Figure 5.
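A grid-based line search is the simplest concrete instance of this idea. The sketch below assumes a user-supplied function bellman_residual(beta) that runs the approximate solver at discount beta and returns the Bellman residual norm; that hook is hypothetical, since computing it requires the ADP machinery itself:

import numpy as np

def global_bound(beta, gamma, bellman_residual, r_inf=1.0):
    """Right-hand side of the beta-discounted global bound above."""
    return (bellman_residual(beta) / (1.0 - beta)
            + (gamma - beta) * r_inf / ((1.0 - gamma) * (1.0 - beta)))

def best_beta(gamma, bellman_residual, n_grid=100):
    """Line search over beta in [0, gamma); the bound may be non-convex."""
    betas = np.linspace(0.0, gamma, n_grid, endpoint=False)
    values = [global_bound(b, gamma, bellman_residual) for b in betas]
    return betas[int(np.argmin(values))]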
5 Sparse Rewards
In this section, we propose an alternative explanation for the performance improvement in Tetris
that does not rely on the loose approximation error bounds. A specific property of Tetris is that the
rewards are not received in every step, i.e. they are sparse. The value function, on the other hand,
is approximated in every step. As a result, the return should be less sensitive to the discount factor
than the approximation error. Decreasing the discount factor will thus reduce the approximation
error more significantly than it increases the discount error. The following assumption formalizes
this intuition.
Assumption 8 (Sparse rewards). There exists an integer $q$ such that for all $m \ge 0$ and all instantiations $r_i$ with non-zero probability: $\sum_{i=0}^{m} r_i \le \lfloor m/q \rfloor$ and $r_i \in \{0, 1\}$.
Now define $u_\gamma = \sum_{i=0}^{\infty} \gamma^i t_i$, where $t_i = 1$ when $i \equiv 0 \pmod q$. Then let $I_m = \{i \mid r_i = 1,\ i \le m\}$ and $J_m = \{j \mid t_j = 1,\ j \le m\}$, and let $I = I_\infty$ and $J = J_\infty$. From the definitions, these two sets satisfy $|I_m| \le |J_m|$. First we show the following lemma.
Lemma 9. Given sets $I_m$ and $J_m$, there exists an injective function $f : I \to J$ such that $f(i) \le i$.
Proof. By induction on $m$. The base case $m = 0$ is trivial. For the inductive case, consider the following two cases: 1) $r_{m+1} = 0$. From the inductive assumption, there exists an injective function that maps $I_m$ to $J_m$; this is also an injective function that maps $I_{m+1} = I_m$ to $J_{m+1}$. 2) $r_{m+1} = 1$. Let $j^* = \max J_{m+1}$. If $j^* = m+1$, then the function $f : I_m \to J_m$ can be extended by setting $f(m+1) = j^*$. If $j^* \le m$, then since $|J_{m+1}| - 1 = |J_{j^*-1}| \ge |I_m|$, such an injective function exists from the inductive assumption.
In the following, let $R_i$ be the random variable representing the reward received in step $i$. It is possible to prove that the discount error scales with a coefficient that is lower than in Theorem 2:
Theorem 10. Let $\beta \le \gamma \le \bar\gamma$, let $k = \lceil -\log(1-\gamma) / (\log(\gamma) - \log(\gamma-\epsilon)) \rceil$, and let $\delta = \mathbb{E}\bigl[\sum_{i=0}^{k} \gamma^i R_i\bigr]$. Then, assuming the reward structure defined in Assumption 8, we have that:
$$\|v_\gamma - v_\beta\|_\infty \le \gamma^k \|u_\gamma - u_\beta\|_\infty + \delta \le \frac{\gamma^k(\gamma^q - \beta^q)}{(1-\gamma^q)(1-\beta^q)} + \delta.$$
Proof. Let $\pi$ be the optimal policy for the discount factor $\gamma$. Then we have $0 \le v_\gamma - v_\beta \le v_\gamma^\pi - v_\beta^\pi$. In the remainder of the proof, we drop the superscript $\pi$ for simplicity; that is, $v_\gamma = v_\gamma^\pi$, not the optimal value function. Intuitively, the proof is based on "moving" the rewards to earlier steps to obtain a regular reward structure. A small technical problem with this approach is that moving the rewards that are close to the initial time step decreases the bound; therefore, we treat these rewards separately within the constant $\delta$. First, we show that for $f(i) \ge k$, we have $\gamma^i - \beta^i \le \gamma^{f(i)} - \beta^{f(i)}$. Let $j = f(i) = i - k$ for some $k \ge 0$. Then:
$$\gamma^j - \beta^j \ge \gamma^{j+k} - \beta^{j+k} \impliedby j \ge \max_{\beta \in [0,\,\gamma-\epsilon]} \frac{\log(1-\beta^k) - \log(1-\gamma^k)}{\log(\gamma) - \log(\beta)}, \quad \text{and this maximum is at most} \quad \frac{-\log(1-\gamma^k)}{\log(\gamma) - \log(\gamma-\epsilon)},$$
with the maximization used to get a sufficient condition independent of $\beta$. Since the function $f$ maps at most $\lfloor k/q \rfloor$ values of $I_m$ to $j < k$, there is a set $I_z$ with $|I_z| \le \lfloor k/q \rfloor$ such that $f(x) \ge k$ for all $x \in I_m \setminus I_z$.
Then we have, for $j \ge k$:
$$0 \le v_\gamma - v_\beta = \lim_{m\to\infty} \mathbb{E}\Bigl[\sum_{i \in I_m} (\gamma^i - \beta^i)\Bigr] \le \delta + \lim_{m\to\infty} \mathbb{E}\Bigl[\sum_{\substack{i \in I_m \setminus I_z \\ f(i) \ge k}} (\gamma^{f(i)} - \beta^{f(i)})\, t_{f(i)}\Bigr] \le \delta + \sum_{j=k}^{\infty} (\gamma^j - \beta^j)\, t_j = \delta + \gamma^k (u_\gamma - u_\beta).$$
Because the playing board in Tetris is 10 squares wide, and each piece has 4 squares, it takes on average 2.5 moves to remove a line. Since Theorem 10 applies only to integer values of $q$, we use a Tetris formulation in which dropping each piece requires two steps. A proper Tetris action is taken in the first step, and there is no action in the second one. To make this model identical to the original formulation, we change the discount factor to $\gamma^{1/2}$. Then the upper bound from Theorem 10 on the discount error is: $\|v_\gamma - v_\beta\|_\infty \le \gamma^k(\gamma^{2.5} - \beta^{2.5}) / ((1-\gamma^{2.5})(1-\beta^{2.5})) + \delta$. Notice that $\delta$ is a constant; it is independent of the new discount factor $\beta$.
The sparse rewards property can now be used to motivate the performance increase, even if the approximation error is bounded by $\epsilon/(1-\beta)$ instead of by $\epsilon/(1-\beta)^3$ (as Lemma 7 suggests). The approximation error bound will not, in most cases, satisfy the sparsity assumption, as the errors are typically distributed almost uniformly over the state space and are, as a result, received in every step. Therefore, for sparse rewards, the discount error increase will typically be offset by the larger decrease in the approximation error.
The cumulative error bounds derived above predict that it is beneficial to reduce the discount factor to $\beta$ when:
$$\frac{\gamma^k(\gamma^{2.5} - \beta^{2.5})}{(1-\gamma^{2.5})(1-\beta^{2.5})} + \delta + \frac{\epsilon}{1-\beta} < \frac{\epsilon}{1-\gamma}.$$
The effective discount factor $\gamma^*$ in Tetris is not known, but consider for example that it is $\gamma^* = 0.99$. Assuming $\epsilon = 0.1$, we have that $k = 48$, which means that the first $\lfloor 48/2.5 \rfloor$ rewards must be excluded and included in $\delta$. The bounds then predict that for $\epsilon \ge 0.4$ the performance of approximate value iteration may be expected to improve using $\beta \le \gamma \le \gamma^*$.
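The threshold above is straightforward to evaluate numerically. Below is a sketch with the numbers quoted in this paragraph; setting delta to 0 is an illustrative simplification, not a value from the text:

import numpy as np

def sparse_discount_error(beta, gamma, k, delta, q=2.5):
    """Discount-error bound of Theorem 10 for the two-step Tetris model."""
    return gamma**k * (gamma**q - beta**q) / ((1 - gamma**q) * (1 - beta**q)) + delta

def improvement_predicted(beta, gamma, eps, k, delta=0.0):
    # Reducing the discount factor to beta is predicted to help when this holds.
    return sparse_discount_error(beta, gamma, k, delta) + eps / (1 - beta) < eps / (1 - gamma)

gamma, eps, k = 0.99, 0.4, 48
for beta in np.arange(0.80, 0.99, 0.03):
    print(round(float(beta), 2), improvement_predicted(beta, gamma, eps, k))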
We end by empirically illustrating the influence of reward sparsity in a general context. Consider a simple 1-policy, 7-state chain problem, and two reward instances: one with a single reward of 1, and the other with randomly generated rewards. We show the comparison of the effects of a lower discount factor on these two examples in Figure 6. The dotted line represents the global error with sparse rewards, and the solid line represents the cumulative error with dense rewards. Sparsity of rewards makes a decrease of the discount factor more worthwhile.
6 Conclusion and Future Work
We show in this paper that some common approximation error bounds may be tightened with a lower
discount factor. We also identified a class of problems in which a lower discount factor is likely to
increase the performance of approximate dynamic programming algorithms. In particular, these are
problems in which the rewards are received relatively sparsely. We concentrated on a theoretical
analysis of the influence of the discount factor, not on the specific methods which could be used to
determine a discount factor. The actual dependence of the performance on the discount factor may
be non-trivial, and therefore hard to predict based on simple bounds. Therefore, the most practical
approach is to first predict an improving discount factor based on the theoretical predictions, and
then use line search to find a discount factor that ensures good performance. This is possible since
the discount factor is a single-dimensional variable with a limited range.
The central point of our analysis is based on bounds that are in general quite loose. An important future direction is to analyze the approximation error more carefully. We plan experiments to gain insight into the form (i.e., the distribution) of the error in several settings (problems, approximation architectures). If such errors follow some law, it might be interesting to see whether this helps to tighten the bounds.
Acknowledgements This work was supported in part by the Air Force Office of Scientific Research Grant
No. FA9550-08-1-0171 and by the National Science Foundation Grant No. 0535061. The first author was also
supported by a University of Massachusetts Graduate Fellowship.
References
[1] Dimitri P. Bertsekas and Sergey Ioffe. Temporal differences-based policy iteration and applications in
neuro-dynamic programming. Technical Report LIDS-P-2349, LIDS, 1997.
[2] Dimitri P. Bertsekas and John N. Tsitsiklis. Neuro-dynamic programming. Athena Scientific, 1996.
[3] V.F. Farias and B. Van Roy. Probabilistic and Randomized Methods for Design Under Uncertainty, chapter
6: Tetris: A Study of Randomized Constraint Sampling. Springer-Verlag, 2006.
[4] Sham Machandranath Kakade. A Natural Policy Gradient. In Advances in Neural Information Processing Systems, pages 1531-1538. MIT Press, 2001.
[5] Ronald Parr, Lihong Li, Gavin Taylor, Christopher Painter-Wakefield, and Michael L. Littman. An analysis
of linear models, linear value function approximation, and feature selection for reinforcement learning. In
International Conference on Machine Learning, 2008.
[6] Warren B. Powell. Approximate Dynamic Programming. Wiley-Interscience, 2007.
[7] Martin L. Puterman. Markov decision processes: Discrete stochastic dynamic programming. John Wiley
& Sons, Inc., 2005.
[8] Richard S. Sutton and Andrew Barto. Reinforcement learning. MIT Press, 1998.
Asynchronous Distributed Learning of Topic Models
Arthur Asuncion, Padhraic Smyth, Max Welling
Department of Computer Science
University of California, Irvine
{asuncion,smyth,welling}@ics.uci.edu
Abstract
Distributed learning is a problem of fundamental interest in machine learning and
cognitive science. In this paper, we present asynchronous distributed learning algorithms for two well-known unsupervised learning frameworks: Latent Dirichlet
Allocation (LDA) and Hierarchical Dirichlet Processes (HDP). In the proposed
approach, the data are distributed across P processors, and processors independently perform Gibbs sampling on their local data and communicate their information in a local asynchronous manner with other processors. We demonstrate
that our asynchronous algorithms are able to learn global topic models that are
statistically as accurate as those learned by the standard LDA and HDP samplers,
but with significant improvements in computation time and memory. We show
speedup results on a 730-million-word text corpus using 32 processors, and we
provide perplexity results for up to 1500 virtual processors. As a stepping stone in
the development of asynchronous HDP, a parallel HDP sampler is also introduced.
1 Introduction
Learning algorithms that can perform in a distributed asynchronous manner are of interest for several
different reasons. The increasing availability of multi-processor and grid computing technology
provides an immediate and practical motivation to develop learning algorithms that are able take
advantage of such computational resources. Similarly, the increasing proliferation of networks of
low-cost devices motivates the investigation of distributed learning in the context of sensor networks.
On a deeper level, there are fundamental questions about distributed learning from the viewpoints
of artificial intelligence and cognitive science.
In this paper, we focus on the specific problem of developing asynchronous distributed learning algorithms for a class of unsupervised learning techniques, specifically LDA [1] and HDP [2] with learning via Gibbs sampling. The frameworks of LDA and HDP have recently become popular due to
their effectiveness at extracting low-dimensional representations from sparse high-dimensional data,
with multiple applications in areas such as text analysis and computer vision. A promising approach
to scaling these algorithms to large data sets is to distribute the data across multiple processors and
develop appropriate distributed topic-modeling algorithms [3, 4, 5]. There are two somewhat distinct
motivations for distributed computation in this context: (1) to address the memory issue when the
original data and count matrices used by the algorithm exceed the main memory capacity of a single
machine; and (2) using multiple processors to significantly speed up topic-learning, e.g., learning a
topic model in near real-time for tens of thousands of documents returned by a search-engine.
While synchronous distributed algorithms for topic models have been proposed in earlier work, here
we investigate asynchronous distributed learning of topic models. Asynchronous algorithms provide
several computational advantages over their synchronous counterparts: (1) no global synchronization step is required; (2) the system is extremely fault-tolerant due to its decentralized nature; (3)
heterogeneous machines with different processor speeds and memory capacities can be used; (4)
new processors and new data can be incorporated into the system at any time.
Our primary novel contribution is the introduction of new asynchronous distributed algorithms for
LDA and HDP, based on local collapsed Gibbs sampling on each processor. We assume an asynchronous "gossip-based" framework [6] which only allows pairwise interactions between random processors. Our distributed framework can provide substantial memory and time savings over single-processor computation, since each processor only needs to store and perform Gibbs sweeps over $1/P$-th of the data, where $P$ is the number of processors. Furthermore, the asynchronous approach can scale to large corpora and large numbers of processors, since no global synchronization steps are required.
While building towards an asynchronous algorithm for HDP, we also introduce a novel synchronous
distributed inference algorithm for HDP, again based on collapsed Gibbs sampling.
In the proposed framework, each processor performs Gibbs sampling locally based on a noisy, inexact view of the global topics. As a result, our algorithms are not necessarily
sampling from the proper global posterior distribution. Nonetheless, as we will show in our experiments, these algorithms are empirically very robust and converge rapidly to high-quality solutions.
We first review collapsed Gibbs sampling for LDA and HDP. Then we describe the details of our
distributed algorithms. We present perplexity and speedup results for our algorithms when applied
to text data sets. We conclude with a discussion of related work and future extensions of our work.
2 A brief review of topic models
Before delving into the details of our distributed algorithms, we first describe the LDA and HDP topic models. In LDA, each document $j$ is modeled as a mixture over $K$ topics, and each topic $k$ is a multinomial distribution, $\phi_{wk}$, over a vocabulary of $W$ words.(1) Each document's mixture over topics, $\theta_{kj}$, is drawn from a Dirichlet distribution with parameter $\alpha$. In order to generate a new document, $\theta_{kj}$ is first sampled from a Dirichlet distribution with parameter $\alpha$. For each token $i$ in that document, a topic assignment $z_{ij}$ is sampled from $\theta_{kj}$, and the specific word $x_{ij}$ is drawn from $\phi_{w z_{ij}}$. The graphical model for LDA is shown in Figure 1 (figure omitted; caption: Graphical models for LDA (left) and HDP (right)), and the generative process is:
$$\theta_{k,j} \sim \mathcal{D}[\alpha], \qquad \phi_{w,k} \sim \mathcal{D}[\beta], \qquad z_{ij} \sim \theta_{k,j}, \qquad x_{ij} \sim \phi_{w,z_{ij}}.$$
Given observed data, it is possible to infer the posterior distribution of the latent variables. One can perform collapsed Gibbs sampling [7] by integrating out $\theta_{kj}$ and $\phi_{wk}$ and sampling the topic assignments in the following manner:
$$P(z_{ij} = k \mid z^{\neg ij}, w) \propto \frac{N_{wk}^{\neg ij} + \beta}{\sum_w N_{wk}^{\neg ij} + W\beta}\,\bigl(N_{jk}^{\neg ij} + \alpha\bigr). \qquad (1)$$
Here $N_{wk}$ denotes the number of word tokens of type $w$ assigned to topic $k$, while $N_{jk}$ denotes the number of tokens in document $j$ assigned to topic $k$; $N^{\neg ij}$ denotes the count with token $ij$ removed.
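To make Equation 1 concrete, here is a minimal single-machine Gibbs sweep in Python/NumPy. This is our own illustrative sketch, not the authors' code; the data layout and variable names are assumptions:

import numpy as np

def gibbs_sweep(z, doc_of, word_of, Nwk, Njk, Nk, alpha, beta):
    """One collapsed Gibbs sweep over all tokens (Equation 1).

    z[t] is the topic of token t; doc_of[t] and word_of[t] give its document
    and word type. Nwk (W x K), Njk (J x K), and Nk (K,) are the count arrays.
    """
    W, K = Nwk.shape
    for t in range(len(z)):
        w, j, k = word_of[t], doc_of[t], z[t]
        # Remove token t from all counts.
        Nwk[w, k] -= 1; Njk[j, k] -= 1; Nk[k] -= 1
        # Unnormalized conditional of Equation 1.
        p = (Nwk[w] + beta) / (Nk + W * beta) * (Njk[j] + alpha)
        k = np.random.choice(K, p=p / p.sum())
        # Reinsert the token with its newly sampled topic.
        z[t] = k
        Nwk[w, k] += 1; Njk[j, k] += 1; Nk[k] += 1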
The HDP mixture model is composed of a hierarchy of Dirichlet processes. HDP is similar to LDA and can be viewed as the model that results from taking the infinite limit of the following finite mixture model. Let $L$ be the number of mixture components, and let $\beta_k$ be top-level Dirichlet variables drawn from a Dirichlet distribution with parameter $\gamma/L$. The mixture for each document, $\theta_{kj}$, is generated from a Dirichlet with parameter $\alpha\beta_k$. The multinomial topic distributions, $\phi_{wk}$, are drawn from a base Dirichlet distribution with parameter $\eta$. As in LDA, $z_{ij}$ is sampled from $\theta_{kj}$, and word $x_{ij}$ is sampled from $\phi_{w z_{ij}}$. If we take the limit of this model as $L$ goes to infinity, we obtain HDP:
$$\beta_k \sim \mathcal{D}[\gamma/L], \qquad \theta_{k,j} \sim \mathcal{D}[\alpha\beta_k], \qquad \phi_{w,k} \sim \mathcal{D}[\eta], \qquad z_{ij} \sim \theta_{k,j}, \qquad x_{ij} \sim \phi_{w,z_{ij}}.$$
To sample from the posterior, we follow the details of the direct assignment sampler for HDP [2]. Both $\theta_{kj}$ and $\phi_{wk}$ are integrated out, and $z_{ij}$ is sampled from a conditional distribution that is almost identical to that of LDA, except that a small amount of probability mass is reserved for the instantiation of a new topic. Note that although HDP is defined to have an infinite number of topics, the only topics that are instantiated are those that are actually used.
3 Asynchronous distributed learning for the LDA model
We consider the problem of learning an LDA model with $K$ topics in a distributed fashion, where documents are distributed across $P$ processors. Each processor $p$ stores the following local variables:
(1) To avoid clutter, we write $\phi_{wk}$ or $\theta_{kj}$ to denote the set of all components, i.e. $\{\phi_{wk}\}$ or $\{\theta_{kj}\}$. Similarly, when sampling from a Dirichlet, we write $\theta_{kj} \sim \mathcal{D}[\alpha\beta_k]$ instead of $[\theta_{1,j}, \ldots, \theta_{K,j}] \sim \mathcal{D}[\alpha\beta_1, \ldots, \alpha\beta_K]$.
$w_{ij}^p$ contains the word type for each token $i$ in document $j$ on the processor, and $z_{ij}^p$ contains the assigned topic for each token. $N_{wk}^{\neg p}$ is the global word-topic count matrix stored at the processor; this matrix stores counts of other processors gathered during the communication step and does not include the processor's local counts. $N_{kj}^p$ is the local document-topic count matrix (derived from $z^p$), $N_w^p$ is the simple word count on a processor (derived from $w^p$), and $N_{wk}^p$ is the local word-topic count matrix (derived from $z^p$ and $w^p$), which only contains the counts of data on the processor.
Newman et al. [5] introduced a parallel version of LDA based on collapsed Gibbs sampling (which we will call Parallel-LDA). In Parallel-LDA, each processor receives $1/P$ of the documents in the corpus and the $z$'s are globally initialized. Each iteration of the algorithm is composed of two steps: a Gibbs sampling step and a synchronization step. In the sampling step, each processor samples its local $z^p$ using the global topics of the previous iteration. In the synchronization step, the local counts $N_{wk}^p$ on each processor are aggregated to produce a global set of word-topic counts $N_{wk}$. This process is repeated for either a fixed number of iterations or until the algorithm has converged.
Parallel-LDA can provide substantial memory and time savings. However, it is a fully synchronous
algorithm since it requires global synchronization at each iteration. In some applications, a global
synchronization step may not be feasible, e.g. some processors may be unavailable, while other
processors may be in the middle of a long Gibbs sweep, due to differences in processor speeds. To
gain the benefits of asynchronous computing, we introduce an asynchronous distributed version of
LDA (Async-LDA) that follows a similar two-step process to that above. Each processor performs
a local Gibbs sampling step followed by a step of communicating with another random processor.
For Async-LDA, during each iteration, the processors perform a full sweep of collapsed Gibbs sampling over their local topic assignment variables $z^p$ according to the following conditional distribution, in a manner directly analogous to Equation 1:
$$P(z_{ij}^p = k \mid z^{p,\neg ij}, w^p) \propto \frac{(N^{\neg p} + N^p)_{wk}^{\neg ij} + \beta}{\sum_w (N^{\neg p} + N^p)_{wk}^{\neg ij} + W\beta}\,\bigl(N_{jk}^{p,\neg ij} + \alpha\bigr). \qquad (2)$$
The combination of $N_{wk}^{\neg p}$ and $N_{wk}^p$ is used in the sampling equation. Recall that $N_{wk}^{\neg p}$ represents processor $p$'s belief of the counts of all the other processors with which it has already communicated (not including processor $p$'s local counts), while $N_{wk}^p$ is the processor's local word-topic count matrix. Thus, the sampling of the $z^p$'s is based on the processor's "noisy view" of the global set of topics.
Once the inference of $z^p$ is complete (and $N_{wk}^p$ is updated), the processor finds another finished processor and initiates communication.(2) We are generally interested in the case where memory and communication bandwidth are both limited. We also assume in the simplified gossip scheme that a processor can establish communication with every other processor; later in the paper we also discuss scenarios that relax these assumptions.
In the communication step, let us consider the case where two processors, $p$ and $g$, have never met before. In this case, the processors simply exchange their local $N_{wk}^p$'s (their local contribution to the global topic set), and processor $p$ simply adds $N_{wk}^g$ to its $N_{wk}^{\neg p}$, and vice versa.
Algorithm 1 Async-LDA
  for each processor p in parallel do
    repeat
      Sample $z^p$ locally (Equation 2)
      Receive $N_{wk}^g$ from random proc $g$
      Send $N_{wk}^p$ to proc $g$
      if $p$ has met $g$ before then
        $N_{wk}^{\neg p} \leftarrow N_{wk}^{\neg p} - \tilde{N}_{wk}^g + N_{wk}^g$
      else
        $N_{wk}^{\neg p} \leftarrow N_{wk}^{\neg p} + N_{wk}^g$
      end if
    until convergence
  end for
Consider the case where two processors meet again. The processors should not simply swap and add their local counts again; rather, each processor should first remove from $N_{wk}^{\neg p}$ the previous influence of the other processor during their previous encounter, in order to prevent processors that frequently meet from over-influencing each other. We assume in the general case that a processor does not store in memory the previous counts of all the other processors that processor $p$ has already met. Since the previous local counts of the other processor were already absorbed into $N_{wk}^{\neg p}$ and are thus not retrievable, we must take a different approach. In Async-LDA, the processors exchange their $N_{wk}^p$'s, from which the count of words on each processor, $N_w^p$, can be derived. Using processor $g$'s $N_w^g$, processor $p$ creates $\tilde{N}_{wk}^g$ by sampling $N_w^g$ topic values randomly without replacement from the collection $\{N_{wk}^{\neg p}\}$. We can imagine that there are $\sum_k N_{wk}^{\neg p}$ colored balls, with $N_{wk}^{\neg p}$ balls of color $k$, from which we pick $N_w^g$ balls uniformly at random without replacement. This process is equivalent to sampling from a multivariate hypergeometric distribution. $\tilde{N}_{wk}^g$ acts as a substitute for the $N_{wk}^g$ that processor $p$ received during their previous encounter. Since all knowledge of the previous $N_{wk}^g$ is lost, this method can be justified by Laplace's principle of indifference (or the principle of maximum entropy). Finally, we update $N_{wk}^{\neg p}$ by subtracting $\tilde{N}_{wk}^g$ and adding the current $N_{wk}^g$:
$$N_{wk}^{\neg p} \leftarrow N_{wk}^{\neg p} - \tilde{N}_{wk}^g + N_{wk}^g, \quad \text{where} \quad \tilde{N}_{w,k}^g \sim \mathcal{MH}\bigl[N_w^g;\ N_{w,1}^{\neg p}, \ldots, N_{w,K}^{\neg p}\bigr]. \qquad (3)$$
(2) We don't discuss in general the details of how processors might identify other processors that have finished their iteration, but we imagine that a standard protocol could be used, like P2P.
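A sketch of this update for a single word type, using NumPy's multivariate hypergeometric sampler; this is our illustration, and the function and variable names are ours:

import numpy as np

rng = np.random.default_rng()

def merge_counts(N_negp_w, N_g_w):
    """Equation 3 update for one word type w.

    N_negp_w: length-K integer vector of counts N_{wk}^{neg p}.
    N_g_w:    length-K vector of processor g's current local counts for w.
    """
    n_g = int(N_g_w.sum())  # g's word count for w, derived from the exchange
    # Proxy for g's previous contribution: draw n_g balls without replacement
    # from the K colors of N_negp_w (requires n_g <= N_negp_w.sum()).
    N_tilde = rng.multivariate_hypergeometric(N_negp_w.astype(np.int64), n_g)
    return N_negp_w - N_tilde + N_g_w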
Pseudocode for Async-LDA is provided in the display box for Algorithm 1. The assumption of limited memory can be relaxed by allowing processors to cache previous counts of other processors; the cached $N_{wk}^g$ would then replace $\tilde{N}_{wk}^g$. We can also relax the assumption of limited bandwidth: processor $p$ could forward its individual cached counts (from other processors) to $g$, and vice versa, to quicken the dissemination of information. In fixed topologies where the network is not fully connected, forwarding is necessary to propagate the counts across the network. Our approach can be applied to a wide variety of scenarios with varying memory, bandwidth, and topology constraints.
4 Synchronous and asynchronous distributed learning for the HDP model
Inference for HDP can be performed in a distributed manner as well. Before discussing our asynchronous HDP algorithm, we first describe a synchronous parallel inference algorithm for HDP. We begin with the necessary notation for HDPs: $\gamma$ is the concentration parameter for the top-level Dirichlet process (DP), $\alpha$ is the concentration parameter for the document-level DP, the $\beta_k$'s are top-level topic probabilities, and $\eta$ is the Dirichlet parameter for the base distribution. The graphical model for HDP is shown in Figure 1.
We introduce Parallel-HDP, which is analogous to Parallel-LDA except that new topics may be added during the Gibbs sweep. Documents are again distributed across the processors. Each processor maintains local $\beta_k^p$ parameters, which are augmented when a new topic is locally created. During the Gibbs sampling step, each processor locally samples the $z^p$ topic assignments. In the synchronization step, the local word-topic counts $N_{wk}^p$ are aggregated into a single matrix of global counts $N_{wk}$, and the local $\beta_k^p$'s are averaged to form a global $\beta_k$. The $\gamma$, $\beta_k$, and $\alpha$ hyperparameters are also globally resampled during the synchronization step; see Teh et al. [2] for details. We fix $\eta$ to be a small constant. While $\gamma$ and $\alpha$ can also be fixed, sampling these parameters improves the rate of convergence. To facilitate sampling, relatively flat gamma priors are placed on $\gamma$ and $\alpha$. Finally, these parameters and the global count matrix are distributed back to the processors.
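The master-side merge of this synchronization step reduces to a padded sum of count matrices and an average of the beta vectors. Below is a minimal sketch (ours); topics are assumed already aligned by ID as in the text, and the hyperparameter resampling of [2] is omitted:

import numpy as np

def synchronize(local_Nwk, local_beta):
    """Merge step of Parallel-HDP's synchronization (see Algorithm 2).

    local_Nwk:  list of P local W x K_p count matrices.
    local_beta: list of P local top-level weight vectors (length K_p).
    """
    K = max(m.shape[1] for m in local_Nwk)   # pad to the largest topic count
    Nwk = sum(np.pad(m, ((0, 0), (0, K - m.shape[1]))) for m in local_Nwk)
    beta = np.mean([np.pad(b, (0, K - b.size)) for b in local_beta], axis=0)
    return Nwk, beta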
Algorithm 2 Parallel-HDP
  repeat
    for each processor p in parallel do
      Sample $z^p$ locally
      Send $N_{wk}^p$, $\beta_k^p$ to master node
    end for
    $N_{wk} \leftarrow \sum_p N_{wk}^p$
    $\beta_k \leftarrow (\sum_p \beta_k^p) / P$
    Resample $\gamma$, $\beta_k$, $\alpha$ globally
    Distribute $N_{wk}$, $\gamma$, $\beta_k$, $\alpha$ to all processors
  until convergence

Algorithm 3 Async-HDP
  for each processor p in parallel do
    repeat
      Sample $z^p$ and then $\alpha^p$, $\beta_k^p$, $\gamma^p$ locally
      Receive $N_{wk}^g$, $\alpha^g$, $\beta_k^g$ from random proc $g$
      Send $N_{wk}^p$, $\alpha^p$, $\beta_k^p$ to proc $g$
      if $p$ has met $g$ before then
        $N_{wk}^{\neg p} \leftarrow N_{wk}^{\neg p} - \tilde{N}_{wk}^g + N_{wk}^g$
      else
        $N_{wk}^{\neg p} \leftarrow N_{wk}^{\neg p} + N_{wk}^g$
      end if
      $\alpha^p \leftarrow (\alpha^p + \alpha^g)/2$ and $\beta_k^p \leftarrow (\beta_k^p + \beta_k^g)/2$
    until convergence
  end for
Motivated again by the advantages of local asynchronous communication between processors, we propose an Async-HDP algorithm. It is very similar in spirit to Async-LDA, and so we focus on the differences in our description. First, the sampling equation for $z^p$ differs from that of Async-LDA, since some probability mass is reserved for new topics:
$$P(z_{ij}^p = k \mid z^{p,\neg ij}, w^p) \propto \begin{cases} \dfrac{(N^{\neg p}+N^p)_{wk}^{\neg ij} + \eta}{\sum_w (N^{\neg p}+N^p)_{wk}^{\neg ij} + W\eta}\,\bigl(N_{jk}^{p,\neg ij} + \alpha^p\beta_k^p\bigr) & \text{if } k \in K^p, \\[2mm] \alpha^p \beta_{\mathrm{new}}^p / W & \text{if } k \text{ is new.}\end{cases}$$
                                            KOS       NIPS       NYT          PUBMED
Total number of documents in training set   3,000     1,500      300,000      8,200,000
Size of vocabulary                          6,906     12,419     102,660      141,043
Total number of words                       410,595   1,932,365  99,542,125   737,869,083
Total number of documents in test set       430       184        -            -

Table 1: Data sets used for perplexity and speedup experiments
We resample the hyperparameters $\alpha^p$, $\beta_k^p$, $\gamma^p$ locally(3) during the inference step, and keep $\eta$ fixed. In Async-HDP, a processor can add new topics to its collection during the inference step. Thus, when two processors communicate, the number of topics on each processor might differ. One way to merge topics is to perform bipartite matching across the two topic sets, using the Hungarian algorithm. However, performing this topic-matching step imposes a computational penalty as the number of topics increases. In our experiments for Async-LDA, Parallel-HDP, and Async-HDP, we do not perform topic matching; instead we simply combine the topics on different processors based on their topic IDs, and (somewhat surprisingly) the topics gradually self-organize and align. Newman et al. [5] also observed this same behavior in Parallel-LDA.
During the communication step, the counts $N_{wk}^p$ and the parameter values $\alpha^p$ and $\beta_k^p$ are exchanged and merged. Async-HDP removes a processor's previous influence through the same MH technique used in Async-LDA. Pseudocode for Async-HDP is provided in the display box for Algorithm 3.
5 Experiments
We use four text data sets for evaluation: KOS, a data set derived from blog entries (dailykos.com);
NIPS, a data set derived from NIPS papers (books.nips.cc); NYT, a collection of news articles
from the New York Times (nytimes.com); and PUBMED, a large collection of PubMed abstracts
(ncbi.nlm.nih.gov/pubmed/). The characteristics of these four data sets are summarized in Table 1.
For our perplexity experiments, parallel processors were simulated in software and run on smaller
data sets (KOS, NIPS), to enable us to test the statistical limits of our algorithms. Actual parallel
hardware is used to measure speedup on larger data sets (NYT, PUBMED). Our simulation features
a gossip scheme over a fully connected network that lets each processor communicate with one other
random processor at the end of every iteration, e.g., with P =100, there are 50 pairs at each iteration.
In our perplexity experiments, the data set is separated into a training set and a test set. We learn our
models on the training set, and then we measure the performance of our algorithms on the test set
using perplexity, a widely-used metric in the topic modeling community.
We briefly describe how perplexity is computed for our models. Perplexity is simply the exponentiated average per-word log-likelihood. For each of our experiments, we perform S = 5 different
Gibbs runs, with each run lasting 1500 iterations (unless otherwise noted), and we obtain a sample
at the end of each of those runs. The 5 samples are then averaged when computing perplexity. For
Parallel-HDP, perplexity is calculated in the same way as in standard HDP:
$$\log p(x^{\mathrm{test}}) = \sum_{jw} \log \frac{1}{S} \sum_s \sum_k \hat\theta_{jk}^s \hat\phi_{wk}^s, \quad \text{where} \quad \hat\theta_{jk}^s = \frac{\alpha\beta_k + N_{jk}^s}{\sum_k \alpha\beta_k + N_j^s}, \qquad \hat\phi_{wk}^s = \frac{\eta + N_{wk}^s}{W\eta + N_k^s}. \qquad (4)$$
After the model is run on the training data, $\hat\phi_{wk}^s$ is available in sample $s$. To obtain $\hat\theta_{jk}^s$, one must resample the topic assignments on the first half of each document in the test set while holding $\hat\phi_{wk}^s$ fixed. Perplexity is evaluated on the second half of each document in the test set, given $\hat\phi_{wk}^s$ and $\hat\theta_{jk}^s$.
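A direct transcription of this evaluation (an illustrative sketch; theta_hat and phi_hat hold the S posterior samples as matrices, and x_test iterates over (document, word) pairs from the held-out document halves):

import numpy as np

def perplexity(x_test, theta_hat, phi_hat):
    """Test perplexity following Equation 4, averaged over S samples."""
    S = len(theta_hat)
    log_lik, n_tokens = 0.0, 0
    for j, w in x_test:
        p = np.mean([theta_hat[s][j] @ phi_hat[s][w] for s in range(S)])
        log_lik += np.log(p)
        n_tokens += 1
    return np.exp(-log_lik / n_tokens)  # exponentiated negative mean log-likelihood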
The perplexity calculation for Async-LDA and Async-HDP uses the same formula. Since each processor effectively learns a separate local topic model, we can directly compute the perplexity for each processor's local model. In our experiments, we report the average perplexity among processors, and we show error bars denoting the minimum and maximum perplexity among all processors. The variance of perplexities between processors is usually quite small, which suggests that the local topic models learned on each processor are equally accurate.
For KOS and NIPS, we used the same settings for priors and hyperpriors: $\alpha = 0.1$, $\beta = 0.01$ for LDA and Async-LDA, and $\eta = 0.01$, $\gamma \sim \mathrm{Gam}(10, 1)$, and $\alpha \sim \mathrm{Gam}(2, 1)$ for the HDP algorithms.
(3) Sampling $\alpha^p$, $\beta_k^p$, $\gamma^p$ requires a global view of variables like $m_{\cdot k}$, the total number of "tables" serving "dish" $k$ [2]. These values can be asynchronously propagated in the same way that the counts are propagated.
Figure 2 (plots omitted): (a) Left: Async-LDA perplexities on KOS (K = 8, 16, 32, 64). (b) Middle: Async-LDA perplexities on NIPS (K = 10, 20, 40, 80). (c) Right: Async-LDA perplexities on KOS, K=16, with many processors. Cache=5 when P >= 100; 3000 iterations run when P >= 500.
5.1 Async-LDA perplexity and speedup results
Figures 2(a,b) show the perplexities for Async-LDA on KOS and NIPS data sets for varying numbers
of topics. The variation in perplexities between LDA and Async-LDA is slight and is significantly
less than the variation in perplexities as the number of topics K is changed. These numbers suggest
that Async-LDA converges to solutions of the same quality as standard LDA. While these results are
based on a single test/train split of the corpus, we have also performed cross-validation experiments
(results not shown) which give essentially the same results across different test/train splits.
We also stretched the limits of our algorithm by increasing P (e.g. for P =1500, there are only
two documents on each processor), and we found that performance was virtually unchanged (figure
2(c)). As a baseline we ran an experiment where processors never communicate. As the number of
processors P was increased from 10 to 1500 the corresponding perplexities increased from 2600 to
5700, dramatically higher than our Async-LDA algorithm, indicating (unsurprisingly) that processor
communication is essential to obtain good quality models. Figure 3(a) shows the rate of convergence
of Async-LDA. As the number of processors increases, the rate of convergence slows, since it takes
more iterations for information to propagate to all the processors. However, it is important to note
that one iteration in real time of Async-LDA is up to P times faster than one iteration of LDA. We
show the same curve in terms of estimated real time in figure 3(b) , assuming a parallel efficiency of
0.5, and one can see that Async-LDA converges much more quickly than LDA. Figure 3(c) shows
actual speedup results for Async-LDA on NYT and PUBMED, and the speedups are competitive
to those reported for Parallel-LDA [5]. As the data set size grows, the parallel efficiency increases,
since communication overhead is dwarfed by the sampling time.
In Figure 3(a), we also show the performance of a baseline asynchronous averaging scheme, where global counts are averaged together: $N_{wk}^{\neg p} \leftarrow (N_{wk}^{\neg p} + N_{wk}^{\neg g})/d + N_{wk}^g$. To prevent unbounded count growth, $d$ must be greater than 2, and so we arbitrarily set $d$ to 2.5. While this averaging scheme initially converges quickly, it converges to a final solution that is worse than Async-LDA's, regardless of the setting for $d$.
The rate of convergence for Async-LDA with P = 100 can be dramatically improved by letting each processor maintain a cache of previous $N_{wk}^g$ counts of other processors. Figures 3(a,b), C=5, show the improvement made by letting each processor cache the five most recently seen $N_{wk}^g$'s. Note that we still assume limited bandwidth: processors do not forward individual cached counts, but instead share a single matrix of combined cache counts that helps the processors achieve a faster burn-in. In this manner, one can elegantly trade off time and memory.
Figure 3 (plots omitted): (a) Left: Convergence plot for Async-LDA on KOS, K=16. (b) Middle: The same plot with the x-axis as relative time. (c) Right: Speedup results for NYT and PUBMED on a cluster, using Message Passing Interface (MPI).
Figure 4 (plots omitted): (a) Left: Perplexities for Parallel-HDP and Async-HDP. Cache=5 used for Async-HDP P=100. (b) Middle: Convergence plot for Parallel-HDP on KOS. (c) Right: Convergence plot for Async-HDP on KOS.
5.2 Parallel-HDP and Async-HDP results
Perplexities for Parallel-HDP after 1500 iterations are shown in Figure 4(a), and they suggest that the model generated by Parallel-HDP has nearly the same predictive power as standard HDP. Figure 4(b) shows that Parallel-HDP converges at essentially the same rate as standard HDP on the KOS data set, even though topics are generated at a slower rate. Topics grow at a slower rate in Parallel-HDP since new topics that are generated locally on each processor are merged together during each synchronization step. In this experiment, while the number of topics is still growing, the perplexity has converged, because the newest topics are small and do not significantly affect the predictive power of the model. The number of topics does stabilize after thousands of iterations.
Perplexities for Async-HDP are shown in Figures 4(a,c) as well. On the NIPS data set, there is a slight perplexity degradation, which is partially due to non-optimal parameter settings for $\gamma$ and $\alpha$. Topics are generated at a slightly faster rate for Async-HDP than for Parallel-HDP because Async-HDP takes a less aggressive approach to pruning small topics, since processors need to be careful when pruning topics locally. Like Parallel-HDP, Async-HDP converges rapidly to a good solution.
5.3 Extended experiments for realistic scenarios
In certain applications, it is desirable to learn a topic model incrementally as new data arrives. In
our framework, if new data arrives, we simply assign the new data to a new processor, and then
let that new processor enter the "world" of processors with which it can begin to communicate.
Our asynchronous approach requires no global initialization or global synchronization step. We
do assume a fixed global vocabulary, but one can imagine schemes which allow the vocabulary to
grow as well. We performed an experiment for Async-LDA where we introduced 10 new processors
(each carrying new data) every 100 iterations. In the first 100 iterations, only 10% of the KOS data
is known, and every 100 iterations, an additional 10% of the data is added to the system through
new processors. Figure 5(a) shows that perplexity decreases as more processors and data are added.
After 1000 iterations, the perplexity of Async-LDA has converged to the standard LDA perplexity.
Thus, in this experiment, learning in an online fashion does not adversely affect the final model.
In the experiments previously described, documents were randomly distributed across processors. In reality, a processor may have a document set specialized to only a few topics. We investigated Async-LDA's behavior on a non-random distribution of documents over processors. After running LDA (K=20) on NIPS, we used the inferred mixtures $\theta_{jk}$ to separate the corpus into 20 different sets of documents corresponding to the 20 topics. We assigned 2 sets of documents to each of 10 processors, so that each processor had a document set specialized to 2 topics. Figure 5(b) shows that Async-LDA performs just as well on this non-random distribution of documents.
Figure 5 (plots omitted): (a) Left: Online learning for Async-LDA on KOS, K=16 (10%, 20%, 30%, 40% of the data seen, etc.). (b) Middle: Comparing random vs. non-random distribution of documents for Async-LDA on NIPS, K=20. (c) Right: Async-LDA on KOS, K=16, where processors have varying amounts of data. In all 3 cases, Async-LDA converges to a good solution.
Another situation of interest is the case where the amount of data on each processor varies. KOS was divided into 30 blocks of 100 documents, and these blocks were assigned to 10 processors according to the distribution {7, 6, 4, 3, 3, 2, 2, 1, 1, 1}. We assume that if a processor has $k$ blocks, then it will take $k$ units of time to complete one sampling sweep. Figure 5(c) shows that this load imbalance does not significantly affect the final perplexity achieved. More generally, the time $T^p$ that each processor $p$ takes to perform Gibbs sampling dictates the communication graph that will ensue. There exist pathological cases where the graph may be disconnected due to phase-locking (e.g. 5 processors with times T = {10, 12, 14, 19, 20}, where P1, P2, P3 enter the network at time 0 and P4, P5 enter the network at time 34). However, the graph is guaranteed to be connected over time if $T^p$ has a stochastic component (e.g. due to network delays), a reasonable assumption in practice.
In our experiments, we assumed a fully connected network of processors and did not focus on other network topologies. After running Async-LDA on both a 10x10 fixed grid network and a 100-node chain network on KOS (K=16), we have verified that Async-LDA achieves the same perplexity as LDA as long as caching and forwarding of cached counts occurs between processors.
6 Discussion and conclusions
The work that is most closely related to that in this paper is that of Mimno and McCallum [3]
and Newman et al. [5], who each propose parallel algorithms for the collapsed sampler for LDA.
In other work, Nallapati et al. [4] parallelize the variational EM algorithm for LDA, and Wolfe
et al. [8] examine asynchronous EM algorithms for LDA. The primary distinctions between our
work and other work on distributed LDA based on Gibbs sampling are that (a) our algorithms use
purely asynchronous communication rather than a global synchronous scheme, and (b) we have also
extended these ideas (synchronous and asynchronous) to HDP. More generally, exact parallel Gibbs
sampling is difficult to perform due to the sequential nature of MCMC. Brockwell [9] presents a prefetching parallel algorithm for MCMC, but this technique is not applicable to the collapsed sampler
for LDA. There is also a large body of prior work on gossip algorithms (e.g., [6]), such as Newscast
EM, a gossip algorithm for performing EM on Gaussian mixture learning [10].
Although processors perform local Gibbs sampling based on inexact global counts, our algorithms
nonetheless produce solutions that are nearly the same as that of standard single-processor samplers.
Providing a theoretical justification for these distributed algorithms is still an open area of research.
We have proposed a new set of algorithms for distributed learning of LDA and HDP models. Our
perplexity and speedup results suggest that topic models can be learned in a scalable asynchronous
fashion for a wide variety of situations. One can imagine our algorithms being performed by a large
network of idle processors, in an effort to mine the terabytes of information available on the Internet.
Acknowledgments
This material is based upon work supported in part by NSF under Awards IIS-0083489 (PS, AA), IIS-0447903 and IIS-0535278 (MW), and an NSF graduate fellowship (AA). MW was also supported
by ONR under Grant 00014-06-1-073, and PS was also supported by a Google Research Award.
References
[1] D. Blei, A. Ng, and M. Jordan. Latent Dirichlet allocation. JMLR, 3:993-1022, 2003.
[2] Y. Teh, M. Jordan, M. Beal, and D. Blei. Hierarchical Dirichlet processes. JASA, 101(476), 2006.
[3] D. Mimno and A. McCallum. Organizing the OCA: learning faceted subjects from a library of digital books. In JCDL '07, pages 376-385, New York, NY, USA, 2007. ACM.
[4] R. Nallapati, W. Cohen, and J. Lafferty. Parallelized variational EM for latent Dirichlet allocation: An experimental evaluation of speed and scalability. In ICDM Workshop on High Perf. Data Mining, 2007.
[5] D. Newman, A. Asuncion, P. Smyth, and M. Welling. Distributed inference for latent Dirichlet allocation. In NIPS 20. MIT Press, Cambridge, MA, 2008.
[6] S. Boyd, A. Ghosh, B. Prabhakar, and D. Shah. Gossip algorithms: design, analysis and applications. In INFOCOM, pages 1653-1664, 2005.
[7] T. L. Griffiths and M. Steyvers. Finding scientific topics. PNAS, 101 Suppl 1:5228-5235, April 2004.
[8] J. Wolfe, A. Haghighi, and D. Klein. Fully distributed EM for very large datasets. In ICML '08, pages 1184-1191, New York, NY, USA, 2008. ACM.
[9] A. Brockwell. Parallel Markov chain Monte Carlo simulation by pre-fetching. JCGS, 15, No. 1, 2006.
[10] W. Kowalczyk and N. Vlassis. Newscast EM. In NIPS 17. MIT Press, Cambridge, MA, 2005.
2,785 | 3,525 | Finding Latent Causes in Causal Networks:
an Efficient Approach Based on Markov Blankets
Jean-Philippe Pellet¹,²
jep@zurich.ibm.com
¹ Pattern Recognition and Machine Learning Group
Swiss Federal Institute of Technology Zurich
8092 Zurich, Switzerland

André Elisseeff²
ael@zurich.ibm.com
² Data Analytics Group
IBM Research GmbH
8803 Rüschlikon, Switzerland
Abstract
Causal structure-discovery techniques usually assume that all causes of more than one variable are observed. This is the so-called causal sufficiency assumption. In practice, it is untestable, and often violated. In this paper, we present an efficient causal structure-learning algorithm, suited for causally insufficient data. Similar to algorithms such as IC* and FCI, the proposed approach drops the causal sufficiency assumption and learns a structure that indicates (potential) latent causes for pairs of observed variables. Assuming a constant local density of the data-generating graph, our algorithm makes a quadratic number of conditional-independence tests w.r.t. the number of variables. We show with experiments that our algorithm is comparable to the state-of-the-art FCI algorithm in accuracy, while being several orders of magnitude faster on large problems. We conclude that MBCS* makes a new range of causally insufficient problems computationally tractable.
Keywords: Graphical Models, Structure Learning, Causal Inference.
1 Introduction: Task Definition & Related Work
The statistical definition of causality pioneered by Pearl (2000) and Spirtes et al. (2001) has shed
new light on how to detect causation. Central in this approach is the automated detection of cause-effect relationships using observational (i.e., non-experimental) data. This can be a necessary task, as in many situations, performing randomized controlled experiments to unveil causation can be impossible, unethical, or too costly. When the analysis deals with variables that cannot be manipulated, being able to learn from data collected by observing the running system is the only possibility.
It turns out that learning the full causal structure of a set of variables is, in its most general form, impossible. If we suppose that the "causal ground truth" can be represented by a directed acyclic graph (DAG) over the variables to analyze, denoted by V, where the arcs denote direct causation, current causal structure-learning algorithms can only learn an equivalence class representing statistically indistinguishable DAGs. This class can be represented by a partially directed acyclic graph (PDAG), where arcs between variables may be undirected, indicating that both directions are equally possible given the data. This is known as the problem of causal underdetermination (Pearl, 2000).
Common to most structure-learning algorithms are three important assumptions which ensure the
correctness of the causal claims entailed by the returned PDAG (see Scheines, 1997, for a more
extensive discussion of these assumptions and of their implications). First, the causal Markov condition states that every variable is independent of its non-effects given its direct causes. It implies
that every dependency can be explained by some form of causation (direct, indirect, common cause,
or any combination). Second, the faithfulness condition demands that the dependencies be DAG-isomorphic; i.e., that there be a DAG whose entailed variable dependencies coincide exactly with
cause for two variables in V is also in V. Causal sufficiency often appears as the most controversial assumption, as it is generally considered impossible to ensure that all possible causes are measured: there is no such thing as a closed world. In this paper, we are interested in relaxing causal sufficiency: we do not require the data to contain all common causes of pairs of variables.
Some of the few algorithms that relax causal sufficiency are Inductive Causation* (IC*) by Pearl and
Verma (1991); Pearl (2000), and Fast Causal Inference (FCI) by Spirtes et al. (1995, 2001). The kind of graph IC* and FCI return is known as a partial ancestral graph (PAG), which indicates for each link whether it is (potentially) the manifestation of a hidden common cause for the two linked variables.
Assuming continuous variables with linear causal influences, Silva et al. (2006) recover hidden
variables that are the cause for more than two observed variables, to infer the relationships between
the hidden variables themselves. They check additional constraints on the covariance matrix, known
as tetrad constraints (Scheines et al., 1995), entailed by special kinds of hidden structures.
There are more specialized techniques to deal with hidden variables. Elidan et al. (2001) look for
structural signatures of hidden variables in a learned DAG model. Boyen et al. (1999) describe a
technique that looks for violation of the Markov condition to infer the presence of latent variables in
Bayesian networks. Once a hidden variable is identified, Elidan and Friedman (2001) discuss how
to assign it a given dimensionality to best model its interactions with the observed variables.
In this paper, we describe recent advances in making the PAG-learning task tractable for a wider range of problems, and present the Markov blanket/collider set (MBCS*) algorithm. In Section 2, we formally describe the PAG-learning task and motivate it with an example. Section 3 describes FCI. We then present MBCS* in Section 4 and compare it experimentally to FCI in Section 5. We finally conclude in Section 6. Correctness proofs are provided in the supplemental material.¹
Notation Throughout this paper, uppercase capitals such as X and Y denote variables or nodes in a graph, and sets of variables are set in boldface, such as V. H and L (possibly with indices) denote latent (unobserved) variables. Bold lowercase Greek characters such as π are paths (ordered lists of nodes), while the calligraphic letter 𝒢 refers to a graph. Finally, we denote conditional independence of X and Y given Z by the notation (X ⊥ Y | Z).
2 Mixed Ancestral Graphs & Partial Ancestral Graphs
In this section, we first introduce the notation of mixed ancestral graphs (MAGs) and partial ancestral graphs (PAGs) used by Spirtes et al. (1996) and describe how to learn them on a high level. We first review the definition of a V-structure.
Definition 2.1 (V-structure) In a causal DAG, a V-structure is a triplet X -> Z <- Y, where X and Y are nonadjacent. Z is then called an unshielded collider for X and Y. Its presence implies:

∃ S_XY ⊆ V \ {X, Y, Z} : ((X ⊥ Y | S_XY) and (X ⊥̸ Y | S_XY ∪ {Z})).    (1)
In a V-structure, two causes X and Y, which are made independent by S_XY, become dependent when conditioned on a common effect Z (or one of its descendants). This is the base fact that allows initial edge orientation in causal structure learning.
Let us now suppose we are learning from data whose (unknown) actual causal DAG is:
(2)
Further assume that HI and H2 are hidden. Assuming the adjacencies X - Y - Z - W have
been found, conditional-independence tests will reveal that (X Jl Z) and (X .,il Z I Y), which is
a sufficient condition for the V-structure X ---> Y f - Z. Similarly, (Y Jl W) and (Y .,il W I Z) is
a sufficient condition for the V-structure Y ---> Z f - W. In a DAG like in a PDAG, however, those
two overlapping V-structures are incompatible. The simplest DAG compatible with those findings
needs the addition of an extra variable H:
X -> Y <- H -> Z <- W.    (3)

¹Available at http://jp.pellet.name/publis/pellet08nips_supplement.pdf
Actually, (3) is the projection of the latent structure in (2). In the projection of a latent structure as defined by Pearl (2000), all hidden variables are parentless and have only two direct effects. Verma (1993) proved that any hidden structure has at least one projection. Notice that we cannot recover the information about the two separate hidden variables H1 and H2. In the projection, information can thus be lost with respect to the true latent structure.

Whereas causally sufficient datasets are represented as DAGs and learned as PDAGs to represent independence-equivalent DAGs, the projection of latent structures is represented by special graphs known as mixed ancestral graphs (MAGs) (Spirtes et al., 2001), which allow for bidirected arrows to represent a hidden cause for a pair of variables. Independence-equivalent MAGs are represented by partial ancestral graphs (PAGs). PAGs are thus to MAGs what PDAGs are to DAGs, and structure-learning algorithms like FCI return a PAG.
PAGs allow four kinds of arrows: ->, o->, o-o, and <->. X -> Y in the PAG denotes true causation X -> Y in the projection; X <-> Y indicates the presence of a latent cause X <- H -> Y (without excluding direct causation); X o-> Y denotes either true causation X -> Y or a latent cause X <- H -> Y (or a combination of both); finally, X o-o Y denotes potential causation from X -> Y or Y -> X and/or a latent common cause X <- H -> Y in the projection, and is thus the most "agnostic link." An asterisk as an arrowhead is a wildcard for any of the three possible endpoints of a link, such that X *-> Y, for instance, means any of X -> Y, X o-> Y, and X <-> Y. Additionally, we also use the notation X *-* Z *-* Y, with Z marked as a definite noncollider, to indicate that any of X *-> Z -> Y, X <- Z <-* Y, or X <- Z -> Y can occur, but not X *-> Z <-* Y.
To illustrate how MAGs and PAGs are related to a latent structure, consider the causal graph shown in Figure 1 (i). There, the hidden variable H is a cause for 3 observed variables, and L is a hidden variable in the causal chain from Z to W. All other variables are observed. In (ii), we show the projection of (i): note that we lose information about L and about the fact that H1, H2, and H3 are actually the same variable. The corresponding MAG is shown in (iii), and in (iv) the PAG that represents the class of independence-equivalent MAGs of which (iii) is a member. Note how the causal-underdetermination problem influences PAG learning: for instance, the model shown in (vi), if learned as a PAG, will also be represented as in (iv). (v) is commented on later in the text.
[Figure 1: six panels, (i)-(vi), as described in the caption below.]
Figure 1: (i) Example of a causal structure with the hidden variables H and L. (ii) Projection of (i). (iii) MAG representing the projection (ii). (iv) PAG representing the class of projections that are independence-equivalent to (iii). (v) The moral graph of (iii). (vi) Another structure with no hidden variable whose learned PAG is (iv).
3 Learning PAGs with the FCI Algorithm

This section now turns to the task of learning PAGs with conditional-independence tests and briefly describes the reference algorithm, FCI.
In principle, learning the structure of a PAG is not much different from learning the structure of a PDAG. The main difference is that instead of creating V-structures in a PDAG, we now just add arrowheads into the identified colliders, independently of what the other arrow endpoints are. A PAG-learning algorithm could thus operate this way:

1. Adjacencies: insert the "agnostic link" X o-o Y if ∀S ⊆ V \ {X, Y} : (X ⊥̸ Y | S);
2. V-structures: when the condition (1) holds for a triplet (X, Z, Y), add arrowheads into Z;
3. Orientations: use rules to further orient "agnostic" endpoints wherever possible.
The second difference w.r.t. PDAG learning is in the set of rules applied in Step 3 to further orient the graph. Those rules are detailed in the next subsection.
To the best of our knowledge, the FCI algorithm is regarded as the state-of-the-art implementation of a PAG-learning algorithm. We list its pseudocode in Algorithm 1. The notation Nb(X) stands for the set of direct neighbors of X in the graph being constructed 𝒢 (and potentially changes at each iteration). The set ExtDSep(X, Y) is the union of Possible-D-Sep(X, Y) and Possible-D-Sep(Y, X). Possible-D-Sep(X, Y) is the set of nodes Z where there is an undirected path π between X and Z such that for each subpath S *-* W *-* T of π, either (a) W is a collider; or (b) W is not marked as a noncollider and S, W, T form a triangle. (A triangle is a set of three nodes all adjacent to one another.)
We list the orientation rules as a separate procedure in Algorithm 2, as we reuse them in our algorithm. Rule 1 preserves acyclicity. Rule 2 honors the noncollider constraint when one of the two endpoints is an arrowhead. Rule 3 orients double-triangle structures; for instance, it orients S o-> Z in Figure 1 (iii). Rule 4 needs the following definition (Spirtes et al., 1995).
Definition 3.1 (DDP) In a PAG 𝒢, π is a definite discriminating path (DDP) between S and Y (S, Y nonadjacent) for Z (Z ≠ S, Y) if and only if π is an undirected path between S and Y containing Z, Z precedes Y on π, every vertex V between S and Z on π is a collider or a definite noncollider on π, and:

(i) if V and V' are adjacent on π and V' is between V and Z, then V <-> V' on π;
(ii) if V is between S and Z on π and V is a collider on π, then V -> Y in 𝒢, else Y <-> V in 𝒢.
Figure 2 shows an example for a DDP and for Rule 4, which produces the orientation Z <-o Y. For a more extensive justification and a proof of those rules, see Spirtes et al. (1995, 2001).
The time complexity of FCI makes it non-scalable for larger networks. In particular, the two subset searches at lines 5 and 19 of Algorithm 1 are computationally costly in dense networks. In the next section, we present an algorithm that takes another approach at PAG learning to tackle problems larger than those that FCI can handle.
Figure 2: Path π = (S, V, X, Z, Y) is a DDP for Z. Rule 4 adds an arrowhead into Z if Z ∉ S_SY.
4 Efficient Structure Learning with the MBCS* Algorithm
In this section, we propose a PAG-learning algorithm, MBCS*, which is more efficient than FCI in the sense that it performs much fewer conditional-independence tests, whose average conditioning-set size is smaller. We show in Section 5 that MBCS* compares very favorably to FCI on test networks in terms of computational tractability, while reaching similar accuracy. Pseudocode for MBCS* is listed in Algorithm 3.

MBCS* proceeds in three steps: first, it detects the Markov blankets for each variable; second, it examines the triangle structures to identify colliders and noncolliders; finally, it uses the same orientation rules as FCI to obtain the maximally oriented PAG. We detail the first two steps below; the orientation rules are the same as for FCI.
4.1 Step 1: Learning the Markov Blanket

The first phase of MBCS* builds an undirected graph where each variable is connected to all members of its Markov blanket.
Definition 4.1 (Markov blanket) The Markov blanket of a node X is the smallest set of variables Mb(X) such that ∀Y ∈ V \ Mb(X) \ {X} : (X ⊥ Y | Mb(X)).

Assuming faithfulness, Mb(X) is unique. In a DAG, it corresponds to the parents, children, and children's parents (spouses) of X. We extend this to MAGs.
Algorithm 1    𝒢 = FCI(V, I)
Input:
  V: set of observed variables
  I: a conditional-independence oracle, called with the notation (· ⊥ · | ·)
Output:
  𝒢: maximally oriented partial ancestral graph

 1: 𝒢 <- fully connected graph over V
 2: i <- 0
    // Detect adjacencies
 3: while ∃(X - Y) s.t. |Nb(X)| > i do
 4:   for each X - Y s.t. |Nb(X)| > i do
 5:     for each S ⊆ Nb(X) \ {Y} of size i do
 6:       if (X ⊥ Y | S) then
 7:         remove link X - Y from 𝒢
 8:         S_XY, S_YX <- S
 9:         break from loop line 4
10:       end if
11:     end for
12:   end for
13:   i <- i + 1
14: end while
15: for each X - Z - Y s.t. X, Y nonadjacent do
16:   if Z ∉ S_XY then orient as X -> Z <- Y
17: end for
    // Detect additional adjacencies
18: for each pair of adjacent variables X, Y do
19:   for each S ⊆ ExtDSep(X, Y) \ {X, Y} do
20:     if (X ⊥ Y | S) then
21:       remove link X - Y from 𝒢
22:       S_XY, S_YX <- S
23:       break from loop line 18
24:     end if
25:   end for
26: end for
27: orient every link as o-o
    // Orient V-structures
28: for each X *-* Z *-* Y s.t. X, Y nonadjacent do
29:   if Z ∉ S_XY then orient as X *-> Z <-* Y
30:   else mark Z as noncollider: X *-* Z *-* Y
31: end for
32: return ORIENTMAXIMALLY(𝒢, ∀(X, Y): S_XY)

Algorithm 2    𝒢 = ORIENTMAXIMALLY(𝒢, a list of sets S_XY)
Input:
  𝒢: partial ancestral graph
  S_XY: for (some) nonadjacent pairs (X, Y): a d-separating set of variables
Output:
  𝒢: maximally oriented partial ancestral graph

 1: while 𝒢 is changed by some rule do
 2:   for each X *-o Y such that there is a directed path from X to Y do orient as X *-> Y   // Rule 1
 3:   for each X *-> Z o-* Y with Z marked as a noncollider do orient as X *-> Z -> Y        // Rule 2
 4:   for each X *-> Z <-* Y with S *-o Z and S ∈ S_XY do orient as S *-> Z                  // Rule 3
 5:   for each definite discriminating path π between S and Y for Z do                       // Rule 4
 6:     if X *-* Y where X is adjacent to Z on π and X, Z, Y form a triangle then
 7:       if S_SY exists and Z ∉ S_SY then orient as X *-> Z <-* Y
 8:       else mark Z as a noncollider X *-* Z *-* Y
 9:     end if
10:   end for
11: end while

Algorithm 3    𝒢 = MBCS*(V, I)
Input:
  V: set of observed variables
  I: a conditional-independence oracle, called with the notation (· ⊥ · | ·)
Output:
  𝒢: maximally oriented partial ancestral graph

    // Initialization
 1: 𝒢 <- empty graph over V
    // Find Markov blankets (Grow-Shrink)
 2: for each X ∈ V do
 3:   S <- empty set of Markov blanket variables
 4:   while ∃Y ∈ V \ {X} s.t. (X ⊥̸ Y | S) do
 5:     add Y to S
 6:   while ∃Y ∈ S s.t. (X ⊥ Y | S \ {Y}) do
 7:     remove Y from S
 8:   for each Y ∈ S do add link X o-o Y
 9: end for
    // Add noncollider constraints
10: for each X o-o Z o-o Y s.t. X, Y nonadjacent do
11:   mark as noncollider X o-o Z o-o Y
12: end for
    // Adjust local structures (Collider Set search)
13: C <- empty list of collider-orientation directives
14: for each X *-* Y in a fully connected triangle do
15:   if ∃ collider set Z ⊆ Tri(X - Y) then
16:     S_XY <- d-separating set for (X, Y)
17:     remove link X *-* Y from 𝒢
18:     for each Z ∈ Z do
19:       add ordered triplet (X, Z, Y) to C
20:     for each Z ∈ S_XY do
21:       mark as noncollider X *-* Z *-* Y
22:   end if
23: end for
24: for each orientation directive (X, Z, Y) ∈ C do
25:   if X *-* Z *-* Y then orient as X *-> Z <-* Y
26: end for
27: return ORIENTMAXIMALLY(𝒢, ∀(X, Y): S_XY)
Property 4.2 In a faithful MAG, the Markov blanket Mb(X) of a node X is the set of parents, children, children's parents (spouses) of X, as well as the district of X and of the children of X, and the parents of each node of these districts, where the district of a node Y is the set of all nodes reachable from Y using only bidirected edges. (Proof in supplemental material.)
We use algorithmic ideas from Margaritis and Thrun (1999) to learn the Markov blanket of a node with a linear number of conditional-independence tests (proof in the supplemental material of Margaritis and Thrun, 1999). This technique is used in lines 3 to 6 of Algorithm 3. The resulting graph is an undirected graph called a moral graph, where each node is connected to its Markov blanket. Therefore, it contains spurious links to its spouses, to members of its district, to members of its children's districts, and to parents of nodes in those districts, which we all call SD links (for Spouse/District). Removal of those links is done in the second step of MBCS*.
4.2 Step 2: Removing the SD Links

In the second step of MBCS*, each undirected edge must be identified as either an SD link to be removed, or a true link of the original MAG to be kept. Direct parents and children are dependent given any conditioning set, while spouses and district members (and their parents) can be made independent. For each link X - Y, a search is thus performed to try to d-separate the two connected nodes. This search can be limited to the smallest of the Markov blankets of X and Y, as by definition they contain all nodes that can minimally make them independent from each other, provided they are linked by an SD link. If such a d-separating set S_XY is found, the link is removed. Interestingly, identifying a d-separating set S_XY also identifies the collider set for X and Y.
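A minimal sketch of this search follows; it scans subsets of the smaller Markov blanket for a d-separating set and, as a by-product, reads off the collider set as the triangle nodes excluded from it. The function names and the exhaustive subset scan are our own simplifications.

from itertools import combinations

def separator_and_colliders(X, Y, mb_X, mb_Y, tri_XY, ci_indep):
    # Try to d-separate X and Y within the smaller of their Markov blankets.
    base = (mb_X if len(mb_X) <= len(mb_Y) else mb_Y) - {X, Y}
    for size in range(len(base) + 1):
        for S in combinations(sorted(base), size):
            S = set(S)
            if ci_indep(X, Y, S):
                # Triangle nodes left out of S_XY act as colliders for (X, Y).
                return S, {Z for Z in tri_XY if Z not in S}
    return None  # no d-separating set: X - Y is a true link, not an SD link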
Definition 4.3 (Collider set) In an undirected graph 𝒢 over V, let Tri(X - Y) (X, Y adjacent) be the set of all vertices that form a triangle with X and Y. Suppose that 𝒢 is the moral graph of the DAG or MAG representing the causal structure of a faithful dataset. A set of vertices Z ⊆ Tri(X - Y) then has the collider set property for the pair (X, Y) if it is the largest set that fulfills

∃ S_XY ⊆ V \ {X, Y} \ Z : (X ⊥ Y | S_XY)  and  ∀Z ∈ Z : (X ⊥̸ Y | S_XY ∪ {Z}).
Collider sets are useful because each node in them satisfies the property of a collider (1) and reveals a V-structure. Suppose (X ⊥ Y | S_XY): then each node Z (connected by a non-SD link to both X and Y) not in S_XY is a collider. This follows from the fact that for each path X *-* Z *-* Y where Z ∉ S_XY, the only structural possibility is to have arrowheads pointing into Z, by the definition of d-separation (Pearl, 1988). Similarly, if Z ∈ S_XY, then any orientation is possible save for a collider. Those two types of constraints appear in lines 19 and 21 of Algorithm 3, respectively. Note that more noncollider constraints are added in line 11: in the case X o-o Z o-o Y with X, Y nonadjacent, we know that Z cannot be a V-structure owing to the following lemma.
Lemma 4.4 In the moral graph 𝒢ₘ of a DAG or a MAG 𝒢, whenever the pattern X *-> Z <-* Y occurs in 𝒢, then X and Y are linked in 𝒢ₘ. (Proof in supplemental material.)
In practice, the search for collider sets and simultaneously for d-separating sets in lines 15 and 16
is performed following the implementation proposed by Pellet and Elisseeff (2008). They also
discuss why V-structure orientations must be delayed to line 25 instead of being made immediately
in line 19.
In the supplemental material to this paper, we prove that MBCS* correctly identifies all adjacencies
and V-structures. The final orientation step (Algorithm 2) requires d-separating-set information for
Rules 3 and 4: we also prove that MBCS* provides all necessary information.
5 Experimental Evaluation
We now compare FCI and MBCS* with a series of experiments. We took two standard benchmark networks, ALARM and HAILFINDER, and for each of them, chose to hide 0, 1, 2, and 3 variables, creating in total 8 learning problems. In a first series of experiments, the algorithms were run with a d-separation oracle, which is equivalent to perfect conditional-independence tests. Conditioning on hidden variables was prohibited.
Table 1: Comparison of MBCS* and FCI where conditional-independence tests are done using a d-separation oracle. We report the number of tests t; the weighted number of tests wt, where each test contributes to wt a summand equal to the size of its conditioning set; and the ratio of t for FCI over the t for MBCS*: r ≈ t(FCI) / t(MBCS*).
               #hid. v.   Alg.     t           wt           r
ALARM          0          MBCS*    2,237       12,123
                          FCI      9,340       27,666       4
               1          MBCS*    3,397       18,113
                          FCI      21,497      95,497       6
               2          MBCS*    5,208       27,576
                          FCI      31,018      145,322      6
               3          MBCS*    7,527       42,133
                          FCI      231,096     1,612,106    30
HAILFINDER     0          MBCS*    5,333       35,841
                          FCI      2,254,774   20,153,894   423
               1          MBCS*    6,516       42,379
                          FCI      2,302,707   20,448,775   353
               2          MBCS*    7,205       46,291
                          FCI      2,324,503   20,608,841   322
               3          MBCS*    18,244      117,209
                          FCI      2,622,312   22,888,622   143

[Figure 3: two bar-chart panels (ALARM, left; HAILFINDER, right) showing the number of edge errors and orientation errors for MBCS* and FCI at 0, 1, 2, and 3 hidden variables; see caption below.]
Figure 3: Comparison of MBCS* and FCI where conditional-independence tests are done using
Fisher's z-test. We compare the number of edge errors (missing/extraneous) and orientation errors (including missing/extraneous hidden variables). Error bars show the standard
deviation over the 5 runs.
The results are listed in Table 1. In a second series, multivariate Gaussian datasets (with 500 datapoints) were sampled from the networks, and data corresponding to the hidden variables were removed. The algorithms were run with Fisher's z-test on partial correlation as conditional-independence test. This was repeated 5 times for each learning problem.² For FCI, we used the authors' implementation in TETRAD (Scheines et al., 1995). MBCS* was implemented in Matlab. See Figure 3 for the comparison.
Table 1 shows in the columns named t that MBCS* makes up to 3 orders of magnitude fewer
conditional-independence tests than FCI on the tested networks. As the number of tests alone does
not reflect the quality of the algorithm, we also list in the wt column a weighted sum of tests, where
each test is weighted by the size of its conditioning set. As the ALARM network becomes denser
by hiding certain variables, the difference between FCI and MBCS* becomes even more apparent.
The inverse phenomenon is to be observed for HAILFINDER, where the difference between FCI and
MBCS* gets smaller: this is because this network is more densely connected, and both algorithms
exhibit a behavior gradually evolving towards the worst case of the fully connected graph. FCI
slowly "catches up" with MBCS* in those circumstances.
²We would have liked to both vary the number of samples for each dataset and include more test networks, but the running times of FCI on the larger instances, even when run with an upper limit on the maximum size of conditioning sets, were prohibitive, ranging up to a week on dense networks on a 2 GHz machine.
Figure 3 essentially shows that the difference of accuracy between FCI and MBCS* is not significant in either way. On each learning problem, the returned PAGs have been checked for correctness with respect to the maximally oriented PAG 𝒢₀ theoretically obtainable (as returned by the first series of experiments). The discrepancies were classified either as edge errors (when an arc was missing or extraneous in the returned PAG w.r.t. 𝒢₀), or orientation errors (when a predicted arc in the returned PAG was indeed present in 𝒢₀, but had a reversed direction or different end points). On all 8 learning problems, both edge and orientation errors are similar within the margin indicated by the standard-deviation error bars.
Note that the overall relatively high error rate comes from the failure of statistical tests with limited sample size. This indicates that structure learning is a hard problem and that low-sample-size
situations where tests typically fail must be investigated further.
6 Conclusion
With the formalism of MAGs and PAGs, it is possible to learn an independence-equivalence class of projections of latent structures. We have shown an algorithm, MBCS*, which is much more efficient than the reference FCI algorithm on networks that are sufficiently sparse, making up to three orders of magnitude fewer conditional-independence tests to retrieve the same structure. We have experimental evidence that the structural accuracy of MBCS* is as good as that of FCI. MBCS* is based on a first phase that identifies the Markov blanket of the underlying MAG, and then makes local adjustments to remove the spurious links and identify all colliders. The last step, involving orientation rules, is the same as for FCI. The reduced practical complexity makes MBCS* solve in minutes problems that FCI would need several days to solve. In that sense, MBCS* makes a whole new range of problems computationally tractable.
References

X. Boyen, N. Friedman, and D. Koller. Discovering the hidden structure of complex dynamic systems. In Proceedings of the 15th Conference on Uncertainty in Artificial Intelligence, 1999.

G. Elidan and N. Friedman. Learning the dimensionality of hidden variables. In Proceedings of the 17th Conference on Uncertainty in Artificial Intelligence, pages 144-151, 2001.

G. Elidan, N. Lotner, N. Friedman, and D. Koller. Discovering hidden variables: A structure-based approach. In Proceedings of the 13th Conference on Advances in Neural Information Processing Systems, 2001.

D. Margaritis and S. Thrun. Bayesian network induction via local neighborhoods. In Advances in Neural Information Processing Systems 12, 1999.

J. Pearl. Causality: Models, Reasoning, and Inference. Cambridge University Press, 2000.

J. Pearl. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. Morgan Kaufmann, Los Altos, 1988.

J. Pearl and T. Verma. A theory of inferred causation. In Proc. of the Second Int. Conf. on Principles of Knowledge Representation and Reasoning. Morgan Kaufmann, 1991.

J.-P. Pellet and A. Elisseeff. Using Markov blankets for causal structure learning. Journal of Machine Learning Research, 9:1295-1342, 2008.

R. Scheines. An introduction to causal inference. In V. McKim and S. Turner, editors, Causality in Crisis?, pages 185-200. Univ. of Notre Dame Press, 1997.

R. Scheines, P. Spirtes, C. Glymour, C. Meek, and T. Richardson. The TETRAD project: Constraint based aids to causal model specification. Technical report, Carnegie Mellon University, Dpt. of Philosophy, 1995.

R. Silva, R. Scheines, C. Glymour, and P. Spirtes. Learning the structure of linear latent variable models. Journal of Machine Learning Research, 7:191-246, 2006.

P. Spirtes, C. Meek, and T. Richardson. Causal inference in the presence of latent variables and selection bias. In Philippe Besnard and Steve Hanks, editors, Proceedings of the 11th Conference on Uncertainty in Artificial Intelligence, pages 491-498, San Mateo, CA, 1995. Morgan Kaufmann.

P. Spirtes, T. Richardson, and C. Meek. Heuristic greedy search algorithms for latent variable models. In Proceedings of the 6th International Workshop on Artificial Intelligence and Statistics, 1996.

P. Spirtes, C. Glymour, and R. Scheines. Causation, Prediction, and Search, Second Edition. The MIT Press, 2001. ISBN 0262194406.

T. Verma. Graphical aspects of causal models. Technical Report R-191, Cognitive Systems Laboratory, UCLA, 1993.
| 3525 |@word covariance:1 elisseeff:2 initial:1 series:4 contains:1 mag:15 interestingly:1 current:1 com:1 artijiciallntelligence:1 must:3 remove:5 drop:1 unshielded:1 alone:1 greedy:1 fewer:3 prohibitive:1 discovering:2 provides:1 node:16 district:8 constructed:1 direct:7 become:1 descendant:1 prove:2 jly:1 introduce:1 theoretically:1 indeed:1 behavior:1 themselves:1 detects:1 actual:1 becomes:2 provided:2 hiding:1 notation:7 underlying:1 alto:1 agnostic:3 project:1 what:2 crisis:1 kind:3 ail:1 supplemental:5 finding:2 unobserved:1 ifs:1 every:5 tackle:1 shed:1 iearning:4 exactly:1 appear:1 causally:3 local:4 sd:5 limit:1 path:8 chose:1 initialization:1 minimally:1 mateo:1 equivalence:2 relaxing:1 limited:2 analytics:1 range:3 statistically:1 directed:3 unique:1 faithful:2 practical:1 practice:2 lost:1 definite:3 union:1 swiss:1 jep:1 procedure:1 fci:43 nite:1 thi:2 evolving:1 projection:12 refers:1 get:1 cannot:3 selection:1 nb:2 impossible:3 influence:2 equivalent:5 missing:3 go:3 besnard:1 independently:1 identifying:1 immediately:1 rule:20 examines:1 regarded:1 datapoints:1 retrieve:1 handle:1 justification:1 suppose:4 gm:2 pioneered:1 us:1 defi:1 recognition:1 observed:9 worst:1 connected:8 removed:3 complexity:2 nonadjacent:7 dynamic:1 signature:1 motivate:1 triangle:7 sep:3 indirect:1 represented:6 univ:1 fast:1 describe:4 precedes:1 neighborhood:1 jean:1 whose:4 larger:3 apparent:1 denser:1 solve:2 relax:1 plausible:1 heuristic:1 statistic:1 richardson:3 final:1 isbn:1 took:1 propose:1 interaction:1 mb:5 fer:1 hid:10 loop:2 los:1 parent:8 double:1 empty:3 produce:1 generating:1 perfect:1 liked:1 wider:1 illustrate:1 measured:1 keywords:1 implemented:1 predicted:1 blanket:14 implies:2 indicate:2 come:1 switzerland:2 direction:2 greek:1 collider:18 owing:1 observational:1 material:5 adjacency:5 sand:1 require:1 assign:1 insert:1 hold:1 sufficiently:1 considered:1 ic:3 ground:1 prohibited:1 algorithmic:1 week:1 claim:1 pointing:1 dseparation:1 vary:1 smallest:2 proc:1 lose:1 largest:1 vz:1 correctness:3 pellet:4 weighted:3 federal:1 mit:1 gaussian:1 reaching:1 ily:1 spouse:5 indicates:3 check:1 detect:3 sense:2 inference:6 dependent:2 lowercase:1 typically:1 hidden:22 spurious:2 koller:2 interested:1 unveil:1 overall:1 orientation:16 denoted:1 extraneous:3 art:2 special:2 equal:1 once:1 represents:1 look:2 discrepancy:1 report:3 intelligent:1 summand:1 few:1 causation:11 oriented:5 manipulated:1 preserve:1 simultaneously:1 densely:1 delayed:1 phase:2 friedman:4 detection:1 possibility:2 evaluation:1 adjust:1 entailed:3 violation:1 notre:1 light:1 uppercase:1 tj:1 chain:1 implication:1 edge:6 partial:9 necessary:2 xy:16 arrowhead:2 iv:5 causal:31 instance:4 column:2 formalism:1 bidirected:2 tractability:1 vertex:3 subset:1 deviation:1 too:1 dependency:4 density:1 international:1 randomized:1 discriminating:1 ancestral:11 probabilistic:1 asterix:1 central:1 reflect:1 containing:1 possibly:1 slowly:1 cognitive:1 creating:2 return:4 potential:2 bold:1 int:1 vi:3 later:1 break:2 performed:2 closed:1 try:1 observing:1 analyze:1 linked:3 recover:2 om:1 il:2 accuracy:4 kaufmann:3 sy:3 identify:2 bayesian:2 underdetermination:2 pdag:6 classified:1 andre:1 whenever:1 checked:1 definition:10 failure:1 pags:9 e2:1 proof:4 di:1 con:1 sampled:1 proved:1 dataset:2 subsection:1 knowledge:2 dimensionality:2 directive:2 obtainable:1 actually:2 appears:1 steve:1 day:1 maximally:5 sufficiency:6 done:3 shrink:1 hank:1 just:1 correlation:1 hand:2 overlapping:1 quality:1 reveal:1 indicated:1 name:1 
effect:3 contain:2 true:4 inductive:1 noncollider:10 laboratory:1 spirtes:11 deal:2 adjacent:5 indistinguishable:1 manifestation:1 pdf:1 tt:3 performs:1 silva:2 reasoning:3 ranging:1 common:7 specialized:1 pseudocode:2 lotner:1 endpoint:4 conditioning:5 jl:11 extend:1 significant:1 mellon:1 cambridge:1 dag:14 ai:3 similarly:2 had:1 reachable:1 specification:1 base:1 add:7 j:1 multivariate:1 recent:1 hide:1 certain:1 honor:1 calligraphic:1 morgan:3 additional:2 elidan:4 ii:17 full:1 infer:2 technical:2 faster:1 inb:2 controlled:1 prediction:1 scalable:1 involving:1 ae:1 circumstance:1 essentially:1 iteration:1 represent:2 addition:1 whereas:1 else:3 grow:1 extra:1 operate:1 tri:1 hailfinder:4 undirected:7 thing:1 member:5 call:1 structural:3 presence:4 iii:7 automated:1 independence:18 identified:3 idea:1 whether:1 reuse:1 moral:4 returned:5 cause:16 matlab:1 generally:1 useful:1 detailed:1 listed:2 simplest:1 reduced:1 http:1 vy:1 notice:1 correctly:1 carnegie:1 group:2 commented:1 four:1 capital:1 kept:1 graph:32 sum:1 orient:13 run:4 letter:1 inverse:1 uncertainty:3 named:1 throughout:1 separation:2 incompatible:1 comparable:1 hi:4 dame:1 ddp:4 meek:3 quadratic:1 oracle:4 occur:1 constraint:7 fei:1 ucla:1 aspect:1 performing:1 relatively:1 glymour:3 combination:2 describes:2 smaller:2 character:1 making:2 wherever:1 explained:1 gradually:1 computationally:3 scheines:7 zurich:4 turn:2 discus:2 fail:1 know:2 tractable:3 end:18 available:1 yare:3 save:1 shortly:1 original:1 dpt:1 denotes:3 running:2 ensure:2 include:1 graphical:2 yx:1 build:1 added:1 occurs:1 sxy:5 costly:2 exhibit:1 reversed:1 link:20 separate:3 separating:6 thrun:3 collected:1 boldface:1 induction:1 assuming:4 index:1 relationship:2 insufficient:2 ratio:1 ql:1 potentially:2 margaritis:3 favorably:1 implementation:3 pdags:2 unknown:1 upper:1 markov:17 datasets:2 arc:4 benchmark:1 philippe:2 situation:2 excluding:1 head:4 inferred:1 pair:6 extensive:2 faithfulness:2 subpath:1 learned:4 pearl:9 able:1 bar:2 proceeds:1 usually:1 pattern:2 below:1 boyen:2 tb:1 rf:1 including:1 turner:1 representing:4 technology:1 identifies:3 catch:1 text:1 review:1 discovery:1 removal:1 fully:3 mixed:3 acyclic:2 h2:1 controversial:1 sufficient:3 principle:2 editor:2 verma:4 ibm:3 compatible:1 changed:1 last:1 bias:1 allow:2 institute:1 neighbor:1 pag:20 sparse:1 ghz:1 world:1 stand:1 author:1 made:3 coincide:1 san:1 reveals:1 conclude:2 xi:1 continuous:1 latent:16 search:7 triplet:3 why:1 table:3 additionally:1 learn:5 ca:1 contributes:1 alg:1 investigated:1 complex:1 main:1 dense:2 arrow:7 whole:1 alarm:4 edition:1 child:7 repeated:1 gmbh:1 causality:3 untestable:1 aid:1 third:1 learns:1 removing:1 minute:1 list:6 evidence:1 exists:1 workshop:1 magnitude:3 conditioned:1 demand:1 margin:1 suited:1 vii:1 yin:1 ordered:2 adjustment:1 partially:1 corresponds:1 truth:1 satisfies:1 conditional:13 marked:1 towards:1 unethical:1 fisher:2 experimentally:1 change:1 hard:1 wt:5 lemma:2 called:5 total:1 experimental:3 wildcard:1 indicating:1 formally:1 mark:4 fulfills:1 violated:1 philosophy:1 tested:1 phenomenon:1 |
2,786 | 3,526 | Multi-stage Convex Relaxation for Learning with
Sparse Regularization
Tong Zhang
Statistics Department
Rutgers University, NJ
[email protected]
Abstract
We study learning formulations with non-convex regularizaton that are natural for
sparse linear models. There are two approaches to this problem:
? Heuristic methods such as gradient descent that only find a local minimum.
A drawback of this approach is the lack of theoretical guarantee showing that
the local minimum gives a good solution.
? Convex relaxation such as L1 -regularization that solves the problem under
some conditions. However it often leads to sub-optimal sparsity in reality.
This paper tries to remedy the above gap between theory and practice. In particular, we investigate a multi-stage convex relaxation scheme for solving problems
with non-convex regularization. Theoretically, we analyze the behavior of a resulting two-stage relaxation scheme for the capped-L1 regularization. Our performance bound shows that the procedure is superior to the standard L1 convex
relaxation for learning sparse targets. Experiments confirm the effectiveness of
this method on some simulation and real data.
1 Introduction

Consider a set of input vectors x₁, . . . , xₙ ∈ ℝᵈ, with corresponding desired output variables y₁, . . . , yₙ. The task of supervised learning is to estimate the functional relationship y ≈ f(x) between the input x and the output variable y from the training examples {(x₁, y₁), . . . , (xₙ, yₙ)}. The quality of prediction is often measured through a loss function φ(f(x), y). We assume that φ(f, y) is convex in f throughout the paper. In this paper, we consider the linear prediction model f(x) = wᵀx. As in boosting or kernel methods, nonlinearity can be introduced by including nonlinear features in x.
We are mainly interested in the scenario that d ≫ n. That is, there are many more features than the number of samples. In this case, an unconstrained empirical risk minimization is inadequate because the solution overfits the data. The standard remedy for this problem is to impose a constraint on w to obtain a regularized problem. An important target constraint is sparsity, which corresponds to the (non-convex) L0 regularization, defined as ‖w‖₀ = |{j : wⱼ ≠ 0}| = k. If we know the sparsity parameter k for the target vector, then a good learning method is L0 regularization:
ŵ = arg min_{w∈ℝᵈ} (1/n) Σᵢ₌₁ⁿ φ(wᵀxᵢ, yᵢ)   subject to ‖w‖₀ ≤ k.    (1)
If k is not known, then one may regard k as a tuning parameter, which can be selected through cross-validation. This method is often referred to as subset selection in the literature. Sparse learning is an essential topic in machine learning, which has attracted considerable interest recently. It can be shown that the solution of the L0 regularization problem in (1) achieves good prediction accuracy if the target function can be approximated by a sparse w̄. However, a fundamental difficulty with this method is the computational cost, because the number of subsets of {1, . . . , d} of cardinality k (corresponding to the nonzero components of w) is exponential in k.
(corresponding to the nonzero components of w) is exponential in k.
Due to the computational difficult, in practice, it is necessary to replace (1) by some easier to solve
formulations below:
n
1X
? = arg min
w
?(wT xi , yi ) + ?g(w),
(2)
w?Rd n
i=1
where λ > 0 is an appropriately chosen regularization parameter. We obtain a formulation equivalent to (2) by choosing the regularization function as g(w) = ‖w‖₀. However, this function is discontinuous. For computational reasons, it is helpful to consider a continuous approximation with g(w) = ‖w‖ₚᵖ, where p > 0. If p ≥ 1, the resulting formulation is convex. In particular, by choosing the closest approximation with p = 1, one obtains Lasso, which is the standard convex relaxation formulation for sparse learning. With p ∈ (0, 1), the Lp regularization ‖w‖ₚᵖ is non-convex but continuous. In this paper, we are also interested in the following capped-L1 approximation of ‖w‖₀, with g(w) = Σⱼ₌₁ᵈ min(|wⱼ|, α). This is a good approximation to L0 because as α → 0, Σⱼ min(|wⱼ|, α)/α → ‖w‖₀. Therefore when α → 0, this regularization condition is equivalent to the sparse L0 regularization up to a rescaling of λ. Note that the capped-L1 regularization is also non-convex. It is related to the so-called SCAD regularization in statistics, which is a smoother version. We use the simpler capped-L1 regularization because the extra smoothness does not affect our algorithm or theory.
For a non-convex but smooth regularization condition such as capped-L1 or Lp with p ∈ (0, 1), standard numerical techniques such as gradient descent lead to a local minimum solution. Unfortunately, it is difficult to find the global optimum, and it is also difficult to analyze the quality of the local minimum. Although in practice such a local minimum solution may outperform the Lasso solution, the lack of theoretical (and practical) performance guarantees prevents the wider-spread application of such algorithms. As a matter of fact, results with non-convex regularization are difficult to reproduce because different numerical optimization procedures can lead to different local minima. Therefore the quality of the solution heavily depends on the numerical procedure used.
The situation is very different for a convex relaxation formulation such as L1-regularization (Lasso). The global optimum can be easily computed using standard convex programming techniques. It is known that in practice, 1-norm regularization often leads to sparse solutions (although often sub-optimal). Moreover, its performance has been theoretically analyzed recently. For example, it is known from the compressed sensing literature that under certain conditions, the solution of L1 relaxation may be equivalent to L0 regularization asymptotically even when noise is present (e.g., [3] and references therein). If the target is truly sparse, then it was shown in [9] that under some restrictive conditions referred to as irrepresentable conditions, 1-norm regularization solves the feature selection problem. The prediction performance of this method has been considered in [4, 8, 1].
Despite its success, L1-regularization often leads to suboptimal solutions because it is not a good approximation to L0 regularization. Statistically, this means that even though it converges to the true sparse target when n → ∞ (consistency), the rate of convergence can be suboptimal. The only way to fix this problem is to employ a non-convex regularization condition that is closer to L0 regularization, such as the capped-L1 regularization. The superiority of capped-L1 is formally proved later in this paper.
Because of the above gap between practice and theory, it is important to study direct solutions of non-convex regularization beyond the standard L1 relaxation. Our goal is to design a numerical procedure that leads to a reproducible solution with better theoretical behavior than L1-regularization.
This paper shows how this can be done. Specifically, we consider a general multi-stage convex relaxation method for solving learning formulations with non-convex regularization. In this scheme,
concave duality is used to construct a sequence of convex relaxations that give better and better
approximations to the original non-convex problem. Moreover, using the capped-L1 regularization,
we show that after only two stages, the solution gives better statistical performance than standard
Lasso when the target is approximately sparse. In essence, this paper establishes a performance
guarantee for non-convex formulations using a multi-stage convex relaxation approach that is more
sophisticated than the standard one-stage convex relaxation (which is the standard approach commonly studied in the current literature). Experiments confirm the effectiveness of the multi-stage
approach.
2 Concave Duality
Given a continuous regularization function g(w) in (2), which may be non-convex, we are interested in rewriting it using concave duality. Let h(w) : ℝᵈ → Ω ⊆ ℝᵈ be a map with range Ω. It may not be a one-to-one map. However, we assume that there exists a function ḡ_h(u) defined on Ω such that g(w) = ḡ_h(h(w)) holds.

We assume that we can find h so that the function ḡ_h(u) is a concave function of u on Ω. Under this assumption, we can rewrite the regularization function g(w) as

g(w) = inf_{v∈ℝᵈ} [ vᵀh(w) + g*_h(v) ]    (3)

using concave duality [6]. In this case, g*_h(v) is the concave dual of ḡ_h(u), given below:

g*_h(v) = inf_{u∈Ω} [ -vᵀu + ḡ_h(u) ].

Moreover, it is well known that the minimum of the right-hand side of (3) is achieved at

v̂ = ∇_u ḡ_h(u) |_{u=h(w)}.    (4)
This is a very general framework. For illustration, we include two example non-convex sparse
regularization conditions discussed in the introduction.
Lp regularization  We consider the regularization condition g(w) = Σⱼ₌₁ᵈ |wⱼ|ᵖ for some p ∈ (0, 1). Given any q > p, (3) holds with h(w) = [|w₁|^q, . . . , |w_d|^q] and g*_h(v) = c(p, q) Σⱼ vⱼ^{p/(p-q)} defined on the domain {v : vⱼ ≥ 0}, where c(p, q) = (q - p) p^{p/(q-p)} q^{q/(p-q)}. In this case, ḡ_h(u) = Σⱼ₌₁ᵈ uⱼ^{p/q} on Ω = {u : uⱼ ≥ 0}. The solution in (4) is given by v̂ⱼ = (p/q)|wⱼ|^{p-q}.
Capped-L1 regularization  We consider the regularization condition g(w) = Σⱼ₌₁ᵈ min(|wⱼ|, α). In this case, (3) holds with h(w) = [|w₁|, . . . , |w_d|] and g*_h(v) = Σⱼ₌₁ᵈ α(1 - vⱼ) I(vⱼ ∈ [0, 1]) defined on the domain {v : vⱼ ≥ 0}, where I(·) is the set indicator function. The solution in (4) is given by v̂ⱼ = I(|wⱼ| ≤ α).
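Both closed-form dual solutions are one-line updates; a small numpy sketch (ours, with an epsilon guard added for the Lp case at wⱼ = 0):

import numpy as np

def dual_v_capped_l1(w, alpha):
    # v_j = I(|w_j| <= alpha): large weights get v_j = 0 and are
    # no longer penalized in the next convex stage.
    return (np.abs(w) <= alpha).astype(float)

def dual_v_lp(w, p, q, eps=1e-12):
    # v_j = (p/q) |w_j|^(p - q); since p - q < 0, eps avoids the pole at 0.
    return (p / q) * np.maximum(np.abs(w), eps) ** (p - q)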
3 Multi-stage Convex Relaxation
We consider a general procedure for solving (2) with convex loss and non-convex regularization g(w). Let h(w) = Σⱼ hⱼ(w) be a convex relaxation of g(w) that dominates g(w) (for example, it can be the smallest convex upper bound (i.e., the inf over all convex upper bounds) of g(w)). A simple convex relaxation of (2) becomes

ŵ = arg min_{w∈ℝᵈ} [ (1/n) Σᵢ₌₁ⁿ φ(wᵀxᵢ, yᵢ) + λ Σⱼ₌₁ᵈ hⱼ(w) ].    (5)
This simple relaxation can yield a solution that is not close to the solution of (2). However, if h satisfies the condition of Section 2, then it is possible to write g(w) as (3). Now, with this new representation, we can rewrite (2) as

[ŵ, v̂] = arg min_{w,v∈ℝᵈ} [ (1/n) Σᵢ₌₁ⁿ φ(wᵀxᵢ, yᵢ) + λ vᵀh(w) + λ g*_h(v) ].    (6)

This is clearly equivalent to (2) because of (3). If we can find a good approximation of v̂ that improves upon the initial value of v̂ = [1, . . . , 1], then the above formulation can lead to a refined convex problem in w that is a better convex relaxation than (5).
Our numerical procedure exploits the above fact: it tries to improve the estimation of vⱼ over the initial choice of vⱼ = 1 in (5) using an iterative algorithm. This can be done using an alternating optimization procedure, which repeatedly applies the following two steps:

• First we optimize w with v fixed: this is a convex problem in w with appropriately chosen h(w).

• Second we optimize v with w fixed: although non-convex, it has a closed-form solution that is given by (4).
The general procedure is presented in Figure 1. It can be regarded as a generalization of CCCP (concave-convex programming) [7], which takes h(w) = w. By repeatedly refining the parameter v, we can potentially obtain better and better convex relaxations, leading to a solution superior to that of the initial convex relaxation. Note that using the Lp and capped-L1 regularization conditions in Section 2, this procedure leads to more specific multi-stage convex relaxation algorithms. We skip the details due to the space limitation.
Tuning parameters: λ
Input: training data (x₁, y₁), . . . , (xₙ, yₙ)
Output: weight vector ŵ
initialize v̂ⱼ = 1
Repeat the following two steps until convergence:
  • Let ŵ = arg min_{w∈ℝᵈ} (1/n) Σᵢ₌₁ⁿ φ(wᵀxᵢ, yᵢ) + λ v̂ᵀh(w)
  • Let v̂ = ∇_u ḡ_h(u) |_{u=h(ŵ)}

Figure 1: Multi-stage Convex Relaxation Method
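For squared loss with the capped-L1 penalty, each pass through the first step of Figure 1 is a Lasso problem with per-coordinate penalty weights λv̂ⱼ. The sketch below is one possible numpy instantiation under these assumptions; the ISTA solver, its fixed step size, and the iteration counts are our own illustrative choices, not the paper's implementation.

import numpy as np

def weighted_lasso_ista(X, y, penalties, iters=500):
    # Minimize (1/n)||Xw - y||^2 + sum_j penalties[j] * |w_j| by
    # proximal gradient descent (ISTA) with a fixed 1/L step size.
    n, d = X.shape
    w = np.zeros(d)
    step = n / (2.0 * np.linalg.norm(X, 2) ** 2)  # 1/L for the smooth part
    for _ in range(iters):
        grad = (2.0 / n) * X.T @ (X @ w - y)
        z = w - step * grad
        w = np.sign(z) * np.maximum(np.abs(z) - step * penalties, 0.0)
    return w

def multi_stage_capped_l1(X, y, lam, alpha, stages=5):
    d = X.shape[1]
    v = np.ones(d)  # initial relaxation: plain Lasso (v_j = 1 for all j)
    for _ in range(stages):
        w = weighted_lasso_ista(X, y, lam * v)
        v = (np.abs(w) <= alpha).astype(float)  # dual update (4)
    return w

With stages = 2, this reduces to the two-stage procedure analyzed in the next section.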
4 Theory of Two-stage Convex Relaxation for Capped-L1 Regularization
Although the reasoning in Section 3 is appealing, it is only a heuristic argument without any formal theoretical guarantee. In contrast, the simple one-stage L1 relaxation is known to perform reasonably well under certain assumptions. Therefore, unless we can develop a theory to show the effectiveness of the multi-stage procedure in Figure 1, our proposal is merely yet another local-minimum-finding scheme that may potentially get stuck in a bad local solution.
This section tries to address this issue. Although we have not yet developed a complete theory for the general procedure, we are able to obtain a learning bound for the capped-L1 regularization. In particular, if the target function is sparse, then the performance of the solution after merely two stages of our procedure is superior to that of Lasso. This demonstrates the effectiveness of the multi-stage approach. Since the analysis is rather complicated, we focus on the least squares loss only, and only for the solution after two stages of the algorithm.
For a complete theory, the following questions are worth asking:

• Under what conditions is the global solution with a non-convex penalty statistically better than the (one-stage) convex relaxation solution? That is, when does it lead to better prediction accuracy or generalization error?

• Under what conditions is there only one local minimum solution close to the solution of the initial convex relaxation, and is it also the global optimum? Moreover, does multi-stage convex relaxation find this solution?

The first question answers whether it is beneficial to use a non-convex penalty function. The second question answers whether we can effectively solve the resulting non-convex problem using multi-stage convex relaxation. The combination of the two questions leads to a satisfactory theoretical answer to the effectiveness of the multi-stage procedure.
A general theory along this line will be developed in the full paper. In the following, instead of trying to answer the above questions separately, we provide a unified finite-sample analysis for the procedure that directly addresses the combined effect of the two questions. The result is adopted from [8], which justifies the multi-stage convex relaxation approach by showing that the two-stage procedure using capped-L1 regularization can lead to better generalization than the standard one-stage L1 regularization.
The procedure we shall analyze, which is a special case of the multi-stage algorithm in Figure 1 with capped-L1 regularization and only two stages, is described in Figure 2. It is related to the adaptive Lasso method [10]. The result is reproducible when the solution of the first stage is unique, because it involves two well-defined convex programming problems. Note that it is described with least squares loss only because our analysis assumes least squares loss; a more general analysis for other loss functions is possible but would lead to extra complications that are not central to our interests.
Tuning parameters: λ, α
Input: training data (x₁, y₁), . . . , (xₙ, yₙ)
Output: weight vector ŵ′

Stage 1: Compute ŵ by solving the L1 penalization problem:

  ŵ = arg min_{w∈ℝᵈ} [ (1/n) Σᵢ₌₁ⁿ (wᵀxᵢ - yᵢ)² + λ‖w‖₁ ].

Stage 2: Solve the following selective L1 penalization problem:

  ŵ′ = arg min_{w∈ℝᵈ} [ (1/n) Σᵢ₌₁ⁿ (wᵀxᵢ - yᵢ)² + λ Σ_{j:|ŵⱼ|≤α} |wⱼ| ].

Figure 2: Two-stage capped-L1 Regularization
This particular two-stage procedure also has an intuitive interpretation (besides treating it as a special case of multi-stage convex relaxation). We shall refer to the feature components corresponding to the large weights as relevant features, and the feature components smaller than the cut-off threshold α as irrelevant features. We observe that as an estimation method, L1 regularization has two important properties: it shrinks estimated weights corresponding to irrelevant features toward zero, and it shrinks estimated weights corresponding to relevant features toward zero. While the first effect is desirable, the second effect is not. In fact, we should avoid shrinking the weights corresponding to the relevant features if we can identify these features. This is why the standard L1 regularization may have suboptimal performance. However, after the first stage of L1 regularization, we can identify the relevant features by picking the components corresponding to the largest weights; in the second stage of L1 regularization, we do not have to penalize the features selected in the first stage, as in Figure 2.
A related method, called relaxed Lasso, was proposed recently by Meinshausen [5], which is similar to a two-stage Dantzig selector in [2]. Their idea differs from our proposal in that in the second stage, the weight coefficients w′ⱼ are forced to be zero when j ∉ supp₀(ŵ). It was pointed out in [5] that if supp₀(ŵ) can exactly identify all non-zero components of the target vector, then in the second stage, the relaxed Lasso can asymptotically remove the bias in the first-stage Lasso. However, it is not clear what theoretical result can be stated when Lasso cannot exactly identify all relevant features. In the general case, it is not easy to ensure that relaxed Lasso does not degrade the performance when some relevant coefficients become zero in the first stage. On the contrary, the two-stage penalization procedure in Figure 2, which is based on the capped-L1 regularization, does not require that all relevant features are identified. Consequently, we are able to prove a result for Figure 2 with no counterpart for relaxed Lasso.
Definition 4.1 Let w = [w₁, . . . , w_d] ∈ ℝᵈ and α ≥ 0; we define the set of relevant features with threshold α as:

supp_α(w) = {j : |wⱼ| > α}.

Moreover, if |w_{i₁}| ≥ · · · ≥ |w_{i_d}| are in descending order, then define ε_k(w) = ( Σ_{j>k} |w_{iⱼ}|² )^{1/2} as the 2-norm of w outside its largest k components (in absolute value).
For simplicity, we assume sub-Gaussian noise as follows.

Assumption 4.1 Assume that {yᵢ}ᵢ₌₁,...,ₙ are independent (but not necessarily identically distributed) sub-Gaussians: there exists σ ≥ 0 such that ∀i and ∀t ∈ ℝ,

E_{yᵢ} e^{t(yᵢ - E yᵢ)} ≤ e^{σ²t²/2}.

Both Gaussian and bounded random variables are sub-Gaussian using the above definition. For example, if a random variable ξ ∈ [a, b], then E_ξ e^{t(ξ-Eξ)} ≤ e^{(b-a)²t²/8}. If a random variable is Gaussian, ξ ∼ N(0, σ²), then E_ξ e^{tξ} ≤ e^{σ²t²/2}.
Theorem 4.1 Let Assumption 4.1 hold. Let Â = (1/n) Σᵢ₌₁ⁿ xᵢxᵢᵀ, define M_Â = sup_{i≠j} |Â_{i,j}|, and assume that Â_{j,j} = 1 for all j. Consider any target vector w̄ such that E y = w̄ᵀx, and assume that w̄ contains only s non-zeros, where s ≤ d/3, and assume that M_Â s ≤ 1/6. Let k = |supp_α(w̄)|. Consider the two-stage method in Figure 2. Given η ∈ (0, 0.5), with probability larger than 1 - 2η: if α/48 ≥ λ ≥ 12σ√(2 ln(2d/η)/n), then

‖ŵ′ - w̄‖₂ ≤ 24√(k - q) λ + 24σ (1 + √(20q)) √(ln(1/η)/n) + 168 ε_k(w̄),

where q = |supp_{1.5α}(w̄)|.
The proof of this theorem can be found in [8]. Note that the theorem allows the situation d ≫ n, which is what we are interested in. The condition M_Â s ≤ 1/6, often referred to as mutual coherence, is also quite standard in the analysis of L1 regularization, e.g., in [1, 3]. Although the condition is idealized, the theorem nevertheless yields important insights into the behavior of the two-stage algorithm. This theorem leads to a bound for Lasso with α = ∞ or q = 0. The bound has the form

‖ŵ′ - w̄‖₂ = O(ε_k(w̄) + √k λ).
This bound is tight for Lasso, in the sense
? that the right hand side cannot be improved except for
the constant. In particular, the factor O( k?) cannot be removed using Lasso ? this can be easily
verified with an orthogonal design matrix.
p It is known that in order for Lasso to be effective, one
has to pick ? no smaller than
the
order
?
ln d/n. Therefore, the generalization of standard Lasso
p
? + ? k ln d/n, which cannot be improved. Similar results appear in [1, 4].
is of the order ?k (w)
Now, with a small $\theta$, the bound in Theorem 4.1 can be significantly better than that of the standard Lasso result if the sparse target satisfies $\Delta_k(\bar{w}) \ll \sqrt{k}\,\theta$ and $k - q \ll k$. The latter condition is true when $|\mathrm{supp}_{1.5\theta}(\bar{w})| \approx |\mathrm{supp}_\theta(\bar{w})|$. These conditions are satisfied when most non-zero coefficients of $\bar{w}$ in $\mathrm{supp}_\theta(\bar{w})$ are relatively large in magnitude and the rest is small in 2-norm. That is, when the target $\bar{w}$ can be decomposed as a sparse vector with large coefficients plus another (less sparse) vector with small coefficients. In the extreme case when $q = k = |\mathrm{supp}_0(\bar{w})|$ (that is, all nonzero components of $\bar{w}$ are large), we obtain $\|\hat{w}' - \bar{w}\|_2 = O(\sqrt{k \ln(1/\eta)/n})$ for the two-stage procedure, which is superior to the standard one-stage Lasso bound $\|\hat{w} - \bar{w}\|_2 = O(\sqrt{k \ln(d/\eta)/n})$. Again, this bound cannot be improved for Lasso, and the difference can be significant when $d$ is large.
5 Experiments
In the following, we show with synthetic and real data that our multi-stage approach improves the standard Lasso in practice. In order to avoid clutter, we only study results for the two-stage procedure of Figure 2, which corresponds to the capped-L1 regularization. We shall also compare it to the two-stage Lp regularization method with $p = 0.5$, which corresponds to the adaptive Lasso approach [10]. Note that instead of tuning the $\theta$ parameter in Figure 2, in these experiments we tune the number of features $q$ in $\hat{w}$ that are larger than the threshold $\theta$ (i.e., $q = |\{j : |\hat{w}_j| > \theta\}|$ is the number of features that are not regularized in stage 2). This is clearly more convenient than tuning $\theta$. The standard Lasso corresponds to $q = 0$.
In the first experiment, we generate an $n \times d$ random matrix with its column $j$ corresponding to $[x_{1,j}, \ldots, x_{n,j}]$, and each element of the matrix is an independent standard Gaussian $N(0, 1)$. We then normalize its columns so that $\sum_{i=1}^n x_{i,j}^2 = n$. A truly sparse target $\bar{\beta}$ is generated with $k$
nonzero elements that are uniformly distributed on $[-10, 10]$. The observation is $y_i = \bar{\beta}^T x_i + \epsilon_i$, where each $\epsilon_i \sim N(0, \sigma^2)$. In this experiment, we take $n = 25$, $d = 100$, $k = 5$, $\sigma = 1$, and repeat the experiment 100 times. The average training error and 2-norm parameter estimation error are reported in Figure 3. We compare the performance of the two-stage method with different $q$ versus the regularization parameter $\lambda$. As expected, the training error becomes smaller when $q$ increases. Compared to the standard Lasso (which corresponds to $q = 0$), substantially smaller estimation error is achieved with $q = 3$ for capped-L1 regularization and with $p = 0.5$ for Lp regularization. This shows that the multi-stage convex relaxation approach is effective.
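A sketch of this synthetic setup (assuming the `two_stage_capped_l1` helper from the earlier sketch is in scope; the seed and the single $\lambda$ value shown are arbitrary choices of ours):

```python
import numpy as np

rng = np.random.RandomState(1)
n, d, k, sigma = 25, 100, 5, 1.0
X = rng.randn(n, d)
X *= np.sqrt(n) / np.linalg.norm(X, axis=0)        # columns obey sum_i x_ij^2 = n
beta = np.zeros(d)
beta[rng.choice(d, k, replace=False)] = rng.uniform(-10, 10, k)
y = X @ beta + sigma * rng.randn(n)
for q in (0, 1, 3):                                 # q = 0 is the standard Lasso
    w = two_stage_capped_l1(X, y, lam=0.5, q=q)     # helper from the earlier sketch
    print(q, np.linalg.norm(w - beta))              # parameter estimation error
```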
Figure 3: Performance of multi-stage convex relaxation on simulated data. Left: average training squared error versus $\lambda$; Right: parameter estimation error versus $\lambda$.
In the second experiment, we use real data to illustrate the effectiveness of the multi-stage approach. Due to the space limitation, we only report the performance on a single dataset, Boston Housing. This is the housing data for 506 census tracts of Boston from the 1970 census, available from the UCI Machine Learning Database Repository: http://archive.ics.uci.edu/ml/. Each census tract is a datapoint with 13 features (we add a constant offset of one as the 14th feature), and the desired output is the housing price. In the experiment, we randomly partition the data into 20 training plus 456 test points. We perform the experiments 100 times, and report training and test squared error versus the regularization parameter $\lambda$ for different $q$. The results are plotted in Figure 4. In this case, $q = 1$ achieves the best performance. This means one feature can be reliably identified in this example. In comparison, adaptive Lasso is not effective. Note that this dataset contains only a small number ($d = 14$) of features, which is not the case where we can expect significant benefit from the multi-stage approach (most other UCI datasets similarly contain only a small number of features). In order to illustrate the advantage of the two-stage method more clearly, we also consider a modified Boston Housing dataset, where we append 20 random features (similar to the simulation experiments) to the original Boston Housing data, and rerun the experiments. The results are shown in Figure 5. As expected from Theorem 4.1 and the discussion thereafter, since $d$ becomes large, the multi-stage convex relaxation approach with capped-L1 regularization ($q > 0$) has a significant advantage over the standard Lasso ($q = 0$).
[Figure 4 and Figure 5 plots not reproduced: each shows curves for q = 0, q = 1, q = 2, and Lp (p = 0.5) versus lambda.]

Figure 4: Performance of multi-stage convex relaxation on the original Boston Housing data. Left: average training squared error versus $\lambda$; Right: test squared error versus $\lambda$.

Figure 5: Performance of multi-stage convex relaxation on the modified Boston Housing data. Left: average training squared error versus $\lambda$; Right: test squared error versus $\lambda$.

References

[1] Florentina Bunea, Alexandre Tsybakov, and Marten H. Wegkamp. Sparsity oracle inequalities for the Lasso. Electronic Journal of Statistics, 1:169-194, 2007.

[2] Emmanuel Candes and Terence Tao. The Dantzig selector: statistical estimation when p is much larger than n. Annals of Statistics, 2007.

[3] David L. Donoho, Michael Elad, and Vladimir N. Temlyakov. Stable recovery of sparse overcomplete representations in the presence of noise. IEEE Trans. Info. Theory, 52(1):6-18, 2006.
[4] Vladimir Koltchinskii. Sparsity in penalized empirical risk minimization. Annales de l'Institut Henri Poincaré, 2008.

[5] Nicolai Meinshausen. Lasso with relaxation. ETH Research Report, 2005.

[6] R. Tyrrell Rockafellar. Convex Analysis. Princeton University Press, Princeton, NJ, 1970.

[7] Alan L. Yuille and Anand Rangarajan. The concave-convex procedure. Neural Computation, 15:915-936, 2003.

[8] Tong Zhang. Some sharp performance bounds for least squares regression with L1 regularization. The Annals of Statistics, 2009. To appear.

[9] Peng Zhao and Bin Yu. On model selection consistency of Lasso. Journal of Machine Learning Research, 7:2541-2567, 2006.

[10] Hui Zou. The adaptive lasso and its oracle properties. Journal of the American Statistical Association, 101:1418-1429, 2006.
A Massively Parallel Digital Learning Processor
Hans Peter Graf
[email protected]
Srihari Cadambi
[email protected]
Igor Durdanovic
[email protected]
Venkata Jakkula
Murugan Sankardadass
Eric Cosatto
Srimat Chakradhar
[email protected] [email protected] [email protected] [email protected]
NEC Laboratories, America
4 Independence Way, Suite 200; Princeton, NJ 07738, USA
Abstract
We present a new, massively parallel architecture for accelerating machine
learning algorithms, based on arrays of vector processing elements (VPEs)
with variable-resolution arithmetic. Groups of VPEs operate in SIMD
(single instruction multiple data) mode, and each group is connected to an
independent memory bank. The memory bandwidth thus scales with the
number of VPEs, while the main data flows are local, keeping power dissipation
low. With 256 VPEs, implemented on two FPGAs (field programmable gate
array) chips, we obtain a sustained speed of 19 GMACS (billion multiply-accumulate operations per sec.) for SVM training, and 86 GMACS for SVM
classification. This performance is more than an order of magnitude higher
than that of any FPGA implementation reported so far. The speed on one
FPGA is similar to the fastest speeds published on a Graphics Processor for
the MNIST problem, despite a clock rate that is an order of magnitude lower.
Tests with Convolutional Neural Networks show similar compute performances.
This massively parallel architecture is particularly attractive for embedded
applications, where low power dissipation is critical.
1 Introduction
Machine learning demands higher and higher compute-performance, but serial processors are not
improving that much anymore - at least not as quickly as they used to. Mainstream processor
development is moving to multi-core systems, using shared memory technology to hide the
parallel nature of the processors. But shared memory technology does not scale to hundreds or
thousands of cores. In order to reach such levels of parallelization alternative approaches have to
be developed. Massively parallel general-purpose computers had limited success so far, because of
difficulties programming these machines, and they remain a niche market, mostly in highperformance computing. Yet processors specialized for certain application domains, such as
graphics processors or routing processors 1, have been parallelized to several hundred cores and are
successful mass products. They improve performance over general-purpose processors by
focusing on a few key algorithmic elements, yet still maintain enough flexibility that they can be
programmed for a variety of applications. We explore in this paper if a similar approach can lead
to efficient machine learning processors.
1. e.g. Nvidia, Quadro FX 5600 graphics processor; Cisco, CRS-1 routing processor
Several processors optimized for machine learning, in particular for neural networks, were
developed during the 1980?s and 90?s. Examples are the Synapse-1 architecture [1], or the
Connectionist Network Supercomputer, CNS1 [2]. Recently there has been less activity in this
field, but some accelerators are sold today for specific applications, such as the Axeon [3]
processor for power train control of cars. Besides digital processors, a large number of analog
circuits were built, emulating neural network structures. Extremely high performance with low
power dissipation is achievable, see e.g. [4][5], but these networks have little flexibility. SVM
implementations on FPGA have been demonstrated in recent years [6-8], yet reached only low
compute-performances. All machine learning processors had only limited success so far,
indicating how difficult it is to find a good combination of performance, flexibility, price and ease
of use. An important consideration is that many applications of machine learning, such as video
analysis, data mining, or personalization of services, show the most promise in embedded systems.
Embedded learning requires high compute performance while dissipating little power, a
combination that is difficult to achieve, and so far required application specific IC (ASIC). Our
aim is to develop architectures that meet the requirements for embedded learning, but are
programmable and therefore can be used in a wide range of applications.
With the goal of analyzing different architectures we designed a development and testing
environment where the parallel computation is mapped onto FPGA?s. Initially this system was
intended only for experimentation, but its performance is so high that this platform is useful in its
own right as accelerator for high-performance systems. While the experiments shown here
emphasize high performance, the architecture has been designed from the start for low power
dissipation. The main features for achieving this goal are: low-resolution arithmetic, keeping the
main data flow local, low operating frequencies, and a modular design, so that unused parts can be
powered down dynamically. All results shown here are from the test platform; migration to lowpower FPGA or chip designs are done in a later stage.
2 Algorithms - Arithmetic - Architecture
For a substantial improvement over a general purpose processor, the algorithms, the arithmetic
units, as well as the architecture have to be optimized simultaneously. This is not just an exercise
in hardware design, but algorithms and their software implementations have to be developed
concurrently. Most machine learning algorithms have not been developed with parallelization in
mind. Therefore, we first need to find good parallel versions, identify their performance
bottlenecks, and then extract common computational patterns that can be mapped into accelerator
hardware.
2.1 Algorithms
Characteristic for machine learning is that large amounts of data need to be processed, often with
predictable data access patterns and no dependency between operations over large segments of the
computation. This is why data-parallelization can often provide good accelerations on multi-core
chips, clusters of machines, or even on loosely coupled networks of machines. Using MapReduce,
speedups linear with the number of processors have been reported in [9] for several machine
learning algorithms. Up to 16 cores were tested, and simulations indicate good scaling to more
processors in some cases.
Many algorithms, such as KNN, K-means clustering, LVQ, and Neural Networks can be reduced
to forms where the computation is dominated by vector-matrix multiplications, which are easily
parallelizable. For Convolutional Neural Networks (CNN) the data flow can be complex, yet the
core of the computation is a convolution, an operation which has been studied extensively for
parallel implementations. For Support Vector Machines (SVM), several parallel algorithms were
described, but most saturate quickly for more than 16 processors. Scaling to larger numbers of
processors has been demonstrated, applying MapReduce on a graphics processor with 128 cores
[10]. Another implementation on a cluster of 48 dual-core machines (with 384 MMX units) [11]
scales even super-linearly, and, according to simulations, scales to thousands of cores.
Based on this analysis it is clear that vector-matrix and matrix-matrix multiplications for large
vector dimensionalities and large numbers of vectors must be handled efficiently. Yet this alone is
not sufficient since data access patterns vary greatly between algorithms. We analyze this here in
more detail for SVM and CNN. These algorithms were chosen, because they are widely used for
industrial applications and cover a broad range of computation, I/O, and memory requirements.
The characteristics of the SVM training are summarized in Table 1. We use an approach similar to
the one described in [11] to split different parts of the computation between a host CPU and the
FPGA accelerator. For large dimensions d of the vectors the calculation of the columns of the
kernel matrix dominates by far. This is needed to update the gradients, and in the present
implementation, only this part is mapped onto the FPGA. If the dimensionality d is smaller than
around 100, operations 2 and 5 can become bottlenecks and should also be mapped onto the
accelerator. Challenging is that for each kernel computation a new data vector has to be loaded
into the processor, leading to very high I/O requirements. We consider here dimensions of 10 - 10^4 and numbers of training data of 10^5 - 10^7, resulting easily in Gigabytes that need to be transferred to the processors at each iteration.
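A back-of-envelope version of this I/O estimate (our own arithmetic; the sizes are picked from the quoted ranges and one byte per vector component is assumed):

```python
# Data streamed per SMO iteration: two kernel columns require touching all n
# training vectors of dimension d (plus the two working vectors).
n, d = 10**6, 10**3          # mid-range sizes from the text
bytes_per_iter = n * d * 1   # 8-bit components -> about 1 GB per iteration
print(bytes_per_iter / 1e9, "GB per iteration")
```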
#  Operation                         Computation   IO              Unit
1  Initialize all alpha_x, G_x       2n            2n              CPU
   Do
2    Find working set i, j           I * 2n        I * 2n          CPU
3    Update alpha_i, alpha_j         I * 10        I * 2           CPU
4    Get 2 columns of kernel matrix  I * 2nd       I * (2d + 2dn)  FPGA
5    Update gradients G_x            I * n         I * n           CPU
6  While not converged

Table 1: Compute- and IO-requirements of each step for SVM training (SMO algorithm). n: number of training data; d: dimension of the vectors; G: gradients; alpha: support vector factors; I: number of iterations. The last column indicates whether the execution happens on the host CPU or the accelerator FPGA. It is assumed that the kernel computation requires a dot product between vectors (e.g. rbf, polynomial, tanh kernels).
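A schematic of the split in Table 1 (our own simplification, not the actual implementation): step 4, the computation of two kernel columns, streams all training vectors and is the part offloaded to the accelerator, while the O(n) bookkeeping of step 5 stays on the host. The gradient-update sign conventions follow one common form of the SVM dual and may differ from the authors' code.

```python
import numpy as np

def rbf_kernel_columns(X, i, j, gamma):
    """Step 4 of Table 1, the FPGA-offloaded part: columns i and j of the
    kernel matrix, obtained by streaming all n training vectors (~2nd MACs)."""
    K_i = np.exp(-gamma * ((X - X[i]) ** 2).sum(axis=1))
    K_j = np.exp(-gamma * ((X - X[j]) ** 2).sum(axis=1))
    return K_i, K_j

def update_gradients(grad, y, i, j, da_i, da_j, K_i, K_j):
    """Step 5, kept on the host CPU: after alpha_i and alpha_j change by
    da_i and da_j, every gradient moves by a combination of the two columns."""
    return grad + y * (y[i] * da_i * K_i + y[j] * da_j * K_j)
```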
Neural network algorithms are essentially sequences of vector-matrix multiplications, but networks with special connectivity patterns, such as convolutional networks, have very different IO characteristics than fully connected networks. Table 2 shows the computation and IO requirements for scanning several convolution kernels over one input plane. A full network requires multiples of these operations for one layer, with nonlinearities between layers. We map all operations onto the FPGA accelerator, since intermediate results are re-used right away. The most significant difference between the SVM and the CNN is the Compute/IO ratio: SVM: ~1; CNN: ~L*k^2 > 100. Therefore the requirements for these two algorithms are very different, and handling both cases efficiently is quite a challenge for an architecture design.
#  Operation               Computation   IO       Unit
1  Load L kernels                        L*k^2    FPGA
   For all input pixels
2    Shift in new pixel                  n*m      FPGA
3    Multiply kernels      n*m*L*k^2              FPGA
4    Shift out result                    n*m      FPGA

Table 2: Compute- and IO-requirements for CNN computation (forward pass), where L kernels of size k*k are scanned simultaneously over an input plane of size n*m. This is representative for implementations with kernel unrolling (kernel pixels processed in parallel). Internal shifts, computation of the non-linearity, and border effects not shown.
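A quick check of this ratio with illustrative sizes (the values of L, k, n, m are our own choices, not from the paper):

```python
# Compute/IO ratio for scanning L kernels of size k x k over an n x m plane,
# using the entries of Table 2 (the one-time kernel load of L*k^2 is ignored).
L, k, n, m = 16, 7, 256, 256
compute = n * m * L * k * k            # multiply-accumulates
io = n * m + n * m                     # pixels shifted in + results shifted out
print(compute / io)                    # = L*k^2 / 2, i.e. several hundred
```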
2.2 Arithmetic
Hardware can be built much more compactly and runs with lower power dissipation, if it uses
fixed-point instead of floating-point operations. Fortunately, many learning algorithms tolerate a
low resolution in most of the computations. This has been investigated extensively for neural
networks [12][13], but less so for other learning algorithms. Learning from data is inherently a
noisy process, because we see only a sparse sampling of the true probability distributions. A
different type of noise is introduced in gradient descent algorithms, when only a few training data
are used at a time to move the optimization forward iteratively. This noise is particularly
pronounced for stochastic gradient descent. There is no point in representing noisy variables with
high resolution, and it is therefore a property inherent to many algorithms that low-resolution
computation can be used.
It is important, not to confuse this tolerance to low resolution with the resolution required to avoid
numeric instabilities. Some of the computations have to be performed with a high resolution, in
particular for variables that are updated incrementally. They maintain the state of the optimization
and may change in very small steps. But usually by far the largest part of the computation can be
executed at a low resolution. Key is that the hardware is flexible enough and can take advantage of
reduced resolution while handling high resolution where necessary.
Problem   Kernel: Float                    Kernel: 16 bit fixed point       F-sc. (4b in)
          Obj. f.      # SV     F-score    Obj. f.     # SV     F-score
Adult     31,930.77    11,486   77.58      31,930.1    11,490   77.63       NA
Forest    653,170.7    49,333   98.29      652,758     49,299   98.28       NA
MNIST     4,960.13     6,172    99.12      4,959.64    6,166    99.11       99.11
NORB      1,243.71     3,077    93.34      1,244.76    3,154    93.26       92.78

Table 3: Comparison of the results of SVM training when the kernels are represented with floating point numbers (32 or 64 bits) (left half) and with 16 bit fixed point (right half). The last column shows the results when the resolution of the training data is reduced from 8 bit to 4 bit. For NORB this reduces the accuracy; all other differences in accuracy are not significant. All are two class problems: Adult: n=32,562, d=122; Forest: n=522,000, d=54 (2 against the rest); MNIST: n=60,000, d=784 (odd vs. even); NORB: n=48,560, d=5,184.
We developed a simulator that allows running the training algorithms with various resolutions in each of the variables. A few examples for SVM training are shown in Table 3. Reducing the resolution of the kernel values from double or float to 16 bit fixed point representations does not affect the accuracy for any of the problems. Therefore all the multiplications in the dot products for the kernel computation can be done in low resolutions (4-16 bit in the factors), but the accumulator needs sufficient resolution to avoid over/underflow (48 bit). Once the calculation of the kernel value is completed, it can be reduced to 16 bit. A low resolution of 16 bit is also tolerable for the alpha values, but a high resolution is required for the gradients (double). For Neural Networks, including CNN, several studies have confirmed that states and gradients can be kept at low resolutions (<16 bit), but the weights must be maintained at a high resolution (float) (see e.g. [12]). In our own evaluations 24 bits in the weights tend to be sufficient. Once the network is trained, low resolutions can be used for the weights as well in classification (<16 bit).
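A sketch of the kind of resolution experiment summarized in Table 3 (the quantizer and sizes are our own; the wide accumulator is emulated with 64-bit Python integers, standing in for the 48-bit hardware accumulator):

```python
import numpy as np

def to_fixed(x, bits, scale):
    """Round to a signed fixed-point grid with the given number of bits."""
    top = 2 ** (bits - 1) - 1
    return np.clip(np.round(x * scale), -top, top).astype(np.int64)

rng = np.random.RandomState(0)
u, v = rng.rand(784), rng.rand(784)        # two MNIST-sized input vectors
qu, qv = to_fixed(u, 8, 127), to_fixed(v, 8, 127)
acc = int(np.dot(qu, qv))                  # worst case 784 * 127^2 << 2^47
print(np.dot(u, v), acc / 127.0 ** 2)      # float vs. fixed point dot product
```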
2.3 Architecture
Figure 1: Left: Schematic of the architecture with the main data flows; on one FPGA 128
VPE are configured into four SIMD groups; L-S: Load-store units. Right: Picture of an
FPGA board; in our experiments one or two of them are used, connected via PCI bus to a
host CPU.
Based on the analysis above, it is clear that the architecture must be optimized for processing
massive amounts of data with relatively low precision. Most of the time, data access patterns
are predictable and data are processed in blocks that can be stored contiguously. This type of
computation is well suited for vector processing, and simple vector processing elements
(VPE) with fixed-point arithmetic can handle the operations. Since typically large blocks of
data are processed with the same operation, groups of VPE can work in SIMD (single
instruction multiple data) mode. Algorithms must then be segmented to map the highvolume, low precision parts onto the vector accelerators and parts requiring high precision
arithmetic onto the CPU.
The most important design decision is the organization of the memory. Most memory
accesses are done in large blocks, so that the data can be streamed, making complex caching
unnecessary. This is fortunate, since the amounts of data to be loaded onto the processor are
so large that conventional caching strategies would be overwhelmed anyway. Because the
blocks tend to be large, a high data bandwidth is crucial, but latency for starting a block
transfer is less critical. Therefore we can use regular DDR memories and still get high IO
rates. This led to the design shown schematically in Figure 1, where independent memory
banks are connected via separate IO ports for each group of 32 VPE.
By connecting multiple of the units shown in Figure 1 to a CPU, this architecture scales to
larger numbers of VPE. Parallel data IO and parallel memory access scale simultaneously
with the number of parallel cores, and we therefore refer to this as the P3 (P-cube) architecture.
Notice also that the main data flow is only local between a group of VPE and its own memory
block. Avoiding movements of data over long distances is crucial for low power dissipation. How
far this architecture can reasonably scale with one CPU depends on the algorithms, the amount of
data and the vector dimensionality (see below). A few hundred VPE per CPU have provided good
accelerations in all our tests, and much higher numbers are possible with multi-core CPUs and
faster CPU-FPGA connections.
3 Implementation of the P3 Architecture
This architecture fits surprisingly well onto some of the recent FPGA chips that are available
with several hundred Digital Signal Processors (DSP) units and over 1,000 IO pins for data
transfers. The boards used here contain each one Xilinx Virtex 5 LX330T-2 FPGA coupled to 4
independent DDR2 SDRAM with a total of 1GB, and 2 independent 4MB SSRAM memory
banks (commercial board from AlphaData). One FPGA chip contains 192 DSP with a
maximum speed of 550MHz, which corresponds to a theoretical compute-performance of
105.6 GMACS (18 bit and 25 bit operands). There is a total of 14 Mbit of on-chip memory,
and the chip incorporates 960 pins for data IO. Due to routing overhead, not all DSP units
can be used and the actual clock frequencies tend to be considerably lower than what is
advertised for such chips (typically 230MHz or less for our designs). Nevertheless, we
obtain high performances because we can use a large number of DSP units for executing the
main computation.
The main architecture features are:
- Parallel processing (on one chip): 128 VPE (hardware DSP) are divided into 4 blocks of 32, each group controlled by one sequencer with a vector instruction set.
- Custom Precision: Data are represented with 1 to 16 bit resolution. Higher resolutions are possible by operating multiple DSP as one processor.
- Overlapping Computation and Communication: CPU-FPGA communication is overlapped with the FPGA computation.
- Overlap Memory Operations with Computation: All loads and stores from the FPGA to off-chip memory are performed concurrently with computations.
- High Off-chip Memory Bandwidth: 6 independent data ports, each 32 bits wide, access banked memories concurrently (12GB/s per chip).
- Streaming Data Flow, Simple Access Patterns: Load/store units are tailored for streaming input and output data, and for simple, bursty access patterns. Caching is done under application control with dual-port memory on chip.
- Load/store with (de)compression: For an increase of effective IO bandwidth the load/store units provide compression and decompression in hardware.
Figure 2 shows the configuration of the VPEs for vector dot product computation used for
SVM training and classification. For training, the main computation is the calculation of one
column of the kernel matrix. One vector is pre-fetched and stored in on-chip memory. All other
vectors are streamed in from off-chip memory banks 1-4. Since this is a regular and predictable
access pattern, we can utilize burst-mode, achieving a throughput of close to one memory word
per cycle. But the speed is nevertheless IO bound. When several vectors can be stored on-chip, as
is the case for classification, then the speed becomes compute-bound.
Figure 2: Architecture for vector dot-product computation. The left side shows a high-level
schematic with the main data flow. The data are streamed from memory banks 1-4 to the
VPE arrays, while memory banks 5 and 6, alternatively receive results or stream them back
to the host. The right side shows how a group of VPE is pipelined to improve clock speed.
The operation for SVM training on the FPGA corresponds to a vector-matrix multiplication and
the one for classification to a matrix-matrix multiplication. Therefore the configuration of Figure 2
is useful for many other algorithms as well, where operations with large vectors and matrices are
needed, such as Neural Networks. We implemented a specialized configuration for Convolutional
Neural Networks, for more efficiency and lower power dissipation. The VPE are daisy-chained
and operate as systolic array. In this way we can take advantage of the high computation to IO
ratio (Table 2) to reduce the data transfers from memory.
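A behavioural sketch of the scanning scheme of Table 2 for a single 1-D kernel (our own simplification; the real design processes L two-dimensional kernels in parallel): each cycle shifts one new pixel into the window, all taps multiply in parallel (one per VPE), and one result is shifted out.

```python
import numpy as np

def scanning_conv1d(pixels, taps):
    """One output per cycle: shift a new pixel into the window (step 2),
    multiply by all taps in parallel, one per VPE (step 3), and shift the
    accumulated sum out (step 4)."""
    window = np.zeros(len(taps))
    out = []
    for p in pixels:
        window = np.roll(window, 1)        # shift register of recent pixels
        window[0] = p
        out.append(float(taps @ window))   # k parallel MACs plus adder tree
    return np.array(out)

x = np.arange(8.0)
h = np.array([1.0, 2.0, 3.0])
print(scanning_conv1d(x, h))
print(np.convolve(x, h)[:len(x)])          # matches (zero initial state)
```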
4 Evaluations
We evaluated SVM training and classification with the NORB and MNIST problems, the latter
with up to 2 million training samples (data from [11]). Both are benchmarks with vectors of high
dimensionality, representative for applications in image and video analysis. The computation is
split between CPU and FPGA as indicated by Table 1. The DDR2 memory banks are clocked at
230MHz, providing double that rate for data transfers. The data may be compressed to save IO
bandwidth. On the FPGA they are decompressed first and distributed to the VPE. In our case, a 32
bit word contains eight 4-bit vector components. Four 32 bit words are needed to feed all 32 VPEs
of a group; therefore clocking the VPE faster than 115MHz does not improve performance. A
VPE executes a multiplication plus add operation in one clock cycle, resulting in a theoretical
maximum of 14.7 GMACS per chip. The sustained compute-rate is lower, about 9.4 GMACS, due
to overhead (see Table 4). The computation on the host CPU overlaps with that on the FPGA, and
has no effect on the speed in the experiments shown here. For the classification the VPE can be
clocked higher, at 230 MHz. By using 4-bit operands we can execute 2 multiply-accumulates
simultaneously on one DSP, resulting in speed that is more than four times higher and a sustained
43.0 GMACS limited by the number and speed of the VPE. Adding a second FPGA card doubles
the speed, showing little saturation effects yet, but for more FPGA per CPU there will be
saturation (see Fig. 3). The compute speed in GMACS obtained for NORB is almost identical.
#    Iterations   CPU               CPU+MMX            CPU+FPGA          CPU+2 FPGA
                  time     speed    time       speed   time      speed   time      speed
60k  8,000        754 s    0.5      240 s      1.57    40 s      9.42    21 s      17.9
2M   266,900      --       --       531,534 s  1.58    88,589 s  9.48    48,723 s  17.2

Table 4: Training times and average compute speed for SVM training. Systems tested: CPU, Opteron, 2.2GHz; CPU using MMX; CPU with one FPGA; CPU with two FPGA boards. Results are shown for training sizes of 60k and 2M samples. Compute speed is in GMACS (just kernel computations). Training algorithm: SMO with second order working set selection.
Parallelizations of SVM training have been reported recently for a GPU [10] and for a cluster [11],
both using the MNIST data. In [10] different bounds for stopping were used than here and in [11].
Nevertheless, a comparison of the compute performance is possible, because based on the number
of iterations we can compute the average GMACS for the kernel computations. As can be seen in
Table 5 a single FPGA is similar in speed to a GPU with 128 stream processors, despite a clock
rate that is about 5.5 times lower for I/O and 11 times lower for the VPE. The cluster with 384
MMX units is about 6 times faster than one FPGA with 128 VPE, but dissipates about two orders
of magnitude more electric power. For the FPGA this calculation includes only the computation of
the kernel values while the part on the CPU is neglected. This is justified for this study, because
the rest of the calculations can be mapped on the FPGA as well and will increase the power
dissipation only minimally.
Processor            Number     Clock      Operand       Power          Average
                     of cores   speed      type          dissipation    compute speed
CPU (Opteron)        1          2.2 GHz    float         40 W           0.5 GMACS
GPU (from [10])      128        1.35 GHz   float         80 W           7.4 GMACS
Cluster (from [11])  384        1.6 GHz    byte          > 1 kW         54 GMACS
FPGA                 128        0.12 GHz   4 bit nibble  9 W            9.4 GMACS

Table 5: Comparison of performances for SVM training (MNIST data). GPU: Nvidia 8800 GTX. Cluster: 48 dual core CPU (Athlon), 384 MMX units. The GPU was trained with 60k samples ([10], table 2, second order), the cluster trained with 2 million samples.
Figure 3: Acceleration of SVM training as a function of the number of VPE. MNIST n:
2,000,000, d=784; NORB: n=48,560, d=5,184. The points for 128 and 256 VPE are
experimental, the higher ones are simulations. Curves MNIST, NORB: Multiple FPGA are
attached to one CPU. Curve MNIST C: Each FPGA is attached to a separate host CPU.
Scaling of the acceleration with the number of VPEs is shown in Figure 3. The reference speed is
that of one FPGA attached to a CPU. The evaluation has been done experimentally for 128 and
256 VPEs, and beyond that with a simulator. The onset of saturation depends on the
dimensionality of the vectors, but to a much lesser extent on the number of training vectors (up to
the limit of the memory on the FPGA card). MNIST saturates for more than two FPGAs because
then the CPU and FPGA computation times become comparable. For the larger vectors of NORB
(d=5,184) this saturation starts to be noticeable for more than 4 FPGA. Alternatively, a system can
be scaled by grouping multiple CPU, each with one attached FPGA accelerator. Then the scaling
follows a linear or even super-linear acceleration (MNIST C) to several thousand VPE. If the
CPUs are working in a cluster arrangement, the scaling is similar to the one described in [11].
For convolutional neural networks, the architecture of Figure 2 is modified to allow a block of
VPE to operate as systolic array. In this way convolutions can be implemented with minimal data
movements. In addition to the convolution, also sub-sampling and non-linear functions plus the
logistics to handle multiple layers with arbitrary numbers of kernels in each layer are done on the
FPGA. Four separate blocks of such convolvers are packed onto one FPGA, using 100 VPE.
Clocked at 115MHz, this architecture provides a maximum of 11.5 GMACS. Including all the
overhead the sustained speed is about 10 GMACS.
5 Conclusions
By systematically exploiting characteristic properties of machine learning algorithms, we
developed a new massively parallel processor architecture that is very efficient and can be scaled
to thousands of processing elements. The implementation demonstrated here is more than an order
of magnitude higher in performance than previous FPGA implementations of SVM or CNN. For
the MNIST problem it is comparable to the fastest GPU implementations reported so far. These
results underline the importance of flexibility over raw compute-speed for massively parallel
systems. The flexibility of the FPGA allows more efficient routing and packing of the data and the
use of computations with the lowest resolution an algorithm permits. The results of Table 5
indicate the potential of this architecture for low-power operation in embedded applications.
References
[1] Ramacher, et al. (1995) Synapse-1: A high-speed general purpose parallel neurocomputer system. In
Proc. 9th Intl. Symposium on Parallel Processing (IPPS'95), pp. 774-781.
[2] Asanovic, K., Beck, Feldman, J., Morgan, N. & Wawrzynek, J. (1994) A Supercomputer for Neural
Computation, Proc. IEEE Intl. Joint Conference on Neural Networks, pp. 5-9, Orlando, Florida.
[3] Neil, P., (2005) Combining hardware with a powerful automotive MCU for powertrain applications. In
Industrial Embedded Resource Guide, p. 88.
[4] Korekado, et al. (2003) A Convolutional Neural Network VLSI for Image Recognition Using
Merged/Mixed Analog-Digital Architecture, in Proc. 7th KES 2003, Oxford, pp 169-176.
[5] Murasaki, M., Arima, Y. & Shinohara, H. (1993) A 20 Tera-CPS Analog Neural Network Board. In Proc.
Int. Joint Conf. Neural Networks, pp. 3027-3030.
[6] Pedersen, R., Schoeberl, M. (2006), An Embedded Support Vector Machine, WISE 2006.
[7] Dey, S., Kedia, M. Agarwal, N., Basu, A., Embedded Support Vector Machine: Architectural
Enhancements and Evaluation, in Proc 20th Int. Conf. VLSI Design.
[8] Anguita, D., Boni, A., Ridella, S., (2003) A Digital Architecture for Support Vector Machines: Theory,
Algorithm, and FPGA Implementation, IEEE Trans. Neural Networks, 14/5, pp.993-1009.
[9] Chu, C., Kim, S., Lin, Y., Yu, Y., Bradski, G., Ng, A. & Olukotun, K. (2007) Map-Reduce for Machine
Learning on Multicore, Advances in Neural Information Processing Systems 19, MIT Press.
[10] Catanzaro, B., Sundaram, N., & Keutzer, K. (2008) Fast Support Vector Machine Training and
Classification on Graphics Processors, Proc. 25th Int. Conf. Machine Learning, pp 104-111.
[11] Durdanovic, I., Cosatto, E. & Graf, H. (2007) Large Scale Parallel SVM Implementation. In L. Bottou,
O. Chapelle, D. DeCoste, J. Weston (eds.), Large Scale Kernel Machines, pp. 105-138, MIT Press.
[12] Simard, P & Graf, H. (1994) Backpropagation without Multiplication. In J. Cowan, G. Tesauro, J.
Alspector, (eds.), Neural Information Processing Systems 6, pp. 232-239, Morgan Kaufmann.
[13] Savich, A., Moussa, M., Areibi, S., (2007) The Impact of Arithmetic Representation on Implementing
MLP-BP on FPGAs: A Study, IEEE Trans. Neural Networks, 18/1, pp. 240-252.
Tracking Changing Stimuli in Continuous Attractor Neural Networks
C. C. Alan Fung, K. Y. Michael Wong
Department of Physics, The Hong Kong University of Science and Technology,
Clear Water Bay, Hong Kong, China
[email protected], [email protected]
Si Wu
Department of Informatics, University of Sussex, Brighton, United Kingdom
Institute of Neuroscience, Shanghai Institutes for Biological Sciences,
State Key Laboratory of Neurobiology, Chinese Academy of Sciences, Shanghai 200031, China.
[email protected]
Abstract
Continuous attractor neural networks (CANNs) are emerging as promising models for describing the encoding of continuous stimuli in neural systems. Due to
the translational invariance of their neuronal interactions, CANNs can hold a continuous family of neutrally stable states. In this study, we systematically explore
how neutral stability of a CANN facilitates its tracking performance, a capacity
believed to have wide applications in brain functions. We develop a perturbative
approach that utilizes the dominant movement of the network stationary states in
the state space. We quantify the distortions of the bump shape during tracking,
and study their effects on the tracking performance. Results are obtained on the
maximum speed for a moving stimulus to be trackable, and the reaction time to
catch up an abrupt change in stimulus.
1
Introduction
Understanding how the dynamics of a neural network is shaped by the network structure, and consequently facilitates the functions implemented by the neural system, is at the core of using mathematical models to elucidate brain functions [1]. The impact of the network structure on its dynamics is twofold: on one hand, it determines the stationary states of the network, which leads to associative memory; on the other hand, it carves the landscape of the state space of the network as a whole, which may contribute to other cognitive functions, such as movement control, spatial navigation, population decoding and object categorization.
Recently, a type of attractor networks, called continuous attractor neural networks (CANNs), has
received considerable attention (see, e.g., [2, 3, 4, 6, 7, 8, 9, 10, 11, 12, 13, 5]). These networks
possess a translational invariance of the neuronal interactions. As a result, they can hold a family of
stationary states which can be translated into each other without the need to overcome any barriers.
Thus, in the continuum limit, they form a continuous manifold in which the system is neutrally
stable, and the network state can translate easily when the external stimulus changes continuously.
Beyond pure memory retrieval, this large-scale structure of the state space endows the neural system with a tracking capability. This is different from conventional models of associative memory, such as the Hopfield model [14], in which the basin of each attractor is well separated from the others.
The tracking dynamics of a CANN has been investigated by several authors in the literature (see,
e.g., [3, 4, 5, 8, 11]). These studies have shown that a CANN has the capacity of tracking a moving
stimulus continuously and that this tracking property can well justify many brain functions. Despite
these successes, however, a detailed analysis of the tracking behaviors of a CANN is still lacking.
These include, for instance, 1) the conditions under which a CANN can successfully track a moving
stimulus, 2) the distortion of the shape of the network state during the tracking, and 3) the effects
of these distortions on the tracking speed. In this paper we will report, as far as we know, the first
systematic study on these issues. We hope this study will help to establish a complete picture about
the potential applications of CANNs in neural systems.
We will use a simple, analytically-solvable, CANN model as the working example. We display
clearly how the dynamics of a CANN is decomposed into different distortion modes, corresponding
to, respectively, changes in the height, position, width and skewness of the network state. We then
demonstrate which of them dominates the tracking behaviors of the network. In order to solve the
dynamics which is otherwise extremely complicated for a large recurrent network, we develop a
time-dependent perturbation method to approximate the tracking performance of the network. The
solution is expressed in a simple closed-form, and we can approximate the network dynamics up to
an arbitrary accuracy depending on the order of perturbation used. We expect that our method will
provide a useful tool for the theoretical studies of CANNs. Our work generates new predictions on
the tracking behaviors of CANNs, namely, the maximum tracking speed to moving stimuli, and the
reaction time to sudden changes in external stimuli, both of which are testable by experiments.
2 The Intrinsic Dynamics of CANNs
We consider a one-dimensional continuous stimulus being encoded by an ensemble of neurons. The
stimulus may represent, for example, the moving direction, the orientation, or a general continuous
feature of an external object. Let U (x, t) be the synaptic input at time t to the neurons with preferred
stimulus of real-valued x. We will consider stimuli and responses with correlation length a much
less than the range of x, so that the range can be effectively taken to be $(-\infty, \infty)$. The firing rate $r(x, t)$ of these neurons increases with the synaptic input, but saturates in the presence of a global activity-dependent inhibition. A solvable model that captures these features is given by
$$r(x, t) = \frac{U(x, t)^2}{1 + k\rho \int dx' \, U(x', t)^2}, \qquad (1)$$
where $\rho$ is the neural density, and k is a small positive constant controlling the strength of global
inhibition. The dynamics of the synaptic input $U(x, t)$ is determined by the external input $I_{ext}(x, t)$, the network input from other neurons, and its own relaxation. It is given by
$$\tau \frac{dU(x, t)}{dt} = I_{ext}(x, t) + \rho \int dx' \, J(x, x') \, r(x', t) - U(x, t), \qquad (2)$$
where $\tau$ is the time constant, which is typically of the order 1 ms, and $J(x, x')$ is the neural interaction from $x'$ to $x$. The key characteristic of CANNs is the translational invariance of their neural interactions. In our solvable model, we choose Gaussian interactions with a range a, namely,
$$J(x, x') = \frac{J}{\sqrt{2\pi a^2}} \exp\left[-\frac{(x - x')^2}{2a^2}\right]. \qquad (3)$$
CANN models with other neural interactions and inhibition mechanisms have been studied [2, 3, 4,
7, 9]. However, our model has the advantage of permitting a systematic perturbative improvement.
Nevertheless, the final conclusions of our model are qualitatively applicable to general cases (to be
further discussed at the end of the paper).
We first consider the intrinsic dynamics of the CANN model in the absence of external stimuli. For $0 < k < k_c \equiv \rho J^2/(8\sqrt{2\pi}a)$, the network holds a continuous family of stationary states, which are
$$\tilde{U}(x|z) = U_0 \exp\left[-\frac{(x - z)^2}{4a^2}\right], \qquad (4)$$
where $U_0 = [1 + (1 - k/k_c)^{1/2}]\,J/(4\sqrt{\pi}ak)$. These stationary states are translationally invariant among themselves and have a Gaussian bump shape peaked at arbitrary positions z.
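A numerical sketch of Eqs. (1), (2) and (4) (our own discretization; the grid and parameter values are arbitrary choices), checking that the Gaussian bump with the quoted $U_0$ is indeed a stationary state:

```python
import numpy as np

a, J, rho, tau = 0.5, 1.0, 1.0, 1.0
k_c = rho * J**2 / (8 * np.sqrt(2 * np.pi) * a)
k = 0.5 * k_c                                    # inside the bump regime
x = np.linspace(-10, 10, 1001); dx = x[1] - x[0]
Jmat = J / (np.sqrt(2 * np.pi) * a) * np.exp(
    -(x[:, None] - x[None, :])**2 / (2 * a**2))  # Eq. (3)

def dUdt(U, I_ext=0.0):
    r = U**2 / (1 + k * rho * np.sum(U**2) * dx)       # Eq. (1)
    return (I_ext + rho * Jmat @ r * dx - U) / tau     # Eq. (2)

U0 = (1 + np.sqrt(1 - k / k_c)) * J / (4 * np.sqrt(np.pi) * a * k)
U = U0 * np.exp(-x**2 / (4 * a**2))                    # Eq. (4), bump at z = 0
print(np.max(np.abs(dUdt(U))))                         # ~0 up to grid error
```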
The stability of the Gaussian bumps can be studied by considering the dynamics of fluctuations. Consider the network state $U(x, t) = \tilde{U}(x|z) + \delta U(x, t)$. Then we obtain
$$\tau \frac{d}{dt} \delta U(x, t) = \int dx' \, F(x, x') \, \delta U(x', t) - \delta U(x, t), \qquad (5)$$
Figure 1: The first four basis functions of the quantum harmonic oscillators, which represent four
distortion modes of the network dynamics, namely, changes in the height, position, width and skewness of a bump state.
where the interaction kernel is given by F(x, x′) = ρ ∫ dx″ J(x, x″) ∂r(x″)/∂U(x′).
2.1 The motion modes
To compute the eigenfunctions and eigenvalues of the kernel F(x, x′), we choose the wave functions
of the quantum harmonic oscillators as the basis, namely,

v_n(x|z) = exp(−ξ²/2) H_n(ξ) / √((2π)^{1/2} a n! 2ⁿ),   (6)

where ξ ≡ (x − z)/(√2 a) and H_n(ξ) is the nth-order Hermite polynomial. Indeed, the
ground state of the quantum harmonic oscillator corresponds to the Gaussian bump, and the first,
second, and third excited states correspond to fluctuations in the peak position, width, and skewness
of the bump respectively (see Fig. 1). The eigenvalues of the kernel F are calculated to be
λ0 = 1 − (1 − k/kc)^{1/2};   λ_n = 1/2^{n−1}, for n ≥ 1.   (7)
The eigenfunctions of F can also be analytically calculated, which turn out to be either the basis
functions v_n(x|z) or linear combinations of them. Here we only list the first four of them, which are
u0(x|z) = v0(x|z), u1(x|z) = v1(x|z), u2(x|z) = [1/(√2 D0)] v0(x|z) + [(1 − √(1 − k/kc))/D0] v2(x|z),
with D0 = [(1 − √(1 − k/kc))² + 1/2]^{1/2}, and u3(x|z) = √(1/7) v1(x|z) + √(6/7) v3(x|z).
The eigenfunctions of F correspond to the various distortion modes of the bump. Since λ1 = 1
and all other eigenvalues are less than 1, the stationary state is neutrally stable in one component,
and stable in all other components. The first two eigenfunctions are particularly important. (1)
The eigenfunction for the eigenvalue λ0 is u0(x|z), and represents a distortion of the amplitude of
the bump. As we shall see, amplitude changes of the bump affect its tracking performance. (2)
Central to the tracking capability of CANNs, the eigenfunction for the eigenvalue 1 is u1(x|z) and
is neutrally stable. We note that u1(x|z) ∝ ∂v0(x|z)/∂z, corresponding to the shift of the bump
position among the stationary states. This neutral stability is the consequence of the translational
invariance of the network. It implies that when there are external inputs, however small, the bump
will move continuously. This is a unique property associated with the special structure of a CANN,
not shared by other attractor models. Other eigenfunctions correspond to distortions of the shape of
the bump; for example, the eigenfunction u3(x|z) corresponds to a skewed distortion of the bump.
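The basis of Eq. (6) is easy to construct numerically. The following sketch (ours, with an arbitrary discretization) verifies the orthonormality of the v_n and the fact, used above, that the shift mode satisfies ∂v0/∂z ∝ v1.

```python
import numpy as np
from numpy.polynomial.hermite import hermval
from math import factorial, pi, sqrt

a, z = 0.5, 0.0
x = np.linspace(-4, 4, 4001)
dx = x[1] - x[0]
xi = (x - z) / (sqrt(2) * a)

def v(n):
    """Basis function v_n(x|z) of Eq. (6)."""
    coeff = np.zeros(n + 1); coeff[n] = 1.0
    Hn = hermval(xi, coeff)                    # physicists' Hermite H_n
    norm = sqrt((2 * pi) ** 0.5 * a * factorial(n) * 2 ** n)
    return np.exp(-xi ** 2 / 2) * Hn / norm

# orthonormality: the Gram matrix <v_m, v_n> is close to the identity
G = np.array([[np.sum(v(m) * v(n)) * dx for n in range(4)] for m in range(4)])
print(np.round(G, 3))

# neutral shift mode: d v0 / dz is proportional to v1
dv0_dz = (x - z) / (2 * a ** 2) * v(0)
print(np.allclose(dv0_dz / np.max(np.abs(dv0_dz)),
                  v(1) / np.max(np.abs(v(1))), atol=1e-6))
```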
2.2 The energy landscape
It is instructive to consider the energy landscape in the state space of a CANN. Since F(x, x′) is not
symmetric, a Lyapunov function cannot be derived for Eq. (5). Nevertheless, for each peak position
z, one can define an effective energy function E|_z = Σ_n (1 − λ_n) b_n|_z² / 2, where b_n|_z is the overlap
Figure 2: The canyon formed by the stationary states of a CANN projected onto the subspace formed
by b1|0, the position shift, and b0|0, the height distortion. Motion along the canyon corresponds to
the displacement of the bump (inset).
between U(x) − Ū(x|z) and the nth eigenfunction of F centered at z. Then the dynamics in Eq. (5)
can be locally described by the gradient descent of E|_z in the space of b_n|_z. Since the set of points
b_n|_z = 0 for n ≠ 1 traces out a line with E|_z = 0 in the state space when z varies, one can envisage a
canyon surrounding the line and facilitating the local gradient descent dynamics, as shown in Fig. 2.
A small force along the tangent of the canyon can move the network state easily. This illustrates
how the landscape of the state space of a CANN is shaped by the network structure, leading to the
neutral stability of the system, and how this neutral stability shapes the network dynamics.
3 The Tracking Behaviors
We now consider the network dynamics in the presence of a weak external stimulus. Suppose the
neural response at time t is peaked at z(t). Since the dynamics is primarily dominated by the translational motion of the bump, with secondary distortions in shape, we may develop a time-dependent
perturbation analysis using {vn (x|z(t))} as the basis, and consider perturbations in increasing orders
of n. This is done by considering solutions of the form
U(x, t) = Ū(x|z(t)) + Σ_{n=0}^{∞} a_n(t) v_n(x|z(t)).   (8)
Furthermore, since the Gaussian bump is the steady-state solution of the dynamical equation in
the absence of external stimuli, the neuronal interaction term in Eq. (2) can be linearized for weak
stimuli. Making use of the orthonormality and completeness of {vn (x|z(t))}, we obtain from Eq. (2)
expressions for da_n/dt at each order n of perturbation, which are

(d/dt + (1 − λ_n)/τ) a_n = I_n/τ + (1/2a)(dz/dt) [√((2π)^{1/2} a) U0 δ_{n1} + √n a_{n−1} − √(n+1) a_{n+1}]
        + (1/τ) Σ_{r=1}^{∞} √((n + 2r)!/n!) · (−1)^r/(2^{n+3r−1} r!) · a_{n+2r},   (9)
where I_n(t) is the projection of the external input I_ext(x, t) on the nth eigenfunction.
Determining z(t) by the center of mass of U (x, t), we obtain the self-consistent condition
dz/dt = (2a/τ) [I1 + Σ_{n=3, odd} √(n!!/(n−1)!!) I_n + a1] / [√((2π)^{1/2} a) U0 + Σ_{n=0, even} √((n−1)!!/n!!) a_n].   (10)
Eqs. (9) and (10) are the master equations of the perturbation method. We can approximate the
network dynamics up to an arbitrary accuracy depending on the choice of the order of perturbation.
In practice, low-order perturbations already yield very accurate results.
3.1 Tracking a moving stimulus
Consider an external stimulus consisting of a Gaussian bump, namely, I_ext(x, t) = αU0 exp[−(x −
z0)²/(4a²)]. Perturbation up to the order n = 1 yields a1(t) = 0, and
Figure 3: (a) The time dependence of the separation s starting from different initial values. Symbols:
simulations with N = 200 and v = 0.025. Lines: n = 5 perturbation. Dashed lines: s1 (bottom)
and s2 (top). (b) The dependence of the terminal separation s on the stimulus speed v. Symbols:
simulations with N = 200. Dashed line: n = 1 perturbation. Parameters: α = 0.05, a = 0.5,
τ = 1, k = 0.5, ρ = N/(2π), J = √(2πa²).
(d/dt + (1 − λ0)/τ) a0 = αU0 √((2π)^{1/2} a) exp[−(z0 − z)²/(8a²)]/τ, and

dz/dt = (α/τ)(z0 − z) exp[−(z0 − z)²/(8a²)] R(t)⁻¹,   (11)
where R(t) = 1 + α ∫_{−∞}^{t} (dt′/τ) exp[−(1 − λ0)(t − t′)/τ − (z0 − z(t′))²/(8a²)], representing the
ratio of the bump height relative to that in the absence of the external stimulus (α = 0). Hence,
the dynamics is driven by a pull of the bump position towards the stimulus position z0 . The factor
R(t) > 1 implies that the increase in amplitude of the bump slows down its response.
The tracking performance of a CANN is a key property that is believed to have wide applications in
neural systems. Suppose the stimulus is moving at a constant velocity v. The dynamical equation
becomes identical to Eq. (11), with z0 = vt. Denoting the lag of the bump behind the stimulus by
s = z0 − z, we have, after the transients,

ds/dt = v − g(s),   g(s) ≡ (αs/τ) e^{−s²/(8a²)} [1 + α e^{−s²/(8a²)}/(1 − λ0)]⁻¹.   (12)
The value of s is determined by two competing factors: the first term represents the movement of
the stimulus, which tends to enlarge the separation, and the second term represents the collective
effects of the neuronal recurrent interactions, which tends to reduce the lag. Tracking is maintained
when these two factors match each other, i.e., v = g(s); otherwise, s diverges.
The function g(s) is concave and has the maximum value g_max = 2αa/(τ√e) at s = 2a.
This means that if v > g_max, the network is unable to track the stimulus. Thus, g_max defines the
maximum trackable speed of a moving stimulus. Notably, g_max increases with the strength of the
external signal and the range of neuronal recurrent interactions. This is reasonable since it is the
neuronal interactions that induce the movement of the bump. g_max decreases with the time constant
of the network, which reflects the responsiveness of the network to external inputs.
On the other hand, for v < g_max, Eq. (12) has a stable and an unstable fixed point, denoted by s1
and s2 respectively. When the initial distance is less than s2, it will converge to s1; otherwise, the
tracking of the stimulus will be lost. Figs. 3(a) and (b) show that the analytical results of Eq. (12)
agree well with the simulation results.
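The fixed-point structure of Eq. (12) can be checked directly. In the sketch below (ours; the parameter values follow Fig. 3, and the value of k/kc is an illustrative choice) the roots s1 and s2 are found numerically.

```python
import numpy as np
from scipy.optimize import brentq

alpha, a, tau = 0.05, 0.5, 1.0
lam0 = 1 - np.sqrt(1 - 0.16)      # 1 - sqrt(1 - k/kc), illustrative k/kc

def g(s):
    """g(s) of Eq. (12)."""
    e = np.exp(-s**2 / (8 * a**2))
    return (alpha * s * e / tau) / (1 + alpha * e / (1 - lam0))

g_max = 2 * alpha * a / (tau * np.sqrt(np.e))
print(g(2 * a), g_max)            # close; the bracket in Eq. (12) shifts it slightly

v = 0.025                          # stimulus speed, as in Fig. 3
s1 = brentq(lambda s: g(s) - v, 1e-6, 2 * a)    # stable lag
s2 = brentq(lambda s: g(s) - v, 2 * a, 20 * a)  # unstable lag
print(s1, s2)
```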
3.2 Tracking an abrupt change of the stimulus
Suppose the network has reached a steady state with an external stimulus stationary at t < 0, and
the stimulus position jumps from 0 to z0 suddenly at t = 0. This is a typical scenario in experiments studying mental rotation behaviors. We first consider the case that the jump size z0 is small
compared with the range a of neuronal interactions. In the limit of weak stimulus, the dynamics is
described by Eq. (11) with R(t) = 1. We are interested in estimating the reaction time T , which is
the time taken by the bump to move to within a small distance ε of the stimulus position. The
reaction time increases logarithmically with the jump size, namely, T ≈ (τ/α) ln(|z0|/ε).
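This estimate follows from Eq. (11) with R(t) = 1: for |z0 − z| ≪ a the exponential factor is close to 1, so dz/dt ≈ (α/τ)(z0 − z), the gap closes exponentially with time constant τ/α, and it reaches ε after T ≈ (τ/α) ln(|z0|/ε). A quick numerical check (a sketch of ours; jump size and tolerance are illustrative choices):

```python
import numpy as np

alpha, tau, a = 0.05, 1.0, 0.5
z0, eps = 0.2, 0.01               # jump size and arrival tolerance

# integrate Eq. (11) with R = 1 until the bump is within eps of z0
z, t, dt = 0.0, 0.0, 0.01
while z0 - z > eps:
    dz = (alpha / tau) * (z0 - z) * np.exp(-(z0 - z)**2 / (8 * a**2))
    z += dt * dz
    t += dt
print(t, (tau / alpha) * np.log(z0 / eps))  # numeric T vs. logarithmic estimate
```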
Figure 4: (a) The dependence of the reaction time T on the new stimulus position z0. Parameters: as
in Fig. 3. (b) Profiles of the bump between the old and new positions at z0 = π/2 in the simulation.
When the strength α of the external stimulus is larger, a perturbation analysis up to n = 1 is required
when the jump size z0 is large. This amounts to taking into account the change of the bump height
during its movement from the old to the new position. The result is identical to Eq. (11), with R(t)
replaced by
to Eq. (11), with R(t) replaced by
?
?
?
?
Z t 0
?
(1 ? ?0 )
(1 ? ?0 )
(z0 ? z(t0 ))2
dt
R(t) = 1 +
exp ?
t +?
exp ?
(t ? t0 ) ?
.
1 ? ?0
?
?
8a2
0 ?
(13)
Indeed, R(t) represents the change in height during the movement of the bump. Contributions from
the second and third terms show that it is highest at the initial and final positions respectively, and
lowest at some point in between, agreeing with simulation results shown in Fig. 4(b). Fig. 4(a)
shows that the n = 1 perturbation overcomes the insufficiency of the logarithmic estimate, and has
an excellent agreement with simulation results for z0 up to the order of 2a. We also compute the
reaction time up to the n = 5 perturbation, and the agreement with simulations remains excellent
even when z0 goes beyond 2a. This implies that beyond the range of neuronal interaction, tracking
is influenced by the distortion of the width and the skewed shape of the bump.
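Equation (13) is straightforward to integrate alongside Eq. (11). In the sketch below (ours, with illustrative parameters) the history integral is carried as an auxiliary variable, reproducing the observation that R(t) is highest at the two ends of the movement.

```python
import numpy as np

alpha, tau, a = 0.05, 1.0, 0.5
lam0 = 1 - np.sqrt(1 - 0.16)      # 1 - sqrt(1 - k/kc), illustrative k/kc
z0 = 1.0                           # stimulus jumps from 0 to z0 at t = 0

# I is the history integral of Eq. (13), updated as an auxiliary ODE variable
z, I, t, dt = 0.0, 0.0, 0.0, 0.01
Rs = []
while z0 - z > 0.01:
    w = np.exp(-(z0 - z)**2 / (8 * a**2))
    R = 1 + (alpha / (1 - lam0)) * np.exp(-(1 - lam0) * t / tau) + I
    z += dt * (alpha / tau) * (z0 - z) * w / R    # Eq. (11)
    I += dt * (-(1 - lam0) * I / tau + (alpha / tau) * w)
    t += dt
    Rs.append(R)
print("reaction time:", t)
print("R at start, minimum, end:", Rs[0], min(Rs), Rs[-1])
```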
4 The Two-Dimensional Case
We can straightforwardly extend the above analysis to two-dimensional (2D) CANNs. Consider
a neural ensemble encoding a 2D continuous stimulus x = (x1, x2); the network dynamics satisfies
Eqs. (1)-(3) with the scalars x and x′ replaced by the vectors x and x′, respectively. We can check that the
network holds a continuous family of stationary states given by
Ū(x|z) = U0 exp[−(x − z)²/(4a²)],   (14)
where z is a free parameter indicating the position of the network state in a 2D manifold, and
(x − z)² = (x1 − z1)² + (x2 − z2)² is the squared Euclidean distance between x and z.
By applying the stability analysis as in Sec. 2, we obtain the distortion modes of the bump dynamics,
which are expressed as products of the motion modes in the 1D case, i.e.,

u_{m,n}(x|z) = u_m(x1|z1) u_n(x2|z2),  for m, n = 0, 1, 2, . . .   (15)
The eigenvalues for these motion modes are calculated to be λ_{0,0} = λ0, λ_{m,0} = λ_m for m ≠ 0,
λ_{0,n} = λ_n for n ≠ 0, and λ_{m,n} = λ_m λ_n for m ≠ 0 and n ≠ 0.
The mode u_{1,0}(x|z) corresponds to the position shift of the bump in the direction x1, and u_{0,1}(x|z)
to the position shift in the direction x2. A linear combination of them, c1 u_{1,0}(x|z) + c2 u_{0,1}(x|z),
corresponds to the position shift of the bump in the direction (c1, c2). We see that the eigenvalues
for these motion modes are 1, implying that the network is neutrally stable in the 2D manifold.
The eigenvalues for all other motion modes are less than 1. Figure 5 illustrates the tracking of a
2D stimulus, and the comparison of simulation results on the reaction time with the perturbative
approach. The n = 1 perturbation already has an excellent agreement over a wide range of stimulus
positions.
Figure 5: (a) The tracking process of the network; (b) the reaction time vs. the jump size. The
simulation result is compared with the theoretical prediction. Parameters: N = 40 × 40, k = 0.5,
a = 0.5, τ = 1, J = √(2πa²), ρ = N/(2π)² and α = 0.05.
5 Conclusions and Discussions
To conclude, we have systematically investigated how the neutral stability of a CANN facilitates
the tracking performance of the network, a capability which is believed to have wide applications in
brain functions. Two interesting behaviors are observed, namely, the maximum trackable speed for
a moving stimulus and the reaction time for catching up with an abrupt change of a stimulus, which is
logarithmic for small changes and increases rapidly beyond the neuronal range. These two properties
are associated with the unique dynamics of a CANN. They are testable in practice and can serve as general
clues for checking the existence of a CANN in neural systems. In order to solve the dynamics which
is otherwise extremely complicated for a large recurrent network, we have developed a perturbative
analysis to simplify the dynamics of a CANN. Geometrically, it is equivalent to projecting the network state on its dominant directions of the state space. This method works efficiently and may be
widely used in the study of CANNs.
The special structure of a CANN may have other applications in brain functions, for instance, the
highly structured state space of a CANN may provide a neural basis for encoding the topological
relationship of objects in a feature space, as suggested by recent psychophysical experiments [15,
16]. It is likely that the distance between two memory states in a CANN defines the perceptual
similarity between the two objects. It is interesting to note that the perceptual similarity measured by
the psychometric functions of human subjects in a categorization task has a logarithmic nature
similar to that of reaction times in a CANN [17]. To study these issues theoretically and justify the
experimental findings, it is important for us to have analytic solutions of the state space and the
dynamical behaviors of CANNs. We expect the analytical solution developed here will serve as a
valuable mathematical tool.
The tracking dynamics of a CANN has also been studied by other authors. In particular, Zhang
proposed a mechanism of using asymmetrical recurrent interactions to drive the bump, so that the
shape distortion is minimized [4]. Xie et al. further proposed a double ring network model to achieve
these asymmetrical interactions in the head-direction system [8]. It is not clear how this mechanism
can be generated in other neural systems. For instance, in the visual and hippocampal systems, it is
often assumed that the bump movement is directly driven by external inputs (see, e.g., [5, 19, 20]),
and the distortion of the bump is inevitable (indeed the bump distortions in [19, 20] are associated
with visual perception). The contribution of this study is that we quantify how the distortion of
the bump shape affects the network tracking performance, and we obtain a new finding on the maximum
trackable speed of the network.
Finally, we would like to remark on the generality of the results in this work and their relationships to
other studies in the literature. To pursue an analytical solution, we have used a divisive normalization
to represent the inhibition effect. This is different from the Mexican-hat type of recurrent interactions
used by many authors. For the latter, it is often difficult to get a closed form of the network stationary
state. Amari used a Heaviside function to simplify the neural response, and obtained a box-shaped
network stationary state [2]. However, since the Heaviside function is not differentiable, it is
been used, but it is difficult to use them to describe general distortions of the bumps [3]. Here, by
using divisive normalization and the Gaussian-shaped recurrent interactions, we solve the network
stationary states and the tracking dynamics analytically.
One may be concerned about the feasibility of the divisive normalization. First, we argue that neural
systems can have resources to implement this mechanism [7, 18]. Let us consider, for instance, a
neural network, in which all excitatory neurons are connected to a pool of inhibitory neurons. Those
inhibitory neurons have a time constant much shorter than that of excitatory neurons, and they inhibit
the activities of excitatory neurons in a uniform shunting way, thus achieving the effect of divisive
normalization. Second, and more importantly, the main conclusions of our work are qualitatively
independent of the choice of the model. This is because our calculation is based on the fact that the
dynamics of a CANN is dominated by the motion mode of position shift of the network state, and
this property is due to the translational invariance of the neuronal recurrent interactions, rather than
the inhibition mechanism. We have formally proved that for a CANN model, once the recurrent
interactions are translationally invariant, the interaction kernel has a unit eigenvalue with respect to
the position shift mode irrespective of the inhibition mechanism (to be reported elsewhere).
This work is partially supported by the Research Grant Council of Hong Kong (Grant No. HKUST
603606 and HKUST 603607), BBSRC (BB/E017436/1) and the Royal Society.
References
[1] P. Dayan and L. Abbott, Theoretical Neuroscience: Computational and Mathematical Modelling of Neural Systems, (MIT Press, Cambridge MA, 2001).
[2] S. Amari, Biological Cybernetics 27, 77 (1977).
[3] R. Ben-Yishai, R. Lev Bar-Or and H. Sompolinsky, Proc. Natl. Acad. Sci. USA, 92 3844
(1995).
[4] K.-C. Zhang, J. Neuroscience 16, 2112 (1996).
[5] A. Samsonovich and B. L. McNaughton, J. Neurosci. 17, 5900 (1997).
[6] B. Ermentrout, Reports on Progress in Physics 61, 353 (1998).
[7] S. Deneve, P. Latham and A. Pouget, Nature Neuroscience, 2, 740 (1999).
[8] X. Xie, R. H. R. Hahnloser and S. Seung, Phys. Rev. E 66, 041902 (2002).
[9] A. Renart, P. Song and X. Wang, Neuron 38, 473 (2003).
[10] C. Brody, R. Romo and A. Kepecs, Current Opinion in Neurobiology, 13, 204-211 (2003)
[11] S. Wu and S. Amari, Neural Computation 17, 2215 (2005)
[12] B. Blumenfeld, S. Preminger, D. Sagi and M. Tsodyks, Neuron 52, 383 (2006).
[13] C. Chow and S. Coombes, SIAM J. Appl. Dyn. Sys. 5, 552-574, 2006.
[14] J. Hopfield, Proc. Natl. Acad. Sci. USA, 79 2554 (1982).
[15] J. Jastorff, Z. Kourtzi and M. Giese, J. Vision 6, 791 (2006).
[16] A. B. A. Graf, F. A. Wichmann, H. H. Bülthoff, and B. Schölkopf, Neural Computation 18,
143 (2006).
[17] J. Zhang, J. Mathematical Psychology 48, 409 (2004)
[18] D. Heeger, J. Neurophysiology 70, 1885 (1993).
[19] M. Berry II, I. Brivanlou, T. Jordon and M. Meister, Nature 398, 334 (1999).
[20] Y. Fu, Y. Shen and Y. Dan, J. Neuroscience 21, 1 (2001).
Modeling human function learning
with Gaussian processes
Thomas L. Griffiths Christopher G. Lucas Joseph J. Williams
Department of Psychology
University of California, Berkeley
Berkeley, CA 94720-1650
{tom_griffiths,clucas,joseph_williams}@berkeley.edu
Michael L. Kalish
Institute of Cognitive Science
University of Louisiana at Lafayette
Lafayette, LA 70504-3772
[email protected]
Abstract
Accounts of how people learn functional relationships between continuous variables have tended to focus on two possibilities: that people are estimating explicit
functions, or that they are performing associative learning supported by similarity.
We provide a rational analysis of function learning, drawing on work on regression in machine learning and statistics. Using the equivalence of Bayesian linear
regression and Gaussian processes, we show that learning explicit rules and using similarity can be seen as two views of one solution to this problem. We use
this insight to define a Gaussian process model of human function learning that
combines the strengths of both approaches.
1 Introduction
Much research on how people acquire knowledge focuses on discrete structures, such as the nature
of categories or the existence of causal relationships. However, our knowledge of the world also
includes relationships between continuous variables, such as the difference between linear and exponential growth, or the form of causal relationships, such as how pressing the accelerator of a car
influences its velocity. Research on how people learn relationships between two continuous variables (known in the psychological literature as function learning) has tended to emphasize two
different ways in which people could be solving this problem. One class of theories (e.g., [1, 2, 3])
suggests that people are learning an explicit function from a given class, such as the polynomials
of degree k. This approach attributes rich representations to human learners, but has traditionally
given limited treatment to the question of how such representations could be acquired. A second
approach (e.g., [4, 5]) emphasizes the possibility that people learn by forming associations between
observed values of input and output variables, and generalize based on the similarity of new inputs
to old. This approach has a clear account of the underlying learning mechanisms, but faces challenges in explaining how people generalize so broadly beyond their experience, making predictions
about variable values that are significantly removed from their previous observations. Most recently,
hybrids of these two approaches have been proposed (e.g., [6, 7]), with explicit functions being
represented, but associative learning.
Previous models of human function learning have been oriented towards understanding the psychological processes by which people solve this problem. In this paper, we take a different approach,
1
presenting a rational analysis of function learning, in the spirit of [8]. This rational analysis provides
a way to understand the relationship between the two approaches that have dominated previous work
? rules and similarity ? and suggests how they might be combined. The basic strategy we pursue
is to consider the abstract computational problem involved in function learning, and then to explore
optimal solutions to that problem with the goal of shedding light on human behavior. In particular,
the problem of learning a functional relationship between two continuous variables is an instance of
regression, and has been extensively studied in machine learning and statistics.
There are a variety of solution to regression problems, but we focus on methods related to Bayesian
linear regression (e.g., [9]), which allow us to make the expectations of learners about the form of
functions explicit through a prior distribution. Bayesian linear regression is also directly related to
a nonparametric approach known as Gaussian process prediction (e.g., [10]), in which predictions
about the values of an output variable are based on the similarity between values of an input variable.
We use this relationship to connect the two traditional approaches to modeling function learning, as
it shows that learning rules that describe functions and specifying the similarity between stimuli for
use in associative learning are not mutually exclusive alternatives, but rather two views of the same
solution to this problem. We exploit this fact to define a rational model of human function learning
that incorporates the strengths of both approaches.
2 Models of human function learning
In this section we review the two traditional approaches to modeling human function learning (rules
and similarity) and some more recent hybrid approaches that combine the two.
2.1 Representing functions with rules
The idea that people might represent functions explicitly appears in one of the first papers on human
function learning [1]. This paper proposed that people assume a particular class of functions (such
as polynomials of degree k) and use the available observations to estimate the parameters of those
functions, forming a representation that goes beyond the observed values of the variables involved.
Consistent with this hypothesis, people learned linear and quadratic functions better than random
pairings of values for two variables, and extrapolated appropriately. Similar assumptions guided
subsequent work exploring the ease with which people learn functions from different classes (e.g.,
[2], and papers have tested statistical regression schemes as potential models of learning, examining
how well human responses were described by different forms of nonlinear regression (e.g., [3]).
2.2 Similarity and associative learning
Associative learning models propose that people do not learn relationships between continuous variables by explicitly learning rules, but by forging associations between observed variable pairs and
generalizing based on the similarity of new variable values to old. The first model to implement this
approach was the Associative-Learning Model (ALM; [4, 5]), in which input and output arrays are
used to represent a range of values for the two variables between which the functional relationship
holds. Presentation of an input activates input nodes close to that value, with activation falling off
as a Gaussian function of distance, explicitly implementing a theory of similarity in the input space.
Learned weights determine the activation of the output nodes, being a weighted linear function of the
activation of the input nodes. Associative learning for the weights is performed by applying gradient
descent on the squared error between current output activation and the correct value. In practice, this
approach performs well when interpolating between observed values, but poorly when extrapolating
beyond those values. As a consequence, the same authors introduced the Extrapolation-Association
Model (EXAM), which constructs a linear approximation to the output of the ALM when selecting
responses, producing a bias towards linearity that better matches human judgments.
2.3 Hybrid approaches
Several papers have explored methods for combining rule-like representations of functions with
associative learning. One example of such an approach is the set of rule-based models explored in
[6]. These models used the same kind of input representation as ALM and EXAM, with activation
2
of a set of nodes similar to the input value. However, the models also feature a set of hidden units,
where each hidden unit corresponds to a different parameterization of a rule from a given class
(polynomial, Fourier, or logistic). The values of the hidden nodes ? corresponding to the values
of the rules they instantiate ? are combined linearly to obtain output predictions, with the weight
of each hidden node being learned through gradient descent (with a penalty for the curvature of
the functions involved). A more complex instance of this kind of approach is the Population of
Linear Experts (POLE) model [7], in which hidden units each represent different linear functions,
but the weights from input to hidden nodes indicate which linear function should be used to make
predictions for particular input values. As a consequence, the model can learn non-linear functions
by identifying a series of local linear approximations, and can even model situations in which people
seem to learn different functions in different parts of the input space.
3 Rational solutions to regression problems
The models outlined in the previous section all aim to describe the psychological processes involved
in human function learning. In this section, we consider the abstract computational problem underlying this task, using optimal solutions to this problem to shed light on both previous models and
human learning. Viewed abstractly, the computational problem behind function learning is to learn
a function f mapping from x to y from a set of real-valued observations xn = (x1 , . . . , xn ) and
tn = (t1 , . . . , tn ), where ti is assumed to be the true value yi = f (xi ) obscured by additive noise.1
In machine learning and statistics, this is referred to as a regression problem. In this section, we discuss how this problem can be solved using Bayesian statistics, and how the result of this approach
is related to Gaussian processes. Our presentation follows that in [10].
3.1 Bayesian linear regression
Ideally, we would seek to solve our regression problem by combining some prior beliefs about the
probability of encountering different kinds of functions in the world with the information provided
by x and t. We can do this by applying Bayes' rule, with

p(f | x_n, t_n) = p(t_n | f, x_n) p(f) / ∫_F p(t_n | f, x_n) p(f) df,   (1)
where p(f ) is the prior distribution over functions in the hypothesis space F, p(tn |f, xn ) is the
probability of observing the values of tn if f were the true function, known as the likelihood, and
p(f |xn , tn ) is the posterior distribution over functions given the observations xn and tn . In many
cases, the likelihood is defined by assuming that the values of ti are independent given f and xi ,
being Gaussian with mean y_i = f(x_i) and variance σ_t². Predictions about the value of the function
f for a new input xn+1 can be made by integrating over the posterior distribution,
p(y_{n+1} | x_{n+1}, t_n, x_n) = ∫_F p(y_{n+1} | f, x_{n+1}) p(f | x_n, t_n) df,   (2)
where p(yn+1 |f, xn+1 ) is a delta function placing all of its mass on yn+1 = f (xn+1 ).
Performing the calculations outlined in the previous paragraph for a general hypothesis space F is
challenging, but becomes straightforward if we limit the hypothesis space to certain specific classes
of functions. If we take F to be all linear functions of the form y = b0 + xb1 , then our problem takes
the familiar form of linear regression. To perform Bayesian linear regression, we need to define a
prior p(f ) over all linear functions. Since these functions are identified by the parameters b0 and
b1 , it is sufficient to define a prior over b = (b0 , b1 ), which we can do by assuming that b follows
a multivariate Gaussian distribution with mean zero and covariance Σ_b. Applying Equation 1 then
results in a multivariate Gaussian posterior distribution on b (see [9]) with
E[b | x_n, t_n] = (σ_t² Σ_b⁻¹ + X_nᵀ X_n)⁻¹ X_nᵀ t_n   (3)

cov[b | x_n, t_n] = (Σ_b⁻¹ + X_nᵀ X_n / σ_t²)⁻¹   (4)

¹Following much of the literature on human function learning, we consider only one-dimensional functions,
but this approach generalizes naturally to the multi-dimensional case.
where X_n = [1_n x_n] (i.e., a matrix with a vector of ones horizontally concatenated with x_n). Since
y_{n+1} is simply a linear function of b, applying Equation 2 yields a Gaussian predictive distribution,
with y_{n+1} having mean [1 x_{n+1}] E[b | x_n, t_n] and variance [1 x_{n+1}] cov[b | x_n, t_n] [1 x_{n+1}]ᵀ. The
predictive distribution for t_{n+1} is similar, but with the addition of σ_t² to the variance.
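Equations (3) and (4), together with the predictive distribution, amount to a few lines of linear algebra. A minimal sketch (ours; the data and prior covariance are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
xn = rng.uniform(0, 1, 20)
tn = 0.5 + 2.0 * xn + rng.normal(0, 0.1, 20)    # noisy linear data

sigma_t2 = 0.1**2
Sigma_b = np.eye(2)                              # prior covariance on b
X = np.column_stack([np.ones_like(xn), xn])

cov_b = np.linalg.inv(np.linalg.inv(Sigma_b) + X.T @ X / sigma_t2)  # Eq. (4)
mean_b = cov_b @ X.T @ tn / sigma_t2                                # Eq. (3)

x_new = np.array([1.0, 0.5])                     # [1, x_{n+1}]
pred_mean = x_new @ mean_b
pred_var = x_new @ cov_b @ x_new + sigma_t2      # predictive variance for t_{n+1}
print(mean_b, pred_mean, pred_var)
```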
While considering only linear functions might seem overly restrictive, linear regression actually
gives us the basic tools we need to solve this problem for more general classes of functions. Many
classes of functions can be described as linear combinations of a small set of basis functions. For
example, all kth degree polynomials are linear combinations of functions of the form 1 (the constant
function), x, x2 , . . . , xk . Letting ?(1) , . . . , ?(k) denote a set of functions, we can define a prior
on the class of functions that are linear combinations of this basis by expressing such functions in
the form f (x) = b0 + ?(1) (x)b1 + . . . + ?(k) (x)bk and defining a prior on the vector of weights
b. If we take the prior to be Gaussian, we reach the same solution as outlined in the previous
paragraph, substituting ? = [1n ?(1) (xn ) . . . ?(k) (xn )] for X and [1 ?(1) (xn+1 ) . . . ?(k) (xn+1 )]
for [1 xn+1 ], where ?(xn ) = [?(x1 ) . . . ?(xn )]T .
3.2 Gaussian processes
If our goal were merely to predict yn+1 from xn+1 , yn , and xn , we might consider a different
approach, simply defining a joint distribution on yn+1 given xn+1 and conditioning on yn . For
example, we might take the yn+1 to be jointly Gaussian, with covariance matrix
K_{n+1} = [K_n, k_{n,n+1}; k_{n,n+1}ᵀ, k_{n+1}],   (5)
where K_n depends on the values of x_n, k_{n,n+1} depends on x_n and x_{n+1}, and k_{n+1} depends only
on x_{n+1}. If we condition on y_n, the distribution of y_{n+1} is Gaussian with mean k_{n,n+1}ᵀ K_n⁻¹ y_n
and variance k_{n+1} − k_{n,n+1}ᵀ K_n⁻¹ k_{n,n+1}. This approach to prediction uses a Gaussian process, a
stochastic process that induces a Gaussian distribution on y based on the values of x. This approach
can also be extended to allow us to predict y_{n+1} from x_{n+1}, t_n, and x_n by adding σ_t² I_n to K_n,
where I_n is the n × n identity matrix, to take into account the additional variance associated with
t_n.
The covariance matrix Kn+1 is specified using a two-place function in x known as a kernel, with
Kij = K(xi , xj ). Any kernel that results in an appropriate (symmetric, positive-definite) covariance
matrix for all x can be used. Common kinds of kernels include radial basis functions, e.g.,
K(x_i, x_j) = θ1² exp(−(x_i − x_j)²/θ2²),   (6)
with values of y for which values of x are close being correlated, and periodic functions, e.g.,
K(x_i, x_j) = θ3² exp(θ4² cos((2π/θ5)(x_i − x_j))),   (7)

indicating that values of y for which values of x are close relative to the period θ5 are likely to be
highly correlated. Gaussian processes thus provide a flexible approach to prediction, with the kernel
defining which values of x are likely to have similar values of y.
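The conditional mean and variance given above translate directly into code. The following sketch (ours; the hyperparameter values and data are illustrative) implements Gaussian process prediction with the radial basis kernel of Eq. (6):

```python
import numpy as np

def rbf(x1, x2, theta1=1.0, theta2=0.3):
    """Radial basis kernel, Eq. (6)."""
    return theta1**2 * np.exp(-(x1[:, None] - x2[None, :])**2 / theta2**2)

rng = np.random.default_rng(1)
xn = np.linspace(0, 1, 15)
tn = np.sin(2 * np.pi * xn) + rng.normal(0, 0.1, 15)
x_star = np.linspace(0, 1, 101)

sigma_t2 = 0.1**2
K = rbf(xn, xn) + sigma_t2 * np.eye(len(xn))    # K_n plus observation noise
k_star = rbf(xn, x_star)                         # k_{n,n+1} for each test point

alpha = np.linalg.solve(K, tn)
mean = k_star.T @ alpha                          # k^T K^{-1} t_n
var = rbf(x_star, x_star).diagonal() - np.einsum(
    'ij,ij->j', k_star, np.linalg.solve(K, k_star))
print(mean[:3], var[:3])
```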
3.3 Two views of regression
Bayesian linear regression and Gaussian processes appear to be quite different approaches. In
Bayesian linear regression, a hypothesis space of functions is identified, a prior on that space is
defined, and predictions are formed averaging over the posterior, while Gaussian processes simply
use the similarity between different values of x, as expressed through a kernel, to predict correlations
in values of y. It might thus come as a surprise that these approaches are equivalent.
Showing that Bayesian linear regression corresponds to Gaussian process prediction is straightforward. The assumption of linearity means that the vector y_{n+1} is equal to X_{n+1} b. It follows
that p(y_{n+1} | x_{n+1}) is a multivariate Gaussian distribution with mean zero and covariance matrix
X_{n+1} Σ_b X_{n+1}ᵀ. Bayesian linear regression thus corresponds to prediction using Gaussian processes, with this covariance matrix playing the role of K_{n+1} above (i.e., using the kernel function
K(x_i, x_j) = [1 x_i] Σ_b [1 x_j]ᵀ). Using a richer set of basis functions corresponds to taking
K_{n+1} = Φ_{n+1} Σ_b Φ_{n+1}ᵀ (i.e., K(x_i, x_j) = [1 φ⁽¹⁾(x_i) . . . φ⁽ᵏ⁾(x_i)] Σ_b [1 φ⁽¹⁾(x_j) . . . φ⁽ᵏ⁾(x_j)]ᵀ).
It is also possible to show that Gaussian process prediction can always be interpreted as Bayesian
linear regression, albeit with potentially infinitely many basis functions. Just as we can express
a covariance matrix in terms of its eigenvectors and eigenvalues, we can express a given kernel
K(xi , xj ) in terms of its eigenfunctions ? and eigenvalues ?, with
K(xi , xj ) =
?
X
?k ?(k) (xi )?(k) (xj )
(8)
k=1
for any xi and xj . Using the results from the previous paragraph, any kernel can be viewed as the
result of performing Bayesian linear regression with a set of basis functions corresponding to its
eigenfunctions, and a prior with covariance matrix ?b = diag(?).
These results establish an important duality between Bayesian linear regression and Gaussian processes: for every prior on functions, there exists a corresponding kernel, and for every kernel, there
exists a corresponding prior on functions. Bayesian linear regression and prediction with Gaussian
processes are thus just two views of the same solution to regression problems.
4 Combining rules and similarity through Gaussian processes
The results outlined in the previous section suggest that learning rules and generalizing based on
similarity should not be viewed as conflicting accounts of human function learning. In this section,
we briefly highlight how previous accounts of function learning connect to statistical models, and
then use this insight to define a model that combines the strengths of both approaches.
4.1 Reinterpreting previous accounts of human function learning
The models presented above were chosen because the contrast between rules and similarity in
function learning is analogous to the difference between Bayesian linear regression and Gaussian
processes. The idea that human function learning can be viewed as a kind of statistical regression [1, 3] clearly connects directly to Bayesian linear regression. While there is no direct formal
correspondence, the basic ideas behind Gaussian process regression with a radial basis kernel and
similarity-based models such as ALM are closely related. In particular, ALM has many commonalities with radial-basis function neural networks, which are directly related to Gaussian processes
[11]. Gaussian processes with radial-basis kernels can thus be viewed as implementing a simple
kind of similarity-based generalization, predicting similar y values for stimuli with similar x values.
Finally, the hybrid approach to rule learning taken in [6] is also closely related to Bayesian linear
regression. The rules represented by the hidden units serve as a basis set that specify a class of
functions, and applying penalized gradient descent on the weights assigned to those basis elements
serves as an online algorithm for finding the function with highest posterior probability [12].
4.2 Mixing functions in a Gaussian process model
The relationship between Gaussian processes and Bayesian linear regression suggests that we
can define a single model that exploits both similarity and rules in forming predictions. In
particular, we can do this by taking a prior that covers a broad class of functions, including
those consistent with a radial basis kernel, or, equivalently, modeling y as being produced by
a Gaussian process with a kernel corresponding to one of a small number of types. Specifically, we assume that observations are generated by choosing a type of function from the set
{Positive Linear, Negative Linear, Quadratic, Nonlinear}, where the probabilities of these alternatives are defined by the vector π, and then sampling y from a Gaussian process with a kernel corresponding to the appropriate class of functions. The relevant kernels are introduced in the previous
sections (taking "Nonlinear" to correspond to the radial basis kernel), with the "Positive Linear" and
"Negative Linear" kernels derived in the same way as the standard linear kernel but with the
mean of the prior on b being [0 1]ᵀ and [1 −1]ᵀ rather than simply zero.
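In code, the four types correspond to four kernels and, for the linear types, prior mean functions. The sketch below is our reading of this construction, not the paper's implementation; the hyperparameter values are placeholders, and nonzero prior means on b are handled by modeling the residual t − m(x) with the corresponding zero-mean kernel.

```python
import numpy as np

def rbf_kernel(x1, x2, theta1=1.0, theta2=0.3):
    """Radial basis kernel of Eq. (6); theta values are placeholders."""
    return theta1**2 * np.exp(-(x1[:, None] - x2[None, :])**2 / theta2**2)

def linear_kernel(x1, x2):
    """Linear kernel of Sec. 3.3, with an assumed prior scale Sigma_b = I."""
    X1 = np.column_stack([np.ones_like(x1), x1])
    X2 = np.column_stack([np.ones_like(x2), x2])
    return X1 @ X2.T

def quadratic_kernel(x1, x2):
    """Quadratic-basis kernel: linear combinations of 1, x, x^2."""
    P1 = np.column_stack([np.ones_like(x1), x1, x1**2])
    P2 = np.column_stack([np.ones_like(x2), x2, x2**2])
    return P1 @ P2.T

def prior_mean(x, kind):
    """Prior mean functions from the means on b: [0 1] -> x, [1 -1] -> 1 - x."""
    return {"pos_linear": x, "neg_linear": 1.0 - x}.get(kind, np.zeros_like(x))

x = np.linspace(0, 1, 5)
print(linear_kernel(x, x).shape, quadratic_kernel(x, x).shape, rbf_kernel(x, x).shape)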
Using this Gaussian process model allows a learner to make an inference about the type of function
from which their observations are drawn, as well as the properties of the function of that type. In
practice, we perform probabilistic inference using a Markov chain Monte Carlo (MCMC) algorithm
(see [13] for an introduction). This algorithm defines a Markov chain for which the stationary
distribution is the distribution from which we wish to sample. In our case, this is the posterior
distribution over types and the hyperparameters θ for the kernels, given the observations x and t.
The hyperparameters include θ1 and θ2 defined above and the noise in the observations σ_t². Our
MCMC algorithm repeats two steps. The first step is sampling the type of function conditioned on
x, t, and the current value of θ, with the probability of each type being proportional to the product of
p(t_n | x_n) for the corresponding Gaussian process and the prior probability of that type as given by π.
The second step is sampling the value of θ given x_n, t_n, and the current type, which is done using
a Metropolis-Hastings procedure (see [13]), proposing a value for θ from a Gaussian distribution
centered on the current value and deciding whether to accept that value based on the product of the
probability it assigns to t_n given x_n and the prior p(θ). We use an uninformative prior on θ.
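A compact version of this sampler is sketched below (our reading of the two steps, not the authors' code; the priors, proposal width, and data are illustrative):

```python
import numpy as np

def log_marginal(tn, K, sigma_t2):
    """Gaussian process log marginal likelihood log p(t_n | x_n)."""
    Ky = K + sigma_t2 * np.eye(len(tn))
    sign, logdet = np.linalg.slogdet(Ky)
    return -0.5 * (tn @ np.linalg.solve(Ky, tn) + logdet + len(tn) * np.log(2 * np.pi))

def kernel(x, kind, theta):
    d2 = (x[:, None] - x[None, :])**2
    if kind == "nonlinear":
        return theta[0]**2 * np.exp(-d2 / theta[1]**2)
    X = np.column_stack([np.ones_like(x), x])    # basis kernels for the
    if kind == "quadratic":                      # linear and quadratic types
        X = np.column_stack([X, x**2])
    return X @ X.T

rng = np.random.default_rng(3)
xn = np.linspace(0, 1, 20)
tn = xn**2 + rng.normal(0, 0.05, 20)             # quadratic data
pi = {"pos_linear": .5, "neg_linear": .4, "quadratic": .09, "nonlinear": .01}
resid = {"pos_linear": tn - xn, "neg_linear": tn - (1 - xn),
         "quadratic": tn, "nonlinear": tn}       # subtract linear prior means

theta, sigma_t2 = np.array([1.0, 0.3]), 0.05**2
for sweep in range(200):
    # step 1: resample the type, p(type) proportional to pi[type] * p(t_n | x_n)
    logp = {k: np.log(p) + log_marginal(resid[k], kernel(xn, k, theta), sigma_t2)
            for k, p in pi.items()}
    w = np.exp(np.array(list(logp.values())) - max(logp.values()))
    kind = rng.choice(list(logp), p=w / w.sum())
    # step 2: Metropolis-Hastings on theta (only affects the RBF kernel here)
    prop = theta + rng.normal(0, 0.05, 2)
    if prop.min() > 0:
        d = (log_marginal(resid[kind], kernel(xn, kind, prop), sigma_t2)
             - log_marginal(resid[kind], kernel(xn, kind, theta), sigma_t2))
        if np.log(rng.uniform()) < d:
            theta = prop
print(kind)   # with quadratic data the chain typically settles on "quadratic"
```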
5 Testing the Gaussian process model
Following a recent review of computational models of function learning [6], we look at two quantitative tests of Gaussian processes as an account of human function learning: reproducing the order
of difficulty of learning functions of different types, and extrapolation performance. As indicated
earlier, there is a large literature consisting of both models and data concerning human function
learning, and these simulations are intended to demonstrate the potential of the Gaussian process
model rather than to provide an exhaustive test of its performance.
5.1 Difficulty of learning
A necessary criterion for a theory of human function learning is accounting for which functions
people learn readily and which they find difficult, i.e., the relative difficulty of learning various functions. Table 1 is an augmented version of results presented in [6] which compared several models
to the empirically observed difficulty of learning a range of functions. Each entry in the table is the
mean absolute deviation (MAD) of human or model responses from the actual value of the function,
evaluated over the stimuli presented in training. The MAD provides a measure of how difficult it is
for people or a given model to learn a function. The data reported for each set of studies are ordered
by increasing MAD (corresponding to increasing difficulty). In addition to reproducing the MAD
for the models in [6], the table includes results for seven Gaussian process (GP) models.
The seven GP models incorporated different kernel functions by adjusting their prior probability.
Drawing on the {Positive Linear, Negative Linear, Quadratic, Nonlinear} set of kernel functions, the
most comprehensive model took π = (0.5, 0.4, 0.09, 0.01).² Six other GP models were examined
by assigning certain kernel functions zero prior probability and re-normalizing the modified value
of π so that the prior probabilities summed to one. The seven distinct GP models are presented in
Table 1 and labeled by the kernel functions with non-zero prior probability: Linear (Positive Linear
and Negative Linear), Quadratic, Nonlinear (Radial Basis Function), Linear and Quadratic, Linear
and Nonlinear, Quadratic and Nonlinear, and Linear, Quadratic, and Nonlinear. The last two rows of
Table 1 give the correlations between human and model performance across functions, expressing
quantitatively how well each model captured the pattern of human function learning behavior. The
GP models perform well according to this metric, providing a closer match to the human data than
any of the models considered in [6], with the quadratic kernel and the models with a mixture of
kernels tending to provide a closer match to human behavior.
5.2 Extrapolation performance
Predicting and explaining people's capacity for generalization, from stimulus-response pairs to
judgments about a functional relationship between variables, is the second key component of our
account. This capacity is assessed in the way in which people extrapolate, making judgments about
stimuli they have not encountered before. Figure 1 shows mean human predictions for a linear, exponential, and quadratic function (from [4]), together with the predictions of the most comprehensive
GP model (with Linear, Quadratic and Nonlinear kernel functions). The regions to the left and right
of the vertical lines represent extrapolation regions, being input values for which neither people nor
²The selection of these values was guided by results indicating the order of difficulty of learning functions
of these different types for human learners, but we did not optimize π with respect to the criteria reported here.
                                          Hybrid models
Function                     Human   ALM    Poly   Fourier  Logistic
Byun (1995, Expt 1B)
  Linear                      .20    .04    .04    .05      .16
  Square root                 .35    .05    .06    .06      .19
Byun (1995, Expt 1A)
  Linear                      .15    .10    .33    .33      .17
  Power, pos. acc.            .20    .12    .37    .37      .24
  Power, neg. acc.            .23    .12    .36    .36      .19
  Logarithmic                 .30    .14    .41    .41      .19
  Logistic                    .39    .18    .51    .52      .33
Byun (1995, Expt 2)
  Linear                      .18    .01    .18    .19      .12
  Quadratic                   .28    .03    .31    .31      .24
  Cyclic                      .68    .32    .41    .40      .68
DeLosh, Busemeyer, & McDaniel (1997)
  Linear                      .10    .04    .11    .11      .04
  Exponential                 .15    .05    .17    .17      .02
  Quadratic                   .24    .07    .27    .27      .11
Correlation of human and model performance
  Linear                      1.0    .83    .45    .45      .93
  Rank-order                  1.0    .55    .51    .51      .77

Gaussian process models, correlation of human and model performance:
Model              Linear   Quad   RBF    LQ     LR     QR     LQR
  Linear            .93      .92    .92    .92    .92    .93    .93
  Rank-order        .76      .80    .75    .82    .83    .83    .83
Table 1: Difficulty of learning results. Rows correspond to functions learned in experiments reviewed in [6]. Columns give the mean absolute deviation (MAD) from the true functions for human
learners and different models (Gaussian process models with multiple kernels are denoted by the
initials of their kernels, e.g., LQR = Linear, Quadratic, and Radial Basis Function). Human MAD
values represent sample means (for a single subject over trials, then over subjects), and reflect both
estimation and production errors, being higher than model MAD values which are computed using
deterministic model predictions and thus reflect only estimation error. The last two rows give the
linear and rank-order correlations of the human and model MAD values, providing an indication of
how well the model matches the difficulty people have in learning different functions.
(c) Correlations between human and model extrapolation:

Function       EXAM    Linear   Quad   RBF    LQ     LR     QR     LQR
Linear         .999    .999     .997   .999   .999   .999   .998   .999
Exponential    .997    .989     .997   .997   .997   .997   .994   .995
Quadratic      .961    .470     .901   .882   .886   .892   .878   .877
Figure 1: Extrapolation performance. (a)-(b) Mean predictions on linear, exponential, and quadratic
functions for (a) human participants (from [4]) and (b) a Gaussian process model with Linear,
Quadratic, and Nonlinear kernels. Training data were presented in the region between the vertical lines, and extrapolation performance was evaluated outside this region. (c) Correlations between
human and model extrapolation. Gaussian process models are denoted as in Table 1.
the model were trained. Both people and the model extrapolate near optimally on the linear function, and reasonably accurate extrapolation also occurs for the exponential and quadratic function.
However, there is a bias towards a linear slope in the extrapolation of the exponential and quadratic
functions, with extreme values of the quadratic and exponential function being overestimated.
Quantitative measures of extrapolation performance are shown in Figure 1 (c), which gives the
correlation between human and model predictions for EXAM [4, 5] and the seven GP models. While
none of the GP models produce quite as high a correlation as EXAM on all three functions, all of
the models except that with just the linear kernel produce respectable correlations. It is particularly
notable that this performance is achieved without the optimization of any free parameters, while the
predictions of EXAM were the result of optimizing two parameters for each of the three functions.
6 Conclusions
We have presented a rational account of human function learning, drawing on ideas from machine
learning and statistics to show that the two approaches that have dominated previous work ? rules and
similarity ? can be interpreted as two views of the same kind of optimal solution to this problem. Our
Gaussian process model combines the strengths of both approaches, using a mixture of kernels to
allow systematic extrapolation as well as sensitive non-linear interpolation. Tests of the performance
of this model on benchmark datasets show that it can capture some of the basic phenomena of human
function learning, and is competitive with existing process models. In future work, we aim to extend
this Gaussian process model to allow it to produce some of the more complex phenomena of human
function learning, such as non-monotonic extrapolation (via periodic kernels) and learning different
functions in different parts of the input space (via mixture modeling).
Acknowledgments. This work was supported by grant FA9550-07-1-0351 from the Air Force Office of Scientific Research and grants 0704034 and 0544705 from the National Science Foundation.
References
[1] J. D. Carroll. Functional learning: The learning of continuous functional mappings relating stimulus and
response continua. Education Testing Service, Princeton, NJ, 1963.
[2] B. Brehmer. Hypotheses about relations between scaled variables in the learning of probabilistic inference
tasks. Organizational Behavior and Human Decision Processes, 11:1-27, 1974.
[3] K. Koh and D. E. Meyer. Function learning: Induction of continuous stimulus-response relations. Journal
of Experimental Psychology: Learning, Memory, and Cognition, 17:811-836, 1991.
[4] E. L. DeLosh, J. R. Busemeyer, and M. A. McDaniel. Extrapolation: The sine qua non of abstraction in
function learning. Journal of Experimental Psychology: Learning, Memory, and Cognition, 23:968-986,
1997.
[5] J. R. Busemeyer, E. Byun, E. L. DeLosh, and M. A. McDaniel. Learning functional relations based
on experience with input-output pairs by humans and artificial neural networks. In K. Lamberts and
D. Shanks, editors, Concepts and Categories, pages 405-437. MIT Press, Cambridge, 1997.
[6] M. A. McDaniel and J. R. Busemeyer. The conceptual basis of function learning and extrapolation:
Comparison of rule-based and associative-based models. Psychonomic Bulletin and Review, 12:24-42,
2005.
[7] M. Kalish, S. Lewandowsky, and J. Kruschke. Population of linear experts: Knowledge partitioning and
function learning. Psychological Review, 111:1072-1099, 2004.
[8] J. R. Anderson. The adaptive character of thought. Erlbaum, Hillsdale, NJ, 1990.
[9] J. M. Bernardo and A. F. M. Smith. Bayesian theory. Wiley, New York, 1994.
[10] C. K. I. Williams. Prediction with Gaussian processes: From linear regression to linear prediction and
beyond. In M. I. Jordan, editor, Learning in Graphical Models, pages 599-621. MIT Press, Cambridge,
MA, 1998.
[11] R. M. Neal. Priors for infinite networks. Technical Report CRG-TR-94-1, Department of Computer
Science, University of Toronto, 1994.
[12] D. J. C. MacKay. Probable networks and plausible predictions - a review of practical Bayesian methods
for supervised neural networks. Network: Computation in Neural Systems, 6:469-505, 1995.
[13] W. R. Gilks, S. Richardson, and D. J. Spiegelhalter, editors. Markov Chain Monte Carlo in Practice.
Chapman and Hall, Suffolk, UK, 1996.
Self-organization of Hebbian Synapses
in Hippocampal Neurons
Thomas H. Brown,† Zachary F. Mainen,† Anthony M. Zador,† and Brenda J. Claiborne‡
† Department of Psychology, Yale University, New Haven, CT 06511
‡ Division of Life Sciences, University of Texas, San Antonio, TX 78285
ABSTRACT
We are exploring the significance of biological complexity for neuronal
computation. Here we demonstrate that Hebbian synapses in realistically-modeled hippocampal pyramidal cells may give rise to two novel
forms of self-organization in response to structured synaptic input. First,
on the basis of the electrotonic relationships between synaptic contacts,
a cell may become tuned to a small subset of its input space. Second, the
same mechanisms may produce clusters of potentiated synapses across
the space of the dendrites. The latter type of self-organization may be
functionally significant in the presence of nonlinear dendritic conductances.
1 INTRODUCTION
Long-term potentiation (LTP) is an experimentally observed form of synaptic plasticity
that has been interpreted as an instance of a Hebbian modification (Kelso et al, 1986;
Brown et al., 1990). The induction of LTP requires synchronous presynaptic activity and postsynaptic depolarization (Kelso et al., 1986). We have previously developed a detailed biophysical model of the LTP observed at synapses onto hippocampal region CA1 pyramidal neurons (Zador et al., 1990).

Figure 1: Two-dimensional projection of a reconstructed hippocampal CA1 pyramidal cell.

The synapses at which this form of LTP occurs are distributed across an extensive dendritic arbor (Fig. 1). During synaptic stimulation, the
membrane voltage at each synapse is different. In this way, a biological neuron differs
from the processing elements typically used in neural network models, where the postsynaptic activity can be represented by a single state variable. We have developed an electrotonic model based on an anatomically reconstructed neuron. We have used this model to
explore how the spatial distribution of inputs and the temporal relationships of their activation affect synaptic potentiation.
2 THE NEURONAL MODEL
Standard compartmental modeling techniques were used to represent the electrical structure of hippocampal CA1 pyramidal cells.
2.1 MORPHOLOGY AND ELECTRICAL PARAMETERS
Morphometric data were obtained from three-dimensional reconstructions (Brown et al., 1991) of hippocampal neurons (Fig. 1). A correction factor was applied to the membrane area based on an estimate for spine density of 2/µm. The original measurements divided a single neuron into 3000-4000 cylinders with an average length of 5.5 µm. For simulation purposes, this structure was collapsed into 300-400 compartments, preserving the connectivity pattern and changes in process diameter. Electrical constants were Rm = 70 kΩ·cm², Cm = 1 µF/cm², Ri = 200 Ω·cm (Spruston & Johnston, 1990). The membrane was electrically passive. Synaptic currents were modeled as the sum of fast AMPA and slow NMDA conductances on the head of a two-compartment spine (Zador et al., 1990). The AMPA conductance was represented by an alpha function (Jack et al., 1975) with a time constant of 1.5 msec (Brown and Johnston, 1983). The NMDA conductance was represented by a more complicated function with two time constants and a voltage dependence due to voltage-sensitive channel blocking by Mg²⁺ ions (see Zador et al., 1990; Brown et al., 1991). The initial peak conductances, g_AMPA and g_NMDA, were set to 0.5 and 0.1 nS respectively.
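The two conductance time courses can be sketched as follows; the NMDA time constants and the Mg²⁺-block parameters are illustrative assumptions, since the paper does not list them here:

    import numpy as np

    def ampa_conductance(t, g_peak=0.5e-9, tau=1.5e-3):
        # Alpha function: peaks at g_peak when t = tau (1.5 msec).
        t = np.maximum(np.asarray(t, dtype=float), 0.0)
        return g_peak * (t / tau) * np.exp(1.0 - t / tau)

    def nmda_conductance(t, v_mv, g_peak=0.1e-9, tau_rise=3e-3,
                         tau_decay=80e-3, mg_mm=1.0):
        # Dual-exponential time course times a voltage-dependent Mg2+ block;
        # block constants follow the common Jahr-Stevens parameterization,
        # not values from the paper.
        t = np.maximum(np.asarray(t, dtype=float), 0.0)
        g = np.exp(-t / tau_decay) - np.exp(-t / tau_rise)
        peak = g.max()
        if peak > 0:
            g = g_peak * g / peak
        block = 1.0 / (1.0 + (mg_mm / 3.57) * np.exp(-0.062 * v_mv))
        return g * block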
2.2 SIMULATION AND SYNAPTIC MODIFICATION
Simulations were run on a Sun 4/330 workstation using a customized version of NEURON, a simulator developed by Michael Hines (Hines, 1989). Prior to a simulation, 5 patterns of 40 synapses were selected at random from a pool of synapses distributed uniformly over the apical and basal dendrites. Simulations were divided into trials of 100 msec. At the beginning of each trial a particular pattern of synapses was activated synchronously (3 stimuli at intervals of 3 msec). The sequential presentation of all 5 selected patterns constituted an epoch. An entire simulation consisted of 20 presentation epochs. Over the course of each trial, membrane potential was computed at each location in the dendritic tree, and these voltages were used to compute weight changes Δw_ij according to the Hebbian algorithm described below. After each trial, the actual peak AMPA conductances (g_AMPA, hereafter denoted g_syn) were scaled by the sigmoidal function

    g_syn ← g_max / (1 + exp(−g_syn/σ))   (1)

where σ determines the steepness of the sigmoid and g_max was set to 1.0 nS.
The rule for synaptic modification was based on a biophysical interpretation (Kairiss et al., 1991; Brown et al., 1991) of a generalized bilinear form of Hebbian algorithm (Brown et al., 1990):

    dw_ij/dt = α(a_i(t), a_j(t)) − β(a_i(t)) − γ(a_j(t)) − δ   (2)

where α, β, and γ are functionals, δ is a constant, a_i(t) represents postsynaptic activity and a_j(t) represents presynaptic activity. This equation specifies an interactive form of synaptic enhancement combined with three noninteractive forms of synaptic depression, all of which have possible neurobiological analogs (Brown et al., 1990). The interactive term was derived from a biophysical model of LTP induction in a spine (Zador et al., 1990). A simplified version of this model was used to compute the concentration of Ca²⁺-bound calmodulin, [CaM-Ca₄]. It has been suggested that CaM-Ca₄ may trigger protein kinases responsible for LTP induction. In general [CaM-Ca₄] was a nonlinear function of subsynaptic voltage (Zador et al., 1990).

The biophysical mechanisms underlying synaptic depression are less well understood. The constant δ represents a passive decay process and was generally set to zero. The functional β represents heterosynaptic depression based on postsynaptic activity. In these simulations, β was proportional to the amount of depolarization of the subsynaptic membrane from resting potential (V_syn − V_rest). The functional γ represents homosynaptic depression based on presynaptic activity. Here, γ was proportional to the AMPA conductance, which can be considered a measure of exclusively presynaptic activity because it is insensitive to postsynaptic voltage. The three activity-dependent terms were integrated over the period of the trial in order to obtain a measure of weight change. Reinterpreting α, β, and γ as constants, the equation is thus:

    Δw_ij = ∫_trial ( α [CaM-Ca₄] − β (V_syn − V_rest) − γ g_AMPA − δ ) dt.   (3)
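A discretized version of this update, combined with the sigmoidal rescaling of eq. (1), could be sketched as follows (variable names, the discretization, and the numeric coefficients are ours, not the simulation's actual values):

    import numpy as np

    def weight_change(cam_ca4, v_syn, v_rest, g_ampa, dt,
                      alpha=1.0, beta=0.1, gamma=0.1, delta=0.0):
        # Eq. (3): integrate the interactive potentiation term minus the
        # three depression terms over one 100-msec trial. All array
        # arguments are time series sampled at interval dt.
        integrand = (alpha * cam_ca4
                     - beta * (v_syn - v_rest)
                     - gamma * g_ampa
                     - delta)
        return float(np.sum(integrand) * dt)

    def rescale_conductance(g_syn, dw, g_max=1.0, sigma=0.1):
        # Sigmoidal squashing of the peak AMPA conductance after a trial,
        # in the spirit of eq. (1); the exact argument of the sigmoid in
        # the original model is not recoverable from the text.
        return g_max / (1.0 + np.exp(-(g_syn + dw) / sigma))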
Figure 2: Interactions among Hebbian synapses produce differing global effects ("winning" and
"losing" patterns) on the basis of the spatial distribution of synapses. The PSP (always measured
at the soma) due to two different patterns of 40 synapses is plotted as a function of the presentation epoch. Initially, pattern 1 (solid line) evoked a slightly greater PSP than pattern 2 (dotted line; inset, top right). After 20 epochs these responses were reversed: the PSP due to pattern 1 was depressed while the PSP due to pattern 2 was potentiated (inset, top left).
3 RESULTS
Analysis of the simulations revealed self-organization in the form of differential modification of synaptic strengths (Mainen et al. 1990). Two aspects of the self-organization phenomena were distinguished. In some simulations, a form of pattern selection was observed
in which clear "winners" and "losers" emerged. In other simulations, the average synaptic
efficacy remained about the same, but spatial heterogeneities (clustering) of synaptic strength developed. Different measures were used to assess these phenomena.
3.1 PATTERN SELECTION
The change in the peak postsynaptic potential recorded at the soma (PSP) provided one useful measure of pattern selection. In many simulations, pattern selection resulted in a marked potentiation of the PSP due to some patterns and a depression of the PSP due to others. The PSP can be regarded as an indirect measure of the functional consequence of self-organization. In the simulation illustrated in Fig. 2, patterns of 40 synapses produced an average PSP of 15 mV before learning. After learning, responses ranged from 10% to 150% of this amount. Underlying pattern selection was a change in the average peak synaptic conductance for the pattern (g_syn).¹ The initial value of g_syn was the same for all patterns, and its final value was bounded by eq. 1. In many simulations, g_syn approached the upper bound for some patterns and the lower bound for other patterns (Fig. 3). In this way, the neuron became selectively tuned to a subset of its original set of inputs.
Figure 3. The mean synaptic conductance g_syn of two patterns is plotted as a function of the presentation epoch. Both patterns began with identical total synaptic strength (40 synapses with g_syn = 0.5 nS). Synaptic conductances were constrained to the range [0.0, 1.0] nS. After twenty epochs, g_syn of pattern 1 (solid line) approached the minimum of 0.0 nS while g_syn of pattern 2 (dotted line) approached the maximum of 1.0 nS.
The specificity of this tuning was dependent on the parameter values of the neuronal model, learning rule, and stimulus set.
3.2 CLUSTER FORMATION
Heterogeneity in the spatial distribution of strengthened and weakened synapses was often
observed. After learning, spatial clusters of synapses with similar conductances formed.
These spatial heterogeneities can be illustrated in several ways. In one convenient method
(see Brown et al., 1991), synapses are represented as colored points superimposed on a rendition of the neuronal morphology as illustrated in Fig. 1. By color-coding g_syn for each synapse in a pattern, correlations in synaptic strength across dendritic space are immediately apparent. In a second method, better suited to the monochrome graphics available in the present text, the evolution of the variance of g_syn is plotted as a function of time (Fig. 4).
In the simulation illustrated here, the increase in variance was due to the formation of a single, relatively large cluster of strengthened synapses. Within other parameter regimes, multiple clusters of smaller size were formed.
4 DISCUSSION
The important differences between synaptic modifications in the biophysically-modeled
neuron and those in simple processing elements arise from voltage gradients present in the
realistic model (Brown et aI., 1991; Kairiss et al., 1990). In standard processing elements,
¹ Although g_syn and the somatic PSP were generally correlated, the relationship between the two is not linear, as was often evident in simulations (compare initial trials in Figs. 2 and 3).
Figure 4: Synaptic heterogeneity is indicated by increases in the variance (σ²) of the set of synaptic conductances for each pattern. The variances of the peak synaptic conductances (g_syn) of 4 patterns are plotted as a function of the epoch. The variance of all 4 patterns approached the theoretical maximum of 0.5. In this parameter regime, the variance was due to the potentiation of a single large cluster of synapses combined with the depression of other synapses.
a single state variable represents postsynaptic activity. In contrast, the critical subsynaptic
voltages which represent postsynaptic activity in the neuron are correlated but are not strictly equal. The structure and electrical properties of the cell interact with its synaptic input
to detennine the precise spatiotemporal pattern of membrane voltage. Thus, the voltage at
any synapse depends strongly on its electrotonic relationships to other active synapses. The
way in which this local depolarization affects the nature of self-organization depends on the
specific mechanisms of the synaptic modification rule. We have modeled a pair of opposing voltage-dependent mechanisms. An interactive potentiation mechanism (the functional α) promotes cooperativity between spatially proximal synapses with temporally correlated activity. A heterosynaptic depression mechanism (the functional β), which is independent of presynaptic activity, promotes competition among spatially proximal synapses.
Through mechanisms such as these, the specific electrotonic structure of a neuron predetermines a complex set of interactions between any given spatial distribution of synaptic
inputs. We have shown that these higher-order interactions can give rise to self-organization with at least two interesting effects.
4.1 SPARSE REPRESENTATION
The phenomenon of pattern selection demonstrates how Hebbian self-organization may
naturally tune neurons to respond to a subset of their input space. This tuning mechanism
might allow a large field of neurons to develop a sparse coding of the activity in a set of
input fibers, since each neuron would respond to a particular small portion of the input
space. Sparse coding may be advantageous to associative learning and other types of neural
computation (Kanerva, 1988).
4.2 CLUSTERING AND NONLINEAR COMPUTATION
The formation of clusters of strengthened synapses illustrates a property of Hebbian self-organization whose functional significance might only be appreciated in the presence of
nonlinear (voltage-dependent) dendritic conductances. We have examined the self-organization process in an electrically passive neuron. Under these conditions, the presence of
clustering within patterns has little effect on the observed output. In fact, it is known that
hippocampal cells of the type modeled possess a variety of spatially heterogeneous nonlinear dendritic conductances (Jones et al., 1989). The computational role of such nonlinearities is just beginning to be explored. It is possible that interactions between synaptic
clustering and nonlinear membrane patches may significantly affect both the performance
of dendritic computations and the process of self-organization itself.
Acknowledgments
This research was supported by grants from the Office of Naval Research, the Defense Advanced Research Projects Agency, and the Air Force Office of Scientific Research.
References
Brown, T.H. and Johnston, D. (1983) Voltage-clamp analysis of mossy fiber synaptic input to hippocampal neurons. J. Neurophysiol. 50: 487-507.
Brown, T.H., Kairiss, E.W. and Keenan, C.L. (1990) Hebbian synapses: biophysical mechanisms and algorithms. Annu. Rev. Neurosci. 13: 475-512.
Brown, T.H., Zador, A.M., Mainen, Z.F. and Claiborne, B.J. (1991) Hebbian modifications in hippocampal neurons. In J. Davis and M. Baudry (eds.), LTP: A Debate of Current Issues (Cambridge, MA: MIT Press).
Hines, M. (1989) A program for simulation of nerve equations with branching geometries. Int. J. Bio-Med. Comput. 24: 55-68.
Jack, J., Noble, D. and Tsien, R.W. (1975) Electrical Current Flow in Excitable Membranes (London: Oxford Univ. Press).
Jones, O.T., Kunze, D.L. and Angelides, K.J. (1989) Localization and mobility of ω-conotoxin-sensitive Ca²⁺ channels in hippocampal CA1 neurons. Science 244: 1189-1193.
Kairiss, E.W., Mainen, Z.F., Claiborne, B.J. and Brown, T.H. (1991) Dendritic control of Hebbian computations. In F. Eeckman (ed.), Analysis and Modeling of Neural Systems (Boston, MA: Kluwer Academic Publishers).
Kanerva, P. (1988) Sparse Distributed Memory (Cambridge, MA: MIT Press).
Kelso, S.R., Ganong, A.H. and Brown, T.H. (1986) Hebbian synapses in hippocampus. Proc. Natl. Acad. Sci. USA 83: 5326-5330.
Mainen, Z.F., Zador, A.M., Claiborne, B. and Brown, T.H. (1990) Hebbian synapses induce feature mosaics in hippocampal dendrites. Soc. Neurosci. Abstr. 16: 492.
Spruston, N. and Johnston, D. (1990) Whole-cell patch clamp analysis of the passive membrane properties of hippocampal neurons. Soc. Neurosci. Abstr. 16: 1297.
Zador, A., Koch, C. and Brown, T.H. (1990) Biophysical model of a Hebbian synapse. Proc. Natl. Acad. Sci. USA 87: 6718-6722.
run:1 heterosynaptic:2 respond:2 ca2:1 patch:2 bound:3 yale:1 g:1 activity:13 strength:4 ri:1 homosynaptic:1 aspect:1 c8:1 relatively:1 structured:1 department:1 according:1 gfm:1 electrically:2 membrane:9 across:3 psp:8 em:1 postsynaptic:8 slightly:1 smaller:1 rev:1 modification:7 anatomically:1 kanerva:2 equation:3 previously:1 mechanism:9 ge:1 available:1 detennine:1 distinguished:1 thomas:1 original:2 top:2 clustering:3 contact:1 occurs:1 concentration:1 dependence:1 gradient:1 reversed:1 sci:2 presynaptic:5 induction:3 length:1 modeled:5 relationship:4 debate:1 ofw:1 rise:2 kinase:1 twenty:1 potentiated:2 upper:1 neuron:22 heterogeneity:4 head:1 precise:1 synchronously:1 somatic:1 pair:1 extensive:1 suggested:1 below:1 pattern:30 regime:2 program:1 memory:1 rendition:1 critical:1 force:1 customized:1 advanced:1 temporally:1 excitable:1 brenda:1 kj:1 text:1 prior:1 epoch:10 keenan:1 interesting:1 proportional:2 morphometric:1 course:1 supported:1 appreciated:1 allow:1 sparse:4 distributed:3 zachary:1 san:1 simplified:1 ganong:1 functionals:1 reconstructed:2 alpha:1 sj:1 claiborne:7 neurobiological:1 ons:1 global:1 mter:2 active:1 llm:1 channel:2 nature:1 ca:1 correlated:3 dendrite:3 interact:1 complex:1 ampa:4 anthony:1 sp:1 significance:2 constituted:1 neurosci:2 whole:1 arise:1 neuronal:4 fig:7 strengthened:3 slow:1 n:5 msec:3 winning:1 annu:1 remained:1 specific:2 inset:2 explored:1 decay:1 sequential:1 illustrates:1 boston:1 suited:1 tsien:1 explore:1 hines:3 ma:3 marked:1 presentation:4 experimentally:1 change:4 total:1 arbor:1 selectively:1 latter:1 phenomenon:3 ex:1 |
2,791 | 3,530 | The Conjoint Effect of Divisive Normalization and
Orientation Selectivity on Redundancy Reduction in
Natural Images
Matthias Bethge
MPI for Biological Cybernetics
72076 T?ubingen, Germany
[email protected]
Fabian Sinz
MPI for Biological Cybernetics
72076 T?ubingen, Germany
[email protected]
Abstract
Bandpass filtering, orientation selectivity, and contrast gain control are prominent
features of sensory coding at the level of V1 simple cells. While the effect of
bandpass filtering and orientation selectivity can be assessed within a linear model,
contrast gain control is an inherently nonlinear computation. Here we employ the
class of Lp elliptically contoured distributions to investigate the extent to which
the two features?orientation selectivity and contrast gain control?are suited to
model the statistics of natural images. Within this framework we find that contrast
gain control can play a significant role for the removal of redundancies in natural
images. Orientation selectivity, in contrast, has only a very limited potential for
redundancy reduction.
1 Introduction
It is a long standing hypothesis that sensory systems are adapted to the statistics of their inputs.
These natural signals are by no means random, but exhibit plenty of regularities. Motivated by
information theoretic principles, Attneave and Barlow suggested that one important purpose of this
adaptation in sensory coding is to model and reduce the redundancies [4; 3] by transforming the
signal into a statistically independent representation.
The problem of redundancy reduction can be split into two parts: (i) finding a good statistical model
of the natural signals and (ii) a way to map them into a factorial representation. The first part
is relevant not only to the study of biological systems, but also to technical applications such as
compression and denoising. The second part offers a way to link neural response properties to
computational principles, since neural representations of natural signals must be advantageous in
terms of redundancy reduction if the hypothesis were true. Both aspects have been extensively
studied for natural images [2; 5; 8; 19; 20; 21; 24]. In particular, it has been shown that applying
Independent Component Analysis (ICA) to natural images consistently and robustly yields filters
that are localized, oriented and show bandpass characteristics [19; 5]. Since those features are also
ascribed to the receptive fields of neurons in the primary visual cortex (V1), it has been suggested
that the receptive fields of V1 neurons are shaped to form a minimally redundant representation of
natural images [5; 19].
From a redundancy reduction point of view, ICA offers a small but significant advantage over other
linear representations [6]. In terms of density estimation, however, it is a poor model for natural
images since already a simple non-factorial spherically symmetric model yields a much better fit to
the data [10].
Recently, Lyu and Simoncelli proposed a method that converts any spherically symmetric distribution into a (factorial) Gaussian (or Normal distribution) by using a non-linear transformation of the
norm of the image patches [17]. This yields a non-linear redundancy reduction mechanism, which
exploits the superiority of the spherically symmetric model over ICA. Interestingly, the non-linearity
of this Radial Gaussianization method closely resembles another feature of the early visual system,
known as contrast gain control [13] or divisive normalization [20]. However, since spherically symmetric models are invariant under orthogonal transformations, they are agnostic to the particular
choice of basis in the whitened space. Thus, there is no role for the shape of the filters in this model.
Combining the observations from the two models of natural images, we can draw two conclusions:
On the one hand, ICA is not a good model for natural images, because a simple spherically symmetric model yields a much better fit [10]. On the other hand, the spherically symmetric model in
Radial Gaussianization cannot capture that ICA filters do yield a higher redundancy reduction than
other linear transformations. This leaves us with the question of whether we can understand the emergence of oriented filters in a more general redundancy reduction framework, which also includes a
mechanism for contrast gain control.
In this work we address this question by using the more general class of Lp -spherically symmetric
models [23; 12; 15]. These models are quite similar to spherically symmetric models, but do depend
on the particular shape of the linear filters. Just like spherically symmetric models can be nonlinearly transformed into isotropic Gaussians, Lp -spherically symmetric models can be mapped into
a unique class of factorial distributions, called p-generalized Normal distributions [11]. Thus, we
are able to quantify the influence of orientation selective filters and contrast gain control on the
redundancy reduction of natural images in a joint model.
2 Models and Methods
2.1 Decorrelation and Filters
All probabilistic models in this paper are defined on whitened natural images. Let C be the covariance matrix of the pixel intensities for an ensemble x_1, ..., x_m of image patches; then C^{-1/2} constitutes the symmetric whitening transform. Note that all vectors y = V C^{-1/2} x, with V being an orthogonal matrix, have unit covariance. The rows of V C^{-1/2} yield the linear filters that are applied to the raw image patches before feeding them into the probabilistic models described below. Since any decorrelation transform can be written as V C^{-1/2}, the choice of V determines the shape of the linear filters. In our experiments, we use three different kinds of V:

SYM The simplest choice is V_SYM = I, i.e. y = C^{-1/2} x contains the coefficients in the symmetric
whitening basis. From a biological perspective, this case is interesting as the filters resemble receptive fields of retinal ganglion cells with center-surround properties.
ICA The filters V_ICA of ICA are determined by maximizing the non-Gaussianity of the marginal distributions. For natural image patches, ICA is known to yield orientation selective filters in resemblance to V1 simple cells. While other orientation selective bases are possible, the filters defined by V_ICA correspond to the optimal choice for redundancy reduction under the restriction to linear models.
HAD The coefficients in the basis V_HAD = (1/√m) H V_ICA, with H denoting an arbitrary Hadamard matrix, correspond to a sum over the different ICA coefficients, each possibly having a flipped sign. Hadamard matrices are defined by the two properties H_ij = ±1 and H H^T = m I. This case can
be seen as the opposite extreme to the case of ICA. Instead of running an independent search for the
most Gaussian marginals, the central limit theorem is used to produce the most Gaussian components by using the Hadamard transformation to mix all ICA coefficients with equal weight resorting
to the independence assumption underlying ICA.
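As an illustration, the three bases could be built with off-the-shelf routines as sketched below; the paper's ICA was fitted by gradient descent on the orthogonal group, and the Hadamard step requires the dimensionality m to be a power of two, so this is only an approximation:

    import numpy as np
    from scipy.linalg import fractional_matrix_power, hadamard
    from sklearn.decomposition import FastICA

    def decorrelation_bases(X):
        # X: (n_samples, m) image patches with the DC component removed.
        C = np.cov(X, rowvar=False)
        W = fractional_matrix_power(C, -0.5).real    # symmetric whitening C^{-1/2}
        Y = X @ W.T                                  # whitened coefficients
        m = Y.shape[1]
        V_sym = np.eye(m)
        # FastICA on whitened data; its unmixing matrix is close to orthogonal.
        V_ica = FastICA(n_components=m, whiten=False).fit(Y).components_
        # Hadamard mixing of the ICA coefficients with equal weights.
        V_had = (hadamard(m) / np.sqrt(m)) @ V_ica
        return V_sym @ W, V_ica @ W, V_had @ W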
2.2 Lp-spherically Symmetric Distributions
The contour lines of spherically symmetric distributions have constant Euclidean norm. Similarly, the contour lines of Lp-spherically symmetric distributions have constant p-norm¹ ||y||_p := ( Σ_{i=1}^{n} |y_i|^p )^{1/p}. The set of vectors with constant p-norm S^{n-1}_p(r) := {y ∈ R^n : ||y||_p = r}, p > 0, r > 0, is called the p-sphere of radius r. Different examples of p-spheres are shown along the coordinate axes of Figure 1. For p ≠ 2 the distribution is not invariant under arbitrary orthogonal transformations, which means that the choice of the basis V can make a difference in the likelihood of the data.

¹ Note that ||y||_p is only a norm in the strict sense if p ≥ 1. However, since the following considerations also hold for 0 < p < 1, we will employ the term "p-norm" and the notation "||y||_p" for notational convenience.
Figure 1: The spherically symmetric distributions are a subset of the Lp-spherically symmetric distributions. The right shapes indicate the iso-density lines for the different distributions. The Gaussian is the only L2-spherically symmetric distribution with independent marginals. Like the Gaussian distribution, all p-generalized Normal distributions have independent marginals. ICA, SYM, ... denote the models used in the experiments below.
A multivariate random variable Y is called Lp-spherically symmetric distributed if it can be written as a product Y = R·U, where U is uniformly distributed on S^{n-1}_p(1) and R is a univariate non-negative random variable with an arbitrary distribution [23; 12]. Intuitively, R corresponds to the radial component, i.e. the length ||y||_p measured with the p-norm. U describes the directional components in a polar-like coordinate system (see Extra Material). It can be shown that this definition is equivalent to the density ρ(y) of Y having the form ρ(y) = f(||y||_p^p) [12]. This immediately
suggests two ways of constructing an Lp-spherically symmetric distribution. Most obviously, one can specify a density ρ(y) that has the form ρ(y) = f(||y||_p^p). An example is the p-generalized Normal distribution (gN) [11]

    ρ(y) = [ p^n / ( 2^n Γ^n(1/p) (2σ²)^{n/p} ) ] · exp( − Σ_{i=1}^{n} |y_i|^p / (2σ²) ) = f(||y||_p^p).   (1)
Analogous to the Gaussian being the only factorial spherically symmetric distribution [1], this distribution is the only Lp -spherically symmetric distribution with independent marginals [22]. For the
p-generalized Normal, the marginals are members of the exponential power family.
In our experiments, we will use the p-generalized Normal to model linear marginal independence by
fitting it to the coefficients of the various bases in whitened space. Since this distribution is sensitive
to the particular filter shapes for p 6= 2, we can assess how well the distribution of the linearly
transformed image patches is matched by a factorial model.
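Sampling from the p-generalized Normal reduces to drawing i.i.d. exponential power marginals; a minimal sketch (the gamma-based sampler is a standard construction, parameter names are ours):

    import numpy as np

    def sample_p_generalized_normal(n, p, sigma2=1.0, size=1000, seed=None):
        # Density proportional to exp(-sum_i |y_i|^p / (2*sigma2)); then
        # |Y_i|^p / (2*sigma2) ~ Gamma(1/p, 1), with independent random signs.
        rng = np.random.default_rng(seed)
        g = rng.gamma(shape=1.0 / p, scale=1.0, size=(size, n))
        magnitudes = (2.0 * sigma2 * g) ** (1.0 / p)
        signs = rng.choice([-1.0, 1.0], size=(size, n))
        return signs * magnitudes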
An alternative way of constructing an Lp-spherically symmetric distribution is to specify the radial distribution ρ_r. One example, which will be used later, is obtained by choosing a mixture of Log-Normal distributions (RMixLogN). In Cartesian coordinates, this yields the density

    ρ(y) = [ p^{n−1} Γ(n/p) / ( 2^n Γ^n(1/p) ) ] · Σ_{k=1}^{K} [ η_k / ( ||y||_p^n σ_k √(2π) ) ] · exp( − (log ||y||_p − μ_k)² / (2σ_k²) ).   (2)
An immediate consequence of any Lp -spherically symmetric distribution being specified by its radial density is the possibility to change between any two of those distributions by transforming the
radial component with (F_2^{-1} ∘ F_1)(||y||_p), where F_1 and F_2 are cumulative distribution functions
(cdf) of the source and the target density, respectively. In particular, for a fixed p, any Lp -spherically
symmetric distribution can be transformed into a factorial one by the transform
    z = g(y) · y = [ (F_2^{-1} ∘ F_1)(||y||_p) / ||y||_p ] · y.
This transform closely resembles contrast gain control models for primary visual cortex [13; 20], which use a different gain function having the form ĝ(y) = 1/(c + r) with r = ||y||_2² [17].
We will use the distribution of equation (2) to describe the joint model consisting of a linear filtering step followed by a contrast gain control mechanism. Once the linear filter responses in whitened space are fitted with this distribution, we non-linearly transform it into the factorial p-generalized Normal by the transformation g(y) · y = [ (F_gN^{-1} ∘ F_RMixLogN)(||y||_p) / ||y||_p ] · y.
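A sketch of this normalization step, using an empirical source CDF in place of the fitted F_RMixLogN (a simplification of the paper's parametric fit; gammaincinv yields the inverse radial CDF of the p-generalized Normal, since R^p/(2σ²) ~ Gamma(n/p, 1) under that model):

    import numpy as np
    from scipy.special import gammaincinv

    def radial_factorization(Y, p, sigma2=1.0):
        # Map Lp-spherically symmetric data to the factorial p-generalized
        # Normal by quantile-matching the radii.
        r = np.sum(np.abs(Y) ** p, axis=1) ** (1.0 / p)
        n = Y.shape[1]
        # Empirical CDF values of the observed radii (kept away from 0 and 1).
        q = (np.argsort(np.argsort(r)) + 0.5) / len(r)
        r_target = (2.0 * sigma2 * gammaincinv(n / p, q)) ** (1.0 / p)
        return Y * (r_target / r)[:, None]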
Finally, note that because an Lp-spherically symmetric distribution is specified by its univariate radial
distribution, fitting it to data boils down to estimating the univariate density for R, which can be done
efficiently and robustly.
3 Experiments and Results
3.1 Dataset
We use the dataset from the Bristol Hyperspectral Images Database [7], which was already used in
previous studies [25; 16]. All images had a resolution of 256×256 pixels and were converted to gray level by averaging over the channels. From each image circa 5000 patches of size 15×15 pixels were
drawn at random locations for training (circa 40000 patches in total) as well as circa 6250 patches
per image for testing (circa 50000 patches in total). In total, we sampled ten pairs of training and
test sets in that way. All results below are averaged over those. Before computing the linear filters,
the DC component was projected out with an orthogonal transformation using a QR decomposition.
Afterwards, the data was rescaled in order to make whitening a volume conserving transformation
(a transformation with determinant one) since those transformations leave the entropy unchanged.
3.2 Evaluation Measure
In all our experiments, we used the Average Log Loss (ALL) to assess the quality of the fit and the redundancy reduction achieved. The ALL = (1/n) E_ρ[−log₂ ρ̂(y)] ≈ (1/(mn)) Σ_{k=1}^{m} [ −log₂ ρ̂(y_k) ] is
the negative mean log-likelihood of the model distribution under the true distribution. If the model
distribution matches the true one, the ALL equals the entropy. Otherwise, the difference between
the ALL and the entropy of the true distribution is exactly the Kullback-Leibler divergence between
the two. The difference between the ALLs of two models equals the reduction in multi-information
(see Extra Material) and can therefore be used to quantify the amount of redundancy reduction.
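In code, the ALL and the resulting redundancy reduction amount to the following (log2_density stands for any fitted model's log₂-density; names are ours):

    import numpy as np

    def average_log_loss(log2_density, Y):
        # ALL in bits per component: negative mean log2-likelihood over
        # samples, divided by the dimensionality n.
        return -np.mean(log2_density(Y)) / Y.shape[1]

    def multi_information_reduction(log2_density_a, log2_density_b, Y):
        # Difference of ALLs = reduction in multi-information (bits/component).
        return (average_log_loss(log2_density_a, Y)
                - average_log_loss(log2_density_b, Y))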
3.3 Experiments
We fitted the Lp -spherically symmetric distributions from equations (1) and (2) to the image patches
in the bases HAD, SYM, and ICA by a maximum likelihood fit on the radial component. For the
mixture of Log-Normal distributions, we used EM for a mixture of Gaussians on the logarithm of
the p-norm of the image patches.
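Since the radial fit is one-dimensional, it can be sketched with a standard EM implementation (sklearn stands in for EM here; the Gaussian parameters on log ||y||_p are exactly the Log-Normal parameters (μ_k, σ_k²) with weights η_k):

    import numpy as np
    from sklearn.mixture import GaussianMixture

    def fit_radial_mixture_lognormal(Y, p, n_components=5):
        # EM for a Gaussian mixture on the logarithm of the p-norm.
        log_r = np.log(np.sum(np.abs(Y) ** p, axis=1)) / p
        gmm = GaussianMixture(n_components=n_components).fit(log_r[:, None])
        return gmm   # means_, covariances_, weights_ -> (mu_k, sigma_k^2, eta_k)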
For each model, we computed the maximum likelihood estimate of the model parameters and determined the best value for p according to the ALL in bits per component on a training set. The final
ALL was computed on a separate test set.
For ICA, we performed a gradient descent over the orthogonal group on the log-likelihood of a
product of independent exponential power distributions, where we used the result of the FastICA
algorithm by Hyvärinen et al. as initial starting point [14]. All transforms were computed separately
for each training set.
Figure 2: ALL in bits per component as a function of p. The linewidth corresponds to the standard
deviation over ten pairs of training and test sets. Left: ALL for the bases HAD, SYM and ICA under
the p-generalized Normal (HAD, SYM, ICA) and the factorial Lp -spherically symmetric model with
the radial component modeled by a mixture of Log-Normal distributions (cHAD, cSYM, cICA).
Right: Bar plot for the different ALL indicated by horizontal lines in the left plot.
In order to compare the redundancy reduction of the different transforms with respect to the pixel
basis (PIX), we computed a non-parametric estimate of the marginal entropies of the patches before
the DC component was projected out [6]. Since the estimation is not bound to a particular parametric
model, we used the mean of the marginal entropies as an estimate of the average log-loss in the pixel
representation.
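A minimal binned estimator illustrates the idea (the estimator actually used follows [6] and is more careful):

    import numpy as np

    def marginal_entropy_bits(x, n_bins=200):
        # Binned differential entropy of one pixel marginal, in bits:
        # discrete entropy of the histogram plus log2 of the bin width.
        counts, edges = np.histogram(x, bins=n_bins)
        p = counts / counts.sum()
        p = p[p > 0]
        return -np.sum(p * np.log2(p)) + np.log2(edges[1] - edges[0])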
3.4 Results
Figure 2 and Table 1 show the ALL for the bases HAD, SYM, and ICA as a function of p. The
upper curve bundle represents the factorial p-generalized Normal model, the lower bundle the non-factorial model with the radial component modeled by a mixture of Log-Normal distributions with
five mixtures. The ALL for the factorial models always exceeds the ALL for the non-factorial
models. At p = 2, all curves intersect, because all models are invariant under a change of basis for
that value. Note that the smaller ALL of the non-factorial model cannot be attributed to the mixture
of Log-Normal distributions having more degrees of freedom. As mentioned in the introduction, the
p-generalized Normal is the only factorial Lp -spherically symmetric distribution [22]. Therefore,
marginal independence is such a rigid assumption that the output scale is the only degree of freedom
left.
From the left plot in Figure 2, we can assess the influence of the different filter shapes and contrast
gain control on the redundancy reduction of natural images. We used the best ALL of the HAD
basis under the p-generalized Normal as a baseline for a whitening transformation without contrast
gain control (HAD). Analogously, we used the best ALL of the HAD basis under the non-factorial
model as a baseline for a pure contrast gain control model (cHAD). We compared these values
to the best ALL obtained by using the SYM and the ICA basis under both models. Because the
filters of SYM and ICA resemble receptive field properties of retinal ganglion cells and V1 simple
cells, respectively, we can assess their possible influence on the redundancy reduction with and
without contrast gain control. The factorial model corresponds to the case without contrast gain
control (SYM and ICA). Since we have shown that the non-factorial model can be transformed into
a factorial one by a p-norm based divisive normalization operation, these scores correspond to the
cases with contrast gain control (cSYM and cICA). The different cases are depicted by the horizontal
lines in Figure 2.
As already reported in other works, plain orientation selectivity adds only very little to the redundancy reduction achieved by decorrelation and is less effective than the baseline contrast gain control model [10; 6; 17]. If both orientation selectivity and contrast gain control are combined (cICA)
it is possible to achieve about 9% extra redundancy reduction in addition to baseline whitening
             Absolute Difference [Bits/Comp.]   Relative Difference [% wrt. cICA]
HAD - PIX          -3.2947 ± 0.0018                  91.0016 ± 0.0832
SYM - PIX          -3.3638 ± 0.0022                  92.9087 ± 0.0782
ICA - PIX          -3.4110 ± 0.0024                  94.2135 ± 0.0747
cHAD - PIX         -3.5692 ± 0.0045                  98.5839 ± 0.0134
cSYM - PIX         -3.5945 ± 0.0047                  99.2815 ± 0.0098
cICA - PIX         -3.6205 ± 0.0049                 100.0000 ± 0.0000
Table 1: Difference in ALL for gray value images with standard deviation over ten training and test
set pairs. The column on the left displays the absolute difference to the PIX representation. The
column on the right shows the relative difference with respect to the largest reduction achieved by
ICA with the non-factorial model.
Figure 3: The curve in the upper right corner depicts the transformation ||z||_p = (F_gN^{-1} ∘ F_RMixLogN)(||y||_p) of the radial component in the ICA basis for gray scale images. The resulting radial distribution over ||z||_p corresponds to the radial distribution of the p-generalized Normal. The inset shows the gain function g(||y||_p) = (F_gN^{-1} ∘ F_RMixLogN)(||y||_p) / ||y||_p in log-log coordinates. The scale parameter of the p-generalized normal was chosen such that the marginal had unit variance.
(HAD). By setting the other models in relation to the best joint model (cICA:= 100%), we are able
to tell apart the relative contributions of bandpass filtering (HAD= 91%), particular filter shapes
(SYM= 93%, ICA= 94%), contrast gain control (cHAD= 98.6%) as well as combined models
(cSYM= 99%, cICA := 100%) to redundancy reduction (see Table 1). Thus, orientation selectivity
(ICA) contributes less to the overall redundancy reduction than any model with contrast gain control
(cHAD, cSYM, cICA). Additionally, the relative difference between the joint model (cICA) and
plain contrast gain control (cHAD) is only about 1.4%. For cSYM it is even less, about 0.7%. The
difference in redundancy reduction between center-surround filters and orientation selective filters
becomes even smaller in combination with contrast gain control (1.3% for ICA vs. SYM, 0.7% for
cICA vs. cSYM). However, it is still significant (t-test, p = 5.5217 × 10⁻⁹).
When examining the gain functions g(||y||_p) = (F_gN^{-1} ∘ F_RMixLogN)(||y||_p) / ||y||_p resulting from the transformation of the radial components, we find that they approximately exhibit the form g(||y||_p) = c / ||y||_p^κ. The inset in Figure 3 shows the gain control function g(||y||_p) in a log-log plot. While standard contrast gain control models assume p = 2 and κ = 2, we find κ between 0.90 and 0.93 to be optimal for redundancy reduction. p depends on the shape of the linear filters and ranges from approximately 1.2 to 2. In addition, existing contrast gain models assume the form g(||y||_2) = 1/(ε + ||y||_2²), while we find that ε must be approximately zero.
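The power-law form can be checked by a straight-line fit in log-log coordinates (a sketch; c and κ as defined above):

    import numpy as np

    def fit_gain_power_law(r, g):
        # Fit g(r) = c / r**kappa via least squares on
        # log g = log c - kappa * log r.
        slope, intercept = np.polyfit(np.log(r), np.log(g), deg=1)
        return np.exp(intercept), -slope   # (c, kappa)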
In the results above, the ICA filters always achieve the lowest ALL under both p-spherically symmetric models. For examining whether these filters really represent the best choice, we also optimized the filter shapes under the model of equation (2) via maximum likelihood estimation on the
orthogonal group in whitened space [9; 18]. Figure 4 shows the filter shapes for ICA and the ones
obtained from the optimization, where we used either the ICA solution or a random orthogonal matrix as starting point. Qualitatively, the filters look exactly the same. The ALL also changed just
Figure 4: Filters optimized for ICA (left) and for the p-spherically symmetric model with radial mixture of Log-Normal distributions starting from the ICA solution (middle) and from a random basis (right). The first filter corresponds to the DC component, the others to the filter shapes under the respective model. Qualitatively the filter shapes are very similar. The ALL for the ICA basis under the mixture of Log-Normal model is 1.6748 ± 0.0058 bits/component (left), the ALL with the optimized filters is 1.6716 ± 0.0056 (middle) and 1.6841 ± 0.0068 (right).
marginally from 1.6748 ± 0.0058 to 1.6716 ± 0.0056 or 1.6841 ± 0.0068, respectively. Thus, the
ICA filters are a stable and optimal solution under the model with contrast gain control, too.
4 Summary
In this report, we studied the conjoint effect of contrast gain control and orientation selectivity on
redundancy reduction for natural images. In particular, we showed how the Lp-spherically symmetric distribution can be used to tune a nonlinearity of contrast gain control to remove higher-order redundancies
in natural images.
The idea of using an Lp -spherically symmetric model for natural images has already been brought
up by Hyvärinen and Köster in the context of Independent Subspace Analysis [15]. However, they
do not use the Lp -distribution for contrast gain control, but apply a global contrast gain control filter
on the images before fitting their model. They also use a less flexible Lp -distribution since their goal
is to fit an ISA model to natural images and not to carry out a quantitative comparison as we did.
In our work, we find that the gain control function turns out to follow a power law, which parallels
the classical model of contrast gain control. In addition, we find that edge filters also emerge in the
non-linear model which includes contrast gain control. The relevance of orientation selectivity for
redundancy reduction, however, is further reduced. In the linear framework (possibly endowed with
a point-wise nonlinearity for each neuron) the contribution of orientation selectivity to redundancy
reduction has been shown to be smaller than 5% relative to whitening (i. e. bandpass filtering)
alone [6; 10]. Here, we found that the contribution of orientation selectivity is even smaller than two
percent relative to whitening plus gain control. Thus, this quantitative model comparison provides
further evidence that orientation selectivity is not critical for redundancy reduction, while contrast
gain control may play a more important role.
Acknowledgements
The authors would like to thank Reshad Hosseini, Sebastian Gerwinn and Philipp Berens for fruitful discussions. This work is supported by the German Ministry of Education, Science, Research and Technology through
the Bernstein award to MB (BMBF; FKZ: 01GQ0601), a scholarship of the German National Academic Foundation to FS, and the Max Planck Society.
References
[1] S. F. Arnold and J. Lynch. On Ali's characterization of the spherical normal distribution. Journal of the Royal Statistical Society. Series B (Methodological), 44(1):49-51, 1982.
[2] J. J. Atick. Could information theory provide an ecological theory of sensory processing? Network, 3:213-251, 1992.
[3] F. Attneave. Informational aspects of visual perception. Psychological Review, 61:183-193, 1954.
[4] H. B. Barlow. Sensory mechanisms, the reduction of redundancy, and intelligence. In The Mechanisation of Thought Processes, pages 535-539, London: Her Majesty's Stationery Office, 1959.
[5] A. J. Bell and T. J. Sejnowski. The "independent components" of natural scenes are edge filters. Vision Res., 37(23):3327-38, 1997.
[6] M. Bethge. Factorial coding of natural images: How effective are linear models in removing higher-order dependencies? J. Opt. Soc. Am. A, 23(6):1253-1268, June 2006.
[7] G. J. Brelstaff, A. Parraga, T. Troscianko, and D. Carr. Hyperspectral camera system: acquisition and analysis. In B. J. Lurie, J. J. Pearson, and E. Zilioli, editors, Proceedings of SPIE, volume 2587, pages 150-159, 1995. The database can be downloaded from: http://psy223.psy.bris.ac.uk/hyper/.
[8] G. Buchsbaum and A. Gottschalk. Trichromacy, opponent colours coding and optimum colour information transmission in the retina. Proceedings of the Royal Society of London. Series B, Biological Sciences, 220:89-113, November 1983.
[9] A. Edelman, T. A. Arias, and S. T. Smith. The geometry of algorithms with orthogonality constraints. SIAM J. Matrix Anal. Appl., 20(2):303-353, 1999.
[10] J. Eichhorn, F. Sinz, and M. Bethge. Simple cell coding of natural images in V1: How much use is orientation selectivity? (arXiv:0810.2872v1). 2008.
[11] I. R. Goodman and S. Kotz. Multivariate θ-generalized normal distributions. Journal of Multivariate Analysis, 3:204-219, 1973.
[12] A. K. Gupta and D. Song. lp-norm spherical distribution. Journal of Statistical Planning and Inference, 60:241-260, 1997.
[13] D. J. Heeger. Normalization of cell responses in cat striate cortex. Visual Neuroscience, 9:181-198, 1992.
[14] A. Hyvärinen, J. Karhunen, and E. Oja. Independent Component Analysis. John Wiley & Sons, 2001.
[15] A. Hyvärinen and U. Köster. Complex cell pooling and the statistics of natural images. Network, 18:81-100, 2007.
[16] T.-W. Lee, T. Wachtler, and T. J. Sejnowski. Color opponency is an efficient representation of spectral properties in natural scenes. Vision Res, 42(17):2095-2103, Aug 2002.
[17] S. Lyu and E. P. Simoncelli. Nonlinear extraction of "independent components" of elliptically symmetric densities using radial Gaussianization. Technical Report TR2008-911, Computer Science Technical Report, Courant Inst. of Mathematical Sciences, New York University, April 2008.
[18] J. H. Manton. Optimization algorithms exploiting unitary constraints. IEEE Transactions on Signal Processing, 50:635-650, 2002.
[19] B. A. Olshausen and D. J. Field. Emergence of simple-cell receptive field properties by learning a sparse code for natural images. Nature, 381:607-609, June 1996.
[20] O. Schwartz and E. P. Simoncelli. Natural signal statistics and sensory gain control. Nature Neuroscience, 4(8):819-825, August 2001.
[21] E. P. Simoncelli and O. Schwartz. Modeling surround suppression in V1 neurons with a statistically-derived normalization model. In M. S. Kearns, S. A. Solla, and D. A. Cohn, editors, Adv. Neural Information Processing Systems (NIPS*98), volume 11, pages 153-159, Cambridge, MA, 1999. MIT Press.
[22] F. H. Sinz, S. Gerwinn, and M. Bethge. Characterization of the p-generalized normal distribution. Journal of Multivariate Analysis, 2008.
[23] D. Song and A. K. Gupta. lp-norm uniform distribution. Proceedings of the American Mathematical Society, 125:595-601, 1997.
[24] J. H. van Hateren and A. van der Schaaf. Independent component filters of natural images compared with simple cells in primary visual cortex. Proc R Soc Lond B Biol Sci., 265(1394):1724-1726, 1998.
[25] T. Wachtler, T. W. Lee, and T. J. Sejnowski. Chromatic structure of natural scenes. Journal of the Optical Society of America. A, Optics, image science, and vision, 18:65-77, 2001. PMID: 11152005.
8
2,792 | 3,531 | Dynamic Visual Attention: Searching for coding
length increments
Xiaodi Hou (1,2) and Liqing Zhang (1)*
(1) Department of Computer Science and Engineering, Shanghai Jiao Tong University, No. 800 Dongchuan Road, 200240, China
(2) Department of Computation and Neural Systems, California Institute of Technology, MC 136-93, Pasadena, CA, 91125, USA
[email protected], [email protected]
Abstract
A visual attention system should respond placidly when common stimuli are presented, while at the same time keep alert to anomalous visual inputs. In this paper,
a dynamic visual attention model based on the rarity of features is proposed. We
introduce the Incremental Coding Length (ICL) to measure the perspective entropy gain of each feature. The objective of our model is to maximize the entropy
of the sampled visual features. In order to optimize energy consumption, the
limit amount of energy of the system is re-distributed amongst features according to their Incremental Coding Length. By selecting features with large coding
length increments, the computational system can achieve attention selectivity in
both static and dynamic scenes. We demonstrate that the proposed model achieves
superior accuracy in comparison to mainstream approaches in static saliency map
generation. Moreover, we also show that our model captures several less-reported
dynamic visual search behaviors, such as attentional swing and inhibition of return.
1
Introduction
Visual attention plays an important role in the human visual system. This voluntary mechanism
allows us to allocate our sensory and computational resources to the most valuable information
embedded in the vast amount of incoming visual data. In the past decade, we have witnessed the
success of a number of computational models on visual attention (see [6] for a review). Many of
these models analyze static images, and output "saliency maps", which indicate the probability of
eye fixations. Models such as [3] and [4] have tremendously boosted the correlation between eye
fixation data and saliency maps.
However, during the actual continuous perception process, important dynamic behaviors such as the
sequential order of attended targets, shifts of attention by saccades, and the inhibitory mechanism
that precludes us from looking at previously observed targets, are not thoroughly discussed in the
research on visual attention. Rather than contributing to the accuracy of saliency map generation,
we instead consider alternative approaches to understand visual attention: is there a model that
characterizes the ebbs and flows of visual attention?
Up to the present, this question is not comprehensively answered by existing models. Algorithms
simulating saccades in some attention systems [23, 7] are designed for engineering expediency rather
than scientific investigation. These algorithms are not intended to cover the full spectrum of dynamic
properties of attention, nor to provide a convincing explanation of the continuous nature of attention
behaviors.
* http://www.its.caltech.edu/~xhou; http://bcmi.sjtu.edu.cn/~zhangliqing
In this paper, we present a novel attention model that is intrinsically continuous. Unlike space-based
models who take discrete frames of images as the elementary units, our framework is based on continuous sampling of features. Inspired by the principle of predictive coding [9], we use the concept
of energy to explain saliency, feature response intensity, and the appropriation of computational resources in one unified framework. The appropriation of energy is based on the Incremental Coding
Length, which indicates the rarity of a feature. As a result, stimuli that correlate to rarely activated
features will receive the highest energy, and become salient. Since the proposed model is temporally
continuous, we can demonstrate a series of simulations of dynamic attention, and provide plausible
explanations of previously unexamined behaviors.
1.1
Space and Feature Based Attention
Many of the bottom-up visual attention models follow the Koch and Ullman framework [10]. By
analyzing feature maps that topographically encode the spatial homogeneity of features, an algorithm can detect the local irregularities of the visual input. This paradigm explains the generation of
attention from a one-shot observation of an image. However, several critical issues may be raised
when this framework is applied to continuous observations (e.g. video). First, space-based attention itself cannot interpret ego-motion. Additional coordinate transformation models are required
to translate spatial cues between two different frames. Second, there are attention mechanisms that
operate after the generation of saliency, such as attentional modulation [19], and Inhibition of Return
(IOR) [8]. The initial space-based framework is not likely to provide a convincing explanation to
these mechanisms.
In addition to saliency based on local irregularity, recent investigations in V4 and MT cortical areas demonstrate that attention can also be elicited by particular features [13, 18]. In the field of
computational models, explorations that are biased by features are also used in task-dependent spatial saliency analysis [16]. The emerging evidence in feature-driven attention has encouraged us to
propose a pure feature-based attention model in parallel with the space-based feature map paradigm.
1.2
On the Cause of Attention
Finding "irregular patterns" as a criterion for attention is widely used in computational models. In a
more rigid form, saliency can be defined by the residuals of Difference of Gaussian filter banks [7],
regions with maximal self-information [3], or most discriminant center-surround composition [4].
However, all of these principles do little to address the cause of saliency mechanisms in the brain.
At the level of computation, we cannot attribute the formation of attention to functional advantages
such as foraging for foods [6]. In this paper, we hypothesize that visual attention is driven by the
predictive coding principle, that is, the optimization of metabolic energy consumption in the brain.
In our framework, the behavior of attention is explained as a consequence of an actively-searching
observer who seeks a more economical neural code to represent the surrounding visual environment.
2
The Theory
Motivated by the sparse coding strategy [15] discovered in primary visual cortex, we represent
an image patch as a linear combination of sparse coding basis functions, which are referred as
features. The activity ratio of a feature is its average response to image patches over time and
space. The activity of the feature ensemble is considered as a probability function. We evaluate
each feature with respect to its Incremental Coding Length (ICL). The ICL of ith feature is defined
as the ensemble?s entropy gain during the activity increment of ith feature. In accordance with
the general principle of predictive coding [17], we redistribute energy to features according to their
ICL contribution: frequently activated features receive less energy than rarer features. Finally, the
saliency of a region is obtained by summing up the activity of all features at that region.
2.1
Sparse Feature Representation
Experimental studies [15] have shown that the receptive fields of simple-cells in the primary visual
cortex produce a sparse representation. With standard methods [2], we learn a set of basis functions
that yields a sparse representation of natural image patches. These basis functions are used as
features in the analysis of attention. Specifically, we use 120,000 8 × 8 RGB image patches from natural scenes for training. A set of 8 × 8 × 3 = 192 basis functions is obtained (see Fig. 1).
Let A be the sparse basis, where a_i is the ith basis function. Let W = A^{-1} be the bank of filter functions, where W = [w_1, w_2, ..., w_192]^T. Each row vector w_j of W can be considered as a linear filter applied to the image patch.
The sparse representation s of an image patch is its response to all filter functions. Given a vectorized
image x, we have s = Wx. Since each basis function represents a structural primitive, in the
cortex representation of natural images, only a small population of neurons are activated at one
time. Considering the energy consumed by neural activity in the brain, this sparse coding strategy is
advantageous [11].
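As a concrete illustration (our sketch, with a random placeholder standing in for the learned basis), the filter bank and the sparse response of a patch are obtained as:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((192, 192))  # placeholder basis; in the paper A is learned from natural patches
W = np.linalg.inv(A)                 # filter bank, rows w_j

x = rng.standard_normal(192)         # a vectorized 8 x 8 x 3 patch
s = W @ x                            # sparse representation: response to all filters
```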
Figure 1: The first 30 components of the basis functions A and the corresponding filter functions W are shown in this figure.
2.2
The Incremental Coding Length
In contrast to the long-term evolution of sparse representation, which reflects the general statistics
of nature, short-term habituations, such as potentiation of synaptic strengths, occur during brief
observations in a particular environment. In order to evaluate the immediate energy changes in
the cortex, some previous work has analyzed the information representation and coding in the early
visual system [20, 21, 1]. Guided by the insights behind predictive coding [17], we propose the
Incremental Coding Length (ICL) as a computational principle based on features. This principle
aims to optimize the immediate energy distribution in the system in order to achieve an energyeconomic representation of its environment.
The activity ratio p_i for the ith feature is defined as its relative response level over a sequence of samples. Given the sample matrix X = [x^1, x^2, ..., x^k, ...], where x^k is a vectorized image patch, we can compute the activity ratio p_i as:

p_i = \frac{\sum_k |w_i x^k|}{\sum_i \sum_k |w_i x^k|}.   (1)
Furthermore, we denote p = [p_1, p_2, ...]^T as the probability function of feature activities. Note
that the activity ratio and the energy are abstract values that reflect the statistics of features. Wiring
this structure at the neuronal level goes beyond the scope of this paper. However, studies [13] have
suggested evidence of a population of neurons that is capable of generating a representation for intermodal features. In our implementation, the distribution p addresses the computational properties
of this putative center.
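A minimal sketch of Eq. (1) (our code; W holds the filters as rows and X holds vectorized patches as columns):

```python
import numpy as np

def activity_ratio(W, X):
    """Eq. (1): relative response level p_i of each feature.

    W: (K, D) filter bank with rows w_i; X: (D, n) vectorized patches x^k.
    """
    R = np.abs(W @ X)                 # |w_i x^k| for every feature i and sample k
    per_feature = R.sum(axis=1)       # sum over samples k
    return per_feature / per_feature.sum()
```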
Since the visual information is jointly encoded by all features, the most efficient coding strategy
should make equal use of all possible feature response levels. To achieve this optimality, the model
needs to maximize the entropy H(p). Since p is determined by the samples X, it is possible for a
system to actively bias the sampling process in favor of maximizing information transmission.
At a certain point of time, the activity ratio distribution is p. We consider a new excitation to feature i, which will add a variation ε to p_i and change the whole distribution. The new distribution \hat{p} is:

\hat{p}_j = \begin{cases} \dfrac{p_j + \varepsilon}{1 + \varepsilon}, & j = i \\ \dfrac{p_j}{1 + \varepsilon}, & j \neq i \end{cases}
[Figure 2 plots: the feature distribution and the Incremental Coding Length over the basis index, alongside an input image and its saliency map.]
Figure 2: The framework of feature-based selective attention.
This variation therefore changes the entropy of feature activities. The change of entropy with respect to the feature activity probability increment is:

\frac{\partial H(p)}{\partial p_i} = -\frac{\partial (p_i \log p_i)}{\partial p_i} - \frac{\partial \sum_{j \neq i} p_j \log p_j}{\partial p_i} = -1 - \log p_i - \frac{\partial \sum_{j \neq i} p_j \log p_j}{\partial p_i},

where:

\frac{\partial \sum_{j \neq i} p_j \log p_j}{\partial p_i} = H(p) - 1 + p_i + p_i \log p_i.

Accordingly, we define the Incremental Coding Length (ICL) to be:

ICL(p_i) = \frac{\partial H(p)}{\partial p_i} = -H(p) - p_i - \log p_i - p_i \log p_i   (2)
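Eq. (2) translates directly into a few lines (our sketch, with a small floor added to avoid log(0)):

```python
import numpy as np

def icl(p, eps=1e-12):
    """Eq. (2): ICL(p_i) = -H(p) - p_i - log p_i - p_i log p_i."""
    p = np.clip(p, eps, None)         # guard against log(0)
    H = -np.sum(p * np.log(p))        # ensemble entropy H(p)
    return -H - p - np.log(p) - p * np.log(p)
```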
2.3
Energy Redistribution
We define the salient feature set S as S = {i | ICL(p_i) > 0}. The partition {S, S̄} tells us whether successive observations of feature i would increase H(p). In the context of visual attention, the intuition behind the salient feature set is straightforward: a feature is salient only when succeeding activations of that feature can offer entropy gain to the system.

Within this general framework of feature-level optimization, we can redistribute the energy among features. The amount of energy received by each feature is denoted d_i. Non-salient features are automatically neglected by setting d_k = 0 for k ∈ S̄. For features in the salient feature set, let:

d_i = \frac{ICL(p_i)}{\sum_{j \in S} ICL(p_j)}, \quad \text{if } i \in S.   (3)
Finally, given an image X = [x^1, x^2, ..., x^n], we can quantify the saliency map M = [m_1, m_2, ..., m_n] as:

m_k = \sum_{i \in S} d_i \, w_i x^k.   (4)
In Eq. 4, we notice that the saliency of a patch is not constant: it is determined by the distribution p, which is obtained by sampling the environment over space and time, and may therefore vary from frame to frame. An intuitive explanation of this property is contextual influence: under different circumstances, "salient features" are defined in different manners to represent the statistical characteristics of the immediate environment.
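Eqs. (3)-(4) combine into a short routine (our sketch, reusing icl() from above; we apply the response magnitude |w_i x^k| for consistency with Eq. (1), whereas Eq. (4) writes w_i x^k directly):

```python
import numpy as np

def saliency(W, X, p):
    """Eqs. (3)-(4): energy over the salient set S = {i : ICL(p_i) > 0},
    then per-patch saliency m_k as an energy-weighted sum of responses."""
    c = icl(p)                        # icl() from the sketch above
    S = c > 0                         # salient feature set
    d = c[S] / c[S].sum()             # Eq. (3); d_k = 0 outside S
    return d @ np.abs(W[S] @ X)       # Eq. (4), using response magnitudes
```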
3
The Experiment
We proposed a framework that explains dynamic visual attention as a process that spends limited
available energy preferentially on rarely-seen features. In this section, we examine experimentally
the behavior of our attention model.
3.1
Static Saliency Map Generation
By sequentially sampling over all possible image patches, we calculate the feature distribution of
a static image and generate the corresponding saliency map. These maps are then compared with
records of eye fixations of human subjects. The accuracy of an algorithm is judged by the area under
its ROC curve.
We use the fixation data collected by Bruce et al. [3] as the benchmark for comparison. This data
set contains the eye fixation records from 20 subjects for the full set of 120 images. The images
are down-sampled to an appropriate scale (86 × 64, 1/4 of the original size). The results for several
models are indicated below. Due to a difference in the sampling density used in drawing the ROC
curve, the listed performance is slightly different (about 0.003) from that given in [3] and [4]. The
algorithms, however, are all evaluated using the same benchmark and their relative performance
should be unaffected. Even though it is not designed for static saliency map generation, our model
achieves the best performance among mainstream approaches.
Table 1: Performances on static image saliency

Model:    Itti et al. [7] | Bruce et al. [3] | Gao et al. [4] | Our model
ROC area: 0.7271          | 0.7697           | 0.7729         | 0.7928

[Figure 3 panels: for each example, the input image, our approach's saliency map, and the human fixations.]
Figure 3: Some examples of our experimental images.
3.2
Dynamic Saliency on Videos
A distinctive property of our model is that it is updated online. As proposed in Eq. 2, ICL is
defined by the feature activity ratio distribution. This distribution can be defined over space (when
sampling within one 2-D image) as well as over time (when sampling over a sequence of images).
The temporal correlation among frames can be considered as a Laplacian distribution. Accordingly,
at the tth frame, the cumulative activity ratio distribution p^t yields:

p^t = \frac{1}{Z} \sum_{\tau=0}^{t-1} \exp\!\left(\frac{\tau - t}{\lambda}\right) \hat{p}^\tau,   (5)

where λ is the half-life and \hat{p}^τ is the feature distribution of the τth image. Z = ∫ p^t(x) dx is the normalization factor that ensures p^t is a probability distribution.
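Eq. (5) can be sketched as follows (our code; p_hats is the list of per-frame feature distributions and lam is the half-life λ):

```python
import numpy as np

def cumulative_distribution(p_hats, lam):
    """Eq. (5): exponentially discounted average of the per-frame feature
    distributions p_hat^tau, tau = 0, ..., t-1, with half-life lam."""
    t = len(p_hats)
    w = np.exp((np.arange(t) - t) / lam)          # exp((tau - t) / lambda)
    p = sum(wi * ph for wi, ph in zip(w, p_hats))
    return p / p.sum()                            # normalization by Z
```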
In video saliency analysis, one of the potential challenges comes from simultaneous movements of
the targets and self-movements of the observer. Since our model is feature-based, spatial movements
of an object or changing perspectives will not dramatically affect the generation of saliency maps. In
order to evaluate the detection accuracy of our approach under changing environment, we compare
the dynamic visual attention model with models proposed in [7] and [5].
In this experiment, we use a similar criterion to that described in [5]. The efficacy of the saliency maps for a video clip is determined by comparing the response intensities at saccadic locations and at random locations. Ideally, an effective saliency algorithm would have high output at locations gazed at by observers, and would tend not to respond at most randomly chosen locations.
To quantify this selectivity, we first compute the distribution of saliency values at human saccadic locations, q_s, and the distribution at random locations, q_r. The KL divergence is then used to measure their dissimilarity: the higher the KL divergence, the more easily a model can discriminate human saccadic locations in the image.
[Figure 4 panels: A, the input sample with saccade locations; B, the model in [7] (KL = 0.2493); C, the model in [5] (KL = 0.3403); D, our model (KL = 0.5432). Each model panel shows histograms of saliency values at saccade and at random locations.]
Figure 4: The eye-track records and the video are obtained from [5]. This video contains both target movements and self-movements. In this video, 137 saccades (yellow dots in panel A) are collected. Given the sequence of generated saliency maps, we can obtain the saliency distribution at human saccade locations (narrow blue bars) and at random locations (wide green bars). The KL-divergence of these two distributions indicates the performance of each model.
3.3
Dynamic Visual Search
We are particularly interested in the dynamic behaviors of attention. Neurobiological experiments have reported an inhibitory effect that arises after sustained attention [12]. This mechanism is referred to as Inhibition of Return (IOR) [8]. Research on the cumulative effects of
attention [24] has suggested that the dynamics of visual search have broad implications for scene
perception, perceptual learning, automaticity, and short term memory. In addition, as a mechanism that prevents an autonomous system from being permanently attracted to certain salient spots
and thereby to facilitate productive exploration, the computational modeling of IOR is of practical
value in AI and robotics. Previous computational models such as [22, 7] implemented the IOR in
a spatially-organized, top-down manner, whereas our model samples the environment online and is
driven by data in a bottom-up manner. Spontaneous shifts of attention to new visual cues, as well
as the "refusal of perception" behavior, arise naturally as consequences of our active search model.
Moreover, unlike the spatial "inhibitory masking" approach in [7], our model is feature-based and
is therefore free from problems caused by spatial coordinate transformations.
3.3.1
Modeling Sensory Input
The sensory structure of the human retina is not uniform. The resolution of perception decreases
when eccentricity increases. In order to overcome the physical limitations of the retina, an overt eye
movement is made so that the desired visual stimuli can be mapped onto the foveal region. Similar
to the computational approximations in [14], we consider the fovea sampling bias as a weighted
mask W over the reconstructed saliency map. Let the fovea be located at (x0 , y0 ); the saliency at
(x, y) is weighted by W(x, y):
W(x, y) = e^{-\frac{1}{2}\left[(x - x_0)^2 + (y - y_0)^2\right]} + c.   (6)

In the experiments, we choose c = 1.
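Eq. (6) evaluated on a pixel grid looks as follows (our sketch; a width parameter could be added to scale the Gaussian fall-off, which the equation above leaves at unit variance):

```python
import numpy as np

def foveal_mask(height, width, x0, y0, c=1.0):
    """Eq. (6): Gaussian fall-off around the fixation (x0, y0) plus a
    constant floor c (c = 1 in the experiments)."""
    ys, xs = np.mgrid[0:height, 0:width]
    return np.exp(-0.5 * ((xs - x0) ** 2 + (ys - y0) ** 2)) + c
```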
3.3.2
Overt Eye Movements towards Saliency Targets with Inhibition of Return
In the incremental perception of one static image, our dynamic visual system is guided by two factors. The first factor is the non-homogeneous composition of features in the observed data that
fosters feature preferences in the system. The second factor is a foveal structure that allows the
system to bias its sampling via overt eye movements. The interplay of these two factors leads to an
active visual search behavior that moves towards a maximum entropy equilibrium in the feature distribution. It is also worth noting that these two factors achieve a hysteresis effect that is responsible
for Inhibition Of Return (IOR). A recently attended visual region is not likely to regain eye fixation
within a short interval because of the foveated weighting. This property of IOR is demonstrated by
our experiments.
An implementation of our dynamic visual search is shown in the algorithm box.
Dynamic Visual Attention
1. At time t, calculate the feature ICL based on p^t.
2. Given the current eye fixation, generate a saliency map with foveal bias.
3. By a saccade, move the eye to the global maximum of the saliency map.
4. Sample the top N "informative" (largest-ICL) features in the fixation neighborhood (in our experiment, N = 10).
5. Calculate \hat{p}^t, update p^{t+1}, and go to Step 1.
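The following schematic loop (assumed glue code, not the authors' implementation) shows how the five steps can fit together. It reuses activity_ratio, icl, saliency, and cumulative_distribution from the earlier sketches; video, W, and the initial fixation are placeholders, and steps 4-5 are simplified to resampling the whole frame rather than only the N = 10 largest-ICL features near the fixation:

```python
import numpy as np

K = W.shape[0]                         # number of features; W from the sketches above
p_hats, fix = [], np.zeros(2)          # per-frame distributions; initial fixation (assumed)

for patches, locations in video:       # per frame: (D, n) patches and their (n, 2) centres
    # Step 1: cumulative feature distribution p^t, Eq. (5)
    p_t = cumulative_distribution(p_hats, lam=10.0) if p_hats else np.full(K, 1.0 / K)
    # Step 2: saliency map with foveal bias, Eqs. (4) and (6)
    m = saliency(W, patches, p_t)
    d2 = np.sum((locations - fix) ** 2, axis=1)
    m = m * (np.exp(-0.5 * d2) + 1.0)
    # Step 3: saccade to the global maximum
    fix = locations[np.argmax(m)]
    # Steps 4-5 (simplified): update the running distribution and repeat
    p_hats.append(activity_ratio(W, patches))
```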
It is also worth noting that, when run on the images provided by [3], our dynamic visual attention
algorithm demonstrates especially pronounced saccades when multiple salient regions are presented
in the same image. Although we have not yet validated these saccades against human retinal data,
to our knowledge this sort of "attentional swing" has never been reported in other computational
systems.
[Figure 5: three image sequences; numbered fixations mark the order in which locations are attended.]
Figure 5: Results on dynamic visual search
4
Discussions
A novel dynamic model of visual attention is described in this paper. We have proposed Incremental
Coding Length as a general principle by which to distribute energy in the attention system. In this
principle, the salient visual cues correspond to unexpected features - according to the definition of
ICL, these features may elicit entropy gain in the perception state and are therefore assigned high
energy.
To validate this theoretical framework, we have examined experimentally various aspects of visual
attention. In experiments comparing with static saliency maps, our model more accurately predicted
saccades than did other mainstream models. Because the model updates its state in an online manner,
we can consider the statistics of a temporal sequence and our model achieved strong results in video
saliency generation. Finally, when feature-based ICL is combined with foveated sampling, our
model provides a coherent mechanism for dynamic visual search with inhibition of return.
In expectation of further endeavors, we have presented the following original ideas. 1) In addition
to spatial continuity cues, which are demonstrated in other literature, saliency can also be measured
using features. 2) By incorporating temporal dynamics, a visual attention system can capture a broad
range of novel behaviors that have not successfully been explained by saliency map analysis. And
3) dynamic attention behaviors might quantitatively be explained and simulated by the pursuit of a
maximum entropy equilibrium in the state of perception.
5
Acknowledgements
We thank Neil Bruce, John Tsotsos, and Laurent Itti for sharing their experimental data. The first
author would like to thank Charles Frogner, Yang Cao, Shengping Zhang and Libo Ma for their
insightful discussions on the paper. The reviewers' pertinent comments and suggestions also helped
to improve the quality of the paper. The work was supported by the National High-Tech Research
Program of China (Grant No. 2006AA01Z125) and the National Basic Research Program of China
(Grant No. 2005CB724301)
References
[1] V. Balasubramanian, D. Kimber, and M. Berry. Metabolically Efficient Information Processing. Neural Computation, 13(4):799-815, 2001.
[2] A. Bell and T. Sejnowski. The independent components of natural scenes are edge filters. Vision Research, 37(23):3327-3338, 1997.
[3] N. Bruce and J. Tsotsos. Saliency Based on Information Maximization. Advances in Neural Information Processing Systems, 18, 2006.
[4] D. Gao, V. Mahadevan, and N. Vasconcelos. The discriminant center-surround hypothesis for bottom-up saliency. Pages 497-504, 2007.
[5] L. Itti and P. Baldi. Bayesian Surprise Attracts Human Attention. Advances in Neural Information Processing Systems, 18:547, 2006.
[6] L. Itti and C. Koch. Computational modeling of visual attention. Nature Reviews Neuroscience, 2(3):194-203, 2001.
[7] L. Itti, C. Koch, E. Niebur, et al. A model of saliency-based visual attention for rapid scene analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence, 20(11):1254-1259, 1998.
[8] R. Klein. Inhibition of return. Trends in Cognitive Sciences, 4(4):138-147, 2000.
[9] C. Koch and T. Poggio. Predicting the visual world: silence is golden. Nature Neuroscience, 2:9-10, 1999.
[10] C. Koch and S. Ullman. Shifts in selective visual attention: towards the underlying neural circuitry. Hum Neurobiol, 4(4):219-227, 1985.
[11] W. Levy and R. Baxter. Energy Efficient Neural Codes. Neural Codes and Distributed Representations: Foundations of Neural Computation, 1999.
[12] S. Ling and M. Carrasco. When sustained attention impairs perception. Nature Neuroscience, 9(10):1243, 2006.
[13] J. Maunsell and S. Treue. Feature-based attention in visual cortex. Trends in Neurosciences, 29(6):317-322, 2006.
[14] J. Najemnik and W. Geisler. Optimal eye movement strategies in visual search. Nature, 434(7031):387-391, 2005.
[15] B. Olshausen et al. Emergence of simple-cell receptive field properties by learning a sparse code for natural images. Nature, 381(6583):607-609, 1996.
[16] R. Peters and L. Itti. Beyond bottom-up: Incorporating task-dependent influences into a computational model of spatial attention. IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2007.
[17] R. Rao and D. Ballard. Predictive coding in the visual cortex: a functional interpretation of some extra-classical receptive-field effects. Nature Neuroscience, 2:79-87, 1999.
[18] J. Reynolds, T. Pasternak, and R. Desimone. Attention Increases Sensitivity of V4 Neurons. Neuron, 26(3):703-714, 2000.
[19] S. Treue and J. Maunsell. Attentional modulation of visual motion processing in cortical areas MT and MST. Nature, 382(6591):539-541, 1996.
[20] J. van Hateren. Real and optimal neural images in early vision. Nature, 360(6399):68-70, 1992.
[21] M. Wainwright. Visual adaptation as optimal information transmission. Vision Research, 39(23):3960-3974, 1999.
[22] D. Walther, D. Edgington, and C. Koch. Detection and tracking of objects in underwater video. In Computer Vision and Pattern Recognition (CVPR 2004), Proceedings of the 2004 IEEE Computer Society Conference on, volume 1, 2004.
[23] D. Walther, U. Rutishauser, C. Koch, and P. Perona. Selective visual attention enables learning and recognition of multiple objects in cluttered scenes. Computer Vision and Image Understanding, 100(1-2):41-63, 2005.
[24] J. Wolfe, N. Klempen, and K. Dahlen. Post-attentive vision. Journal of Experimental Psychology: Human Perception and Performance, 26(2):693-716, 2000.
2,793 | 3,532 | Bayesian Exponential Family PCA
Shakir Mohamed
Katherine Heller
Zoubin Ghahramani
Department of Engineering, University of Cambridge
Cambridge, CB2 1PZ, UK
{sm694,kah60,zoubin}@eng.cam.ac.uk
Abstract
Principal Components Analysis (PCA) has become established as one of the
key tools for dimensionality reduction when dealing with real valued data. Approaches such as exponential family PCA and non-negative matrix factorisation
have successfully extended PCA to non-Gaussian data types, but these techniques
fail to take advantage of Bayesian inference and can suffer from problems of overfitting and poor generalisation. This paper presents a fully probabilistic approach
to PCA, which is generalised to the exponential family, based on Hybrid Monte
Carlo sampling. We describe the model which is based on a factorisation of the
observed data matrix, and show performance of the model on both synthetic and
real data.
1
Introduction
In Principal Components Analysis (PCA) we seek to reduce the dimensionality of a D-dimensional
data vector to a smaller K-dimensional vector, which represents an embedding of the data in a lower
dimensional space. The traditional PCA algorithm is non-probabilistic and defines the eigenvectors
corresponding to the K-largest eigenvalues as this low dimensional embedding. In probabilistic
approaches to PCA, such as probabilistic PCA (PPCA) and Bayesian PCA [1], the data is modelled
by unobserved latent variables, and these latent variables define the low dimensional embedding. In
these models both the data and the latent variables are assumed to be Gaussian distributed.
This Gaussian assumption may not be suitable for all data types, especially in the case where data
is binary or integer valued. Models such as Non-negative Matrix Factorisation (NMF) [2], Discrete
Components Analysis (DCA) [3], Exponential Family PCA (EPCA) [4] and Semi-parametric PCA
(SP-PCA) [5], have been developed that endow PCA the ability to handle data for which Bernoulli
or Poisson distributions may be more appropriate. These general approaches to PCA involve the
representation of the data matrix X as a product of smaller matrices: the factor score matrix V,
representing the reduced vectors; and a data independent part ?, known as the factor loading
matrix. In the original data matrix, there are N × D entries, and in the matrix factorisation there are (N + D) × K entries, which is a reduction in the number of parameters if K ≪ N, D [3].
Models such as PCA, NMF and EPCA are from the class of deterministic latent variable
models [6], since their latent variables are set to their maximum a posteriori (MAP) values.
Welling et al. [6] argue that the resulting model essentially assigns zero probability to all input
configurations that are not in the training set. This problem stems from the use of an inappropriate
objective function, and can be remedied by using an alternate approximate inference scheme. In
this paper, we propose a fully Bayesian approach to PCA generalised to the exponential family.
Our approach follows the method of factorising the data matrix into two lower rank matrices using
an exponential family distribution for the data with conjugate priors. The exponential family of distributions is reviewed in section 2, and the complete specification for the model is given in section 3.
Learning and inference in the model is performed using the Hybrid Monte Carlo approach, which is
appropriate due to the continuous nature of variables in the model. The connections to existing generalised PCA methods, such as NMF and EPCA are discussed in section 4. We present results on the
performance of our Bayesian exponential family PCA model in section 5. We report performance
using both a synthetic data set to highlight particular model properties and also on two real datasets:
the Cedar Buffalo digits dataset and data on cardiac SPECT images. The Bayesian approach gives us
many samples of the final low dimensional embedding of the data, and techniques for determining a
single low dimensional embedding are discussed in section 6. In section 7 we conclude, and present
a survey of possible future work.
2
Exponential Family Models
In the exponential family of distributions, the conditional probability of a value xn given parameter
value θ, takes the following form:

p(x_n | θ) = exp\{s(x_n)^T θ + h(x_n) + g(θ)\}   (1)
where s(x_n) are the sufficient statistics, θ is a vector of natural parameters, h(x_n) is a function of the data, and g(θ) is a function of the parameters. In this paper, the natural representation of the exponential family likelihood is used, such that s(x_n) = x_n.
It is convenient to represent a variable x_n that is drawn from an exponential family distribution using the notation x_n ~ Expon(θ) with natural parameters θ. Probability distributions that belong to the exponential family also have natural conjugate prior distributions p(θ). The conjugate prior distribution for the exponential family distribution of equation (1) is:

p(θ) ∝ exp\{λ^T θ + ν g(θ) + f(λ)\}   (2)

where λ and ν are hyperparameters of the prior distribution. In this case we use the notation θ ~ Conj(λ, ν) as shorthand for the conjugate distribution.
As an example, for binary data an appropriate data distribution is the Bernoulli distribution.
The distribution is usually written as p(x|π) = π^x (1 − π)^{1−x}, with π in [0,1]. The exponential family form of this distribution, using the terms in equation (1), has h(x) = 0, θ = ln(π/(1 − π)), and g(θ) = −ln(1 + e^θ). The natural parameters can be mapped to the parameter values of the distribution using the link function, which is the logistic sigmoid in the case of the Bernoulli distribution. The terms of the conjugate distribution can also be derived easily.
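As a small sketch of these identities (our code, not from the paper), the Bernoulli log-density in natural form and its link function are:

```python
import numpy as np

def bernoulli_logpdf(x, theta):
    """log p(x|theta) = x*theta + g(theta), with h(x) = 0 and
    g(theta) = -ln(1 + e^theta)."""
    return x * theta - np.logaddexp(0.0, theta)   # numerically stable ln(1 + e^theta)

def link(theta):
    """Logistic sigmoid: maps the natural parameter theta to the mean pi."""
    return 1.0 / (1.0 + np.exp(-theta))
```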
3
Bayesian Exponential Family PCA
We can consider Bayesian Exponential Family PCA (BXPCA) as a method of searching for two
matrices V and Θ, and we define the product matrix P = VΘ. In traditional PCA, the elements of the matrix P, which are the means of Gaussians, lie in the same space as that of the data X. In the
case of BXPCA and other methods for non-Gaussian PCA such as EPCA [4], this matrix represents
the natural parameters of the exponential family distribution of the data.
We represent the observed data as an N × D matrix X = {x_1, ..., x_N}, with an individual data point x_n = [x_n1, ..., x_nD]. N is the number of data points and D is the number of input features. Θ is a K × D matrix with rows θ_k. V is an N × K matrix V = {v_1, ..., v_n}, whose rows v_n = [v_n1, ..., v_nK] are K-dimensional vectors of continuous values in R. K is the number of latent factors, representing the dimensionality of the reduced space.
3.1
Model Specification
The generative process for the BXPCA model is described in figure 1. Let m and S be hyperparameters representing a K-dimensional vector of initial mean values and an initial covariance matrix
respectively. Let α and β be the hyperparameters corresponding to the shape and scale parameters of an inverse Gamma distribution. We start by drawing μ from a Gaussian distribution and the elements σ_k^2 of the diagonal matrix Σ from an inverse Gamma distribution:

μ ~ N(μ | m, S),   σ_k^2 ~ iG(α, β)   (3)
Figure 1: Graphical Model for Bayesian Exponential Family PCA.
For each data point n, we draw the K-dimensional entry v_n of the factor score matrix:

v_n ~ N(v_n | μ, Σ)   (4)
The data is described by an exponential family distribution with natural parameters θ_k. The exponential family distribution modelling the data, and the corresponding prior over the model parameters, is:

x_n | v_n, Θ ~ Expon( Σ_k v_nk θ_k ),   θ_k ~ Conj(λ, ν)   (5)
We denote Ψ = {V, Θ, μ, Σ} as the set of unknown parameters, with hyperparameters Ω = {m, S, α, β, λ, ν}. Given the graphical model, the joint probability of all parameters and variables is:

p(X, Ψ | Ω) = p(X | V, Θ) p(Θ | λ, ν) p(V | μ, Σ) p(μ | m, S) p(Σ | α, β)   (6)
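The generative process in Eqs. (3)-(5) is easy to sample from. The sketch below is ours and assumes a Bernoulli data model; the draw of θ_k from Conj(λ, ν) is replaced by a Gaussian stand-in, since the conjugate form depends on the chosen likelihood:

```python
import numpy as np

def sample_bxpca(N, D, K, m, S, alpha, beta, rng):
    """Draws one dataset from Eqs. (3)-(5) with a Bernoulli data model."""
    mu = rng.multivariate_normal(m, S)                        # Eq. (3)
    sigma2 = 1.0 / rng.gamma(alpha, 1.0 / beta, size=K)       # inverse Gamma via 1/Gamma
    V = rng.multivariate_normal(mu, np.diag(sigma2), size=N)  # Eq. (4)
    Theta = rng.standard_normal((K, D))                       # stand-in for Conj(lambda, nu)
    P = V @ Theta                                             # natural parameters
    X = rng.binomial(1, 1.0 / (1.0 + np.exp(-P)))             # link, then sample
    return X, V, Theta, mu, sigma2
```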
Using the model specification given by equations (3)-(5) and assuming that the parameter ν = 1, the log-joint probability distribution is:

\ln p(X, Ψ | Ω) = \sum_{n=1}^{N} \Big[ \big(\sum_k v_{nk} θ_k\big)^T x_n + h(x_n) + g\big(\sum_k v_{nk} θ_k\big) \Big]
  + \sum_{k=1}^{K} \big[ λ^T θ_k + g(θ_k) + f(λ) \big]
  + \sum_{n=1}^{N} \Big[ -\tfrac{K}{2} \ln(2π) - \tfrac{1}{2} \ln|Σ| - \tfrac{1}{2} (v_n - μ)^T Σ^{-1} (v_n - μ) \Big]
  - \tfrac{K}{2} \ln(2π) - \tfrac{1}{2} \ln|S| - \tfrac{1}{2} (μ - m)^T S^{-1} (μ - m)
  + \sum_{i=1}^{K} \big[ α \ln β - \ln Γ(α) + (α - 1) \ln σ_i^2 - β σ_i^2 \big]   (7)

where the functions h(·), g(·) and f(·) correspond to the functions of the chosen conjugate distribution for the data.
3.2
Learning
The model parameters Ψ = {V, Θ, μ, Σ} are learned from the data using Hybrid Monte Carlo (HMC) sampling [7]. While the parameters Ω = {m, S, α, β, λ, ν} are treated as fixed hyperparameters, these can also be learned from the data. Hybrid Monte Carlo is a suitable sampler for use
with this model since all the variables are continuous and it is possible to compute the derivative of
the log-joint probability. HMC is also an attractive scheme for sampling since it avoids the random
walk behaviour of the Metropolis or the Gibbs sampling algorithms [7].
Hybrid Monte Carlo (HMC) is an auxiliary variable sampler where we sample from an augmented distribution p(x, u), rather than the target distribution p(x), since it is easier to sample from
this augmented distribution [8]. HMC utilises the gradient of the target distribution to improve
mixing in high dimensions. In BXPCA, the target distribution is E(Ψ | Ω) = −ln p(X, Ψ | Ω), which represents the potential energy function. The auxiliary variable u is Gaussian and is used to define the kinetic energy K = ½ u^T u. Furthermore, we define the gradient vector ∇(X, Ψ) ≜ ∂E(Ψ)/∂Ψ, which can be computed using equation (7). The sum of the kinetic and the potential energy defines the Hamiltonian. Samples of Ψ and u are obtained by combining the Hamiltonian with the gradient information in the simulation of so-called "leapfrog" steps. These details and the general pseudocode for HMC can be found in MacKay [9].
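A generic sketch of one such update (ours, not the authors' code): E is the potential energy −ln p(X, Ψ | Ω) of a flattened parameter vector, gradE is its gradient from equation (7), and the leapfrog integrator alternates half and full steps before a Metropolis accept/reject:

```python
import numpy as np

def hmc_step(psi, E, gradE, step, n_leap, rng):
    """One HMC update of the flattened parameter vector psi."""
    u = rng.standard_normal(psi.shape)       # Gaussian momentum
    H0 = E(psi) + 0.5 * u @ u                # Hamiltonian at the start
    th, g = psi.copy(), gradE(psi)
    u = u - 0.5 * step * g                   # initial half step in momentum
    for _ in range(n_leap):
        th = th + step * u                   # full step in position
        g = gradE(th)
        u = u - step * g                     # full step in momentum
    u = u + 0.5 * step * g                   # retract the last half step
    H1 = E(th) + 0.5 * u @ u
    return th if np.log(rng.uniform()) < H0 - H1 else psi   # Metropolis test
```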
One key feature of HMC is that the dynamics is simulated in an unconstrained space. Therefore to
correctly apply HMC to this model, we must ensure that all constrained variables are transformed
to an unconstrained space, perform dynamics in this unconstrained space, and then transform the
variables back to the original constrained space. The only variable that is constrained in BXPCA is Σ, where each diagonal element σ_k^2 > 0. Each σ_k^2 can be transformed to a corresponding unconstrained variable ξ_k using the transformation σ_k^2 = e^{ξ_k}. This transformation requires that we then apply the chain rule for differentiation, and that we include the determinant of the Jacobian of the transformation, which is |J| = |∂/∂ξ_k e^{ξ_k}| = |exp(ξ_k)| = σ_k^2.
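In code, the reparameterisation and its log-Jacobian amount to the following (a sketch; the names are ours):

```python
import numpy as np

def to_unconstrained(sigma2):
    return np.log(sigma2)                 # xi_k = ln(sigma_k^2)

def to_constrained(xi):
    return np.exp(xi)                     # sigma_k^2 = e^{xi_k}

def log_jacobian(xi):
    return np.sum(xi)                     # sum_k ln|d sigma_k^2 / d xi_k| = sum_k xi_k
```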
We also extended the HMC procedure to handle missing inputs in a principled manner, by
analytically integrating them out. In practice, this implies working with missing data under the
Missing at Random (MAR) assumption. Here, we divide the data into the set of observed and
missing data, X = {Xobs , Xmissing }, and use the set Xobs in the inference.
4
Related Work
Exponential Family PCA: Exponential family PCA (EPCA) [4] is a general class of PCA
algorithms that allows the ideas of PCA to be applied to any data that can be modelled from a
distribution in the exponential family. Like BXPCA, it is based on a factorisation of the data into a
factor score matrix V and a factor loading matrix Θ. The algorithm is based on the optimisation
of a loss function which is based on the Bregman divergence between the data and the learned
reconstruction of the data. The learning is based on an alternating minimisation procedure where the
two matrices V and ? are optimised in turn, and each optimisation is a convex function. The EPCA
objective function can be seen as the likelihood function of a probabilistic model, and hence this optimisation corresponds to maximum a posteriori (MAP) learning. The use of MAP learning makes
EPCA a deterministic latent variable model [6], since the latent variables are set to their MAP values.
In both our model and EPCA, the product P = VΘ represents the natural parameters of the
distribution over the data, and must be transformed using the link function to get to the parameter
space of the associated data distribution. Our model is different from EPCA in that it is a fully
probabilistic model in which all parameters can be integrated out by MCMC. Furthermore, EPCA
does not include any form of regularisation and is prone to overfitting the data, which is avoided in
the Bayesian framework. We will compare BXPCA to EPCA throughout this paper.
Non-negative Matrix Factorisation: Non-negative Matrix Factorisation (NMF) [2] is a technique
of factorising a matrix into the product of two positive lower rank matrices. In NMF, the matrix
product P approximates the mean parameters of the data distribution, and is thus in the same space
as the data. A mean parameter, for example, is the rate λ if the data is modelled as a Poisson distribution, or the probability of the data being a 1 if the data is modelled as a Bernoulli. In NMF, V and Θ are restricted to be positive matrices, and inference corresponds to maximum likelihood
learning with a Poisson likelihood. Similarly to EPCA, this learning method places NMF in the
class of deterministic latent variable methods.
Discrete Components Analysis: The Discrete Components Analysis (DCA) [3] is a family
of probabilistic algorithms that deals with the application of PCA to discrete data and is a unification of the existing theory relating to dimensionality reduction with discrete distributions. In DCA
the product P = VΘ is the mean parameter of the appropriate distribution over the data and, as with NMF, V and Θ are constrained to be non-negative. The various algorithms of the DCA family
are simulated using either Gibbs sampling or variational approximations.
Bayesian Partial Membership: The Bayesian Partial Membership (BPM) model is a clustering technique that allows data points to have fractional membership in multiple clusters. The
model is derived from a finite mixture model which allows the usual indicator variables to take on
any value in the range [0,1]. The resulting model has the same form as the model shown in figure
1, but instead of the model variable V being modelled as a Gaussian with unknown mean and
covariance, it is instead modelled as a Dirichlet distribution. This difference is important, since
it affects the interpretation of the results. In the BXPCA, we interpret the matrix V as a lower
dimensional embedding of the data which can be used for dimensionality reduction. In contrast,
the corresponding matrix for the BPM model, whose values are restricted to [0,1], is the partial
membership of each data point and represents the extent to which each data point belongs to each
of the K clusters.
5
Results and Discussion
Synthetic Data: Synthetic data was generated by creating three 16-bit prototype vectors with each
bit being generated with a probability of 0.5. Each of the three prototypes is replicated 200 times,
resulting in a 600-point data set. We then flip bits in the replicates with a probability of 0.1, as in
Tipping [10], thus adding noise about each of the prototypes. BXPCA inference was run using this
data for 4000 iterations, using the first half as burn-in. Figure 2 demonstrates the learning process of
BXPCA. In the initial phase of the sampling, the energy decreases slowly and the model is unable
to learn any useful structure from the data. Around sample 750, the energy function decreases and
some useful structure has been learnt. By sample 4000 the model has learnt the original data well,
as can be seen by comparing sample 4000 and the original data.
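The synthetic data generation just described is simple to reproduce; the following sketch follows the recipe above (the random seed is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
prototypes = (rng.random((3, 16)) < 0.5).astype(int)  # three 16-bit prototypes, bits ~ Bernoulli(0.5)
X = np.repeat(prototypes, 200, axis=0)                # replicate each prototype 200 times -> 600 points
flips = rng.random(X.shape) < 0.1                     # flip each bit with probability 0.1
X = np.logical_xor(X, flips).astype(int)              # noisy 600 x 16 binary data set
```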
To evaluate the performance of BXPCA, we define training and test data from the available data.
[Figure 2 appears here: the top panel plots the energy E(Θ) against sample number, decreasing from roughly 10000 to 2000; the lower panels show reconstructions at samples 5, 50, 200, 300, 500, 1000, 1250, 2000, 3250 and 4000, together with the original data.]
Figure 2: Reconstruction of data from samples at various stages of the sampling. The top plot shows
the change in the energy function. The lower plots show the reconstructions and the original data.
[Figure 3 appears here: four boxplot panels comparing 'Box' BXPCA against 'Notch' EPCA as the number of latent factors K ranges from 1 to 30: (a) RMSE on test data, (b) negative log probability in bits, (c) RMSE on training data, and (d) the fraction of predictions with error |ε| > 0.95, plotted on a log scale.]
Figure 3: Boxplots comparing the NLP and RMSE of BXPCA and EPCA for various latent factors.
The test data was created by randomly selecting 10% of the data points. These test data
points were set as missing values in the training data. Inference is then run using BXPCA,
which has been extended to consider missing data. This method of using missing data is a
natural way of testing these algorithms, since both are generative models. We calculate the
negative log probability (NLP) and the root mean squared error (RMSE) using the testing data.
We evaluate the same metrics for EPCA, which is also trained considering missing data. This
missing data testing methodology is also used in the experiments on real data that are described later.
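As a sketch of how these two test metrics can be computed (our own illustration, assuming NLP is measured in bits and that the model outputs a Bernoulli probability for every held-out entry):

```python
import numpy as np

def heldout_scores(X, missing_mask, P_hat):
    """NLP (in bits) and RMSE over the entries held out as missing.
    X: binary data matrix; missing_mask: True where an entry was held out;
    P_hat: predicted probability that each entry equals 1."""
    x = X[missing_mask]
    p = np.clip(P_hat[missing_mask], 1e-12, 1 - 1e-12)  # guard against log2(0)
    nlp = -np.sum(x * np.log2(p) + (1 - x) * np.log2(1 - p))
    rmse = np.sqrt(np.mean((x - p) ** 2))
    return nlp, rmse
```

Under this convention, a model that predicts 0.5 everywhere scores exactly one bit per held-out entry, which is the 960-bit baseline quoted below.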
In figure 3a and 3b, the RMSE and NLP of the two algorithms are compared respectively,
for various choices of the latent factor K. EPCA shows characteristic underfitting for K = 1
and demonstrates severe overfitting for large K. This overfitting is seen by the very large values
of NLP for EPCA. If we examine the RMSE on the training data shown in figure 3c, we see the
overfitting problem highlighted further, where the error on the training set is almost zero for EPCA,
whereas BXPCA manages to avoid this problem. We expect that a random model would have an
NLP = 10% × 600 × 16 = 960 bits, but the NLP values for EPCA are significantly larger than
this. This is because, as EPCA begins to overfit, it becomes highly confident in its predictions, and
the proportion of bits which it believes are 1, for example, but which are actually 0, increases.
This is shown in figure 3d, which plots the frequency of incorrect predictions for which the error
between the predicted and actual bits is greater than 0.95. BXPCA, being based on a Bayesian approach,
thus avoids overfitting and gives improved predictions.
Digits Data: BXPCA was applied to the CEDAR Buffalo digits dataset. The digit 2 was
used, which consists of 700 greyscale images with 64 attributes. The digits were binarised by
thresholding at a greyscale value of 128 from the 0 to 255 greyscale range. Table 1 compares the
performance of BXPCA and EPCA, using the same method of creating training and testing data
sets as for the synthetic data. BXPCA has lower RMSE and NLP than EPCA and also does not
exhibit overfitting at large K, which can be seen in EPCA by the large value of NLP at K = 5.
SPECT Data: The data set describes the diagnosis of cardiac Single Photon Emission Computed
Tomography (SPECT) images [11]. The data consists of 267 SPECT image sets, and has been
processed resulting in 22 binary attributes. Table 2 compares the performance of BXPCA and EPCA.
This dataset demonstrates that EPCA quickly overfits the data, as shown by the rapidly increasing
values of NLP, and that the two algorithms perform equally well for low values of K.
Table 1: Table comparing BXPCA and EPCA on the digit 2 dataset.

              K        2        3        4        5
  BXPCA     NLP   2032.3   2022.9   2002.4   2032.0
            RMSE   0.389    0.385    0.380    0.383
  EPCA      NLP   2125.5   2482.1   2990.2   4708.8
            RMSE   0.392    0.393    0.399    0.402
Table 2: Table comparing BXPCA and EPCA on the SPECT dataset.

              K       1       2       3       4       5       6       7       8
  BXPCA     NLP  348.67  343.40  325.94  331.47  291.75  305.22  310.36  319.06
            RMSE  0.441   0.433   0.405   0.419   0.377   0.393   0.383   0.396
  EPCA      NLP  388.18  516.78  507.79  1096.6  1727.4  4030.0  4209.0  4330.0
            RMSE  0.439   0.427   0.413   0.439   0.487   0.517   0.528   0.560
6 Choice of Final Embedding
For the purposes of dimensionality reduction, PCA is used to search for a low dimensional
embedding V of the data points. In EPCA, the alternating minimisation returns a single V that is
the low dimensional representation. In BXPCA though, we do not get a single V, but rather many
samples which represent the variation in the embedding. Furthermore, we cannot simply take the
average of each of these samples to obtain a single V, since we have not included any identifiability
constraints in the model. This lack of identifiability subjects V to permutations of the columns, and
to rotations of the matrix, making an average of the samples meaningless.
There are several approaches to obtaining a single low dimensional representation from the
set of samples. The simplest approach is to choose, from the set of available samples, the best global
configuration, {V*, Θ*} = arg max over Ω(s) of p(X, Ω(s) | Ψ), and use this V* (here Ω(s) denotes the s-th sample of all latent variables and Ψ the hyperparameters).
to give further information about the variability of the embedding. We begin by fixing the model
parameters to {Θ*, μ*, Σ*}. These can be set using the sample chosen in the first approach. We
then sample V from the conditional distribution:
$$V \sim p(V \mid X, \Theta^*, \mu^*, \Sigma^*) \;\propto\; p(X \mid V, \Theta^*)\, p(V \mid \mu^*, \Sigma^*) \qquad (8)$$
where equation (8) is obtained using Bayes theorem and the joint probability distribution given in
equation (6). We can now average these samples to obtain a single embedding since the problems
of rotation and permutation have been removed by constraining the variables {Θ*, μ*, Σ*}. We
demonstrate this procedure using the synthetic data described in the previous section for K = 2.
Figure 4 shows the embedding in the 2D space for 10 data points and 20 independent samples
drawn according to equation (8). The graph shows that there is some mean value and also gives
us an understanding of the variation that is possible in this 2D embedding. The drawback of this
last approach is that it does not give any indication of the effect of variation in Θ. To gain some
understanding of this effect, we can further extend this approach by choosing Q random samples,
Θ* = {Θ*(1), Θ*(2), . . . , Θ*(Q)}, at convergence of the HMC sampler. We then repeat the aforementioned procedure for each of these Θ*(q). This then gives an understanding of the variability of
the final embedding, in terms of both Θ and V.
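A minimal sketch of the conditional in equation (8) for binary data, assuming the Bernoulli likelihood uses the logistic link and that rows of V carry a N(μ*, Σ*) prior (our reading of the model; names are illustrative). Samples of V can then be drawn by running any MCMC kernel, e.g. Metropolis-Hastings, on this log-density with {Θ*, μ*, Σ*} held fixed:

```python
import numpy as np

def log_cond_V(V, X, Theta, mu, Sigma_inv):
    """Unnormalised log p(V | X, Theta*, mu*, Sigma*) for binary X (a sketch)."""
    P = V @ Theta                                  # natural parameters
    loglik = np.sum(X * P - np.logaddexp(0.0, P))  # Bernoulli log-likelihood, logistic link
    diff = V - mu                                  # rows of V ~ N(mu, Sigma)
    logprior = -0.5 * np.sum((diff @ Sigma_inv) * diff)
    return loglik + logprior
```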
7 Conclusions and Future Work
We have described a Bayesian approach to PCA which is generalised to the exponential family.
We have employed a Hybrid Monte Carlo sampling scheme with an energy based on the log-joint
probability of the model. In particular, we have demonstrated the ability of BXPCA to learn the
structure of the data while avoiding overfitting problems, which are experienced by other maximum
likelihood approaches to exponential family PCA. We have demonstrated this using both synthetic
and real data.
[Figure 4 appears here: a scatter plot titled 'Variation in Final Embedding', Dimension 1 (vertical, −40 to 40) against Dimension 2 (horizontal, −40 to 80).]
Figure 4: Variation in final embedding for 10 data points and various samples of V
In future work, the model can be extended by considering an alternate distribution for the factor
score matrix V. Instead of considering a Gaussian distribution, a Laplacian or other heavy tailed
distribution could be used, which would allow us to determine the lower dimensional embedding
of the data, and also give the model a sparseness property. We could also specifically include
restrictions on the form of the score and the loading matrices, V and Θ respectively, to ensure
identifiability. This makes learning in the model more complex since we must ensure that the
restrictions are maintained. Also, it will prove interesting to consider alternate forms of inference,
specifically the techniques of sequential Monte Carlo to allow for online inference.
Acknowledgements: We thank Peter Gehler for the EPCA implementation. SM thanks the NRF
SA and the Commonwealth Commission for support. KH was supported by an EPSRC Postdoctoral
Fellowship (grant no. EP/E042694/1).
References
[1] C. M. Bishop, Pattern Recognition and Machine Learning. Information Science and Statistics,
Springer, August 2006.
[2] D. D. Lee and H. S. Seung, "Algorithms for non-negative matrix factorization," in Advances in
Neural Information Processing Systems, vol. 13, pp. 556-562, MIT Press, Cambridge, MA, 2001.
[3] W. Buntine and A. Jakulin, "Discrete components analysis," in Subspace, Latent Structure and
Feature Selection, vol. 3940/2006, pp. 1-33, Springer (LNCS), 2006.
[4] M. Collins, S. Dasgupta, and R. Schapire, "A generalization of principal components to the
exponential family," in Advances in Neural Information Processing Systems, vol. 14, pp. 617-624,
MIT Press, Cambridge, MA, 2002.
[5] Sajama and A. Orlitsky, "Semi-parametric exponential family PCA," in Advances in Neural
Information Processing Systems, vol. 17, pp. 1177-1184, MIT Press, Cambridge, MA, 2004.
[6] M. Welling, C. Chemudugunta, and N. Sutter, "Deterministic latent variable models and their
pitfalls," in SIAM Conference on Data Mining (SDM), pp. 196-207, 2008.
[7] R. M. Neal, "Probabilistic inference using Markov Chain Monte Carlo methods," Tech. Rep.
CRG-TR-93-1, University of Toronto, Department of Computer Science, 1993.
[8] C. Andrieu, N. De Freitas, A. Doucet, and M. I. Jordan, "An introduction to MCMC for machine learning," Machine Learning, vol. 50, pp. 5-43, 2003.
[9] D. J. C. MacKay, Information Theory, Inference & Learning Algorithms. Cambridge University Press, June 2002.
[10] M. E. Tipping, "Probabilistic visualisation of high dimensional binary data," in Advances in
Neural Information Processing Systems, vol. 11, pp. 592-598, MIT Press, Cambridge, MA, 1999.
[11] "UCI machine learning repository." http://archive.ics.uci.edu/ml/datasets/.
| 3532 |@word determinant:1 repository:1 loading:3 proportion:1 seek:1 simulation:1 eng:1 covariance:2 tr:1 reduction:5 initial:3 configuration:2 score:5 selecting:1 existing:2 freitas:1 comparing:4 written:1 must:4 shape:1 plot:2 generative:2 half:1 sutter:1 hamiltonian:2 toronto:1 become:1 incorrect:1 shorthand:1 consists:2 prove:1 underfitting:1 manner:1 examine:1 pitfall:1 actual:1 xobs:2 inappropriate:1 considering:3 increasing:1 becomes:1 begin:2 notation:2 developed:1 unobserved:1 transformation:2 differentiation:1 orlitsky:1 k2:7 demonstrates:3 uk:2 grant:1 generalised:4 positive:2 engineering:1 jakulin:1 optimised:1 burn:1 factorization:1 range:2 testing:4 practice:1 cb2:1 digit:6 procedure:4 lncs:1 significantly:1 convenient:1 vnk:4 integrating:1 zoubin:2 get:2 cannot:1 selection:1 restriction:2 deterministic:4 map:4 missing:9 demonstrated:2 convex:1 survey:1 assigns:1 factorisation:7 rule:1 embedding:17 searching:1 handle:2 variation:5 target:3 element:3 recognition:1 xnd:1 gehler:1 observed:3 epsrc:1 ep:1 calculate:1 decrease:2 removed:1 principled:1 constrains:1 seung:1 cam:1 dynamic:2 trained:1 easily:1 joint:5 various:6 describe:1 monte:8 choosing:1 whose:1 larger:1 valued:2 drawing:1 ability:2 statistic:2 transform:1 highlighted:1 shakir:1 final:5 online:1 advantage:1 eigenvalue:1 indication:1 sdm:1 propose:1 reconstruction:3 product:6 acknowlegdements:1 uci:2 combining:1 rapidly:1 mixing:1 kh:1 rgy:1 convergence:1 cluster:2 ac:1 fixing:1 sa:1 auxiliary:2 predicted:1 implies:1 drawback:1 attribute:2 behaviour:1 generalization:1 crg:1 around:1 ic:1 exp:4 purpose:1 largest:1 successfully:1 tool:1 mit:4 gaussian:8 aim:1 rather:2 vn1:1 avoid:1 minimisation:2 endow:1 derived:2 emission:1 june:1 leapfrog:1 bernoulli:4 rank:2 likelihood:5 modelling:1 tech:1 contrast:1 posteriori:2 inference:11 membership:4 integrated:1 visualisation:1 transformed:4 bpm:2 arg:1 aforementioned:1 constrained:3 mackay:2 sampling:8 represents:5 nrf:1 future:3 report:1 randomly:1 gamma:2 divergence:1 individual:1 phase:1 highly:1 mining:1 severe:1 replicates:1 mixture:1 chain:2 bregman:1 unification:1 conj:2 partial:3 divide:1 walk:1 column:1 entry:3 cedar:2 sajama:1 buntine:1 commission:1 learnt:2 synthetic:7 confident:1 thanks:1 siam:1 probabilistic:9 lee:1 quickly:1 squared:1 choose:1 slowly:1 creating:2 derivative:1 return:1 potential:2 de:1 performed:1 root:1 later:1 overfits:1 start:1 bayes:1 identifiability:3 rmse:11 characteristic:1 correspond:1 modelled:6 bayesian:14 manages:1 carlo:8 energy:7 frequency:1 mohamed:1 pp:7 associated:1 xn1:1 gain:1 ppca:1 dataset:5 ut:1 dimensionality:6 fractional:1 actually:1 back:1 dca:4 tipping:2 methodology:1 improved:1 box:3 mar:1 though:1 furthermore:3 stage:1 overfit:1 working:1 lack:1 defines:2 logistic:1 effect:2 andrieu:1 analytically:1 hence:1 alternating:2 i2:2 neal:1 deal:1 attractive:1 maintained:1 complete:1 demonstrate:1 image:4 variational:1 sigmoid:1 rotation:2 pseudocode:1 discussed:2 belong:1 approximates:1 relating:1 interpretation:1 interpret:1 extend:1 cambridge:7 gibbs:2 unconstrained:4 similarly:1 specification:3 belongs:1 binary:4 rep:1 neg:1 seen:4 greater:1 utilises:1 employed:1 determine:1 semi:2 multiple:1 stem:1 equally:1 laplacian:1 prediction:3 essentially:1 optimisation:3 poisson:3 expon:2 metric:1 iteration:1 represent:3 whereas:1 fellowship:1 binarised:1 meaningless:1 archive:1 subject:1 jordan:1 integer:1 constraining:1 affect:1 reduce:1 idea:1 prototype:3 pca:33 notch:3 suffer:1 peter:1 useful:2 eigenvectors:1 involve:1 
tomography:1 processed:1 simplest:1 reduced:2 schapire:1 http:1 correctly:1 diagnosis:1 chemudugunta:1 discrete:6 dasgupta:1 vol:6 key:2 epca:34 drawn:2 boxplots:1 v1:1 graph:1 sum:1 run:2 inverse:2 prob:1 place:1 family:30 throughout:1 almost:1 vn:8 draw:1 bit:7 spect:5 constraint:1 department:2 according:1 alternate:3 poor:1 conjugate:6 smaller:2 cardiac:2 describes:1 lp:1 metropolis:1 making:1 restricted:2 ene:1 ln:11 equation:7 turn:1 fail:1 flip:1 available:2 gaussians:1 apply:2 appropriate:4 original:6 top:1 clustering:1 ensure:3 include:3 dirichlet:1 graphical:2 nlp:12 ghahramani:1 especially:1 objective:2 parametric:2 usual:1 traditional:2 diagonal:2 exhibit:1 gradient:3 subspace:1 link:2 remedied:1 mapped:1 simulated:2 unable:1 thank:1 argue:1 extent:1 assuming:1 katherine:1 hmc:9 greyscale:3 negative:7 implementation:1 unknown:2 perform:2 datasets:2 sm:1 markov:1 finite:1 buffalo:2 extended:4 variability:2 august:1 nmf:8 connection:1 proton:1 learned:3 established:1 usually:1 pattern:1 max:1 belief:1 suitable:2 natural:9 hybrid:6 treated:1 indicator:1 representing:3 scheme:3 improve:1 created:1 heller:1 prior:5 understanding:3 determining:1 regularisation:1 fully:3 loss:1 highlight:1 expect:1 permutation:2 interesting:1 sufficient:1 thresholding:1 factorising:2 heavy:1 row:2 prone:1 repeat:1 last:1 supported:1 allow:2 distributed:1 dimension:3 xn:15 avoids:2 replicated:1 ig:1 avoided:1 welling:2 approximate:1 dealing:1 ml:1 global:1 overfitting:8 doucet:1 assumed:1 conclude:1 postdoctoral:1 continuous:3 latent:17 search:1 tailed:1 reviewed:1 table:6 nature:1 learn:2 obtaining:1 complex:1 sp:1 noise:1 hyperparameters:5 x1:1 augmented:2 experienced:1 exponential:28 lie:1 jacobian:1 theorem:1 bishop:1 pz:1 adding:1 sequential:1 sparseness:1 easier:1 simply:1 springer:2 corresponds:2 kinetic:2 ma:4 conditional:2 change:1 included:1 generalisation:1 specifically:2 sampler:3 principal:3 called:1 support:1 collins:1 evaluate:2 mcmc:2 avoiding:1 |
2,794 | 3,533 | A 'Shape Aware' Model for semi-supervised
Learning of Objects and its Context
Abhinav Gupta1, Jianbo Shi2 and Larry S. Davis1
1 Dept. of Computer Science, Univ. of Maryland, College Park
2 Dept. of Computer and Information Sciences, Univ. of Pennsylvania
[email protected], [email protected], [email protected]
Abstract
We present an approach that combines bag-of-words and spatial models to perform
semantic and syntactic analysis for recognition of an object based on its internal
appearance and its context. We argue that while object recognition requires modeling relative spatial locations of image features within the object, a bag-of-word
is sufficient for representing context. Learning such a model from weakly labeled
data involves labeling of features into two classes: foreground (object) or 'informative' background (context). We present a 'shape-aware' model which utilizes
contour information for efficient and accurate labeling of features in the image.
Our approach iterates between an MCMC-based labeling and contour based labeling of features to integrate co-occurrence of features and shape similarity.
1 Introduction
Understanding the meaning of a sentence involves both syntactic and semantic analysis. A bag-of-words approach applied locally over a sentence would be insufficient to understand its meaning. For
example, 'Jack hit the bar' and 'The bar hit Jack' have different meanings even though the bag-of-words representation is the same for both. In many cases, determining meaning also requires word
sense disambiguation using contextual knowledge. For example, does 'bar' refer to a rod or to a
place where drinks are served? While a combined semantic and syntactical model could be used
for representation and application of context as well, it would be expensive to apply. Syntactical
rules are generally not required for extracting knowledge about context - a topic model is generally
sufficient for contextual analysis in text [14, 15].
We use analogous reasoning to suggest a similar dichotomy in representing object structure and
context in vision. Our approach combines bag-of-words and spatial models to capture semantics
and syntactic rules, respectively, that are employed for recognizing an object using its appearance,
structure and context. We treat an object and a scene analogous to a sentence and a document
respectively. Similar to documents, object recognition in natural scenes requires modeling spatial
relationships of image features(words) within the object but for representing context in a scene, a
bag-of-words approach suffices (See Figure 1 (a) and (b)).
Learning such a model from weakly labeled data requires labeling the features in an image as belonging to an object or its context (informative background). Spatial models, such as constellation
or star models, compute a sparse representation of objects(with a fixed number of parts) by selecting features which satisfy spatial constraints. Their sparse representation reduces their utility
in the presence of occlusion. Approaches for learning a dense bag-of-features model with spatial
constraints from weakly labeled data have also been proposed. Such approaches (based on marginalizing over possible locations of the object), however, lead to poor foreground segmentation if the
training dataset is small, the images have significant clutter 1 or if some other object in the background has a strong and consistent spatial relationship with the object to be learned throughout the
1
A dataset of less cluttered images would fail to provide enough contextual information to be learned for a
model that simultaneously learns object model and its contextual relationships.
[Figure 1 appears here; its panels (a)-(c) are described in the caption below.]
Figure 1: (a) An example of the importance of spatial constraints locally. The red color shows the features on
the foreground car. A bag of words approach fails to capture spatial structure and thus combines the front and
rear of different cars. (b) We use a spatial model of the object and a bag-of-words approach for context representation. (c) Importance of using contour information: Objects such as signs become part of the foreground
since they occur at consistent relative location to the car. If shape and contour information is combined with
co-occurrence and spatial structure of image features, then such mis-labellings can be reduced. For example,
in the above case since there are strong intervening contours between the features on the car(foreground) and
the features on signs, and there is a lack of strong contours between features on signs and features on trees
(background), it is more likely that features on the signs should be labeled as background.
Problem:
Learn the parameters of object model given the images (I1 , .., ID ), object labels (O1 , .., OD )
and Object Model Shape (M ).
Approach:
Simultaneous localization the object in training images and estimation of model parameters. This
is achieved by integrating cues from image features and contours. The criteria includes following terms:
1. Feature Statistics: The image features satisfy the co-occurrence and spatial statistics of the model.
2. Shape Similarity: The shape of the foreground object is similar to the shape of the sketch of the object.
3. Separation: The object and background features should be separated by the object boundary contours.
Table 1: Summary of the 'Shape Aware' Model
training dataset. We overcome this problem by applying shape based constraints while constructing
the foreground model.
Figure 1(c) shows an example of how contours provide important information for foreground/background labeling. We add two constraints to the labeling problem using the contour
information: (a) The first constraint requires the presence of strong intervening contours between
foreground and background features. (b) The second constraint requires the shape of boundary contours be similar to the shape of the exemplar/sketch provided with the weakly labeled dataset. This
allows us to learn object models from images where there is significant clutter and in which the
object does not cover a significant part of the image. We provide an iterative solution to integrate
these constraints. Our approach first labels the image features based on co-occurrence and spatial
statistics - the features that occur in positive images and exhibit strong spatial relationships are labeled as foreground features. Based on the labels of image features, object boundaries are identified
based on how well they separate foreground and background features. This is followed by a shape
matching step which identifies the object boundary contours based on their expected shape. This
step prunes many contours and provides a better estimate of object boundaries. These boundaries
are then used to relabel the features in the image. This provides an initialization point for the next
iteration of Gibbs sampling. Figure 2 shows the system flow of our 'Shape Aware' approach.
1.1 Related Work
Many graphical models for object recognition [11] have been inspired by models of text documents
such as LDA [6] and pLSA [7]. These models are computationally efficient because they ignore
the spatial relationships amongst image features (or parts) and use a dense object representation.
However, ignoring spatial relationships between features leads to problems (See Figure 1(a)). In
contrast, approaches that model spatial relationships [9, 5] between object parts/features are
Figure 2: Shape-Aware Learning (Overview): We first compute feature labels using the Gibbs sampling approach on the Spatial Author Topic model. The features labeled foreground and background are drawn in red
and yellow respectively. This is followed by object boundary extraction. The object boundaries are identified
based on how well they separate foreground and background features. Likely object boundary contours are then
matched to the sketch using a voting-based approach and the contours consistent with the shape of the sketch
are identified. These contours are then used to relabel the features using the same separation principle. The
new labels and topics from the previous time step are used as a new initialization point for the next iteration.
computationally expensive and therefore employ only a sparse feature representation. These approaches
fail under occlusion due to their sparse representation and their stringent requirement of a one-to-one
correspondence between image and object features.
There has been recent work in applying spatial constraints to topic models which enforce neighboring features to belong to similar topics [10, 2] for the purpose of segmentation. Our work is
more related to classification based approaches [8, 3] that model spatial locations of detected features based on a reference location in the image. Sudderth et al. [3] presented such a model that
can be learned in a supervised manner. Fergus et al. [8] proposed an approach to learn the model
from weakly labeled data. This was achieved by marginalizing over object locations and scale. Each
object location hypothesis provides a foreground segmentation which can be used for learning the
model. Such an approach, however, is expensive unless the training images are not highly cluttered.
Additionally, they are subject to modeling errors if the object of interest is small in the training
images.
Our goal is to simultaneously learn an object model and its context model from weakly labeled
images. To learn context we require real world scenes of object and their natural surrounding environment (high clutter and small objects). We present a ?shape aware? feature based model for
recognizing objects. Our approach resolves the foreground/background labeling ambiguities by requiring that the shapes of the foreground object across the training images to be similar to a sketch
exemplar. Shape based models [1] have been used previously for object recognition. However,
contour matching is an expensive(exponential) problem due to the need to select the best subset of
contours from the set of all edges that match the shape model. Approximate approaches such as
MCMC are not applicable since matching is very closely coupled with selection. We propose an
efficient approach that iterates between a co-occurrence based labeling and a contour based labeling
of features.
2 Our Approach - Integrating feature and contour based cues
We assume the availability of a database of weakly labeled images which specify the presence of an
object, but not its location. Similar to previous approaches based on document models, we vector
quantize the space of image features into visual words to generate a discrete image representation.
Each visual word is analogous to a word, and an image is treated as analogous to a document.
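As an illustration of this quantization step (our sketch; the paper does not specify the clustering algorithm or codebook size, so k-means with 500 words is an assumption):

```python
import numpy as np
from sklearn.cluster import KMeans

descriptors = np.random.randn(10000, 128)   # stand-in for local image descriptors
codebook = KMeans(n_clusters=500, n_init=1, random_state=0).fit(descriptors)
words = codebook.predict(descriptors)       # discrete visual-word index per feature
```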
Each word is associated with a topic and an author (the object). The topic distribution depends
on the associated author and the word distribution depends on the assigned topic (Section 2.1).
We start with random assignments of words to topics and authors. This is followed by a Gibbs
sampling step which simultaneously estimates the hidden variables (topic and author) and also the
parameters of the generative model that maximizes the likelihood(Section 2.2). These assignments
are then used to obtain a set of likely object boundary contours in each image. These contours are
subsequently analyzed to identify the object ?centers? and final object contours by matching with
the shape exemplar(Section 2.3). Using the new set of boundary contours, the authors corresponding
to each word are reassigned and the model is retrained using the new assignment.
2.1 Generative Model - Syntax and Semantics
Author-Topic Model: Our model is motivated by the author-topic model [13] and the model presented in [4]. We first provide a brief description of the author topic model, shown in figure 3(a).
The author-topic model is used to model documents for which a set of authors is given. For each
word in the document, an author (xi ) is chosen uniformly at random from the set of authors (ad ). A
topic (zi ) is chosen from a distribution of topics specific to the selected author and a word (wi ) is
generated from that topic. The distribution of topics (θ) for each author is chosen from a symmetric
Dirichlet(α) prior, and the distribution of words (φ) for a topic is chosen from a symmetric Dirichlet
(β) prior.
[Figure 3 appears here: plate diagrams of the two graphical models, with nodes for the authors x, topics z, words w, locations l and reference locations r, and plates over the Nd features in each of the D documents.]
Figure 3: (a) Author-Topic Model (b) Our Model (Spatial Author-Topic Model). Our model extends
the author-topic model by including the spatial (syntactical) relationships between features.
Spatial-Author Topic Model: Our model is shown in figure 3(b). Our goal is not only to model the
distribution of feature types but also to model the distribution of spatial locations of the subset of
these features that are associated with the foreground object. We model this as follows: A feature in
the image is described by its type wi and location li. Each feature (wi, li) is 'authored' by an author
xi which is described by its type oi2 and its location ri. For each feature, the author xi is chosen
from a distribution, π, which can be either uniform or generated using available priors from other
sources. The topic zi for each word is chosen from a distribution of topics specific to the type of object
oi, and a word wi is generated from that topic. The distribution of topics (θ) for each object type is
chosen from a symmetric Dirichlet (α) distribution3. The distribution of a word for each topic is
chosen from a symmetric Dirichlet (β) prior.
The location of each feature, li, is sampled from the distribution p(li | oi, zi, ri) defined as:

$$p(l_i \mid o_i, z_i, r_i) = \exp\left(\frac{-\|l_i - r_i\|^2}{\sigma_s^2}\right)\,\theta^{\,o_i,z_i}_{r_i}(l_i) \qquad (1)$$
2
For an image with label car, the possible object types are car, and context of car. The differentiation
between 'informative' and 'non-informative' background is captured by the probability distributions.
3
The Dirichlet distribution is an attractive distribution - it belongs to the exponential family and is conjugate
to the multinomial distribution.
The first term ensures that each feature has higher probability of being generated by nearby reference
locations. The second term enforces spatial constraints on the location of the feature that is generated
by topic (zi ). We enforce these spatial constraints by a binning approach. Each feature in the
foreground can lie in B possible bins with respect to the reference location. The distribution of the
spatial location of a feature is specific to the topic zi and the type of object oi. This distribution is
chosen from a symmetric Dirichlet (γ) prior. Since we do not want to enforce spatial constraints
on the locations of the features generated by topics from context, we set θ to a constant when oi
corresponds to the context of some object.
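A small sketch of equation (1) (our illustration; `bin_of` is a hypothetical helper returning the spatial bin of l relative to r, and `theta` stands for the learned multinomial over bins for the current object/topic pair):

```python
import numpy as np

def location_likelihood(l, r, theta, bin_of, sigma_s):
    """p(l | o, z, r): Gaussian falloff around the reference location r,
    times the bin-specific multinomial theta for this (object, topic)."""
    gauss = np.exp(-np.sum((np.asarray(l) - np.asarray(r)) ** 2) / sigma_s ** 2)
    return gauss * theta[bin_of(l, r)]
```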
2.2 Gibbs Sampling
We use Gibbs sampling to estimate zi and xi for each feature. Given the features (w, l), author
assignments x, the other topic assignments z−i and the hyperparameters, each zi is drawn from:

$$P(z_i \mid w, l, x, z_{-i}) \propto P(w_i \mid w_{-i}, z)\, P(z_i \mid z_{-i}, o_i)\, P(l_i \mid x_i, l_{-i}, x_{-i}, z_i)$$
$$\propto \frac{n^{z_i}_{w_i} + \beta}{n^{z_i} + W\beta} \cdot \frac{n^{o_i}_{z_i} + \alpha}{n^{o_i} + T\alpha} \cdot \frac{n^{o_i,z_i}_{B_i} + \gamma}{n^{o_i,z_i} + B\gamma} \qquad (2)$$
where n^{zi}_{wi} represents the number of features of type wi in the dataset assigned to topic zi, and n^{zi}
represents the total number of features assigned to topic zi. n^{oi}_{zi} represents the number of features
that are assigned to topic zi and an author of type oi, and n^{oi} represents the total number of features
assigned to authors of type oi. Bi is the spatial bin in which feature i lies when the reference is ri,
n^{oi,zi}_{Bi} is the number of features from object type oi and topic zi which lie in bin Bi, and n^{oi,zi}
is the total number of features from object type oi and topic zi. W is the number of word types
and T is the number of topics.
Similarly, given the features (w, l), topic assignments z, the other author assignments x−i and the
hyperparameters, each xi is drawn from:

$$P(x_i \mid w, l, z, x_{-i}) \propto P(l_i \mid x_i, l_{-i}, x_{-i}, z_i)\, P(z_i \mid o_i, z_{-i}, x_{-i})\, P(r_i \mid o_i, z_{-i}, x_{-i})$$
$$\propto \exp\left(\frac{-\|l_i - r_i\|^2}{\sigma_s^2}\right) \frac{n^{o_i,z_i}_{B_i} + \gamma}{n^{o_i,z_i} + B\gamma} \cdot \frac{n^{o_i}_{z_i} + \alpha}{n^{o_i} + T\alpha} \cdot \frac{n^{o_i}_{r_i} + \eta}{n^{o_i} + R\eta} \qquad (3)$$
where n^{oi}_{ri} represents the number of features from object type oi that have ri as the reference location,
and n^{oi} represents the total number of features from object oi. In case oi is of type context, the
second term is replaced by a constant. R represents the number of possible reference locations.
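A sketch of one conditional draw of zi following equation (2). This is our own illustration: the count-array layout and the symbols α, β, γ follow our reconstruction above, and feature i must already be excluded from all counts:

```python
import numpy as np

def sample_topic(wi, oi, Bi, counts, alpha, beta, gamma, W, T, B, rng):
    """Draw z_i from Eq. (2). The counts arrays hold the n^{..} statistics
    with feature i already removed; their layout here is assumed."""
    p = np.empty(T)
    for t in range(T):
        p[t] = ((counts['word_topic'][wi, t] + beta)
                / (counts['topic'][t] + W * beta)
                * (counts['obj_topic'][oi, t] + alpha)
                / (counts['obj'][oi] + T * alpha)
                * (counts['bin_obj_topic'][Bi, oi, t] + gamma)
                / (counts['obj_topic'][oi, t] + B * gamma))
    p /= p.sum()
    return rng.choice(T, p=p)
```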
2.3 'Shape Aware' Model
The generative model presented in section 2.1 can be learned using the Gibbs sampling approach
explained above. However, this approach has some shortcomings: (a) If there are features in the
background that exhibit a strong spatial relationship with the object, they can be labeled as foreground. (b) In clutter, the labeling performance diminishes as the discriminability of the object is
lower. The labeling performance can, however, be improved if contour cues are utilized. We do
this by requiring that the shape of the object boundary contours extracted based on feature labeling
should be similar to a sketch of the object provided in the dataset. Thus, the labeling of features into
foreground and background is not only governed by co-occurrence and structural information, but
also by shape similarity. We refer to this as a 'shape aware' model.
Shape matching using contours has, in the worst case, exponential complexity since it requires
selection of the subset of contours that best constitute the foreground boundary. We avoid this
computationally expensive challenge by solving the selection problem based on the labels of features
extracted using Gibbs sampling. The spatial author-topic model is used to attend to the contours
which are likely to be object boundaries. Our shape matching module has three steps: (a) Extracting
object boundaries based on labels extracted from the spatial author topic model. (b) Extracting
boundaries consistent with the shape model by matching. (c) Using new boundaries to determine
new labels for features.
Figure 4: Extraction of object boundaries consistent with the shape of exemplar. The first step is extraction
of contours which separate foreground and background features. This is followed by a voting process. Each
contour in the image is matched to every contour in the model to extract the center of the object. The votes are
then traced back to identify the contours consistent with the shape model.
Extracting Object Boundary Contours from Feature Labels: We first detect edges
and group them into contours using the approach presented in [16]. Each contour cj is a collection
of 2D points (pj1, pj2, . . .). Our goal is to extract boundary contours of the object using the feature
labels. Since boundary contours separate foreground and background features, an estimate
of the number of foreground and background features on each side of an image contour provides
evidence as to whether that image contour is part of the object boundary. For each contour, we
measure the number of foreground and background features that lie on each side of the contour
within some fixed distance of the contour. The probability that contour cj is a boundary contour
(c^l_j = 1) of the object, with side S1 being the interior of the object, is given by:

$$P_{S1}(c^l_j = 1 \mid x) = \frac{n^{S1}_f + \psi}{n^{S1} + 2\psi} \cdot \frac{n^{S2}_b + \psi}{n^{S2} + 2\psi} \qquad (4)$$

where n^{S1}_f is the total number of features with foreground label on side S1 of the contour, n^{S1}
is the total number of features on side S1, and n^{S2}_b and n^{S2} are defined analogously for
background features on side S2.
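A direct transcription of equation (4) as a sketch (the smoothing constant ψ is our notation for the reconstructed symbol):

```python
def boundary_prob(n_fg_s1, n_s1, n_bg_s2, n_s2, psi=1.0):
    """Eq. (4): evidence that a contour separates foreground (side S1, interior)
    from background (side S2), from smoothed feature counts near the contour."""
    return ((n_fg_s1 + psi) / (n_s1 + 2.0 * psi)) * ((n_bg_s2 + psi) / (n_s2 + 2.0 * psi))
```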
Shape Matching: Given the probabilities of each contour being a part of the object boundary, we
estimate the object center using a voting-based approach [18]. Each contour votes for the center of
the object where the weight of the vote is determined based on how well the contour matches the
sketch. Non-maximal suppression is then used to estimate the candidate object locations. Once the
candidate location of the center of the object is selected, we trace back the votes to estimate the new
boundary of the object. Figure 4 shows an example of the voting process and boundary contours
extracted using this approach.
Extracting New Labels: These boundaries are then used to relabel the image features into foreground and background. We use the same separation principle to label new features. Each boundary
contour votes as to whether a feature should be labeled foreground or background. If the feature lies
on the same side as the object center, then the contour votes for the feature as foreground. Votes are
weighted based on the probability of a contour being an object boundary. Therefore, the probability
that feature i is labeled as foreground is given by (Σj ζj δij) / (Σj ζj), where ζj is the probability that
contour j is on the object boundary and δij is a variable which is 1 if the object center and the feature
lie on the same side of contour cj, and 0 if the center is on the opposite side. The new labels are then used as an
initialization point for the next iteration of Gibbs sampling.
3 Experimental Results
We tested our 'shape-aware' model on images of cars obtained from the LabelMe dataset [17].
We randomly selected 45 images for training the model from the LabelMe dataset. A potential
concern is the number of iterations/convergence required by our iterative approach. However, it was
empirically observed that, in most cases the system stabilizes after only two iterations. It should also
be noted that each iteration between contour and feature labelings is performed after 200 iterations
Figure 5: Advantages of iterative approach. At each iteration, the author topic distribution changes, which
requires retraining the model using Gibbs sampling. This can help in two ways: (A) More Focused Attention:
The feature labeling gets refined. (B) Change of Focus: A new reference point gets chosen by the new distribution.
of Gibbs sampling. The advantages of having an iterative approach are shown in Figure 5. We
compared the performance of our system against the author-topic model and the author-topic model
with spatial constraints. We evaluated the performance of the algorithm by measuring the labeling
performance in training and test datasets. Better labeling in training is required for better model
learning. Figure 6 shows some of the cases where both the author-topic model and the author-topic model with
spatial constraints fail due to high clutter or the foreground object being too small in the training
dataset. The 'shape aware' model, however, shows better localization performance as compared to
the other two.
[Figure 6 appears here: two example images, each shown at iteration t = 0 and t = 2.]
Figure 6: Two examples of how the 'shape aware' model provides better localization compared to spatial
author-topic models. The odd columns show the results of the author-topic model (the initialization point of
the iterative approach). The even columns show the labeling provided by our algorithm after 2 iterations.
[Figure 7 appears here: two bar charts of recall and precision for the 'Shape Aware', spatial author-topic and author-topic models: (a) labeling on training data, (b) labeling on test data.]
Figure 7: Quantitative comparison of the author-topic, spatial author-topic and 'shape aware' models based on
40 randomly selected images each from the training and test datasets (approximately 17000 features each). The
values of the parameters used are T = 50, α = 50/T, β = 0.01, γ = 0.01, B = 8 and η = 0.1.
Figure 7 shows a quantitative comparison of the 'shape aware' model to the author-topic and the
spatial author-topic model. Recall ratio is defined as the ratio of features labeled as foreground to the
total number of foreground features. Precision is defined as the ratio of features correctly labeled as
foreground to the total number of features labeled as foreground. In the case of labeling in training
data, our approach outperforms both the author-topic and the spatial author-topic model. In the case of the test
dataset, the author-topic model has higher recall but very low precision. The low precision of the author-topic and spatial author-topic models can be attributed to the fact that, in many cases, the context is similar
and lies at the same relative locations across images. This leads to modeling errors - these features are
learned to be part of the object. In the case of the 'shape aware' model, the shape of the object helps
in pruning these features and therefore leads to much higher precision. Low recall rates in our model
and the spatial author-topic model arise because some foreground features do not satisfy the spatial
Figure 8: Example of the performance of the three models on a test image. The 'Shape Aware' model shows high
precision in label prediction due to the pruning provided by shape matching. The author-topic model shows high
recall rates because of high similarity in context across images.
Figure 9: A few examples of labeling in the test dataset.
constraints and hence are falsely labeled as background features. Figure 9 shows some examples of
the performance of the 'shape aware' model on the test dataset.
Acknowledgements
This research was funded by the US Government's VACE program and an NSF-IIS-04-47953 (CAREER) award. The
authors would also like to thank Qihui Zhu for providing the code for extracting contours.
References
[1] G. Elidan, G. Heitz and D. Koller, Learning Object Shape: From Drawings to Images, IEEE CVPR 2006.
[2] X. Wang and E. Grimson, Spatial Latent Dirichlet Allocation, NIPS 2007.
[3] E. Sudderth, A. Torralba, W.T Freeman and A.S Wilsky, Learning Hierarchical Models of Scenes, Objects
and Parts, ICCV 2005.
[4] T.L Griffiths, M Steyvers, D.M Blei and J.B Tenenbaum, Integrating Topics and Syntax, NIPS 2005.
[5] D.J Crandall and D.P Huttenlocher, Weakly Supervised Learning of Part-Based Spatial Models for Visual
Object Recognition, ECCV 2006.
[6] D. Blei, A. Ng and M. Jordan, Latent Dirichlet Allocation, Journal of Machine Learning Research, 2003.
[7] T. Hofmann, Unsupervised learning by probabilistic latent semantic analysis, Machine Learning 2001.
[8] R. Fergus, L. Fei-Fei, P. Perona and A. Zisserman, Learning Object Categories from Google's Image
Search, ICCV 2005.
[9] R. Fergus, P. Perona and A. Zisserman, Object Class Recognition by Unsupervised Scale-Invariant Learning, CVPR 2003.
[10] L. Cao and L. Fei-Fei, Spatially coherent latent topic model for concurrent object segmentation and
classification, ICCV 2007.
[11] B. Russell, A. Efros, J. Sivic, W. Freeman and A. Zisserman, Using Multiple Segmentations to Discover
Objects and their Extent in Image Collections, CVPR 2006.
[12] T.L Griffiths and M. Steyvers, Finding Scientific Topics, PNAS 2004.
[13] M. Rosen-Zvi, T. Griffiths, M. Steyvers and P. Smyth, The Author-Topic Model for Authors and Documents, UAI 2004
[14] M. Lesk, Automatic Sense Disambiguation Using Marchine Readable Dictionaries: How to Tell a Pine
Cone from Ice Cream Cone, SIGDOC 1986.
[15] D. Yarowsky, Word Sense Disambiguation Using Statistical Models of Roget's Categories trained on
Large Corpora, COLING 1992.
[16] Q. Zhi, G. Song and J. Shi, Untangling Cycles for Contour Grouping, ICCV 2007.
[17] B. C. Russell, A. Torralba, K. P. Murphy, W. T. Freeman, LabelMe: a Database and Web-based Tool for
Image Annotation, IJCV 2008.
[18] B. Leibe, A. Leonardis and B. Schiele, Combined Object Categorization and Segmentation with an Implicit
Shape Model, ECCV workshop on Statistical Learning in Vision, 2006.
| 3533 |@word nd:2 retraining:1 plsa:1 selecting:1 document:8 outperforms:1 contextual:4 od:2 com:1 informative:4 hofmann:1 shape:46 cue:3 generative:3 selected:4 blei:2 provides:5 iterates:2 location:22 become:1 ijcv:1 combine:3 manner:1 falsely:1 upenn:1 expected:1 inspired:1 freeman:3 resolve:1 zhi:1 provided:4 discover:1 matched:2 maximizes:1 finding:1 differentiation:1 quantitative:2 every:1 voting:4 jianbo:1 hit:2 yarowsky:1 positive:1 ice:1 attend:1 treat:1 lsd:1 id:1 approximately:1 discriminability:1 initialization:4 co:6 bi:2 enforces:1 matching:9 word:23 integrating:3 griffith:3 suggest:1 get:2 interior:1 selection:3 nb:1 context:19 applying:2 center:8 shi:1 attention:1 cluttered:2 focused:1 rule:2 steyvers:3 analogous:4 smyth:1 hypothesis:1 recognition:7 expensive:5 utilized:1 putationally:1 huttenlocher:1 labeled:17 database:2 binning:1 observed:1 module:1 wang:1 capture:2 worst:1 nbi:2 ensures:1 cycle:1 russell:2 noi:8 grimson:1 environment:1 complexity:1 schiele:1 trained:1 weakly:8 solving:1 roget:1 localization:3 untangling:1 surrounding:1 univ:2 separated:1 shortcoming:1 detected:1 dichotomy:1 labeling:22 crandall:1 tell:1 refined:1 cvpr:3 drawing:1 statistic:3 syntactic:3 final:1 advantage:2 propose:1 maximal:1 neighboring:1 cao:1 intervening:2 description:1 convergence:1 requirement:1 distribution3:1 categorization:1 object:95 help:2 exemplar:4 ij:2 odd:1 strong:6 c:2 involves:2 closely:1 subsequently:1 stringent:1 larry:1 bin:3 require:1 government:1 suffices:1 exp:2 stabilizes:1 pine:1 efros:1 dictionary:1 torralba:2 purpose:1 estimation:1 diminishes:1 applicable:1 bag:9 label:17 concurrent:1 tool:1 weighted:1 avoid:1 focus:1 vace:1 likelihood:1 contrast:1 suppression:1 sense:3 rear:1 hidden:1 koller:1 perona:2 labelings:1 i1:1 semantics:2 classification:2 spatial:46 aware:18 once:1 extraction:3 having:1 sampling:10 ng:1 represents:12 park:1 unsupervised:2 foreground:36 rosen:1 employ:1 few:1 randomly:2 simultaneously:3 murphy:1 replaced:1 occlusion:2 interest:1 highly:1 pj2:1 pj1:1 analyzed:1 accurate:1 edge:2 unless:1 tree:1 column:2 modeling:4 cover:1 measuring:1 assignment:7 subset:3 uniform:1 recognizing:2 front:1 too:1 zvi:1 combined:3 probabilistic:1 ambiguity:1 li:10 potential:1 star:1 includes:1 availability:1 satisfy:3 depends:2 ad:2 performed:1 red:2 start:1 annotation:1 oi:19 identify:2 yellow:1 served:1 simultaneous:1 against:1 associated:3 mi:1 attributed:1 sampled:1 dataset:13 recall:6 knowledge:2 color:1 car:8 segmentation:5 cj:2 back:2 higher:3 supervised:3 specify:1 improved:1 zisserman:3 evaluated:1 though:1 implicit:1 sketch:7 web:1 lack:1 google:1 lda:1 scientific:1 ns1:2 requiring:2 hence:1 assigned:5 spatially:1 symmetric:5 semantic:4 attractive:1 noted:1 criterion:1 syntax:2 reasoning:1 image:44 meaning:4 jack:2 multinomial:1 jshi:1 empirically:1 overview:1 belong:1 significant:3 refer:1 gibbs:10 rd:1 automatic:1 similarly:1 funded:1 similarity:4 add:1 recent:1 belongs:1 captured:1 employed:1 prune:1 determine:2 elidan:1 semi:1 ii:1 multiple:1 pnas:1 reduces:1 match:2 award:1 prediction:1 relabel:3 vision:2 iteration:8 achieved:2 background:22 want:1 sudderth:2 source:1 wilsky:1 umd:2 subject:1 cream:1 flow:1 jordan:1 extracting:6 structural:1 presence:3 enough:1 ps1:1 zi:25 pennsylvania:1 identified:3 opposite:1 rod:1 whether:2 motivated:1 syntactical:3 utility:1 qihui:1 song:1 constitute:1 generally:2 clutter:5 authored:1 locally:2 tenenbaum:1 category:2 reduced:1 generate:1 marchine:1 nsf:1 sign:4 nzi:2 correctly:1 discrete:1 group:1 
traced:1 drawn:3 cone:2 place:1 throughout:1 extends:1 family:1 utilizes:1 separation:3 disambiguation:3 drink:1 followed:4 correspondence:1 occur:2 constraint:15 fei:4 scene:5 ri:10 nearby:1 poor:1 belonging:1 conjugate:1 across:2 wi:6 labellings:1 s1:5 explained:1 iccv:4 invariant:1 computationally:2 previously:1 fail:3 available:1 apply:1 leibe:1 hierarchical:1 enforce:3 occurrence:5 dirichlet:8 graphical:1 readable:1 ofwords:2 exhibit:2 amongst:1 distance:1 separate:4 maryland:1 thank:1 me:1 topic:66 argue:1 extent:1 code:1 o1:1 relationship:9 insufficient:1 ratio:3 providing:1 trace:1 perform:1 datasets:1 retrained:1 required:3 sentence:3 sivic:1 coherent:1 learned:5 nip:2 leonardis:1 bar:3 challenge:1 program:1 including:1 natural:2 treated:1 zhu:1 representing:3 brief:1 abhinav:1 identifies:1 coupled:1 extract:2 occurence:1 text:2 prior:5 understanding:1 acknowledgement:1 determining:1 relative:3 marginalizing:2 allocation:2 integrate:2 sufficient:2 consistent:6 principle:2 clj:2 eccv:2 summary:1 side:8 understand:1 sparse:4 boundary:30 overcome:1 heitz:1 world:1 contour:59 author:51 collection:2 approximate:1 pruning:2 ignore:1 uai:1 corpus:1 fergus:3 xi:8 search:1 iterative:5 latent:4 table:1 additionally:1 reassigned:1 learn:5 career:1 ignoring:1 quantize:1 constructing:1 dense:2 s2:4 hyperparameters:2 precision:7 fails:1 exponential:3 lie:5 governed:1 candidate:2 learns:1 coling:1 specific:3 constellation:1 evidence:1 concern:1 grouping:1 workshop:1 importance:2 ci:1 appearance:2 likely:4 visual:3 corresponds:1 extracted:4 goal:3 labelme:2 change:2 determined:1 uniformly:1 total:8 experimental:1 vote:7 select:1 college:1 internal:1 dept:2 mcmc:2 tested:1 |
2,795 | 3,534 | Relative Margin Machines
Pannagadatta K Shivaswamy and Tony Jebara
Department of Computer Science, Columbia University, New York, NY
pks2103,[email protected]
Abstract
In classification problems, Support Vector Machines maximize the margin
of separation between two classes. While the paradigm has been successful, the solution obtained by SVMs is dominated by the directions with
large data spread and biased to separate the classes by cutting along large
spread directions. This article proposes a novel formulation to overcome
such sensitivity and maximizes the margin relative to the spread of the
data. The proposed formulation can be efficiently solved and experiments
on digit datasets show drastic performance improvements over SVMs.
1 Introduction
The goal of most machine learning problems is to generalize from a limited number of
training examples. For example, in support vector machines [10] (SVMs) a hyperplane 1
of the form w? x + b = 0, w ? Rm , x ? Rm , b ? R is recovered as a decision boundary
after observing a limited number of training examples. The parameters of the hyperplane
(w, b) are estimated by maximizing the margin (the distance between w? x + b = 1 and
w? x + b = ?1) while minimizing a weighted upper bound on the misclassification rate on
the training data (the so called slack variables). In practice, the margin is maximized by
minimizing 21 w? w.
While this works well in practice, we point out that merely changing the scale of the data
can give a different solution. On one hand, an adversary can exploit this shortcoming to
transform the data so as to give bad performance. More distressingly, this shortcoming
can naturally lead to a bad performance especially in high dimensional settings. The key
problem is that SVMs simply find a large margin solution giving no attention to the spread
of the data. An excellent discriminator lying in a dimension with relatively small data
spread may be easily overlooked by the SVM solution. In this paper, we propose novel
formulations to overcome such a limitation. The crux here is to find the maximum margin
solution with respect to the spread of the data in a relative sense rather than finding the
absolute large margin solution.
Linear discriminant analysis finds a projection of the data so that the inter-class separation
is large while within class scatter is small. However, it only makes use of the first and
the second order statistics of the data. Feature selection with SVMs [12] removes features that have
low discriminative value. Ellipsoidal kernel machines [9] normalize data in feature space
by estimating bounding ellipsoids. While these previous methods showed performance improvements, both relied on multiple-step locally optimal algorithms for interleaving spread
information with margin estimation. Recently, additional examples were used to improve
the generalization of the SVMs with so-called 'Universum' samples [11]. Instead of leveraging additional data or additional model assumptions such as axis-aligned feature selection,
1 In this paper we use the dot product w⊤x with the understanding that it can be replaced with
an inner product.
the proposed method overcomes what seems to be a fundamental limitation of the SVMs
and subsequently yields improvements in the same supervised setting. In addition, the formulations derived in this paper are convex, can be efficiently solved and admit some useful
generalization bounds.
Notation: Boldface letters indicate vectors/matrices. For two vectors u ∈ ℝm and v ∈ ℝm,
u ≤ v indicates that ui ≤ vi for all i from 1 to m. 1, 0 and I denote the vectors of all ones,
all zeros and the identity matrix respectively. Their dimensions are clear from the context.
[Figure 1 appears here: top row, three scatter plots (axes −4 to 4) of the two-class toy data under increasing scaling of the x-axis, with the SVM and maximum relative margin boundaries overlaid; bottom row, the corresponding projections of the examples onto the real line.]
Figure 1: Top: As the data is scaled along the x-axis, the SVM solution (red or dark shade)
deviates from the maximum relative margin solution (green or light shade). Bottom: The
projections of the examples in the top row on the real line for the SVM solution (red or
dark shade) and the proposed classifier (green or light shade) in each case.
2 Motivation with a two dimensional example
Let us start with a simple two dimensional toy dataset to illustrate a problem with the
SVM solution. Consider the binary classification example shown in the top row of Figure
1 where squares denote examples from one class and triangles denote examples from the
other class. Consider the leftmost plot in the top row of Figure 1. One possible decision
boundary separating the two classes is shown in green (or light shade). The solution shown
in red (or dark shade) is the SVM estimate; it achieves the largest margin possible while
still separating both the classes. Is this necessarily ?the best? solution?
Let us now consider the same set of points after scaling the x-axis in the second and the
third plots. With progressive scaling, the SVM increasingly deviates from the green solution,
clearly indicating that the SVM decision boundary is sensitive to affine transformations of
the data and produces a family of different solutions as a result. This sensitivity to scaling
and affine transformations is worrisome. If there is a best and a worst solution in the family
of SVM estimates, there is always the possibility that an adversary exploits this scaling such
that the SVM solution we recover is poor. Meanwhile, an algorithm producing the green
decision boundary remains resilient to such adversarial scalings.
In the previous example, a direction with a small spread in the data produced a good
discriminator. Merely finding a large margin solution, on the other hand, does not recover
the best possible discriminator. This particular weakness in large margin estimation has
only received limited attention in previous work. In the above example, suppose each class is
generated from a one dimensional distribution on a line with the two classes on two parallel
lines. In this case, the green decision boundary should obtain zero test error even if it is
estimated from a finite number of samples. However, for finite training data, the SVM
solution will make errors and will do so increasingly as the data is scaled along the x-axis.
Using kernels and nonlinear mappings may help in some cases but might also exacerbate
such problems. Similarly, simple prepossessing of the data (affine ?whitening? to make the
dataset zero mean and unit covariance or scaling to place the data into a zero-one box) may
fail to resolve such problems.
For more insight, consider the uni-dimensional projections of the data given by the green and
red solutions in the bottom row of Figure 1. In the green solution, all points in the first class
are mapped to a single coordinate and all points in the other class are mapped to another
(distinct) coordinate. Meanwhile, the red solution produces more dispersed projections of
the two classes. As the adversarial scaling is increased, the spread of the projection in the
SVM solution increases correspondingly. Large margins are not sufficient on their own and
what is needed is a way to also control the spread of the data after projection. Therefore,
rather than just maximizing the margin, a trade-off regularizer should also be used to
minimize the spread of the projected data. In other words, we will couple large margin
estimation with regularization which seeks to bound the spread |w⊤x + b| of the data. This
will allow the linear classifier to recover large margin solutions not in the absolute sense but
rather relative to the spread of the data in that projection direction.
3 Formulations
Given (x_i, y_i)_{i=1}^n, where x_i ∈ R^m and y_i ∈ {±1} are drawn independent and identically distributed from a distribution Pr(x, y), the Support Vector Machine primal formulation² is as follows:

\min_{w,b,\xi\ge 0} \; \frac{1}{2}\|w\|^2 + C\,\xi^\top \mathbf{1} \quad \text{s.t.} \quad y_i(w^\top x_i + b) \ge 1 - \xi_i, \;\; \forall\, 1 \le i \le n. \qquad (1)
The above formulation minimizes an upper bound on the misclassification while maximizing
the margin (the two quantities are traded off by C). In practice, the following dual of the
formulation (1) is solved:
\max_{0\le\alpha\le C\mathbf{1}} \; \sum_{i=1}^n \alpha_i - \frac{1}{2}\sum_{i=1}^n\sum_{j=1}^n \alpha_i\alpha_j y_i y_j\, x_i^\top x_j \quad \text{s.t.} \quad \alpha^\top y = 0. \qquad (2)
It is easy to see that the above formulation (2) is rotation invariant; if all the x_i are replaced
by Ax_i where A ∈ R^{m×m}, A^⊤A = I, then the solution remains the same. However, the
solution is not guaranteed to be the same when A is not a rotation matrix. In addition, the
solution is sensitive to translations as well.
Typically, the dot product between the examples is replaced by a kernel function k : R^m × R^m → R such that k(x_i, x_j) = φ(x_i)^⊤φ(x_j), where φ : R^m → H is a mapping to a Hilbert
space, to obtain non-linear decision boundaries in the input space. Thus, in (2), x_i^⊤x_j is
replaced by k(x_i, x_j) to obtain non-linear solutions. In the rest of this paper, we denote by
K ∈ R^{n×n} the Gram matrix, whose individual entries are given by K_{ij} = k(x_i, x_j).
Next, we consider the formulation which corresponds to whitening the data with the covariance matrix. Denote by Σ = (1/n)Σ_{i=1}^n x_i x_i^⊤ − (1/n²)(Σ_{i=1}^n x_i)(Σ_{j=1}^n x_j)^⊤ and μ = (1/n)Σ_{i=1}^n x_i the
sample covariance and mean respectively. Consider the following formulation, which we call
Σ-SVM:

\min_{w,b,\xi\ge 0} \; \frac{1-D}{2}\|w\|^2 + \frac{D}{2}\|\Sigma^{\frac{1}{2}}w\|^2 + C\,\xi^\top\mathbf{1} \quad \text{s.t.} \quad y_i\big(w^\top(x_i-\mu)+b\big) \ge 1-\xi_i, \qquad (3)

where 0 ≤ D ≤ 1 is an additional parameter that trades off between the two regularization
terms.
The dual of (3) can be shown to be:

\max_{0\le\alpha\le C\mathbf{1},\; y^\top\alpha=0} \; \sum_{i=1}^n \alpha_i - \frac{1}{2}\sum_{i=1}^n\sum_{j=1}^n \alpha_i\alpha_j y_i y_j\, (x_i-\mu)^\top\big((1-D)I + D\Sigma\big)^{-1}(x_j-\mu). \qquad (4)
² After this formulation, we stop explicitly writing ∀1 ≤ i ≤ n since it will be obvious from the context.
It is easy to see that the above formulation (4) is translation invariant and tends to an affine
invariant solution when D tends to one. When 0 < D < 1, it can be shown, by using the
Woodbury matrix inversion formula, that the above formulation can be "kernelized" simply
by replacing the dot products x_i^⊤x_j in (2) by:

\frac{1}{1-D}\left(k(x_i,x_j) - \frac{K_i^\top\mathbf{1}}{n} - \frac{K_j^\top\mathbf{1}}{n} + \frac{\mathbf{1}^\top K\mathbf{1}}{n^2}\right) - \frac{D}{(1-D)^2}\left(K_i - \frac{K\mathbf{1}}{n}\right)^{\!\top} \left(\frac{1-D}{D}\,I + \Big(I-\frac{\mathbf{1}\mathbf{1}^\top}{n}\Big)K\Big(I-\frac{\mathbf{1}\mathbf{1}^\top}{n}\Big)\right)^{-1} \left(K_j - \frac{K\mathbf{1}}{n}\right),
where K_i is the i-th column of K. For D = 0 and D = 1, it is much easier to obtain the
kernelized formulations. Note that the above formula involves a matrix inversion of size n,
making the kernel computation alone O(n³).
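A direct transcription of the kernel correction as reconstructed above; the coefficients of the second term and the centred-Gram form of the inverted matrix are our reading of a garbled original, so verify against the source before relying on them. The first bracket equals the doubly centred Gram matrix, which the code exploits.

```python
# Sketch of the whitened (Sigma-SVM) kernel from the formula above.
import numpy as np

def sigma_svm_kernel(K, D):
    """Modified Gram matrix for the Sigma-SVM dual, 0 < D < 1 (assumed form)."""
    n = K.shape[0]
    one = np.ones((n, 1))
    C = np.eye(n) - (one @ one.T) / n     # centring matrix I - 11^T/n
    Kc = C @ K @ C                        # doubly centred Gram = the first bracket
    V = K - (K @ one) @ one.T / n         # column i holds K_i - K1/n
    M = ((1.0 - D) / D) * np.eye(n) + Kc  # n x n inversion: the O(n^3) cost
    return Kc / (1.0 - D) - (D / (1.0 - D) ** 2) * V.T @ np.linalg.solve(M, V)
```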
3.1 RMM and its geometrical interpretation
From Section 2, it is clear that a large margin in the absolute sense might be deceptive and
could merely be a by-product of bad scaling of the data. To overcome this limitation, as
we pointed out earlier, we need to bound the projections of the training examples as well.
As in the two dimensional example, it is necessary to trade off between the margin and the
spread of the data. We propose a slightly modified formulation in the next section that
can be solved efficiently. For now, we write the following formulation, mainly to show how
it compares with the Σ-SVM. In addition, writing the dual of the following formulation
gives some geometric intuition. Since we trade off between the projections and the margin,
implicitly, we find a large relative margin. Thus we call the following formulation the Relative
Margin Machine (RMM):
\min_{w,b,\xi\ge 0} \; \frac{1}{2}\|w\|^2 + C\,\xi^\top\mathbf{1} \quad \text{s.t.} \quad y_i(w^\top x_i + b) \ge 1 - \xi_i, \quad \frac{1}{2}(w^\top x_i + b)^2 \le \frac{B^2}{2}. \qquad (5)
This is a quadratically constrained quadratic problem (QCQP). This formulation has one
extra parameter B in addition to the SVM parameter. Note that B ≥ 1, since having a
B less than one would mean none of the examples could satisfy y_i(w^⊤x_i + b) ≥ 1. Let
w_C and b_C be the solutions obtained by solving the SVM (1) for a particular value of C;
then B > max_i |w_C^⊤x_i + b_C| makes the constraint on the second line in the formulation (5)
inactive for each i, and the solution obtained is the same as the SVM estimate.
For smaller B values, we start getting different solutions. Specifically, with a smaller B, we
still find a large margin solution such that all the projections of the training examples are
bounded by B. Thus by trying out different B values, we explore different large margin
solutions with respect to the projection and spread of the data.
In the following, we assume that the value of B is smaller than the threshold mentioned
above. The Lagrangian of (5) is given by:

\frac{1}{2}\|w\|^2 + C\,\xi^\top\mathbf{1} - \sum_{i=1}^n \alpha_i\big(y_i(w^\top x_i + b) - 1 + \xi_i\big) - \xi^\top\beta + \sum_{i=1}^n \lambda_i\Big(\frac{1}{2}(w^\top x_i + b)^2 - \frac{B^2}{2}\Big),

where α, β, λ ≥ 0 are the Lagrange multipliers corresponding to the constraints. Differentiating with respect to the primal variables and equating the derivatives to zero, it can be shown
that:

\Big(I + \sum_{i=1}^n \lambda_i x_i x_i^\top\Big)w - b\sum_{i=1}^n \lambda_i x_i = \sum_{i=1}^n \alpha_i y_i x_i, \qquad b = \frac{1}{\lambda^\top\mathbf{1}}\Big(\sum_{i=1}^n \alpha_i y_i - \sum_{i=1}^n \lambda_i w^\top x_i\Big), \qquad C\mathbf{1} = \alpha + \beta.

Denoting by \Sigma_\lambda = \sum_{i=1}^n \lambda_i x_i x_i^\top - \frac{1}{\lambda^\top\mathbf{1}}\big(\sum_{i=1}^n \lambda_i x_i\big)\big(\sum_{j=1}^n \lambda_j x_j\big)^\top and by \mu_\lambda = \frac{1}{\lambda^\top\mathbf{1}}\sum_{j=1}^n \lambda_j x_j,
the dual of (5) can be shown to be:

\max_{0\le\alpha\le C\mathbf{1},\;\lambda\ge 0} \; \sum_{i=1}^n \alpha_i - \frac{1}{2}\sum_{i=1}^n\sum_{j=1}^n \alpha_i y_i\, (x_i-\mu_\lambda)^\top (I+\Sigma_\lambda)^{-1}\, \alpha_j y_j\, (x_j-\mu_\lambda) - \frac{B^2}{2}\,\lambda^\top\mathbf{1}. \qquad (6)
Note that the above formulation (6) is translation invariant since μ_λ is subtracted from each x_i.
Σ_λ corresponds to a "shape matrix" (potentially low rank) determined by the x_i's that have
non-zero λ_i. From the KKT conditions of (5), λ_i((1/2)(w^⊤x_i + b)² − B²/2) = 0. Consequently,
λ_i > 0 implies (1/2)(w^⊤x_i + b)² = B²/2.
Geometrically, in the above formulation (6), the data is whitened with the matrix (I + Σ_λ)
while solving the SVM. While this is similar to what is done by the Σ-SVM, the matrix (I + Σ_λ)
is determined jointly, considering both the margin of the data and the spread. In contrast,
in Σ-SVM, whitening is simply a preprocessing step which can be done independently of the
margin. Note that the constraint (1/2)(w^⊤x_i + b)² ≤ B²/2 can be relaxed with slack variables at
the expense of one additional parameter; however, this will not be investigated in this paper.
The proposed formulation is of limited use unless it can be solved efficiently. Solving (6)
amounts to solving a semi-definite program; it cannot scale beyond a few hundred data
points. Thus, for an efficient solution, we consider a different but equivalent formulation.
Note that the constraint (1/2)(w^⊤x_i + b)² ≤ B²/2 can be equivalently posed as two linear
constraints: (w^⊤x_i + b) ≤ B and −(w^⊤x_i + b) ≤ B. With these constraints replacing
the quadratic constraint, we have a quadratic program to solve. In the primal, we have 4n
constraints (including ξ ≥ 0) instead of the 2n constraints in the SVM. Thus, solving the RMM
as a standard QP has the same order of complexity as the SVM. In the next section, we
briefly explain how the RMM can be solved efficiently from the dual.
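For readers who want a working baseline before the specialized solver of the next section, the linear-constraint primal drops straight into an off-the-shelf convex solver. The sketch below uses CVXPY; this is our choice of tooling, not the paper's (the paper's own implementation is the SVMlight-style dual method of Section 3.2).

```python
# Minimal RMM primal with linear projection constraints, via CVXPY.
import cvxpy as cp
import numpy as np

def rmm_primal(X, y, C=1.0, B=2.0):
    n, m = X.shape
    w = cp.Variable(m)
    b = cp.Variable()
    xi = cp.Variable(n, nonneg=True)
    proj = X @ w + b
    constraints = [cp.multiply(y, proj) >= 1 - xi,   # SVM margin constraints
                   proj <= B,                        # bound projections above
                   -proj <= B]                       # ... and below
    objective = cp.Minimize(0.5 * cp.sum_squares(w) + C * cp.sum(xi))
    cp.Problem(objective, constraints).solve()
    return w.value, b.value
```

This is a standard QP with 4n constraints, matching the complexity claim above.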
3.2 Fast algorithm
The main idea for the fast algorithm is to have linear constraints bounding the projections
rather than quadratic constraints. The fast algorithm that we developed is based on SVMlight
[5]. We first write the equivalent of (5) with linear constraints:

\min_{w,b,\xi\ge 0} \; \frac{1}{2}\|w\|^2 + C\,\xi^\top\mathbf{1} \quad \text{s.t.} \quad y_i(w^\top x_i + b) \ge 1 - \xi_i, \quad w^\top x_i + b \le B, \quad -w^\top x_i - b \le B. \qquad (7)
The dual of (7) can be shown to be the following:

\max_{\alpha,\lambda,\lambda^*} \; -\frac{1}{2}(\alpha\circ y - \lambda + \lambda^*)^\top K\, (\alpha\circ y - \lambda + \lambda^*) + \alpha^\top\mathbf{1} - B\,\lambda^\top\mathbf{1} - B\,\lambda^{*\top}\mathbf{1} \qquad (8)

\text{s.t.} \quad \alpha^\top y - \lambda^\top\mathbf{1} + \lambda^{*\top}\mathbf{1} = 0, \quad 0\le\alpha\le C\mathbf{1}, \quad \lambda,\lambda^*\ge 0,

where the operator ∘ denotes the element-wise product of two vectors.
The above QP (8) is solved in an iterative way. In each step, only a subset of the dual
variables is optimized. Let us say q, r and s (q̄, r̄ and s̄) are the indices of the free (fixed)
variables in α, λ and λ* respectively (such that q ∪ q̄ = {1, 2, ..., n} and q ∩ q̄ = ∅, and similarly
for the other two index sets) in a particular iteration. Then the optimization over the free
variables in that step can be expressed as:
"
#? "
#"
#
Kqq ?Kqr Kqs
?q ? yq
1 ?q ? yq
?r
?Krq Krr ?Krs
?r
max ?
(9)
?q ,?r ,??
2
s
??
K
?K
K
??
sq
s
1
?
2
"
?q ? yq
?r
??s
#? "
sr
Kqq? ?Kq?r
?Krq? Kr?r
Ks?q ?Ks?r
ss
Kq?s
?Kr?s
Ks?s
s
#"
?q? ? yq?
?r?
??s?
#
?
??
+ ??
q 1 ? B?r 1 ? B?s 1
?
??
?
?
??
?
s.t. ??
q yq ? ?r 1 + ?s 1 = ??q? yq? + ?r? 1 ? ?s? 1, 0 ? ?q ? C1, ?r , ?s ? 0.
Note that while the first term in the objective above is quadratic in the free variables (over
which it is optimized), the second term is only linear.
The algorithm solves a small sub-problem like (9) in each step until the KKT conditions
of the formulation (8) are satisfied to a given tolerance. In each step, the free variables are
selected using heuristics similar to those in SVMlight, but slightly adapted to our formulation.
We omit the details due to lack of space. Since only a small subset of the variables is
optimized, book-keeping can be done efficiently in each step. Moreover, the algorithm can
be warm-started with a previous solution, just like SVMlight.
4 Experiments
Experiments were carried out on three sets of digits: optical digits from the UCI machine
learning repository [1], USPS digits [6] and MNIST digits [7]. These datasets have different
numbers of features (64 in optical digits, 256 in USPS and 784 in MNIST) and training
examples (3823 in optical digits, 7291 in USPS and 60000 in MNIST). In all these multi-class
experiments, a one-versus-one classification strategy was used. We start by noting that,
on the MNIST test set, an improvement of 0.1% is statistically significant [3, 4]. This
corresponds to 10 or fewer errors by one method over another on the MNIST test set.
All the parameters were tuned by splitting the training data in each case in the ratio 80:20
and using the smaller split for validation and the larger split for training. The process
was repeated five times over random splits to pick the best parameters (C for SVM, C and
D for Σ-SVM, and C and B for RMM). A final classifier was trained for each of the 45
classification problems with the best parameters found from cross validation, using all the
training examples in those classes.
In the case of MNIST digits, training Σ-SVM and KLDA is prohibitive since they involve
inverting a matrix. So, to compare all the methods, we conducted an experiment with 1000
training examples. For the larger experiments we simply excluded Σ-SVM and KLDA.
The larger experiment on MNIST consisted of training with two thirds of the digits (note
that this amounts to training with 8000 examples on average for each pair of digits) for
each binary classification task. In both experiments, the remaining training data was
used as a validation set. The classifier that performed the best on the validation set was
used for testing.
Once we had 45 classifiers for each pair of digits, testing was done on the separate test set
available in each of these three datasets (1797 examples in the case of optical digits, 2007
examples in USPS and 10000 examples in MNIST). The final prediction given for each test
example was based on the majority of predictions made by the 45 classifiers on the test
example with ties broken uniformly at random.
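The voting scheme just described takes only a few lines to implement; the classifier interface below (a `predict` method returning ±1) is a hypothetical stand-in for the trained readouts, not the paper's code.

```python
# One-versus-one majority vote over the 45 pairwise digit classifiers,
# with ties broken uniformly at random as in the text.
import numpy as np
from itertools import combinations

def ovo_predict(classifiers, x, n_classes=10, rng=np.random.default_rng()):
    votes = np.zeros(n_classes)
    for (a, b), clf in zip(combinations(range(n_classes), 2), classifiers):
        votes[a if clf.predict(x) > 0 else b] += 1   # clf.predict returns +/-1
    winners = np.flatnonzero(votes == votes.max())
    return rng.choice(winners)                       # uniform tie-breaking
```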
Table 1 shows the results on all three datasets for the polynomial kernel with various degrees
and the RBF kernel. For each dataset, we report the number of misclassified examples using
the majority voting scheme mentioned above. It can be seen that while Σ-SVM usually
performs much better compared to SVM, RMM performs even better than Σ-SVM in most
cases. Interestingly, with higher degree kernels, Σ-SVM seems to match the performance
of the RMM, but with most of the lower degree kernels, RMM outperforms both SVM and
Σ-SVM convincingly. Since Σ-SVM is prohibitive to run on large scale datasets, the RMM
was clearly the most competitive method in these experiments.
Training with entire MNIST We used the best parameters found by cross-validation
in the previous experiments on MNIST and trained 45 classifiers for both SVM and RMM
with all the training examples for each class in MNIST for various kernels. The test results
are reported in Table 1; the advantage still carries over to the full MNIST dataset.
Figure 2: Log run time versus log number of examples from 1000 to 10000 in steps of 1000, for SVM and for RMM with bounds B1, B2 and B3.
Dataset      Method       1     2     3     4     5     6     7   RBF
OPT          SVM         71    57    54    47    40    46    46    51
             Σ-SVM       61    48    41    36    35    31    29    47
             KLDA        71    57    54    47    40    46    46    45
             RMM         71    36    32    31    33    30    29    51
USPS         SVM        145   109   109   103   100    95    93   104
             Σ-SVM      132   108    99    94    89    87    90    97
             KLDA       132   119   121   117   114   118   117   101
             RMM        153   109    94    91    91    90    90    98
1000-MNIST   SVM        696   511   422   380   362   338   332   670
             Σ-SVM      671   470   373   341   322   309   303   673
             KLDA      1663   848   591   481   430   419   405  1597
             RMM        689   342   319   301   298   290   296   613
2/3-MNIST    SVM        552   237   200   183   178   177   164   166
             RMM        534   164   148   140   123   129   129   144
Full MNIST   SVM        536   198   170   156   157   141   136   146
             RMM        521   146   140   130   119   116   115   129

Table 1: Number of digits misclassified with various kernels (polynomial degrees 1-7 and RBF) by SVM, Σ-SVM, KLDA and RMM for three different datasets.
Run time comparison We studied the empirical run times using the MNIST digits 3 vs
8 and a polynomial kernel of degree two. The tolerance was set to 0.001 in both cases.
The size of the sub-problem (9) solved was 500 in all cases. The number of training
examples was increased in steps of 1000 and the training time was noted. The C value was
set at 1000. SVM was first run on the training examples, and the value of the maximum absolute
prediction ρ was noted. We then tried three different values of B for RMM: B1 = 1 + (ρ−1)/2,
B2 = 1 + (ρ−1)/4, B3 = 1 + (ρ−1)/10. In all cases, the run time was noted. We show
a log-log plot comparing the number of examples to the run time in Figure 2. Both SVM
and RMM have similar asymptotic behavior. However, in many cases, warm starting RMM
with the previous solution significantly helped in reducing the run times.
5 Conclusions
We identified a sensitivity of Support Vector Machines and maximum absolute margin criteria to affine scalings. These classifiers are biased towards producing decision boundaries
that separate data along directions with large data spread. The Relative Margin Machine
was proposed to overcome such a problem and optimizes the projection direction such that
the margin is large only relative to the spread of the data. By deriving the dual with
quadratic constraints, a geometrical interpretation was also formulated for RMMs. An implementation for RMMs requiring only additional linear constraints in the SVM quadratic
program leads to a competitively fast implementation. Experiments showed that while affine
transformations (as in the Σ-SVM) can improve over the SVM, the RMM performs even better in practice.
The maximization of relative margin is fairly promising as it is compatible with other popular
problems handled by the SVM framework such as ordinal regression, structured prediction
etc. These are valuable future extensions for the RMM. Furthermore, the constraints that
bound the projection are unsupervised; thus RMMs can readily work in semi-supervised
and transduction problems. We will study these extensions in detail in an extended version
of this paper.
References
[1] A. Asuncion and D. J. Newman. UCI machine learning repository, 2007.
[2] P. L. Bartlett and S. Mendelson. Rademacher and Gaussian complexities: Risk bounds and structural results. Journal of Machine Learning Research, 3:463-482, 2002.
[3] Y. Bengio, P. Lamblin, D. Popovici, and H. Larochelle. Greedy layer-wise training of deep networks. In Advances in Neural Information Processing Systems 19, pages 153-160. MIT Press, Cambridge, MA, 2007.
[4] D. Decoste and B. Schölkopf. Training invariant support vector machines. Machine Learning, pages 161-190, 2002.
[5] T. Joachims. Making large-scale support vector machine learning practical. In Advances in Kernel Methods: Support Vector Machines. MIT Press, Cambridge, MA, 1998.
[6] Y. LeCun, B. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. Hubbard, and L. Jackel. Back-propagation applied to handwritten zip code recognition. Neural Computation, 1:541-551, 1989.
[7] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278-2324, 1998.
[8] J. Shawe-Taylor and N. Cristianini. Kernel Methods for Pattern Analysis. Cambridge University Press, 2004.
[9] P. K. Shivaswamy and T. Jebara. Ellipsoidal kernel machines. In Proceedings of the Artificial Intelligence and Statistics, 2007.
[10] V. Vapnik. The Nature of Statistical Learning Theory. Springer Verlag, New York, 1995.
[11] J. Weston, R. Collobert, F. H. Sinz, L. Bottou, and V. Vapnik. Inference with the universum. In Proceedings of the International Conference on Machine Learning, pages 1009-1016, 2006.
[12] J. Weston, S. Mukherjee, O. Chapelle, M. Pontil, T. Poggio, and V. Vapnik. Feature selection for SVMs. In Neural Information Processing Systems, pages 668-674, 2000.
A Generalization Bound
In this section, we give the empirical Rademacher complexity [2, 8] for the function classes used
by the SVM, and for modified versions of the RMM and Σ-SVM, which can be plugged into a
generalization bound.
Maximizing the margin can be seen as choosing a function f(x) = w^⊤x from a bounded class
of functions F_E := {x ↦ w^⊤x | (1/2)||w||² ≤ E}. For a technical reason, instead of bounding
the projections on the training examples as in (5), we consider bounding the projections
on an independent set of examples drawn from Pr(x), that is, a set U = {u_1, u_2, ..., u_{n_u}}.
Note that if we have an iid training set, it can be split into two parts, and one part can be
used exclusively to bound the projections while the other part can be used exclusively for the
classification constraints. Since the labels of the examples used to bound the projections
do not matter, we denote this set by U and the other part of the set by (x_i, y_i)_{i=1}^n. We
now consider the following function class, which is closely related to the RMM: H_{E,D} :=
{x ↦ w^⊤x | (1/2)w^⊤w + (D/2)(w^⊤u_i)² ≤ E, ∀1 ≤ i ≤ n_u}, where D > 0 trades off between a large margin
and a small bound on the projections. Similarly, consider G_{E,D} := {x ↦ w^⊤x | (1/2)w^⊤w +
(D/2n_u)Σ_{i=1}^{n_u}(w^⊤u_i)² ≤ E}, which is closely related to the class of functions considered by the
Σ-SVM. The empirical Rademacher complexities of the three classes of functions are bounded as
below:
\hat{R}(F_E) \le U_{F_E} := \frac{2\sqrt{2E}}{n}\sqrt{\sum_{i=1}^n x_i^\top x_i}, \qquad \hat{R}(G_{E,D}) \le U_{G_{E,D}} := \frac{2\sqrt{2E}}{n}\sqrt{\sum_{i=1}^n x_i^\top \Sigma_D^{-1} x_i},

\hat{R}(H_{E,D}) \le U_{H_{E,D}} := \min_{\lambda\ge 0} \; \frac{1}{n}\sum_{i=1}^n x_i^\top \Sigma_{\lambda,D}^{-1} x_i + \frac{2E}{n}\sum_{i=1}^{n_u}\lambda_i,

where \Sigma_D = I + \frac{D}{n_u}\sum_{i=1}^{n_u} u_i u_i^\top and \Sigma_{\lambda,D} = \big(\sum_{i=1}^{n_u}\lambda_i\big) I + D\sum_{i=1}^{n_u}\lambda_i u_i u_i^\top. Note that the last
upper bound is not a closed form expression, but a semi-definite optimization. Now, the
upper bounds U_{F_E}, U_{G_{E,D}} and U_{H_{E,D}} can be plugged into the following theorem in place of
\hat{R}(F) to obtain Rademacher type generalization bounds.
Theorem 1 Fix γ > 0, and let F be the class of functions from R^m × {±1} → R given by
f(x, y) = −yg(x). Let {(x_1, y_1), ..., (x_n, y_n)} be drawn iid from a probability distribution
D. Then, with probability at least 1 − δ over samples of size n, the following bound holds:
Pr_D[y ≠ sign(g(x))] ≤ ξ^⊤1/n + 2\hat{R}(F)/γ + 3\sqrt{\ln(2/\delta)/2n}, where ξ_i = max(0, 1 − y_i g(x_i))
are the so-called slack variables.
2,796 | 3,535 | On Computational Power and the Order-Chaos
Phase Transition in Reservoir Computing
Benjamin Schrauwen
Electronics and Information Systems Department
Ghent University
B-9000 Ghent, Belgium
[email protected]
Lars Büsing,
Robert Legenstein
Institute for Theoretical Computer Science
Graz University of Technology
A-8010 Graz, Austria
{lars,legi}@igi.tugraz.at
Abstract
Randomly connected recurrent neural circuits have proven to be very powerful
models for online computations when a trained memoryless readout function is
appended. Such Reservoir Computing (RC) systems are commonly used in two
flavors: with analog or binary (spiking) neurons in the recurrent circuits. Previous
work showed a fundamental difference between these two incarnations of the RC
idea. The performance of a RC system built from binary neurons seems to depend
strongly on the network connectivity structure. In networks of analog neurons
such dependency has not been observed. In this article we investigate this apparent dichotomy in terms of the in-degree of the circuit nodes. Our analyses based
amongst others on the Lyapunov exponent reveal that the phase transition between
ordered and chaotic network behavior of binary circuits qualitatively differs from
the one in analog circuits. This explains the observed decreased computational
performance of binary circuits of high node in-degree. Furthermore, a novel
mean-field predictor for computational performance is introduced and shown to
accurately predict the numerically obtained results.
1 Introduction
In 2001, Jaeger [1] and Maass [2] independently introduced the idea of using a fixed, randomly
connected recurrent neural network of simple units as a set of basis filters (operating at the edge-ofstability where the system has fading memory). A memoryless readout is then trained on these basis
filters in order to approximate a given time-invariant target operator with fading memory [2]. Jaeger
used analog sigmoidal neurons as network units and named the model Echo State Network (ESN).
Maass termed the idea Liquid State Machine (LSM) and most of the related literature focuses on networks of spiking neurons or threshold units. Both ESNs and LSMs are special implementations of a
concept now generally termed Reservoir Computing (RC) which subsumes the idea of using general
dynamical systems (e.g. a network of interacting optical amplifiers [3]) ? the so-called reservoirs
? in conjunction with trained memoryless readout functions as computational devices. These RC
systems have already been used in a broad range of applications (often outperforming other state-ofthe-art methods) such as chaotic time-series prediction [4], single digit speech recognition [5], and
robot control [6].
Although ESNs and LSMs are based on very similar ideas (and in applications it seems possible to
switch between both approaches without loss of performance [7]) an apparent dichotomy exists in
the influence of the reservoir?s topological structure on its computational performance. The performance of an ESN using analog, rate-based neurons, is e.g. largely independent of the sparsity of the
network [8] or the exact network topology such as small-world or scale-free connectivity graphs1 .
For LSMs, which consist of spiking or binary units, the opposite effect has been observed. For the
latter systems, the influence of introducing e.g. small-world or biologically measured lamina-specific
cortical interconnection statistics [9] clearly leads to an increase in performance. In the results of
[10] it can be observed (although not specifically stated there) that for networks of threshold units
with a simple connectivity topology of fixed in-degree per neuron, an increase in performance can
be found for decreasing in-degree. None of these effects can be reproduced using ESNs.
In order to systematically study this fundamental difference between binary (spiking) LSMs and
analog ESNs, we close the gap between them by introducing in Sec. 2 a class of models termed
quantized ESNs. The reservoir of a quantized ESN is defined as a network of discrete units, where
the number of admissible states of a single unit is controlled by a parameter called quantization
level. LSMs and ESNs can be interpreted as the two limiting cases of quantized ESNs for low and
high quantization level respectively. We numerically study the influence of the network topology in
terms of the in-degree of the network units on the computational performance of quantized ESNs for
different quantization levels. This generalizes and systemizes previous results obtained for binary
LSMs and analog ESNs.
In Sec. 3 the empirical results are analyzed by studying the Lyapunov exponent of quantized ESNs,
which exhibits a clear relation to the computational performance [11]. It is shown that for ESNs
with low quantization level, the chaos-order phase transition is significantly more gradual when the
networks are sparsely connected. It is exactly in this transition regime that the computational power
of a Reservoir Computing system is found to be optimal [11]. This effect disappears for ESNs with
high quantization level. A clear explanation of the influence of the in-degree on the computational
performance can be found by investigating the rank measure presented in [11]. This measure characterizes the computational capabilities of a network as a trade-off between the so-called kernel quality
and the generalization ability. We show that for highly connected reservoirs with a low quantization
level the region of an efficient trade-off implying high performance is narrow. For sparser networks
this region is shown to broaden. Consistently for high quantization levels the region is found to be
independent of the interconnection degree.
In Sec. 4 we present a novel mean-field predictor for computational power which is able to reproduce
the influence of the topology on the quantized ESN model. It is related to the predictor introduced
in [10], but it can be calculated for all quantization levels, and can be determined with a significantly reduced computation time. The novel theoretical measure matches the experimental and rank
measure findings closely.
2
Online Computations with Quantized ESNs
We consider networks of N neurons with the state variable x(t) = (x1 (t), . . . , xN (t)) ? [?1, +1]N
in discrete time t ? Z. All units have an in-degree of K, i.e. every unit i receives input from K
other randomly chosen units with independently identically distributed (iid.) weights drawn from a
normal distribution N (0, ? 2 ) with zero mean and standard deviation (STD) ?. The network state is
updated according to:
x_i(t+1) = (\psi_m \circ g)\Big(\sum_{j=1}^N w_{ij}\, x_j(t) + u(t)\Big),
where g = tanh is the usual hyperbolic tangent nonlinearity and u denotes the input common to all
units. At every time step t, the input u(t) is drawn uniformly from {−1, 1}. The function ψ_m(·) is
called the quantization function for m bits, as it maps from (−1, 1) to its discrete range S_m of cardinality
2^m:

\psi_m : (-1,1) \to S_m, \qquad \psi_m(x) := \frac{2\lfloor 2^{m-1}(x+1)\rfloor + 1}{2^m} - 1.

Here ⌊x⌋ denotes the integer part of x. Due to ψ_m the variables x_i(t) are discrete ("quantized") and
assume values in S_m = {(2k+1)/2^m − 1 | k = 0, ..., 2^m − 1} ⊂ (−1, 1). The network defined above
¹ Shown by results of unpublished experiments, which have also been reported by the lab of Jaeger through personal communication.
Figure 1: The performance p_exp(C, PAR_5) for three different quantization levels m = 1, 3, 6 (panels A, B, C) is plotted as a function of the network in-degree K and the weight STD σ. The network size is N = 150; the results have been averaged over 10 circuits C, initial conditions and randomly drawn input time series of length 10⁴ time steps. The dashed line represents the numerically determined critical line.
was utilized for online computations on the input stream u(·). We consider in this article tasks where
the binary target output at time t depends solely on the n input bits u(t−τ−1), ..., u(t−τ−n) for
a given delay parameter τ ≥ 0, i.e., it is given by f_T(u(t−τ−1), ..., u(t−τ−n)) for a function
f_T ∈ {f | f : {−1,1}^n → {−1,1}}. In order to approximate the target output, a linear classifier of
the form sign(Σ_{i=1}^N κ_i x_i(t) + b) is applied to the instantaneous network state x(t). The coefficients
κ_i and the bias b were trained via a one-shot pseudo-inverse regression method [1]. The RC system
consisting of the network and the linear classifier is called a quantized ESN of quantization level m
in the remainder of this paper.
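A minimal simulation of the reservoir just defined can make the setup concrete. The sketch below is ours, not the authors' code; the readout training is omitted, and the parameter values merely echo those used in the experiments.

```python
import numpy as np

def quantize(x, m):
    """psi_m: map values in (-1, 1) onto the 2^m-level grid S_m."""
    return (2 * np.floor(2.0 ** (m - 1) * (x + 1)) + 1) / 2.0 ** m - 1

def step(x, W, u, m):
    """One network update x(t+1) = psi_m(tanh(W x(t) + u(t)))."""
    return quantize(np.tanh(W @ x + u), m)

N, K, sigma, m = 150, 3, 0.5, 3
rng = np.random.default_rng(0)
W = np.zeros((N, N))
for i in range(N):                        # fixed in-degree K per unit
    idx = rng.choice(N, size=K, replace=False)
    W[i, idx] = rng.normal(0.0, sigma, size=K)

x = quantize(rng.uniform(-1, 1, N), m)    # iid uniform initial state on S_m
for t in range(100):
    x = step(x, W, rng.choice([-1.0, 1.0]), m)   # u(t) uniform on {-1, +1}
```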
We assessed the computational capabilities of a given network based on the numerically determined
performance on an example task, which was chosen to be the τ-delayed parity function of n bits,
PAR_{n,τ}; i.e., the desired output at time t is PAR_{n,τ}(u, t) = Π_{i=1}^n u(t−τ−i) for a delay τ ≥ 0 and
n ≥ 1. A separate readout classifier is trained for each combination of n and τ, all using the same
reservoir. We define p_exp quantifying the performance of a given circuit C on the PAR_n task as:

p_{exp}(C, PAR_n) := \sum_{\tau=0}^{\infty} \kappa(C, PAR_{n,\tau}), \qquad (1)
where κ(C, PAR_{n,τ}) denotes the performance of circuit C on the PAR_{n,τ} task, measured in terms
of Cohen's kappa coefficient². The performance results for PAR_n can be considered representative
of the general computational capabilities of a circuit C, as qualitatively very similar results were
obtained for the AND_n task of n bits and for random Boolean functions of n bits (results not shown).
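The task targets and the performance measure are straightforward to reproduce; a sketch (the truncation of the sum in eq. (1) at τ = 8 follows footnote 2):

```python
import numpy as np

def parity_target(u, n, tau):
    """PAR_{n,tau}(u, t) = prod_{i=1..n} u(t - tau - i), for all valid t."""
    return np.array([np.prod(u[t - tau - n:t - tau])
                     for t in range(tau + n, len(u))])

def kappa(pred, target, chance=0.5):
    """Cohen's kappa for balanced binary targets (chance level 0.5)."""
    c = np.mean(pred == target)            # fraction of correct trials
    return (c - chance) / (1.0 - chance)

# p_exp is then sum(kappa for tau in range(9)), with one trained readout
# per delay tau, as in eq. (1).
```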
In Fig. 1 the performance p_exp(C, PAR_5) is shown averaged over 10 circuits C for three different
quantization levels m = 1, 3, 6. p_exp(C, PAR_5) is plotted as a function of the network in-degree
K and the logarithm³ of the weight STD σ. Qualitatively very similar results were obtained for
different network graphs with e.g. Poisson or scale-free distributed in-degree with average K (results
not shown). A numerical approximation of the critical line, i.e. the order-chaos phase transition,
is also shown (dashed line); it was determined by the root of an estimate of the Lyapunov
coefficient⁴. The critical line predicts the zone of optimal performance well for m = 1, but is less
accurate for ESNs with m = 3, 6. One can see that for ESNs with low quantization levels (m = 1, 3),
networks with a small in-degree K reach a significantly better peak performance than those with
² κ is defined as (c − c_l)/(1 − c_l), where c is the fraction of correct trials and c_l is the chance level. The sum in eq. (1) was truncated at τ = 8, as the performance was negligible for delays τ > 8 for the network size N = 150.
³ All logarithms are taken to base 10, i.e. log = log₁₀, if not stated otherwise.
⁴ The Lyapunov coefficient λ was determined in the following way. After 20 initial simulation steps, the smallest admissible (for m) state difference δ₀(m) = 2^{1−m} was introduced in a single network unit, and the resulting state difference δ after one time step was measured, averaged over 10⁵ trials with randomly generated networks, initial states and input streams. The initial states of all neurons were iid uniform over S_m. λ was then determined by λ = ln(δ/δ₀(m)).
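The footnote translates directly into a Monte Carlo estimate, sketched below by reusing `quantize` and `step` from the earlier snippet. The choice of Euclidean norm for the state difference and the clipping of the perturbed state back into the admissible range are our assumptions.

```python
def lyapunov_estimate(W, m, trials=10000, warmup=20,
                      rng=np.random.default_rng()):
    N = W.shape[0]
    d0 = 2.0 ** (1 - m)                    # smallest admissible perturbation
    ds = []
    for _ in range(trials):
        x = quantize(rng.uniform(-1, 1, N), m)
        for _ in range(warmup):            # 20 initial simulation steps
            x = step(x, W, rng.choice([-1.0, 1.0]), m)
        xp = x.copy()
        i = rng.integers(N)                # perturb a single unit by d0
        xp[i] = np.clip(xp[i] + d0, -1 + 2.0 ** -m, 1 - 2.0 ** -m)
        u = rng.choice([-1.0, 1.0])
        ds.append(np.linalg.norm(step(x, W, u, m) - step(xp, W, u, m)))
    return np.log(max(np.mean(ds), 1e-12) / d0)   # lambda = ln(delta/delta_0)
```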
Figure 2: Phase transitions in binary networks (m = 1) differ from phase transitions in high resolution networks (m = 6). An empirical estimate λ of the Lyapunov exponent is plotted as a function of the STD of weights σ for in-degrees K = 3 (solid), K = 12 (dashed), and K = 24 (gray line). In order to facilitate comparison, the plot for each K is centered around log(σ₀), where σ₀ is the STD of weights for which λ is zero (i.e., σ₀ is the estimated critical σ value for that K). The transition sharpens with increasing K for binary reservoirs (A), whereas it is virtually independent of K for high resolution reservoirs (B).
high in-degree. The effect disappears for a high quantization level (m = 6). This phenomenon is
consistent with the observation that network connectivity structure is in general an important issue
if the reservoir is composed of binary or spiking neurons but less important if analog neurons are
employed. Note that for m = 3, 6 we see a bifurcation in the zones of optimal performance which
is not observed for the limiting cases of ESNs and LSMs.
3
Phase Transitions in Binary and High Resolution Networks
Where does the difference between binary and high resolution reservoirs shown in Fig. 1 originate
from? It was often hypothesized that high computational power in recurrent networks is located in
a parameter regime near the critical line, i.e., near the phase transition between ordered and chaotic
behavior (see, e.g., [12] for a review; compare also the performance with the critical line in Fig.
1). Starting from this hypothesis, we investigated whether the network dynamics of binary networks
near this transition differs qualitatively from the one of high resolution networks. We estimated the
network properties by empirically measuring the Lyapunov exponent ? with the same procedure as
in the estimation of the critical line in Fig. 1 (see text above). However, we did not only determine
the critical line (i.e., the parameter values where the estimated Lyapunov exponent crosses zero), but
also considered its values nearby. For a given in-degree K, ? can then be plotted as a function of
the STD of weights ? (centered at the critical value ?0 of the STD for that K). This was done for
binary (Fig. 2A) and high resolution networks (Fig. 2B) and for K = 3, 12, and 24. Interestingly,
the dependence of ? on the STD ? near the critical line is qualitatively quite different between the
two types of networks. For binary networks the transition becomes much sharper with increasing
K which is not the case for high resolution networks. How can this sharp transition explain the
reduced computational performance of binary ESNs with high in-degree K? The tasks considered
in this article require some limited amount of memory which has to be provided by the reservoir.
Hence, the network dynamics has to be located in a regime where memory about recent inputs is
available and past input bits do not interfere with that memory. Intuitively, an effect of the sharper
phase transition could be stated in the following way. For low ? (i.e., in the ordered regime), the
memory needed for the task is not provided by the reservoir. As we increase ?, the memory capacity
increases, but older memories interfere with recent ones, making it hard or even impossible to extract
the relevant information. This intuition is confirmed by an analysis which was introduced in [11] and
which we applied to our setup. We estimated two measures of the reservoir, the so-called "kernel-quality" and the "generalization rank", both being the rank of a matrix consisting of certain state
vectors of the reservoir. To evaluate the kernel-quality of the reservoir, we randomly drew N = 150
input streams u¹(·), ..., u^N(·) and computed the rank of the N × N matrix whose columns were
Figure 3: Kernel-quality and generalization rank of quantized ESNs of size N = 150. Upper plots are for binary reservoirs (m = 1 bit), lower plots for high resolution reservoirs (m = 6 bit). A) The difference between the kernel-quality and the generalization rank as a function of the log STD of weights and the in-degree K. B) The kernel-quality (solid), the generalization rank (dashed) and the difference between both (gray line) for K = 3 as a function of log(σ). C) Same as panel B, but for an in-degree of K = 24. In comparison to panel B, the transition of both measures is much steeper. D, E, F) Same as panels A, B, and C respectively, but for a high resolution reservoir. All plotted values are means over 100 independent runs with randomly drawn networks, initial states, and input streams.
the circuit states resulting from these input streams.⁵ Intuitively, this rank measures how well the
reservoir represents different input streams. The generalization rank is related to the ability of the
reservoir-readout system to generalize from the training data to test data. The generalization rank is
evaluated as follows. We randomly drew N input streams ū¹(·), ..., ū^N(·) such that the last three
input bits in all these input streams were identical.⁶ The generalization rank is then given by the
rank of the N × N matrix whose columns are the circuit states resulting from these input streams.
Intuitively, the generalization rank with this input distribution measures how strongly the reservoir
state at time t is sensitive to inputs older than three time steps. The rank measures calculated here
will thus have predictive power for computations which require memory of the last three time steps
(see [11] for a theoretical justification of the measures). In general, a high kernel-quality and a
Fig. 3A and D show the difference between the two measures as a function of log(?) and the indegree K for binary networks and high resolution networks respectively. The plots show that the
peak value of this difference is decreasing with K in binary networks, whereas it is independent
of K in high resolution reservoirs, reproducing the observations in the plots for the computational
performance. A closer look for the binary circuit at K = 3 and K = 24 is given in Figs. 3B and
3C. When comparing these plots, one sees that the transition of both measures is much steeper for
K = 24 than for K = 3 which leads to a smaller difference between the measures. We interpret this
finding in the following way. For K = 24, the reservoir increases its separation power very fast as
log(?) increases. However the separation of past input differences increases likewise and thus early
input differences cannot be distinguished from late ones. This reduces the computational power of
binary ESN with large K on such tasks. In comparison, the corresponding plots for high resolution
reservoirs (Figs. 3E and 3F) show that the transition shifts to lower weight STDs ? for larger K,
but apart from this fact the transitions are virtually identical for low and high K values. Comparing
⁵ The initial states of all neurons were iid uniform over S_m. The rank of the matrix was estimated by singular value decomposition on the network states after 15 time steps of simulation.
⁶ First, we drew each of the last three bits ū(13), ..., ū(15) independently from a uniform distribution over {−1, 1}. For each input stream ū^i(1), ..., ū^i(15) we drew ū^i(1), ..., ū^i(12) independently from a uniform distribution over {−1, 1} and set ū^i(t) = ū(t) for t = 13, ..., 15.
Figure 4: Mean-field predictor p∞ for computational power for different quantization levels m, as a function of the STD σ of the weights and the in-degree K. A) m = 1. B) m = 3. C) m = 6. Compare this result to the numerically determined performance p_exp plotted in Fig. 1.
Fig. 3D with Fig. 1C, one sees that the rank measure does not accurately predict the whole region
of good performance for high resolution reservoirs. It also does not predict the observed bifurcation
in the zones of optimal performance, a phenomenon that is reproduced by the mean-field predictor
introduced in the following section.
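Both rank measures reduce to assembling a state matrix and taking its rank; the sketch below reuses `quantize` and `step` from the earlier snippet, with the simulation length and SVD-based rank following footnotes 5 and 6. It is our illustration, not the authors' code.

```python
def state_after(W, m, u_seq, rng):
    """Network state after driving the reservoir with the stream u_seq."""
    x = quantize(rng.uniform(-1, 1, W.shape[0]), m)
    for u in u_seq:
        x = step(x, W, u, m)
    return x

def rank_measures(W, m, T=15, rng=np.random.default_rng()):
    N = W.shape[0]
    U_ker = rng.choice([-1.0, 1.0], size=(N, T))      # N independent streams
    U_gen = rng.choice([-1.0, 1.0], size=(N, T))
    U_gen[:, -3:] = rng.choice([-1.0, 1.0], size=3)   # shared last three bits
    S_ker = np.column_stack([state_after(W, m, u, rng) for u in U_ker])
    S_gen = np.column_stack([state_after(W, m, u, rng) for u in U_gen])
    return np.linalg.matrix_rank(S_ker), np.linalg.matrix_rank(S_gen)
```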
4 Mean-Field Predictor for Computational Performance
The question why and to what degree certain non-autonomous dynamical systems are useful devices for online computations has been addressed theoretically amongst others in [10]. There, the
computational performance of networks of randomly connected threshold gates was linked to their
separation property (for a formal definition see [2]): It was shown that only networks which exhibit
sufficiently different network states for different instances of the input stream, i.e. networks that
separate the input, can compute complex functions of the input stream. Furthermore, the authors introduced an accurate predictor for the computational capabilities for the considered type of networks
based on the separation capability which was quantified via a simple mean-field approximation of
the Hamming distance between different network states.
Here we aim at extending this approach to a larger class of networks, the class of quantized ESNs
introduced above. However a severe problem arises when directly applying the mean-field theory
developed in [10] to quantized ESNs with a quantization level m > 1: Calculation of the important
quantities becomes computationally infeasible as the state space of a network grows exponentially
with m. Therefore we introduce a modified mean-field predictor which can be efficiently computed
and which still has all desirable properties of the one introduced in [10].
Suppose the target output of the network at time t is a function f_T ∈ F =
{f | f : {−1,1}^n → {−1,1}} of the n bits u(t−τ−1), ..., u(t−τ−n) of the input stream
u(·) with delay τ, as described in Sec. 2. In order to exhibit good performance on an arbitrary
f_T ∈ F, pairs of inputs that differ in at least one of the n bits have to be mapped by the network
to different states at time t. Only then is the linear classifier able to assign the inputs to different function values. In order to quantify this so-called separation property of a given network, we
introduce the normalized distance d(k): it measures the average distance between two network
states x¹(t) = (x¹₁(t), ..., x¹_N(t)) and x²(t) = (x²₁(t), ..., x²_N(t)) arising from applying to the
same network two input streams u¹(·) and u²(·) which only differ in the single bit at time t−k, i.e.
u²(t−k) = −u¹(t−k). Formally we define⁷:

d(k) = \frac{1}{N}\left\langle \|x^1(t) - x^2(t)\|_1 \right\rangle.

The average ⟨·⟩ is taken over all inputs u¹(·), u²(·) from the ensemble defined above, all initial
conditions of the network and all circuits C. However, a good separation of the n bits, i.e. d(k) >
0 for τ < k ≤ n + τ, is a necessary but not a sufficient condition for the ability of the network
to calculate the target function. Beyond this, it is desired that the network "forgets" all (for the
⁷ For vectors x = (x₁, x₂, ...) ∈ R^N we use the Manhattan norm ||x||₁ := Σ_{i=1}^N |x_i|.
Figure 5: Contributions d(2) (dotted) and d(∞) (solid gray) to the mean-field predictor p∞ (dashed line) for different quantization levels m ∈ {1, 6} and different in-degrees K ∈ {3, 24}, as a function of the STD σ of the weights. The plots show slices of the 2d plots of Fig. 4A and C for constant K. A) For m = 1 it can be seen that the region in log(σ)-space with high d(2) and low d(∞) is significantly larger for K = 3 than for K = 24. B) For m = 6 this region is roughly independent of K except for a shift.
target function) irrelevant bits u(t−k), k > n+τ, of the input sufficiently fast, i.e. d(k) → 0
for k > n+τ. We use the limit d(∞) = lim_{k→∞} d(k) to quantify this irrelevant separation, which
signifies sensitivity to initial conditions (making the reservoir not time invariant). Hence, we propose
the quantity p∞ as a heuristic predictor for computational power:

p_\infty = \max\{d(2) - d(\infty),\, 0\}.
As the first contribution to p∞ we chose d(2), as it reflects the ability of a network to perform a
combination of two mechanisms: in order to exhibit a high value of d(2), the network has to separate
the inputs at time step t−2 and to sustain the resulting state distance via its recurrent dynamics
in the next time step t−1. We therefore consider d(2) to be a measure of input separation on the short
time-scales relevant for the target function. p∞ is calculated using a mean-field model similar to the
one presented in [10], which itself is rooted in the annealed approximation (AA) introduced in [13].
In the AA one assumes that the circuit connectivity and the corresponding weights are drawn iid
at every time step. Although a drastic simplification, the AA has been shown to yield good
results in the large system size limit N → ∞. The main advantage of p∞ over the predictor
defined in [10] (the NM-separation) is that the calculation of p∞ only involves taking the average
over one input stream (as u²(·) is a function of u¹(·)), compared to taking the average over two
independent inputs needed for the NM-separation, resulting in a significantly reduced computation
time.
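Although the paper computes p∞ with a mean-field model, d(k) can also be estimated empirically by direct simulation, which is useful as a sanity check; a large finite k stands in for d(∞) in this sketch (again reusing `quantize` and `step` from above).

```python
def d_k(W, m, k, T=30, trials=200, rng=np.random.default_rng()):
    """Monte Carlo estimate of the normalized distance d(k)."""
    N = W.shape[0]
    acc = 0.0
    for _ in range(trials):
        u1 = rng.choice([-1.0, 1.0], size=T)
        u2 = u1.copy()
        u2[T - k] = -u2[T - k]              # flip the single bit at time t - k
        x0 = quantize(rng.uniform(-1, 1, N), m)
        x1, x2 = x0.copy(), x0.copy()
        for t in range(T):
            x1 = step(x1, W, u1[t], m)
            x2 = step(x2, W, u2[t], m)
        acc += np.abs(x1 - x2).sum() / N    # normalized Manhattan distance
    return acc / trials

p_inf = max(d_k(W, m, k=2) - d_k(W, m, k=25), 0.0)
```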
In Fig. 4 the predictor p∞ is plotted as a function of the STD σ of the weight distribution and the
in-degree K for three different values of the quantization level m ∈ {1, 3, 6}. When comparing
these results with the actual network performance p_exp(PAR) on the PAR task plotted in Fig. 1, one
can see that p∞ serves as a reliable predictor for p_exp of a network for sufficiently small m. For
larger values of m the predictor p∞ starts to deviate from the true performance. The dominant effect
of the quantization level m on the performance discussed in Sec. 2 is well reproduced by p∞: for
m = 1 the in-degree K has a considerable impact, i.e. for large K the maximum performance drops
significantly. For m > 2, however, for larger values of K there also exists a region in the parameter
space exhibiting maximum performance.
The interplay between the two contributions d(2) and d(∞) of p∞ delivers insight into the dependence of p_exp on the network parameters. A high value of d(2) corresponds to a good separation
of inputs on short time scales relevant for the target task, a property found predominantly in
networks that are not strongly input driven. A small value of d(∞) guarantees that inputs on which
the target function assumes the same value are mapped to nearby network states, and thus a linear
readout is able to assign them to the same class irrespective of their irrelevant remote history. For
m = 1, as can be seen in Fig. 5, the region in log(σ) space where both conditions for good performance hold decreases for growing K. In contrast, for m > 2 a reverse effect is observed: for
increasing K the parameter range of σ fulfilling the two opposing conditions for good performance
grows moderately, resulting in a large region of high p∞ for high in-degree K. This observation is
in close analogy to the behavior of the rank measure discussed in Sec. 3. Also note that p∞ predicts
the novel bifurcation effect also observed in Fig. 1.
5 Discussion
By interpolating between the ESN and LSM approaches to RC, this work provides new insights into
the question of what properties of a dynamical system lead to improved computational performance:
Performance is optimal at the order-chaos phase transition, and the broader this transition regime,
the better will the performance of the system be. We have confirmed this hypothesis by several
analyses, including a new theoretical mean-field predictor that can be computed very efficiently. The
importance of a gradual order-chaos phase transition could explain why ESNs are more often used
for applications than LSMs. Although they can have very similar performance on a given task
[7], it is significantly harder to create an LSM which operates at the edge of chaos: the excitation
and inhibition in the network need to be finely balanced, because there tends to be a very abrupt
transition from an ordered to an epileptic state. For ESNs, however, there is a broad parameter range
in which they perform well. It should be noted that the effect of quantization cannot just be emulated
by additive or multiplicative iid. or correlated Gaussian noise on the output of analog neurons. The
noise degrades performance homogeneously and the differences in the influence of the in-degree
observed for varying quantization levels cannot be reproduced. The finding that binary reservoirs
have superior performance for low in-degree stands in stark contrast to the fact that cortical neurons
have very high in-degrees of over 104 . This raises the interesting question which properties and
mechanisms of cortical circuits not accounted for in this article contribute to their computational
power. In view of the results presented in this article, such mechanisms should tend to soften the
phase transition between order and chaos.
Acknowledgments
Written under partial support by the FWO Flanders project # G.0088.09, the Photonics@be Interuniversity Attraction Poles program (IAP 6/10), the Austrian Science Fund FWF projects # P17229N04, # S9102-N13 and projects # FP6-015879 (FACETS), # FP7-216593 (SECO) of the EU.
References
[1] H. Jaeger. The "echo state" approach to analyzing and training recurrent neural networks. GMD Report 148, German National Research Center for Information Technology, 2001.
[2] W. Maass, T. Natschläger, and H. Markram. Real-time computing without stable states: A new framework for neural computation based on perturbations. Neural Computation, 14(11):2531-2560, 2002.
[3] Kristof Vandoorne, Wouter Dierckx, Benjamin Schrauwen, David Verstraeten, Roel Baets, Peter Bienstman, and Jan Van Campenhout. Toward optical signal processing using photonic reservoir computing. Optics Express, 16(15):11182-11192, 8 2008.
[4] H. Jaeger and H. Haas. Harnessing nonlinearity: predicting chaotic systems and saving energy in wireless communication. Science, 304:78-80, 2004.
[5] D. Verstraeten, B. Schrauwen, D. Stroobandt, and J. Van Campenhout. Isolated word recognition with the liquid state machine: a case study. Information Processing Letters, 95(6):521-528, 2005.
[6] P. Joshi and W. Maass. Movement generation with circuits of spiking neurons. Neural Computation, 17(8):1715-1738, 2005.
[7] D. Verstraeten, B. Schrauwen, M. D'Haene, and D. Stroobandt. A unifying comparison of Reservoir Computing methods. Neural Networks, 20:391-403, 2007.
[8] H. Jaeger. Echo state networks. Scholarpedia, 2(9):2330, 2007.
[9] S. Häusler and W. Maass. A statistical analysis of information processing properties of lamina-specific cortical microcircuit models. Cerebral Cortex, 17(1):149-162, 2007.
[10] N. Bertschinger and T. Natschläger. Real-time computation at the edge of chaos in recurrent neural networks. Neural Computation, 16(7):1413-1436, 2004.
[11] R. Legenstein and W. Maass. Edge of chaos and prediction of computational performance for neural microcircuit models. Neural Networks, pages 323-334, 2007.
[12] R. Legenstein and W. Maass. What makes a dynamical system computationally powerful? In S. Haykin, J. C. Principe, T. J. Sejnowski, and J. G. McWhirter, editors, New Directions in Statistical Signal Processing: From Systems to Brain, pages 127-154. MIT Press, 2007.
[13] B. Derrida and Y. Pomeau. Random networks of automata: A simple annealed approximation. Europhysics Letters, 1(2):45-49, 1986.
2,797 | 3,536 | Implicit Mixtures of Restricted Boltzmann Machines
Vinod Nair and Geoffrey Hinton
Department of Computer Science, University of Toronto
10 King's College Road, Toronto, M5S 3G5 Canada
{vnair,hinton}@cs.toronto.edu
Abstract
We present a mixture model whose components are Restricted Boltzmann Machines (RBMs). This possibility has not been considered before because computing the partition function of an RBM is intractable, which appears to make
learning a mixture of RBMs intractable as well. Surprisingly, when formulated as
a third-order Boltzmann machine, such a mixture model can be learned tractably
using contrastive divergence. The energy function of the model captures three-way interactions among visible units, hidden units, and a single hidden discrete
variable that represents the cluster label. The distinguishing feature of this model
is that, unlike other mixture models, the mixing proportions are not explicitly
parameterized. Instead, they are defined implicitly via the energy function and
depend on all the parameters in the model. We present results for the MNIST and
NORB datasets showing that the implicit mixture of RBMs learns clusters that
reflect the class structure in the data.
1 Introduction
A typical mixture model is composed of a number of separately parameterized density models each
of which has two important properties:
1. There is an efficient way to compute the probability density (or mass) of a datapoint under
each model.
2. There is an efficient way to change the parameters of each model so as to maximize or
increase the sum of the log probabilities it assigns to a set of datapoints.
The mixture is created by assigning a mixing proportion to each of the component models and
it is typically fitted by using the EM algorithm that alternates between two steps. The E-step uses
property 1 to compute the posterior probability that each datapoint came from each of the component
models. The posterior is also called the "responsibility" of each model for a datapoint. The M-step
uses property 2 to update the parameters of each model to raise the responsibility-weighted sum of
the log probabilities it assigns to the datapoints. The M-step also changes the mixing proportions of
the component models to match the proportion of the training data that they are responsible for.
Restricted Boltzmann Machines [5] model binary data-vectors using binary latent variables. They
are considerably more powerful than a mixture of multivariate Bernoulli models1 because they allow
many of the latent variables to be on simultaneously, so the number of alternative latent state vectors
is exponential in the number of latent variables rather than being linear in this number as it is with
a mixture of Bernoullis. An RBM with N hidden units can be viewed as a mixture of 2^N Bernoulli
models, one per latent state vector, with a lot of parameter sharing between the 2^N component
models and with the 2^N mixing proportions being implicitly determined by the same parameters.
1 A multivariate Bernoulli model consists of a set of probabilities, one per component of the binary data vector.
[Figure 1 diagram: (a) an RBM, with weights W_ij between visible units i and hidden units j; (b) a third-order Boltzmann machine, with weights W_ijk connecting visible units i, hidden units j, and a 1-of-K activation k; (c) the implicit mixture drawn as K component RBMs sharing the visible units, selected by the 1-of-K activation.]
Figure 1: (a) Schematic representation of an RBM, (b) an implicit mixture of RBMs as a third-order
Boltzmann machine, (c) schematic representation of an implicit mixture.
It can also be viewed as a product of N "uni-Bernoulli" models (plus one Bernoulli model that is
implemented by the visible biases). A uni-Bernoulli model is a mixture of a uniform and a Bernoulli.
The weights of a hidden unit define the i-th probability in its Bernoulli model as p_i = σ(w_i), and the
bias, b, of a hidden unit defines the mixing proportion of the Bernoulli in its uni-Bernoulli as σ(b),
where σ(x) = (1 + exp(-x))^{-1}.
The modeling power of an RBM can always be increased by increasing the number of hidden units
[10] or by adding extra hidden layers [12], but for datasets that contain several distinctly different types of data, such as images of different object classes, it would be more appropriate to use a
mixture of RBMs. The mixture could be used to model the raw data or some preprocessed representation that has already extracted features that are shared by different classes. Unfortunately,
RBMs cannot easily be used as the components of mixture models because they lack property 1:
It is easy to compute the unnormalized density that an RBM assigns to a datapoint, but the normalization term is exponentially expensive to compute exactly and even approximating it is extremely
time-consuming [11]. There is also no efficient way to modify the parameters of an RBM so that
the log probability of the data is guaranteed to increase, but there are good approximate methods [5]
so this is not the main problem. This paper describes a way of fitting a mixture of RBMs without
explicitly computing the partition function of each RBM.
2 The model
We start with the energy function for a Restricted Boltzmann Machine (RBM) and then modify it
to define the implicit mixture of RBMs. To simplify the description, we assume that the visible and
hidden variables of the RBM are binary. The formulation below can be easily adapted to other types
of variables (e.g., see [13]).
The energy function for a Restricted Boltzmann Machine (RBM) is
E(v, h) = -Σ_{i,j} W^R_{ij} v_i h_j,    (1)
where v is a vector of visible (observed) variables, h is a vector of hidden variables, and W^R is
a matrix of parameters that capture pairwise interactions between the visible and hidden variables.
Now consider extending this model by including a discrete variable z with K possible states, represented as a K-dimensional binary vector with 1-of-K activation. Defining the energy function in
terms of three-way interactions among the components of v, h, and z gives
E(v, h, z) = -Σ_{i,j,k} W^I_{ijk} v_i h_j z_k,    (2)

where W^I is a 3D tensor of parameters. Each slice of this tensor along the z-dimension is a matrix
that corresponds to the parameters of each of the K component RBMs. The joint distribution for the
mixture model is
P(v, h, z) = exp(-E(v, h, z)) / Z_I,    (3)

where

Z_I = Σ_{u,g,y} exp(-E(u, g, y))    (4)
is the partition function of the implicit mixture model. Re-writing the joint distribution in the usual
mixture model form gives
P(v) = Σ_{h,z} P(v, h, z) = Σ_{k=1}^K Σ_h P(v, h | z_k = 1) P(z_k = 1).    (5)
Equation 5 defines the implicit mixture of RBMs. P(v, h | z_k = 1) is the k-th component RBM's
distribution, with W^R being the k-th slice of W^I. Unlike in a typical mixture model, the mixing
proportion P(z_k = 1) is not a separate parameter in our model. Instead, it is implicitly defined
via the energy function in equation 2. Changing the bias of the k-th unit in z changes the mixing
proportion of the k-th RBM, but all of the weights of all the RBMs also influence it. Figure 1 gives
a visual description of the implicit mixture model's structure.
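To make the model concrete, here is a minimal sketch (our own illustration in Python/NumPy; the array names and shapes are assumptions, not the authors' code) of the three-way energy function of Eq. 2. The slice W[:, :, k] plays the role of the k-th component RBM's weight matrix W^R:

```python
import numpy as np

def implicit_mixture_energy(v, h, z, W):
    """E(v, h, z) = -sum_{i,j,k} W[i, j, k] v[i] h[j] z[k]  (Eq. 2).

    v: (num_visible,) binary visible vector
    h: (num_hidden,)  binary hidden vector
    z: (K,)           1-of-K binary cluster indicator
    W: (num_visible, num_hidden, K) parameter tensor W^I
    """
    # einsum contracts the three-way interaction in one call
    return -np.einsum('i,j,k,ijk->', v, h, z, W)

def component_rbm_energy(v, h, k, W):
    """With z_k = 1, the energy reduces to the RBM energy of Eq. 1,
    with W^R taken to be the k-th slice W[:, :, k]."""
    return -v @ W[:, :, k] @ h
```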
3 Learning
Given a set of N training cases {v_1, ..., v_N}, we want to learn the parameters of the implicit
mixture model by maximizing the log likelihood L = Σ_{n=1}^N log P(v_n) with respect to W^I. We use
gradient-based optimization to do this. The expression for the gradient is
∂L/∂W^I = Σ_{n=1}^N [ ⟨-∂E(v_n, h, z)/∂W^I⟩_{P(h,z|v_n)} - ⟨-∂E(v, h, z)/∂W^I⟩_{P(v,h,z)} ],    (6)
where ⟨·⟩_{P(·)} denotes an expectation with respect to the distribution P(·). The two expectations in
equation 6 can be estimated by sample means if unbiased samples can be generated from the corresponding distributions. The conditional distribution P(h, z|v) is easy to sample from, but sampling
the joint distribution P(v, h, z) requires prolonged Gibbs sampling and is intractable in practice. We
get around this problem by using the contrastive divergence (CD) learning algorithm [5], which has
been found to be effective for training a variety of energy-based models (e.g. [8],[9],[13],[4]).
Sampling the conditional distributions: We now describe how to sample the conditional distributions P (h, z|v) and P (v|h, z), which are the main operations required for CD learning. The
second case is easy: given z_k = 1, we select the k-th component RBM of the mixture model and
then sample from its conditional distribution P_k(v|h). The bipartite structure of the RBM makes
this distribution factorial. So the i-th visible unit is drawn independently of the other units from the
Bernoulli distribution

P(v_i = 1 | h, z_k = 1) = 1 / (1 + exp(-Σ_j W^I_{ijk} h_j)).    (7)
Sampling P (h, z|v) is done in two steps. First, the K-way discrete distribution P (z|v) is computed
(see below) and sampled. Then, given z_k = 1, we select the k-th component RBM and sample from
its conditional distribution P_k(h|v). Again, this distribution is factorial, and the j-th hidden unit is
drawn from the Bernoulli distribution

P(h_j = 1 | v, z_k = 1) = 1 / (1 + exp(-Σ_i W^I_{ijk} v_i)).    (8)
To compute P (z|v) we first note that
P(z_k = 1 | v) ∝ exp(-F(v, z_k = 1)),    (9)

where the free energy F(v, z_k = 1) is given by

F(v, z_k = 1) = -Σ_j log(1 + exp(Σ_i W^I_{ijk} v_i)).    (10)
If the number of possible states of z is small enough, then it is practical to compute the quantity
F (v, zk = 1) for every k by brute-force. So we can compute
P(z_k = 1 | v) = exp(-F(v, z_k = 1)) / Σ_l exp(-F(v, z_l = 1)).    (11)

Equation 11 defines the responsibility of the k-th component RBM for the data vector v.
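For illustration, the free energies of Eq. 10 and the responsibilities of Eq. 11 can be computed for all K components at once. This sketch (ours, not the authors' code) assumes the same (num_visible, num_hidden, K) tensor W as in the earlier sketch; the temperature T anticipates Eq. 13 below and reduces to Eq. 11 when T = 1:

```python
import numpy as np

def free_energies(v, W):
    """F(v, z_k = 1) for every k (Eq. 10); returns an array of shape (K,)."""
    a = np.einsum('i,ijk->jk', v, W)           # hidden-unit inputs per component
    return -np.logaddexp(0.0, a).sum(axis=0)   # -sum_j log(1 + exp(a[j, k]))

def responsibilities(v, W, T=1.0):
    """P(z_k = 1 | v) of Eq. 11 (tempered as in Eq. 13), computed stably."""
    s = -free_energies(v, W) / T
    s -= s.max()                               # guard against overflow in exp
    p = np.exp(s)
    return p / p.sum()
```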
Contrastive divergence learning: Below is a summary of the steps in the CD learning for the
implicit mixture model.
1. For a training vector v^+, pick a component RBM by sampling the responsibilities
P(z_k = 1 | v^+). Let l be the index of the selected RBM.
2. Sample h^+ ∼ P_l(h | v^+).
3. Compute the outer product D^+_l = v^+ (h^+)^T.
4. Sample v^- ∼ P_l(v | h^+).
5. Pick a component RBM by sampling the responsibilities P(z_k = 1 | v^-). Let m be the
index of the selected RBM.
6. Sample h^- ∼ P_m(h | v^-).
7. Compute the outer product D^-_m = v^- (h^-)^T.
Repeating the above steps for a mini-batch of N_b training cases results in two sets of outer products
for each component k in the mixture model: S_k^+ = {D^+_{k1}, ..., D^+_{kM}} and S_k^- = {D^-_{k1}, ..., D^-_{kL}}. Then
the approximate likelihood gradient (averaged over the mini-batch) for the k-th component RBM is

(1/N_b) ∂L/∂W_k^I ≈ (1/N_b) [ Σ_{i=1}^M D^+_{ki} - Σ_{j=1}^L D^-_{kj} ].    (12)

Note that to compute the outer products D^+ and D^- for a given training vector, the component
RBMs are selected through two separate stochastic picks. Therefore the sets S_k^+ and S_k^- need not
be of the same size because the choice of the mixture component can be different for v^+ and v^-.
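Steps 1-7 and the update of Eq. 12 then amount to a few lines. The sketch below (again our own, reusing the helpers above, with an assumed learning rate) applies the update for a single training vector rather than a mini-batch:

```python
import numpy as np
rng = np.random.default_rng(0)

def sample_bernoulli(p):
    return (rng.random(p.shape) < p).astype(float)

def cd1_step(v_pos, W, lr=0.01, T=1.0):
    """One contrastive divergence update for the implicit mixture (steps 1-7)."""
    K = W.shape[2]
    l = rng.choice(K, p=responsibilities(v_pos, W, T))                 # step 1
    h_pos = sample_bernoulli(1 / (1 + np.exp(-v_pos @ W[:, :, l])))    # step 2 (Eq. 8)
    v_neg = sample_bernoulli(1 / (1 + np.exp(-W[:, :, l] @ h_pos)))    # step 4 (Eq. 7)
    m = rng.choice(K, p=responsibilities(v_neg, W, T))                 # step 5
    h_neg = sample_bernoulli(1 / (1 + np.exp(-v_neg @ W[:, :, m])))    # step 6
    W[:, :, l] += lr * np.outer(v_pos, h_pos)   # steps 3 and 7 feed Eq. 12:
    W[:, :, m] -= lr * np.outer(v_neg, h_neg)   # D+ goes to component l, D- to m
```

Note how the positive and negative statistics can land on different components (l ≠ m), which is exactly why the sets S_k^+ and S_k^- can differ in size.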
Scaling free energies with a temperature parameter: In practice, the above learning algorithm
causes all the training cases to be captured by a single component RBM, and the other components to
be left unused. This is because free energy is an unnormalized quantity that can have very different
numerical scales across the RBMs. One RBM may happen to produce much smaller free energies
than the rest because of random differences in the initial parameter values, and thus end up with
high responsibilities for most training cases. Even if all the component RBMs are initialized to the
exact same initial parameter values, the problem can still arise after a few noisy weight updates. The
solution is to use a temperature parameter T when computing the responsibilities:
P(z_k = 1 | v) = exp(-F(v, z_k = 1)/T) / Σ_l exp(-F(v, z_l = 1)/T).    (13)
By choosing a large enough T , we can make sure that random scale differences in the free energies
do not lead to the above collapse problem. One possibility is to start with a large T and then gradually
anneal it as learning progresses. In our experiments we found that using a constant T works just as
well as annealing, so we keep it fixed.
4 Results
We apply the implicit mixture of RBMs to two datasets, MNIST [1] and NORB [7]. MNIST is a
set of handwritten digit images belonging to ten different classes (the digits 0 to 9). NORB contains
stereo-pair images of 3D toy objects taken under different lighting conditions and viewpoints. There
are five classes of objects in this set (human, car, plane, truck and animal). We use MNIST mainly
as a sanity check, and most of our results are for the much more difficult NORB dataset.
Evaluation method: Since computing the exact partition function of an RBM is intractable, it is
not possible to directly evaluate the quality of our mixture model's fit to the data, e.g., by computing
Figure 2: Features of the mixture model with five component RBMs trained on all ten classes of
MNIST images.
the log probability of a test set under the model. Recently it was shown that Annealed Importance
Sampling can be used to tractably approximate the partition function of an RBM [11]. While this
is an attractive option to consider in future work, for this paper we use the computationally cheaper
approach of evaluating the model by using it in a classification task. Classification accuracy is then
used as an indirect quantitative measure of how good the model is.
A reasonable evaluation criterion for a mixture modelling algorithm is that it should be able to find
clusters that are mostly "pure" with respect to class labels. That is, the set of data vectors that a
particular mixture component has high responsibilities for should have the same class label. So it
should be possible to accurately predict the class label of a given data vector from the responsibilities
of the different mixture components for that vector. Once a mixture model is fully trained, we
evaluate it by training a classifier that takes as input the responsibilities of the mixture components
for a data vector and predicts its class label. The goodness of the mixture model is measured by the
test set prediction accuracy of this classifier.
4.1 Results for MNIST
Before attempting to learn a good mixture model of the whole MNIST dataset, we tried two simpler
modeling tasks. First, we fitted an implicit mixture of two RBMs with 100 hidden units each to
an unlabelled dataset consisting of 4,000 twos and 4,000 threes. As we hoped, almost all of the
twos were modelled by one RBM and almost all of the threes by the other. On 2042 held-out
test cases, there were only 24 errors when an image was assigned the label of the most probable
RBM. This compares very favorably with logistic regression which needs 8000 labels in addition
to the images and gives 36 errors on the test set even when using a penalty on the squared weights
whose magnitude is set using a validation set. Logistic regression also gives a good indication of the
performance that could be expected from fitting a mixture of two Gaussians with a shared covariance
matrix, because logistic regression is equivalent to fitting such a mixture discriminatively.
We then tried fitting an implicit mixture model with only five component RBMs, each with 25 hidden
units, to the entire training set. We purposely make the model very small so that it is possible to
visually inspect the features and the responsibilities of the component RBMs and understand what
each component is modelling. This is meant to qualitatively confirm that the algorithm can learn a
sensible clustering of the MNIST data. (Of course, the model will have poor classification accuracy
as there are more classes than clusters, so it will merge multiple classes into a single cluster.) The
features of the component RBMs are shown in figure 2 (top row). The plots in the bottom row show
the fraction of training images for each of the ten classes that are hard-assigned to each component.
The learning algorithm has produced a sensible mixture model in that visually similar digit classes
are combined under the same mixture component. For example, ones and eights require many
similar features, so they are captured with a single RBM (leftmost in fig. 2). Similarly, images of
fours, sevens, and nines are all visually similar, and they are modelled together by one RBM (middle
of fig. 2).
We have also trained larger models with many more mixture components. As the number of components increases, we expect the model to partition the image space more finely, with the different
components specializing on various sub-classes of digits. If they specialize in a way that respects
the class boundaries, then their responsibilities for a data vector will become a better predictor of its
class label.
The component RBMs use binary units both in the visible and hidden layers. The image dimensionality is 784 (28 × 28 pixels). We have tried various settings for the number of mixture components
(from 20 to 120 in steps of 20) and a component's hidden layer size (50, 100, 200, 500). Classification accuracy increases with more components, until 80 components. Additional components give
slightly worse results. The hidden layer size is set to 100, but 200 and 500 also produce similar
accuracies. Out of the 60,000 training images in MNIST, we use 50,000 to train the mixture model
and the classifier, and the remaining 10,000 as a validation set for early stopping. The final models
are then tested on a separate test set of 10,000 images.
Once the mixture model is trained, we train a logistic regression classifier to predict the class label
from the responsibilities2 . It has as many inputs as there are mixture components, and a ten-way
softmax over the class labels at the output. With 80 components, there are only 80 × 10 + 10 =
810 parameters in the classifier (including the 10 output biases). In our experiments, classification
accuracy is consistently and significantly higher when unnormalized responsibilities are used as the
classifier input, instead of the actual posterior probabilities of the mixture components given a data
vector. These unnormalized values have no proper probabilistic interpretation, but nevertheless they
allow for better classification, so we use them in all our experiments.
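Concretely, the classifier features are the negative scaled free energies -F(v, z_k = 1)/T, without the softmax of Eq. 13. A sketch (the use of scikit-learn and the variable names are our own illustration, not the authors' implementation):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def responsibility_features(X, W, T=1.0):
    """Unnormalized responsibilities -F(v, z_k = 1)/T for each row of X."""
    return np.stack([-free_energies(v, W) / T for v in X])

# X_train, y_train: binary image matrix and class labels (hypothetical names)
# clf = LogisticRegression(max_iter=1000).fit(responsibility_features(X_train, W), y_train)
# err = 1.0 - clf.score(responsibility_features(X_test, W), y_test)
```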
Table 1 shows the classification error rate of the resulting classifier on the MNIST test set. As a simple
baseline comparison, we train a logistic regression
classifier that predicts the class label from the raw
pixels. This classifier has 784 × 10 + 10 = 7850
parameters and yet the mixture-based classifier has
less than half the error rate. The unnormalized responsibilities therefore contain a significant amount
of information about the class labels of the images,
which indicates that the implicit mixture model has learned clusters that mostly agree with the class
boundaries, even though it is not given any class information during training.
Table 1: MNIST Test set error rates.

  Logistic regression classifier input   |  % Test error
  Unnormalized responsibilities          |  3.36%
  Pixels                                 |  7.28%
4.2 Results for NORB
NORB is a much more difficult dataset than MNIST because the images are of very different classes
of 3D objects (instead of 2D patterns) shown from different viewpoints and under various lighting
conditions. The pixels are also no longer binary-valued, but instead span the grayscale range [0, 255].
So binary units are no longer appropriate for the visible layer of the component RBMs. Gaussian
visible units have previously been shown to be effective for modelling grayscale images [6], and
therefore we use them here. See [6] for details about Gaussian units. As in that paper, the variance
of the units is fixed to 1, and only their means are learned.
Learning an RBM with Gaussian visible units can be slow, as it may require a much greater number
of weight updates than an equivalent RBM with binary visible units. This problem becomes even
worse in our case since a large number of RBMs have to be trained simultaneously. We avoid it
by first training a single RBM with Gaussian visible units and binary hidden units on the raw pixel
data, and then treating the activities of its hidden layer as pre-processed data to which the implicit
mixture model is applied. Since the hidden layer activities of the pre-processing RBM are binary, the
mixture model can now be trained efficiently with binary units in the visible layer3 . Once trained,
the low-level RBM acts as a fixed pre-processing step that converts the raw grayscale images into
2 Note that the mixture model parameters are kept fixed when training the classifier, so the learning of the
mixture model is entirely unsupervised.
3 We actually use the real-valued probabilities of the hidden units as the data, and we also use real-valued
probabilities for the reconstructions. On other tasks, the learning gives similar results using binary values
sampled from these real-valued probabilities but is slower.
[Figure 3 diagram: Gaussian visible units i (raw pixel data) feed the fixed pre-processing transformation W_ij to produce binary data j, which connects through the tensor W_jmk to hidden units m and a 1-of-K activation k.]
Figure 3: Implicit mixture model used for MNORB.
binary vectors. Its parameters are not modified further when training the mixture model. Figure 3
shows the components of the complete model.
A difficulty with training the implicit mixture model (or any other mixture model) on NORB is
that the "natural" clusters in the dataset correspond to the six lighting conditions instead of the five
object classes. The objects themselves are small (in terms of area) relative to the background, while
lighting affects the entire image. Any clustering signal provided by the object classes will be weak
compared to the effect of large lighting changes. So we simplify the dataset slightly by normalizing
the lighting variations across images. Each image is multiplied by a scalar such that all images
have the same average pixel value. This significantly reduces the interference of the lighting on
the mixture learning4. Finally, to speed up experiments, we subsample the images from 96 × 96 to
32 × 32 and use only one image of the stereo pair. We refer to this dataset as "Modified NORB"
or "MNORB". It contains 24,300 training images and an equal number of test images. From the
training set, 4,300 are set aside as a validation set for early stopping.
We use 2000 binary hidden units for the preprocessing RBM, so the input dimensionality of the
implicit mixture model is 2000. We have tried many different settings for the number of mixture
components and the hidden layer size of the components. The best classification results are given
by 100 components, each with 500 hidden units. This model has about 100 × 500 × 2000 = 10^8
parameters, and takes about 10 days to train on an Intel Xeon 3Ghz processor.
Table 2 shows the test set error rates for a logistic regression classifier trained on various input
representations. Mixture of Factor Analyzers (MFA) [3] is similar to the implicit mixture of RBMs
in that it also learns a clustering while simultaneously learning a latent representation per cluster
component. But it is a directed model based on linear-Gaussian representations, and it can be learned
tractably by maximizing likelihood with EM. We train MFA on the raw pixel data of MNORB. The
MFA model that gives the best classification accuracy (shown in table 2) has 100 component Factor
Analyzers with 100 factors each. (Note that simply making the number of learnable parameters
equal is not enough to match the capacities of the different models because RBMs use binary latent
representations, while FAs use continuous representations. So we cannot strictly control for capacity
when comparing these models.)
A mixture of multivariate Bernoulli distributions (see e.g. section 9.3.3 of [2]) is similar to an
implicit mixture model whose component RBMs have no hidden units and only visible biases as
trainable parameters. The differences are that a Bernoulli mixture is a directed model, it has explicitly parameterized mixing proportions, and maximum likelihood learning with EM is tractable. We
train this model with 100 components on the activation probabilities of the preprocessing RBM?s
hidden units. The classification error rate for this model is shown in table 2.
4 The normalization does not completely remove lighting information from the data. A logistic regression
classifier can still predict the lighting label with 18% test set error when trained and tested on normalized
images, compared to 8% error for unnormalized images.
Table 2: MNORB Test set error rates for a logistic regression classifier with different types of input
representations.
  Logistic regression classifier input                                                               |  % Test error
  Unnormalized responsibilities computed by the implicit mixture of RBMs                              |  14.65%
  Probabilities computed by the transformation W_ij in fig 3 (i.e. the pre-processed representation)  |  16.07%
  Raw pixels                                                                                          |  20.60%
  Unnormalized responsibilities of an MFA model trained on the pre-processed representation in fig 3  |  22.65%
  Unnormalized responsibilities of an MFA model trained on raw pixels                                 |  24.57%
  Unnormalized responsibilities of a Mixture of Bernoullis model trained on the pre-processed representation in fig 3  |  28.53%
These results show that the implicit mixture of RBMs has learned clusters that reflect the class
structure in the data. By the classification accuracy criterion, the implicit mixture is also better than
MFA. The results also confirm that the lack of explicitly parameterized mixing proportions does not
prevent the implicit mixture model from discovering interesting cluster structure in the data.
5 Conclusions
We have presented a tractable formulation of a mixture of RBMs. That such a formulation is even
possible is a surprising discovery. The key insight here is that the mixture model can be cast as a
third-order Boltzmann machine, provided we are willing to abandon explicitly parameterized mixing
proportions. Then it can be learned tractably using contrastive divergence. As future work, it would
be interesting to explore whether these ideas can be extended to modelling time-series data.
References
[1] MNIST database, http://yann.lecun.com/exdb/mnist/.
[2] C. M. Bishop. Pattern Recognition and Machine Learning. Springer, 2006.
[3] Z. Ghahramani and G. E. Hinton. The em algorithm for mixtures of factor analyzers. Technical Report
CRG-TR-96-1, Dept. of Computer Science, University of Toronto, 1996.
[4] X. He, R. S. Zemel, and M. A. Carreira-Perpinan. Multiscale conditional random fields for image labeling.
In CVPR, pages 695-702, 2004.
[5] G. E. Hinton. Training products of experts by minimizing contrastive divergence. Neural Computation,
14(8):1711-1800, 2002.
[6] G. E. Hinton and R. Salakhutdinov. Reducing the dimensionality of data with neural networks. Science,
313:504-507, 2006.
[7] Y. LeCun, F. J. Huang, and L. Bottou. Learning methods for generic object recognition with invariance
to pose and lighting. In CVPR, Washington, D.C., 2004.
[8] S. Roth and M. J. Black. Fields of experts: A framework for learning image priors. In CVPR, pages
860-867, 2005.
[9] S. Roth and M. J. Black. Steerable random fields. In ICCV, 2007.
[10] N. Le Roux and Y. Bengio. Representational power of restricted boltzmann machines and deep belief
networks. Neural Computation, To appear.
[11] R. Salakhutdinov and I. Murray. On the quantitative analysis of deep belief networks. In ICML, Helsinki,
2008.
[12] I. Sutskever and G. E. Hinton. Deep narrow sigmoid belief networks are universal approximators. Neural
Computation, To appear.
[13] M. Welling, M. Rosen-Zvi, and G. E. Hinton. Exponential family harmoniums with an application to
information retrieval. In NIPS 17, 2005.
2,798 | 3,537 | Clusters and Coarse Partitions in LP Relaxations
David Sontag
CSAIL, MIT
[email protected]
Amir Globerson
School of Computer Science and Engineering
The Hebrew University
[email protected]
Tommi Jaakkola
CSAIL, MIT
[email protected]
Abstract
We propose a new class of consistency constraints for Linear Programming (LP)
relaxations for finding the most probable (MAP) configuration in graphical models. Usual cluster-based LP relaxations enforce joint consistency on the beliefs of
a cluster of variables, with computational cost increasing exponentially with the
size of the clusters. By partitioning the state space of a cluster and enforcing consistency only across partitions, we obtain a class of constraints which, although
less tight, are computationally feasible for large clusters. We show how to solve
the cluster selection and partitioning problem monotonically in the dual LP, using the current beliefs to guide these choices. We obtain a dual message passing
algorithm and apply it to protein design problems where the variables have large
state spaces and the usual cluster-based relaxations are very costly. The resulting method solves many of these problems exactly, and significantly faster than a
method that does not use partitioning.
1 Introduction
A common inference task in graphical models is finding the most likely setting of the values of the
variables (the MAP assignment). Indeed, many important practical problems can be formulated as
MAP problems (e.g., protein-design problems [9]). The complexity of the MAP problem depends
on the structure of the dependencies between the variables (i.e. the graph structure) and is known to
be NP-hard in general. Specifically, for problems such as protein-design, the underlying interaction
graphs are dense, rendering standard exact inference algorithms useless.
A great deal of effort has been spent recently on developing approximate algorithms for the MAP
problem. One promising approach is based on linear programming relaxations, solved via message
passing algorithms akin to belief propagation [2, 3]. In this case, the MAP problem is first cast as an
integer linear program, and then is relaxed to a linear program by removing the integer constraints
and adding new constraints on the continuous variables. Whenever the relaxed solution is integral, it
is guaranteed to be the optimal solution. However, this happens only if the relaxation is sufficiently
"tight" (with respect to a particular objective function).
Relaxations can be made increasingly tight by introducing LP variables that correspond to clusters
of variables in the original model. In fact, in recent work [6] we have shown that by adding a set
of clusters over three variables, complex problems such as protein-design and stereo-vision may be
solved exactly. The problem with adding clusters over variables is that computational cost scales
exponentially with the cluster size. Consider, for example, a problem where each variable has 100
states (cf. protein-design). Using clusters of s variables means adding 100^s LP variables, which is
computationally demanding even for clusters of size three.
Our goal in the current paper is to design methods that introduce constraints over clusters at a reduced
computational cost. We achieve this by representing clusters at a coarser level of granularity. The
key observation is that it may not be necessary to represent all the possible joint states of a cluster of
variables. Instead, we partition the cluster's assignments at a coarser level, and enforce consistency
only across such partitions. This removes the number of states per variable from consideration, and
instead focuses on resolving currently ambiguous settings of the variables. Following the approach
of [2], we formulate a dual LP for the partition-based LP relaxations and derive a message passing
algorithm for optimizing the dual LP based on block coordinate descent. Unlike standard message
passing algorithms, the algorithm we derive involves passing messages between coarse and fine
representations of the same set of variables.
MAP and its LP relaxation. We consider discrete pairwise Markov random fields on a graph
G = (V, E), defined as the following exponential family distribution1
p(x; θ) = (1/Z) exp( Σ_{ij∈E} θ_ij(x_i, x_j) )    (1)

Here θ is a parameter vector specifying how pairs of variables in E interact. The MAP problem we
consider here is to find the most likely assignment of the variables under p(x; θ) (we assume that the
evidence has already been incorporated into the model). This is equivalent to finding the assignment
x^M that maximizes the function f(x; θ) = Σ_{ij∈E} θ_ij(x_i, x_j).
The resulting discrete optimization problem may also be cast as a linear program. Define μ to
be a vector of marginal probabilities associated with the interacting pairs of variables (edges)
{μ_ij(x_i, x_j)}_{ij∈E} as well as {μ_i(x_i)}_{i∈V} for the nodes. The set of μ's that could arise from
some joint distribution on G is known as the marginal polytope M(G) [7]. The MAP problem is
then equivalent to the following linear program:

max_x f(x; θ) = max_{μ∈M(G)} μ · θ,    (2)

where μ · θ = Σ_{ij∈E} Σ_{x_i,x_j} θ_ij(x_i, x_j) μ_ij(x_i, x_j). The extreme points of the marginal polytope
are integral and correspond one-to-one with assignments x. Thus, there always exists a maximizing μ that is integral and corresponds to x^M. Although the number of variables in this LP is only
O(|E| + |V|), the difficulty comes from an exponential number of linear inequalities typically required to describe the marginal polytope M(G).
LP relaxations replace the difficult global constraint that the marginals in μ must arise from some
common joint distribution by ensuring only that the marginals are locally consistent with one another. The most common such relaxation, pairwise consistency, enforces that the edge marginals
are consistent with the node marginals: {μ | Σ_{x_j} μ_ij(x_i, x_j) = μ_i(x_i)}. The integral extreme
points of this local marginal polytope also correspond to assignments. If a solution is obtained at
one such extreme point, it is provably the MAP assignment. However, the local marginal polytope
also contains fractional extreme points, and, as a relaxation, will in general not be tight.
We are therefore interested in tightening the relaxation. There are many known ways to do so, including cycle inequalities [5] and semi-definite constraints [8]. However, perhaps the most straightforward approach corresponds to lifting the relaxation by adding marginals over clusters of nodes to
the model (cf. generalized belief propagation [10]) and constraining them to be consistent with the
edge marginals. However, each cluster comes with a computational cost that grows as k^s, where s
is the number of variables in the cluster and k is the number of states for each variable. We seek to
offset this exponential cost by introducing coarsened clusters, as we show next.
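As a worked example of the savings (our own numbers, using the protein-design scale mentioned earlier): a triplet cluster over variables with k = 100 states requires

k^s = 100^3 = 10^6 LP variables, whereas coarsening each of the three variables to 5 partitions requires only 5^3 = 125.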
2 Coarsened clusters and consistency constraints
We begin with an illustrative example. Suppose we have a graphical model that is a triangle with
each variable taking k states. We can recover the exact marginal polytope in this case by forcing the
pairwise marginals μ_ij(x_i, x_j) to be consistent with some distribution μ_123(x_1, x_2, x_3). However,
when k is large, introducing the corresponding k^3 variables to our LP may be too costly and perhaps
unnecessary, if a weaker consistency constraint would already lead to an integral extreme point. To
this end, we will use a coarse-grained version of μ_123 where the joint states are partitioned into
larger collections, and consistency is enforced over the partitions.
1 We do not use potentials on single nodes θ_i(x_i) since these can be folded into θ_ij(x_i, x_j). Our algorithm can also be derived with explicit θ_i(x_i), and we omit the details for brevity.
[Figure 1 diagrams: the fine edge (x_i, x_k) and two views of the coarsened triplet (z_i, z_j, z_k).]
Figure 1: A graphical illustration of the consistency constraint between the original (fine granularity)
edge (x_i, x_k) and the coarsened triplet (z_i, z_j, z_k). The two should agree on the marginal of (z_i, z_k).
For example, the shaded area in all three figures represents the same probability mass.
The simplest partitioning scheme builds on coarse-grained versions of each variable X_i. Let Z_i
denote a disjoint collection of sets covering the possible values of X_i. For example, if variable X_i
has five states, Z_i might be defined as {{1, 2}, {3, 5}, {4}}. Given such a partitioning scheme,
we can introduce a distribution ν_123(z_1, z_2, z_3) over coarsened variables and constrain it to agree
with μ_ik(x_i, x_k) in the sense that they both yield the same marginals for (z_i, z_k). This is illustrated
graphically in Fig. 1. In the case when Z_i individuates each state, i.e., {{1}, {2}, {3}, {4}, {5}}, we
recover the usual cluster consistency constraint.
We use the above idea to construct tighter outer bounds on the marginal polytope and incorporate
them into the MAP-LP relaxation. We assume that we are given a set of clusters C. For each cluster
c ∈ C and variable i ∈ c we also have a partition Z_i^c as in the above example2 (the choice of clusters
and partitions will be discussed later). We introduce marginals ν_c(z_c) over the coarsened clusters
and constrain them to agree with the edge variables μ_ij(x_i, x_j) for all edges ij ∈ c:

Σ_{x_i∈z_i^c, x_j∈z_j^c} μ_ij(x_i, x_j) = Σ_{z_c\{z_i^c, z_j^c}} ν_c(z_c)    (3)
The key idea is that the coarsened cluster represents higher-order marginals albeit at a lower resolution, whereas the edge variables represent lower-order marginals but at a finer resolution. The
constraint in Eq. 3 implies that these two representations should agree.
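As a sketch of what Eq. 3 checks in practice (our own illustration; the partition of variable i in cluster c is stored as an integer array mapping each fine state x_i to its coarse index z_i^c[x_i]):

```python
import numpy as np

def coarsen_edge_marginal(mu_ij, part_i, part_j, Ki, Kj):
    """Sum mu_ij(x_i, x_j) within each coarse block (left side of Eq. 3)."""
    out = np.zeros((Ki, Kj))
    for xi in range(mu_ij.shape[0]):
        for xj in range(mu_ij.shape[1]):
            out[part_i[xi], part_j[xj]] += mu_ij[xi, xj]
    return out

def cluster_edge_marginal(nu_c, keep=(0, 1)):
    """Sum nu_c(z_c) over all coarse variables except z_i^c, z_j^c (right side)."""
    drop = tuple(a for a in range(nu_c.ndim) if a not in keep)
    return nu_c.sum(axis=drop)

# Eq. 3 requires, for every edge ij in c:
# np.allclose(coarsen_edge_marginal(mu_ij, pi, pj, Ki, Kj), cluster_edge_marginal(nu_c))
```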
We can now state the LP that we set out to solve. Our LP optimizes over the following marginal
variables: μ_ij(x_i, x_j), μ_i(x_i) for the edges and nodes of the original graph, and ν_c(z_c) for the
coarse-grained clusters. We would like to constrain these variables to belong to the following outer
bound on the marginal polytope:

M_C(G) = { μ ≥ 0 :  Σ_{x_j} μ_ij(x_i, x_j) = μ_i(x_i),
                    Σ_{x_i∈z_i^c, x_j∈z_j^c} μ_ij(x_i, x_j) = Σ_{z_c\{z_i^c, z_j^c}} ν_c(z_c),
                    Σ_{x_i, x_j} μ_ij(x_i, x_j) = 1 }    (4)

Note that Σ_{z_c} ν_c(z_c) = 1 is implied by the above constraints. The corresponding MAP-LP relaxation is then:

max_{μ ∈ M_C(G)} μ · θ    (5)
This LP could in principle be solved using generic LP optimization tools. However, a more efficient
and scalable approach is to solve it via message passing in the dual LP, which we show how to do
in the next section. In addition, for this method to be successful, it is critical that we choose good
coarsenings, meaning that it should have few partitions per variable, yet still sufficiently tightens the
relaxation. Our approach for choosing the coarsenings is to iteratively solve the LP using an initial
relaxation (beginning with the pairwise consistency constraints), then to introduce additional cluster
constraints, letting the current solution guide how to coarsen the variables. As we showed in earlier
work [6], solving with the dual LP gives us a simple method for "warm starting" the new LP (the
tighter relaxation) using the previous solution, and also results in an algorithm for which every step
monotonically decreases an upper bound on the MAP assignment. We will give further details of
the coarsening scheme in Section 4.
2 We use a superscript of c to highlight the fact that different clusters may use different partitionings for Z_i. Also, there can be multiple clusters on the same set of variables, each using a different partitioning.
3 Dual linear program and a message passing algorithm
In this section we give the dual of the partition-based LP from Eq. 5, and use it to obtain a message
passing algorithm to efficiently optimize this relaxation. Our approach extends earlier work by
Globerson and Jaakkola [2] who gave the generalized max-product linear programming (MPLP)
algorithm to solve the usual (non-coarsened) cluster LP relaxation in the dual.
The dual formulation in [2] was derived by adding auxiliary variables to the primal. We followed a similar approach to obtain the LP dual of Eq. 5. The dual variables are as follows:
λ_{ij→i}(x_i, x_j), λ_{ij→j}(x_i, x_j), λ_{ij→ij}(x_i, x_j) for every edge ij ∈ E, and λ_{c→ij}(z_c) for every coarsened cluster c and edge ij ∈ c. As in [2], we define the following functions of λ:

λ_{ij→i}(x_i) = max_{x_j} λ_{ij→i}(x_i, x_j),    λ_{ij→ij}(x_i, x_j) = λ_{ij→ij}(x_i, x_j)    (6)

λ_{c→ij}(z_i^c, z_j^c) = max_{z_c\{z_i^c, z_j^c}} λ_{c→ij}(z_c)    (7)
As we show below, the variables λ correspond to the messages sent in the message passing algorithm
that we use for optimizing the dual. Thus λ_{ij→i}(x_i) should be read as the message sent from edge
ij to node i, and λ_{c→ij}(z_i^c, z_j^c) is the message from the coarsened cluster to one of its intersection
edges. Finally, λ_{ij→ij}(x_i, x_j) is the message sent from an edge to itself. The dual of Eq. 5 is the
following constrained minimization problem:
min_λ  Σ_i max_{x_i} Σ_{k∈N(i)} λ_{ik→i}(x_i) + Σ_{ij∈E} max_{x_i,x_j} [ λ_{ij→ij}(x_i, x_j) + Σ_{c:ij∈c} λ_{c→ij}(z_i^c[x_i], z_j^c[x_j]) ]    (8)

s.t.  λ_{ij→i}(x_i, x_j) + λ_{ij→j}(x_i, x_j) + λ_{ij→ij}(x_i, x_j) = θ_ij(x_i, x_j)    ∀ ij ∈ E, x_i, x_j

      Σ_{ij∈c} λ_{c→ij}(z_c) = 0    ∀ c, z_c
The notation z_i^c[x_i] refers to the mapping from x_i ∈ X_i to the coarse state z_i^c ∈ Z_i^c such that x_i ∈ z_i^c.
By convex duality, the dual objective evaluated at a dual feasible point upper bounds the primal LP
optimum, which in turn upper bounds the value of the MAP assignment. It is illustrative to compare
this dual LP with [2] where the cluster dual variables were λ_{c→ij}(x_c). Our dual corresponds to
introducing the additional constraint that λ_{c→ij}(x_c) = λ_{c→ij}(x'_c) whenever z_c[x_c] = z_c[x'_c].
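For intuition, here is how the bound of Eq. 8 would be evaluated from a given set of messages; the data structures are our own illustration (any feasible λ yields an upper bound on the MAP value):

```python
import numpy as np

def dual_objective(lam_en, lam_ee, lam_ce, neighbors, edges, clusters_of, part):
    """Evaluate the dual objective of Eq. 8 (a sketch).

    lam_en[(e, i)]: array over x_i        (edge-to-node messages)
    lam_ee[e]:      array over (x_i, x_j) (edge-to-edge messages)
    lam_ce[(c, e)]: array over coarse states (z_i^c, z_j^c)
    part[(c, i)]:   array mapping fine state x_i to coarse index z_i^c[x_i]
    """
    obj = 0.0
    for i in neighbors:                                   # node terms
        obj += sum(lam_en[(e, i)] for e in neighbors[i]).max()
    for e in edges:                                       # edge terms
        b = lam_ee[e].copy()
        for c in clusters_of[e]:
            pi, pj = part[(c, e[0])], part[(c, e[1])]
            b += lam_ce[(c, e)][np.ix_(pi, pj)]           # expand coarse table to fine states
        obj += b.max()
    return obj
```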
The advantage of the above dual is that it can be optimized via a simple message passing algorithm
that corresponds to block coordinate descent. The key idea is that it is possible to fix the values of
the dual variables corresponding to all clusters except one, and to find a closed form solution for the
non-fixed ones. It then turns out that one does not need to work with the full λ_{c→ij}(z_c) variables directly, but can
keep only the λ message variables of Eqs. 6-7. Fig. 2 provides the form of the updates for all three message
types. S(c) is the set of edges in cluster c (e.g. ij, jk, ik). Importantly, all messages outgoing from
a cluster or edge must be sent simultaneously.
Here we derive the cluster to edge updates, which differ from [2]. Assume that all values of λ are
fixed except for λ_{c→ij}(z_i^c, z_j^c) for all ij ∈ c in some cluster c. The term in the dual objective that
depends on λ_{c→ij}(z_i^c, z_j^c) can be written equivalently as
max_{x_i,x_j} [ λ_{ij→ij}(x_i, x_j) + Σ_{c':c'≠c, ij∈c'} λ_{c'→ij}(z_i^{c'}[x_i], z_j^{c'}[x_j]) + λ_{c→ij}(z_i^c[x_i], z_j^c[x_j]) ]

    = max_{z_i^c, z_j^c} [ b_ij(z_i^c, z_j^c) + λ_{c→ij}(z_i^c, z_j^c) ],    (11)

where b_ij is the coarse table defined in Eq. 9 (Fig. 2).
Due to the constraint Σ_{ij∈c} λ_{c→ij}(z_c) = 0, all of the λ_{c→ij} need to be updated simultaneously. It
can be easily shown (using an equalization argument as in [2]) that the λ_{c→ij}(z_c) that satisfy the
constraint and minimize the objective are given by

λ_{c→ij}(z_c) = -b_ij(z_i^c, z_j^c) + (1/|S(c)|) Σ_{st∈c} b_st(z_s^c, z_t^c).    (12)

The message update given in Fig. 2 follows from the definition of λ_{c→ij}. Note that none of the
cluster messages involve the original cluster variables x_c, but rather only z_c. Thus, we have achieved
the goal of both representing higher-order clusters and doing so at a reduced computational cost.
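A sketch of this computation for a triplet cluster (our own data structures): the b_st tables of Eq. 9 live entirely on coarse states, so the max-sum over the cluster in Eqs. 10-12 costs on the order of |Z|^3 rather than |X|^3 operations:

```python
import numpy as np

def cluster_to_edge(b):
    """New messages lam_{c->st} of Eqs. 10/12 for a triplet cluster c = (i, j, k),
    given b = {('i','j'): b_ij, ('j','k'): b_jk, ('i','k'): b_ik} over coarse states."""
    bij, bjk, bik = b[('i', 'j')], b[('j', 'k')], b[('i', 'k')]
    # tot[zi, zj, zk] = b_ij + b_jk + b_ik, built by broadcasting
    tot = bij[:, :, None] + bjk[None, :, :] + bik[:, None, :]
    n = len(b)                                 # |S(c)| = 3 for a triplet
    return {('i', 'j'): -bij + tot.max(axis=2) / n,   # max over z_k
            ('j', 'k'): -bjk + tot.max(axis=0) / n,   # max over z_i
            ('i', 'k'): -bik + tot.max(axis=1) / n}   # max over z_j
```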
• Edge to Node: For every edge ij ∈ E and node i (or j) in the edge:

  λ_{ij→i}(x_i) ← -(2/3) λ_i^{-j}(x_i) + (1/3) max_{x_j} [ Σ_{c:ij∈c} λ_{c→ij}(z_i^c[x_i], z_j^c[x_j]) + λ_{ij→ij}(x_i, x_j) + λ_j^{-i}(x_j) + θ_ij(x_i, x_j) ]

  where λ_i^{-j}(x_i) = Σ_{k∈N(i)\j} λ_{ik→i}(x_i).

• Edge to Edge: For every edge ij ∈ E:

  λ_{ij→ij}(x_i, x_j) ← -(2/3) Σ_{c:ij∈c} λ_{c→ij}(z_i^c[x_i], z_j^c[x_j]) + (1/3) [ λ_j^{-i}(x_j) + λ_i^{-j}(x_i) + θ_ij(x_i, x_j) ]

• Cluster to Edge: First define

  b_ij(z_i^c, z_j^c) = max_{x_i∈z_i^c, x_j∈z_j^c} [ λ_{ij→ij}(x_i, x_j) + Σ_{c'≠c:ij∈c'} λ_{c'→ij}(z_i^{c'}[x_i], z_j^{c'}[x_j]) ]    (9)

  The update is then:

  λ_{c→ij}(z_i^c, z_j^c) ← -b_ij(z_i^c, z_j^c) + (1/|S(c)|) max_{z_c\{z_i^c, z_j^c}} Σ_{st∈c} b_st(z_s^c, z_t^c)    (10)
Figure 2: The message passing updates for solving the dual LP given in Eq. 8.
The algorithm in Fig. 2 solves the dual for a given choice of coarsened clusters. As mentioned
in Sec. 2, we would like to add such clusters gradually, as in [6]. Our overall algorithm is thus
similar in structure to [6] and proceeds as follows (we denote the message passing algorithm from
Fig. 2 by MPLP): 1. Run MPLP until convergence using the pairwise relaxation. 2. Find an integral
solution x by locally maximizing the single node beliefs b_i(x_i) = Σ_{k∈N(i)} λ_{ki→i}(x_i). 3. If the
dual objective given in Eq. 8 is sufficiently close to the primal objective f(x; θ), terminate. 4. Add
a new coarsened cluster c using the strategy given in Sec. 4. 5. Initialize messages going out of
the new cluster c to zero, and keep all the previous message values (this will not change the bound
value). 6. Run MPLP for N iterations, then return to 2.
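In outline (a sketch; the helper callables are placeholders for the routines described in the text, and MPLP denotes the message passing of Fig. 2):

```python
def solve_map(theta, mplp, decode, dual_value, primal_value,
              choose_cluster, add_cluster, tol=1e-4, n_iters=20):
    """Outer loop of the overall algorithm (steps 1-6)."""
    msgs = mplp(theta, until_convergence=True)        # step 1: pairwise relaxation
    while True:
        x = decode(msgs)                              # step 2: local maximization
        if dual_value(msgs) - primal_value(x, theta) < tol:
            return x                                  # step 3: certificate of (near-)optimality
        c, parts = choose_cluster(msgs)               # step 4 (Sec. 4)
        msgs = add_cluster(msgs, c, parts)            # step 5: new messages start at zero
        msgs = mplp(msgs, n_iters=n_iters)            # step 6
```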
4 Choosing coarse partitions
Until now we have not discussed how to choose the clusters to add and their partitionings. Our
strategy for doing so closely follows that of our earlier work [6]. Given a set C of candidate clusters
to add (e.g., the set of all triplets in the graph as in [6]), we would like to add a cluster that would
result in the maximum decrease of the dual bound on the MAP. In principle such a cluster could be
found by optimizing the dual for each candidate cluster, then choosing the best one. However, this is
computationally costly, so in [6] we instead use the bound decrease resulting from just once sending
messages from the candidate cluster to its intersection edges.
If we were to add the full (un-coarsened) cluster, this bound decrease would be:
d(c) = Σ_{ij∈c} max_{x_i,x_j} b_ij(x_i, x_j) - max_{x_c} Σ_{ij∈c} b_ij(x_i, x_j),    (13)

where b_ij(x_i, x_j) = λ_{ij→ij}(x_i, x_j) + Σ_{c:ij∈c} λ_{c→ij}(z_i^c[x_i], z_j^c[x_j]).
Our strategy now is as follows: we add the cluster c that maximizes d(c), and then choose a partitioning Z_i^c for all i ∈ c that is guaranteed to achieve a decrease that is close to d(c). This can clearly
be achieved by using the trivial partition Z_i^c = X_i (which achieves d(c)). However, in many cases
it is also possible to achieve it while using much coarser partitionings.
The set of all possible partitionings Z_i^c is too large to optimize over. Instead, we consider just |X_i|
candidate partitions that are generated based on the beliefs b_i(x_i). Intuitively, the states with lower
belief values b_i(x_i) are less likely to influence the MAP, and can thus be bundled together. We
will therefore consider partitions where the k states with lowest belief values are put into the same
"catch-all" coarse state s_i^c, and all other states of x_i get their own coarse state. Formally, a partition
Z_i^c is characterized by a value τ_i such that s_i^c is the set of all x_i with b_i(x_i) < τ_i. The question next
is how big we can make the catch-all state without sacrificing the bound decrease.
We employ a greedy scheme whereby each i ∈ c (in arbitrary order) is partitioned separately, while
the other partitions are kept fixed. The process starts with Z_i^c = X_i for all i ∈ c. We would like to
choose s_i^c such that it is sufficiently separated from the state that achieves d(c). Formally, given a
margin parameter ε we choose τ_i to be as large as possible such that the following constraint still
holds3:

max_{z_c\{z_i^c}: z_i^c = s_i^c} Σ_{st∈c} b_st(z_s^c, z_t^c) ≥ max_{x_c} Σ_{st∈c} b_st(x_s, x_t) - ε,

where the first maximization is over the coarse variables Z_{c\i}, and z_i^c is fixed to the catch-all state s_i^c
(note that the partitioning for Z_i^c is a function of τ_i). We can find the optimal τ_i in time O(|X_i|^{|c|})
by starting with τ_i = -∞ and increasing it until the constraint is violated. Since each subsequent
value of s_i^c differs by one additional state x_i, we can re-use the maximizations over z_{c\i} for the
previous value of s_i^c in evaluating the constraint for the current s_i^c.
It can be shown by induction that this results in a coarsening that has a guaranteed bound decrease
of at least d(c) + min(0, ε). Setting ε < 0 would give a partitioning with fewer coarse states at the
cost of a smaller guaranteed bound decrease. On the other hand, setting ε > 0 results in a margin
between the value of the dual objective (after sending the coarsened cluster message) and its value
if we were to fix x_i in the max terms of Eq. 11 to a value in s_i^c. This makes it less likely that a state
in s_i^c will become important again in subsequent message passing iterations. For the experiments in
this paper we use ε = 3d(c), scaling ε with the value of the guaranteed bound decrease for the full
cluster. Note that this greedy algorithm does not necessarily find the partitioning with the fewest
number of coarse states that achieves the bound decrease.
5 Experiments
We report results on the protein design problem, originally described in [9]. The protein design
problem is the inverse of the protein folding problem. Given a desired backbone structure for the
protein, the goal is to construct the sequence of amino-acids that results in a low energy, and thus
stable, configuration. We can use an approximate energy function to guide us towards finding a
set of amino-acids and rotamer configurations with minimal energy. In [9] the design problem was
posed as finding a MAP configuration in a pairwise MRF. The models used there (which are also
available online) have a number of states per variable that is between 2 and 158, and contain up to
180 variables per model. The models are also quite dense so that exact calculation is not feasible.
Recently we showed [6] that all but one of the problems described in [9] can be solved exactly by
using an LP relaxation with clusters on three variables. However, since each individual state has roughly 100 possible values, processing triplets required 10^6 operations, making the optimization
costly. In what follows we describe two sets of experiments that show that, by coarsening, we can
both significantly reduce the computation time and achieve similar performance as if we had used
un-coarsened triplets [6]. The experiments differ in the strategy for adding triplets, and illustrate
two performance regimes. In both experimental setups we first run the standard edge-based message
passing algorithm for 1000 iterations.
In the first experiment, we add all triplets that correspond to variables whose single node beliefs are
tied (within 10^{-5}) at the maximum after running the edge-based algorithm. Since tied beliefs correspond to fractional LP solutions, it is natural to consider these in tighter relaxations. The triplets
correspond to partitioned variables, as explained in Sec. 2. The partitioning is guided by the ties in
the single node beliefs. Specifically, for each variable Xi we find states whose single node beliefs
are tied at the maximum. Denote the number of states maximizing the belief by r. Then, we partition
3 The constraint may be infeasible for γ > 0, in which case we simply choose Z_i^c = X_i.
[Figure 3: left panel plots the dual and primal (best decoding) objectives against time in hours; right panel plots seconds per iteration against iteration number; both panels compare this paper with Sontag et al. UAI '08.]
Figure 3: Comparison with algorithm from [6] for the protein '1aac', after the first 1000 iterations. Left: Dual objective as a function of time. Right: The cost per one iteration over the entire graph.
the states into r subsets, each containing a different maximizing state. The other (non-maximizing)
states are split randomly among the r subsets. The triplets are then constructed over the coarsened
variables Zic and the message passing algorithm of Sec. 3 is applied to the resulting structure. After
convergence of the algorithm, we recalculate the single node beliefs. These may result in a different
partition scheme, and hence new variables Zic . We add new triplets corresponding to the new variables and re-run. We repeat until the dual-LP bound is sufficiently close to the value of the integral
assignment obtained from the messages (note that these values would not coincide if the relaxation
were not tight; in these experiments they do, so the final relaxation is tight).
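The tie-based partitioning used in this experiment can be sketched as follows; the array names and the round-robin split are our illustration, since the text only specifies that non-maximizing states are split randomly among the r subsets.

import numpy as np

def partition_by_ties(beliefs_i, tol=1e-5, rng=np.random.default_rng(0)):
    # Partition the states of one variable: each state whose belief is tied
    # (within tol) with the maximum seeds its own subset, and the remaining
    # states are split randomly among those subsets.
    b = np.asarray(beliefs_i)
    maximizers = np.flatnonzero(b >= b.max() - tol)   # the r tied states
    others = np.setdiff1d(np.arange(len(b)), maximizers)
    rng.shuffle(others)
    subsets = [[int(m)] for m in maximizers]
    for idx, s in enumerate(others):   # one possible random split
        subsets[idx % len(subsets)].append(int(s))
    return [sorted(s) for s in subsets]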
We applied the above scheme to the ten smallest proteins in the dataset used in [6] (for the larger
proteins we used a different strategy described next). We were able to solve all ten exactly, as in
[6]. The mean running time was six minutes. The gain in computational efficiency as a result of
using coarsened triplets was considerable: the state space of a coarsened triplet was on average 3000 times smaller than that of the original triplet, resulting in a factor-3000
speed gain over a scheme that uses the complete (un-coarsened) triplets.4 This big factor comes
about because a very small number of states are tied per variable, thus increasing the efficiency of
our method where the number of partitions is equal to the number of tied states. While running on
full triplets was completely impractical, the coarsened message passing algorithm is very practical
and achieves the exact MAP assignments.
Our second set of experiments follows the setup of [6] (see Sec. 3), alternating between adding 5
triplets to the relaxation and running MPLP for 20 more iterations. The only difference is that, after
deciding to add a cluster, we use the algorithm from Sec. 4 to partition the variables. We tried various
settings of γ, including γ = 0 and .01, and found that γ = 3d(c) gave the best overall runtimes.
We applied this second scheme to the 15 largest proteins in the dataset.5 Of these, we found the exact
MAP in 47% of the cases (according to the criterion used in [6]), and in the rest of the cases were
within 10^{-2} of the known optimal value. For the cases that were solved exactly, the mean running
time was 1.5 hours, and on average the proteins were solved 8.1 times faster than with [6].6 To
compare the running times on all 15 proteins, we checked how long it took for the difference between
the dual and primal objectives to be less than 0.01 f(x^M; θ), where x^M is the MAP assignment. This
revealed that our method is faster by an average factor of 4.3. The reason why these factors are
less than the 3000 in the previous setup is that, for the larger proteins, the number of tied states is
typically much higher than that for the small ones.
Results for one of the proteins that we solved exactly are shown in Fig. 3. The cost per iteration
increases very little after adding each triplet, showing that our algorithm significantly coarsened the
clusters. The total number of iterations and number of triplets added were roughly the same. Two
triplet clusters were added twice using different coarsenings, but otherwise each triplet only needed
to be added once, demonstrating that our algorithm chose the right coarsenings.
4 These timing comparisons do not apply to [6] since that algorithm did not use all the triplets.
5 We do not run on the protein 1fpo, which was not solved in [6].
6 We made sure that differences were not due to different processing powers or CPU loads.
6 Discussion
We presented an algorithm that enforces higher-order consistency constraints on LP relaxations,
but at a reduced computational cost. Our technique further explores the trade-offs of representing
complex constraints on the marginal polytope while keeping the optimization tractable. In applying
the method, we chose to cluster variables' states based on a bound minimization criterion after solving using a looser constraint on the polytope.
A class of approaches related to ours are the 'coarse-to-fine' applications of belief propagation [1,
4]. In those, one solves low-resolution versions of an MRF, and uses the resulting beliefs to initialize
finer resolution versions. Although they share the element of coarsening with our approach, the goal
of coarse-to-fine approaches is very different from our objective. Specifically, the low-resolution
MRFs only serve to speed-up convergence of the full resolution MRF via better initialization. Thus,
one typically should not expect it to perform better than the finest granularity MRF. In contrast,
our approach is designed to strictly improve the performance of the original MRF by introducing
additional (coarse) clusters. One of the key technical differences is that in our formulation the
settings of coarse and fine variables are refined iteratively, whereas in [1], once a coarse MRF has
been solved, it is not revisited.
There are a number of interesting directions to explore. Using the same ideas as in this paper, one
can introduce coarsened pairwise consistency constraints in addition to the full pairwise consistency
constraints. Although this would not tighten the relaxation, by passing messages more frequently
in the coarsened space, and only occasionally revisiting the full edges, this could give significant
computational benefits when the nodes have large numbers of states. This would be much more
similar to the coarse-to-fine approach described above.
With the coarsening strategy used here, the number of variables still grows exponentially with the
cluster size, albeit at a lower rate. One way to avoid the exponential growth is to partition the
states of a cluster into a fixed number of states (e.g., two), and then constrain such partitions to be
consistent with each other. Such a process may be repeated recursively, generating a hierarchy of
coarsened variables. The key advantage in this approach is that it represents progressively larger
clusters, but with no exponential growth. An interesting open question is to understand how these
hierarchies should be constructed.
Our techniques may also be helpful for finding the MAP assignment in MRFs with structured potentials, such as context-specific Bayesian networks. Finally, these constraints can also be used when
calculating marginals.
References
[1] P. F. Felzenszwalb and D. P. Huttenlocher. Efficient belief propagation for early vision. Int. J. Comput. Vision, 70(1):41–54, 2006.
[2] A. Globerson and T. Jaakkola. Fixing max-product: Convergent message passing algorithms for MAP LP-relaxations. In Advances in Neural Information Processing Systems 21. MIT Press, 2008.
[3] V. Kolmogorov. Convergent tree-reweighted message passing for energy minimization. IEEE Trans. Pattern Anal. Mach. Intell., 28(10):1568–1583, 2006.
[4] C. Raphael. Coarse-to-fine dynamic programming. IEEE Transactions on Pattern Analysis and Machine Intelligence, 23(12):1379–1390, 2001.
[5] D. Sontag and T. Jaakkola. New outer bounds on the marginal polytope. In Advances in Neural Information Processing Systems 21. MIT Press, 2008.
[6] D. Sontag, T. Meltzer, A. Globerson, Y. Weiss, and T. Jaakkola. Tightening LP relaxations for MAP using message-passing. In UAI, 2008.
[7] M. Wainwright and M. I. Jordan. Graphical models, exponential families and variational inference. Technical report, UC Berkeley, Dept. of Statistics, 2003.
[8] M. Wainwright and M. I. Jordan. Log-determinant relaxation for approximate inference in discrete Markov random fields. IEEE Transactions on Signal Processing, 54(6):2099–2109, June 2006.
[9] C. Yanover, T. Meltzer, and Y. Weiss. Linear programming relaxations and belief propagation – an empirical study. JMLR, 7:1887–1907, 2006.
[10] J. S. Yedidia, W. T. Freeman, and Y. Weiss. Constructing free-energy approximations and generalized belief propagation algorithms. IEEE Trans. on Information Theory, 51(7):2282–2312, 2005.
2,799 | 3,538 | Differentiable Sparse Coding
David M. Bradley
Robotics Institute
Carnegie Mellon University
Pittsburgh, PA 15213
[email protected]
J. Andrew Bagnell
Robotics Institute
Carnegie Mellon University
Pittsburgh, PA 15213
[email protected]
Abstract
Prior work has shown that features which appear to be biologically plausible as
well as empirically useful can be found by sparse coding with a prior such as
a Laplacian (L1) that promotes sparsity. We show how smoother priors can preserve the benefits of these sparse priors while adding stability to the Maximum
A-Posteriori (MAP) estimate that makes it more useful for prediction problems.
Additionally, we show how to calculate the derivative of the MAP estimate efficiently with implicit differentiation. One prior that can be differentiated this way
is KL-regularization. We demonstrate its effectiveness on a wide variety of applications, and find that online optimization of the parameters of the KL-regularized
model can significantly improve prediction performance.
1 Introduction
Sparse approximation is a key technique developed in engineering and the sciences which approximates an input signal, X, in terms of a 'sparse' combination of fixed bases B. Sparse approximation relies on an optimization algorithm to infer the Maximum A-Posteriori (MAP) weights Ŵ that best reconstruct the signal, given the model X ≈ f(BW). In this notation, each input signal forms
a column of an input matrix X, and is generated by multiplying a set of basis vectors B, and a
column from a coefficient matrix W , while f (z) is an optional transfer function. This relationship
is only approximate, as the input data is assumed to be corrupted by random noise. Priors which
produce sparse solutions for W , especially L1 regularization, have gained attention because of their
usefulness in ill-posed engineering problems [1], their ability to elucidate certain neuro-biological
phenomena [2, 3], and their ability to identify useful features for classification from related unlabeled data [4].
Sparse coding [2] is closely connected to Independent Component Analysis as well as to certain
approaches to matrix factorization. It extends sparse approximation by learning a basis matrix B
which represents well a collection of related input signals (the input matrix X), in addition to performing optimization to compute the best set of weights Ŵ. Unfortunately, existing sparse coding
algorithms that leverage an efficient, convex sparse approximation step to perform inference on the
latent weight vector [4] are difficult to integrate into a larger learning architecture. It has been
convincingly demonstrated that back-propagation is a crucial tool for tuning an existing generative
model's output in order to improve supervised performance on a discriminative task. For example,
greedy layer-wise strategies for building deep generative models rely upon a back-propagation step
to achieve excellent model performance [5]. Unfortunately, existing sparse coding architectures produce a latent representation Ŵ that is an unstable, discontinuous function of the inputs and bases;
an arbitrarily small change in input can lead to the selection of a completely different set of latent
weights.
We present an advantageous new approach to coding that uses smoother priors which preserve the
sparsity benefits of L1 -regularization while allowing efficient convex inference and producing stable
latent representations Ŵ. In particular we examine a prior based on minimizing KL-divergence to
the uniform distribution which has long been used for approximation problems [6, 7]. We show this
increased stability leads to better semi-supervised classification performance across a wide variety of applications for classifiers using the latent representation Ŵ as input. Additionally, because of
the smoothness of the KL-divergence prior, B can be optimized discriminatively for a particular
application by gradient descent, leading to outstanding empirical performance.
2 Notation
Uppercase letters, X, denote matrices and lowercase letters, x, denote vectors. For matrices, superscripts and subscripts denote rows and columns respectively: X_j is the jth column of X, X^i is the ith row of X, and X^i_j is the element in the ith row and jth column. Elements of vectors are indicated by subscripts, x_j, and superscripts on vectors are used for time indexing, x^t. X^T is the transpose of matrix X.
3 Generative Model
Sparse coding fits a generative model (1) to unlabeled data, and the MAP estimates of the latent
variables of this model have been shown to be useful as input for prediction problems [4]. (1)
divides the latent variables into two independent groups, the coefficients W and the basis B, which
combine to form the matrix of input examples X. Different examples (columns of X) are assumed
to be independent of each other. The Maximum A Posteriori (MAP) approximation replaces the
integration over W and B in (1) with the maximum value of P (X|W, B)P (W )P (B), and the
values of the latent variables at the maximum, Ŵ and B̂, are the MAP estimates. Finding Ŵ given B is an approximation problem; solving for Ŵ and B̂ simultaneously over a set of independent examples is a coding problem.
P(X) = \int_B \int_W P(X|W,B)\, P(W)\, P(B)\, dW\, dB = \int_B P(B) \int_W \prod_i P(X_i|W_i,B)\, P(W_i)\, dW\, dB    (1)
Given B, the negative log of the generative model can be optimized independently for each example,
and it is denoted for a generic example x by L in (2). L decomposes into the sum of two terms, a loss
function D_L(x ‖ f(Bw)) between an input example and the reconstruction produced by the transfer function f, and a regularization function D_P(w ‖ p) that measures a distance between the coefficients for the example w and a parameter vector p. A regularization constant λ controls the relative weight
of these two terms. For fixed B, minimizing (2) with respect to w separately for each example is
equivalent to maximizing (1).
L = D_L(x \| f(Bw)) + \lambda D_P(w \| p)    (2)

\hat{w} = \arg\min_w L    (3)
In many applications, the anticipated distribution of x after being corrupted by noise can be modeled
by an exponential family distribution. Every exponential family distribution defines a Bregman divergence which serves as a matching loss function for estimating the parameters of the distribution1 .
One common choice for the loss/transfer functions is the squared loss with its matching linear transfer function, D_L(x \| f(Bw)) = \sum_i (x_i - B^i w)^2, which is the matching Bregman divergence for x drawn from a multidimensional Gaussian distribution.
The regularization function D_P(w ‖ p) is also often a Bregman divergence, but may be chosen for other features such as the sparsity of the resulting MAP estimate ŵ. A vector is commonly called sparse if many elements are exactly zero. The entropy [9, 10] and L_p^p-norm2, p ≤ 1, regularization functions [2, 3, 4] promote this form of sparsity, and all of them have shown the ability to learn bases
1 The maximum likelihood parameter estimate for any regular exponential family distribution can be found by minimizing the corresponding Bregman divergence for that family, and every Bregman divergence has a matching transfer function which leads to a convex minimization problem [8]. That matching transfer function is the gradient ∇φ of the function φ which is associated with the Bregman divergence D_φ(x ‖ y) = φ(x) − φ(y) − ⟨x − y, ∇φ(y)⟩.
2 L_p^p(x) = \sum_i |x_i|^p corresponds to the negative log of a generalized Gaussian prior.
containing interesting structure from unlabeled data. However, of these only L1 leads to an efficient,
convex procedure for inference, and even this prior does not produce differentiable MAP estimates.
We argue that if the latent weight vector ŵ is to be used as input to a classifier, a better definition of 'sparsity' is that most elements in ŵ can be replaced by elements in a constant vector p without significantly increasing the loss. One regularization function that produces this form of pseudo-sparsity is the KL-divergence KL(w ‖ p). This regularization function has long been used for approximation
problems in Geophysics, Crystallography, Astronomy, and Physics, where it is commonly referred to
as Maximum Entropy on the Mean (MEM) [7], and has been shown in the online setting to compete
with low L1 -norm solutions in terms of regret [11, 12].
L1 regularization provides sparse solutions because its Fenchel dual [13] is the max function, meaning only the most useful basis vectors participate
in the reconstruction. A differentiable approximation to \max_i x_i is a sum of exponentials, \sum_i e^{x_i}, whose dual is the KL-divergence (4). Regularization with KL has proven useful in online learning, where it is the implicit prior of the exponentiated gradient descent (EGD) algorithm. EGD has been shown to be 'sparse' in the sense that it can select a
few relevant features to use for a prediction task from many irrelevant ones.
The form of KL we use (4) is the full Bregman divergence of the negative entropy function3 . Often
KL is used to compute distances between probability distributions, and for this case the KL we
use reduces to the standard form. For sparse coding, however, it is inconvenient to assume that ‖w‖_1 = ‖p‖_1 = 1, so we use the full unnormalized KL instead.
D_P(w \| p) = \sum_i \left( w_i \log \frac{w_i}{p_i} - w_i + p_i \right)    (4)
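As a small illustration, the full objective of Eq. 2 with squared loss and the unnormalized KL regularizer of Eq. 4 might be evaluated as below; the names are ours, and the factor 1/2 on the loss is our convention so that its gradient matches Eq. 6.

import numpy as np

def objective(x, B, w, p, lam):
    # L from Eq. 2 with squared loss and linear transfer f(Bw) = Bw.
    recon = 0.5 * np.sum((x - B @ w) ** 2)     # D_L(x || f(Bw))
    kl = np.sum(w * np.log(w / p) - w + p)     # D_P(w || p), requires w, p > 0
    return recon + lam * kl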
For the prior vector p we use a uniform vector whose L1 magnitude equals the expected L1 magnitude of w. p has an analogous effect to the q parameter in Lq-norm regularization: p → 0 approximates L1 and p → ∞ approximates L2. Changing p affects the magnitude of the KL term, so λ in (2) must be adjusted to balance the loss term in the sparse coding objective function (small values of p require small values of λ).
Below we provide a) an efficient procedure for inferring ŵ in this model; b) an algorithm for iteratively updating the bases B; and c) show that this model leads to differentiable estimates of ŵ. We also provide the general form of the derivative for arbitrary Bregman losses.
4 Implementation
To compute ŵ with KL-regularization, we minimize (3) using exponentiated gradient descent (EGD)
with backtracking until convergence (5). EGD automatically enforces positivity constraints on the
coefficient vector w, and is particularly efficient for optimization because it is the natural mirror
descent rule for KL-regularization [12]. The gradient of the objective function (2) with respect to
the coefficient for the jth basis vector wj is given in (6) for matching loss/transfer function pairs.
w_j^{t+1} = w_j^t \, e^{-\eta \frac{\partial L}{\partial w_j}}    (5)

\frac{\partial L}{\partial w_j} = (f(Bw) - x)^T B_j + \lambda \log \frac{w_j}{p_j}    (6)
This iterative update is run until the maximum gradient element is less than a threshold, which is estimated by periodically running a random set of examples to the limits of machine precision, and selecting the largest gradient threshold that produces ŵ within ε of the exact solution. The η parameter is continuously updated to balance the number of successful steps and the number of backtracking steps4. Because L1-regularization produces both positive and negative weights, to
compare L1 and KL regularization on the same basis we expand the basis used for KL by adding the
negation of each basis vector, which is equivalent to allowing negative weights (see Appendix B).
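A minimal sketch of the EGD inference loop of Eqs. 5-6 for squared loss and a linear transfer function is given below; it omits the backtracking and threshold-estimation machinery described above, and all names are illustrative.

import numpy as np

def infer_w(x, B, p, lam, eta=0.1, iters=1000, tol=1e-6):
    # Exponentiated gradient descent for Eq. 3; the multiplicative update of
    # Eq. 5 keeps w strictly positive throughout.
    w = p.copy()  # start at the prior mean
    for _ in range(iters):
        grad = B.T @ (B @ w - x) + lam * np.log(w / p)   # Eq. 6
        if np.max(np.abs(grad)) < tol:
            break
        w *= np.exp(-eta * grad)                          # Eq. 5
    return w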
During sparse coding the basis matrix B is updated by Stochastic Gradient Descent (SGD), giving the update rule B_{t+1} = B_t - \eta \frac{\partial L}{\partial B}. This update equation does not depend on the prior chosen
3 −H(x) = x log(x)
4 In our experiments, if the ratio of backtracking steps to total steps was more than 0.6, η was decreased by 10%. Similarly, η was increased by 10% if the ratio fell below 0.3.
for w and is given in (7) for matching loss/transfer function pairs. SGD implements an implicit
L2 regularizer and is suitable for online learning, however because the magnitude of w is explicitly
penalized, the columns of B were constrained to have unit L2 norm to prevent the trivial solution of
infinitely large B and infinitely small w. The step size η was adjusted for the magnitude of ŵ in each application, and was then decayed over time as η ∝ 1/√t. The same SGD procedure was also used to optimize B through backpropagation, as explained in the next section.
\frac{\partial L}{\partial B_j^i} = w_j \left( f(B^i w) - x_i \right)    (7)
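A sketch of one such SGD step for a single example, including the unit-norm column constraint, might look as follows; the names are illustrative, and the step-size schedule described above is omitted.

import numpy as np

def sgd_basis_step(x, B, w, eta):
    # One stochastic gradient step on B (Eq. 7) for a single example,
    # followed by renormalizing each column of B to unit L2 norm.
    residual = B @ w - x                  # f(Bw) - x for the linear transfer
    B = B - eta * np.outer(residual, w)   # dL/dB_j^i = w_j (f(B^i w) - x_i)
    return B / np.linalg.norm(B, axis=0, keepdims=True)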
5 Modifying a Generative Model for a Discriminative Task
Sparse Coding builds a generative model from unlabeled data that captures structure in that data by
learning a basis B. Our hope is that the MAP estimate of basis coefficients ŵ produced for each input vector x will be useful for predicting a response y associated with x. However, the sparse coding objective function only cares about reconstructing the input well, and does not attempt to make ŵ useful as input for any particular task. Fortunately, since priors such as KL-divergence regularization produce solutions that are smooth with respect to small changes in B and x, B can be modified through back-propagation to make ŵ more useful for prediction.
The key to computing the derivatives required for backpropagation is noting that the gradient with respect to w of the optimization (3) at its minimum ŵ can be written as a set of fixed point equations where the gradient of the loss term equals the gradient of the regularization:

\nabla D_P(\hat{w} \| p) = -\frac{1}{\lambda} \nabla D_L(x \| f(B\hat{w}))    (8)
Then if the regularization function is twice differentiable with respect to w, we can use implicit differentiation on (8) to compute the gradient of ŵ with respect to B and x [14]. For KL-regularization and the simple case of a linear transfer function with squared loss, ∂ŵ/∂B is given in (9), where ē_i is a unit vector whose ith element is 1. A general derivation for matched loss/transfer function pairs as defined before is provided in Appendix C. Note that the ability to compute ∂ŵ/∂x means that multiple layers of sparse coding could be used.
\frac{\partial \hat{w}}{\partial B_i^k} = -\left( B^T B + \mathrm{diag}\!\left(\frac{\lambda}{\hat{w}}\right) \right)^{-1} \left( (B^k)^T \hat{w}_i + \vec{e}_i \left( f(B^k \hat{w}) - x_k \right) \right)    (9)
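For illustration, Eq. 9 can be evaluated by solving a linear system, as in the sketch below; in practice one would propagate a vector-Jacobian product through this system rather than forming the Jacobian entry by entry, and all names are our assumptions.

import numpy as np

def dw_dB_entry(x, B, w_hat, lam, k, i):
    # Eq. 9: derivative of the MAP estimate w_hat with respect to the single
    # basis entry B[k, i], for squared loss and a linear transfer function.
    H = B.T @ B + np.diag(lam / w_hat)     # Hessian of the objective at w_hat
    e_i = np.zeros_like(w_hat)
    e_i[i] = 1.0
    rhs = B[k, :] * w_hat[i] + e_i * (B[k, :] @ w_hat - x[k])
    return -np.linalg.solve(H, rhs)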
6 Experiments
We verify the performance of KL-sparse coding on several benchmark tasks including the MNIST
handwritten digit recognition data-set, handwritten lowercase English characters classification,
movie review sentiment regression, and music genre classification (Appendix E). In each application, the ŵ produced using KL-regularization were more useful for prediction than those produced
with L1 regularization due to the stability and differentiability provided by KL.
6.1 Sparsity
KL-regularization retained the desirable pseudo-sparsity characteristics of L1 , namely that each
example, x, produces only a few large elements in ŵ. Figure 1 compares the mean sorted and normalized coefficient distribution over the 10,000 digit MNIST test set for KL-divergence and several L_p^p regularization functions, and shows that although the KL regularization function is not sparse in the traditional sense of setting many elements of ŵ to zero, it is sparse in the sense that ŵ
contains only a few large elements in each example, lending support to the idea that this sense of
sparsity is more important for classification.
6.2 Stability
Because the gradient of the KL-divergence regularization function goes to ∞ with increasing w, it produces MAP estimates ŵ that change smoothly with x and B (see Appendix A for more details).
Figure 1: Left: Mean coefficient distribution over the 10,000 digit MNIST test set for various regularization
functions. Each example ŵ was sorted by magnitude and normalized by ‖ŵ‖_∞ before computing the mean over all examples. Right: test set classification performance. Regularization functions that produced few large values in each example (such as KL and L1) performed the best. Forcing small coefficients to be exactly 0
was not necessary for good performance. Note the log scale on the horizontal axis.
Regularization | Gaussian Noise (SD 0.01) | Gaussian Noise (SD 0.1) | Random Translation (0.1 px) | Random Translation (1 px)
L1             | 0.0283±0.0069            | 0.285±0.056             | 0.138±0.026                 | 1.211±0.213
KL             | 0.0172±0.0016            | 0.164±0.015             | 0.070±0.011                 | 0.671±0.080
Table 1: The 10,000 images of handwritten digits in the MNIST test set were used to show the stability
benefits of KL-regularization. Distance (in L1) between the representation for x, ŵ, and the representation after adding noise, divided by ‖ŵ‖_1. KL-regularization provides representations that are significantly more
stable with respect to both uncorrelated additive Gaussian noise (Left), and correlated noise from translating
the digit image in a random direction (Right).
Table 1 quantifies how KL regularization significantly reduces the effect on ŵ of adding noise to the input x.
This stability improves the usefulness of ŵ for prediction. Figure 2 shows the most-discriminative
2-D subspace (as calculated by Multiple Discriminant Analysis [15]) for the input space, the L1 and
KL coefficient space, and the KL coefficient space after it has been specialized by back-propagation.
The L1 coefficients tame the disorder of the input space so that clusters for each class are apparent,
although noisy and overlapping. The switch to KL regularization makes these clusters more distinct,
and applying back-propagation further separates the clusters.
Figure 2: Shown is the distribution of the eight most confusable digit classes in the input space and in the
coefficient spaces produced by sparse approximation. Multiple Discriminant Analysis was used to compute the
most discriminative 2-D projection of each space. The PCA-whitened input space (left) contains a lot of overlap
between the classes. L1 regularization (center) discovers structure in the unlabeled data, but still produces more
overlap between classes than KL sparse approximation (right) does with the same basis trained with L1 sparse
coding. Figure best seen in color.
6.3 Improved Prediction Performance
On all applications, the stability provided by KL-regularization improved performance over L1 , and
back-propagation further improved performance when the training set had residual error after an
output classifier was trained.
6.3.1 Handwritten Digit Classification
We tested our algorithm on the benchmark MNIST handwritten digits dataset [16]. 10,000 of the
60,000 training examples were reserved for validation, and classification performance was evaluated
on the separate 10,000 example test set. Each example was first reduced to 180D from 768D by
PCA, and then sparse coding was performed using a linear transfer function and squared loss5 . The
validation set was used to pick the regularization constant, λ, and the prior mean for KL, p.
Maxent classifiers6 [17] were then learned on randomly sampled subsets of the training set of various sizes. Switching from L1 -regularized to KL-regularized sparse approximation improved performance in all cases (Table 2). When trained on all 50,000 training examples, the test set classification
error of KL coefficients, 2.21%, was 37% lower than the 3.53% error rate obtained on the L1 regularized coefficients. As shown in Table 3, this increase in performance was consistent across
a diverse set of classification algorithms. After running back-propagation with the KL-prior, the
test set error was reduced to 1.30%, which improves on the best results reported7 for other shallow-architecture permutation-invariant classifiers operating on the same data set without prior knowledge about the problem8 (see Table 4).
Training Set Size            | 1000  | 2000  | 10000 | 20000 | 50000
L1 (Test Set)                | 7.72% | 6.63% | 4.74% | 4.16% | 3.53%
KL (Test Set)                | 5.87% | 5.06% | 3.00% | 2.51% | 2.21%
KL After Backprop (Test Set) | 5.66% | 4.46% | 2.31% | 1.78% | 1.30%
Improvement from Backprop    | 3.6%  | 11.9% | 23.0% | 29.1% | 43.0%
KL (Training Set)            | 0.00% | 0.05% | 1.01% | 1.50% | 1.65%
Table 2: The ability to optimize the generative model with back-propagation leads to significant performance
increases when the training set is not separable by the model learned on the unlabeled data. Shown is the
misclassification rate on the MNIST digit classification task. Larger training sets with higher residual error
benefit more from back-propagation.
Classifier   | PCA   | L1    | KL    | KL+backprop
Maxent       | 7.49% | 3.53% | 2.21% | 1.30%
2-layer NN   | 2.23% | 2.13% | 1.40% | 1.36%
SVM (Linear) | 5.55% | 3.95% | 2.16% | 1.34%
SVM (RBF)    | 1.54% | 1.94% | 1.28% | 1.31%
Table 3: The stability afforded by the KL-prior improves the performance of all classifier types over the
L1 prior. In addition, back-propagation allows linear classifiers to do as well as more complicated non-linear
classifiers.
Algorithm      | L1    | KL    | KL+backprop | SVM  | 2-layer NN [18] | 3-layer NN
Test Set Error | 3.53% | 2.21% | 1.30%       | 1.4% | 1.6%            | 1.53%
Table 4: Test set error of various classifiers on the MNIST handwritten digits database.
6.3.2 Transfer to Handwritten Character Classification
In [4], a basis learned by L1 -regularized sparse coding on handwritten digits was shown to improve
classification performance when used for the related problem of handwritten character recognition
5 This methodology was chosen to match [4].
6 Also known as multi-class logistic regression.
7 An extensive comparison of classification algorithms for this dataset can be found on the MNIST website, http://yann.lecun.com/exdb/mnist/
8 Better results have been reported when more prior knowledge about the digit recognition problem is provided to the classifier, either through specialized preprocessing, by giving the classifier a model of how digits are likely to be distorted by expanding the data set with random affine and elastic distortions of the training examples, or by training with vicinal risk minimization. Convolutional Neural Networks produce the best results on this problem, but they are not invariant to permutations in the input since they contain a strong prior about how pixels are connected.
with small training data sets (< 5000 examples). The handwritten English characters dataset9 they
used consists of 16x8 pixel images of lowercase letters. In keeping with their work, we padded
and scaled the images to match the 28x28 pixel size of the MNIST data, projected onto the same
PCA basis that was used for the MNIST digits, and learned a basis from the MNIST digits by
L1 -regularized sparse coding. This basis was then used for sparse approximation of the English
characters, along with a linear transfer function and squared loss.
In this application as well, Table 5 shows that simply switching to a KL prior from L1 for sparse
approximation significantly improves the performance of a maxent classifier. Furthermore, the KL
prior allows online improvement of the sparse coding basis as more labeled data for the characterrecognition task becomes available. This improvement increases with the size of the training set, as
more information becomes available about the target character recognition task.
Training Set Size | 100  | 500  | 1000 | 5000 | 20000
Raw               | 44.3 | 60.4 | 66.3 | 75.1 | 79.3
PCA               | 46.9 | 61.2 | 66.7 | 76.0 | 79.7
L1                | 44.0 | 63.7 | 69.5 | 78.9 | 83.3
KL                | 49.4 | 69.2 | 75.0 | 82.5 | 86.0
KL+backprop       | 50.7 | 69.9 | 76.4 | 84.2 | 89.1
Table 5: Classification Accuracy on 26-way English Character classification task.
6.3.3 Comparison to sLDA: Movie Review Sentiment Regression
KL-regularized sparse coding bears some similarities to the supervised LDA (sLDA) model introduced in [19], and we provide results for the movie review sentiment classification task [20] used
in that work. To match [19] we use vectors of normalized counts for the 5000 words with the highest tf-idf score among the 5006 movie reviews in the data set, use 5-fold cross validation, compute
predictions with linear regression on ŵ, and report our performance in terms of predictive R² (the fraction of variability in the out-of-fold response values which is captured by the out-of-fold predictions ŷ: pR² := 1 − (Σ(y − ŷ)²)/(Σ(y − ȳ)²)). Since the input is a probability distribution, we use a normalized exponential transfer function, f(B, w) = e^{Bw} / ‖e^{Bw}‖_1, to compute the reconstruction of the input. For sparse coding we use KL-divergence for both the loss and the regularization functions, as minimizing the KL-divergence between the empirical probability distribution of the document given by each input vector x and f(B, w) is equivalent to maximizing the 'constrained Poisson distribution' used to model documents in [21] (details given in Appendix D). Table 6 shows that the
sparse coding generative model we use is competitive with and perhaps slightly better than LDA.
After back-propagation, its performance is superior to the supervised version of LDA, sLDA10 .
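A minimal sketch of this transfer function and the matching KL reconstruction loss is given below; the names are ours, and the max-shift for numerical stability is our addition.

import numpy as np

def doc_reconstruction(B, w):
    # Normalized exponential transfer f(B, w) = exp(Bw) / ||exp(Bw)||_1.
    z = B @ w
    e = np.exp(z - z.max())   # shift for numerical stability
    return e / e.sum()

def kl_loss(x, recon, eps=1e-12):
    # KL divergence between the empirical word distribution x and the
    # reconstruction; both are assumed to be probability vectors.
    return float(np.sum(x * (np.log(x + eps) - np.log(recon + eps))))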
Algorithm                                | predictive R²
LDA [19]                                 | 0.263
64D unsupervised KL sparse coding        | 0.264
256D unsupervised KL sparse coding       | 0.281
L1-regularized regression [19]           | 0.457
sLDA [19]                                | 0.500
L2-regularized regression                | 0.507
256D KL-regularized coding with backprop | 0.534
and sLDA.
7 Conclusion
This paper demonstrates on a diverse set of applications the advantages of using a differentiable,
smooth prior for sparse coding. In particular, a KL-divergence regularization function has significant
9 Available at http://ai.stanford.edu/~btaskar/ocr/
10 Given that the word counts used as input are very sparse to begin with, classifiers whose regret bounds depend on the L2 norm of the gradient of the input (such as L2-regularized least squares) do quite well, achieving a predictive R² value on this application of 0.507.
while adding stability and differentiability to the MAP estimate w.
? Differentiability in particular
is shown to lead to state-of-the-art performance by allowing the generative model learned from
unlabeled data by sparse-coding to be adapted to a supervised loss function.
Acknowledgments
David M. Bradley is supported by an NDSEG fellowship provided by the Army Research Office.
The authors would also like to thank David Blei, Rajat Raina, and Honglak Lee for their help.
References
[1] J. A. Tropp, "Algorithms for simultaneous sparse approximation: part ii: Convex relaxation," Signal Process., vol. 86, no. 3, pp. 589–602, 2006.
[2] B. Olshausen and D. Field, "Sparse coding with an overcomplete basis set: A strategy employed by v1?" Vision Research, 1997.
[3] Y. Karklin and M. S. Lewicki, "A hierarchical Bayesian model for learning non-linear statistical regularities in non-stationary natural signals," Neural Computation, vol. 17, no. 2, pp. 397–423, 2005.
[4] R. Raina, A. Battle, H. Lee, B. Packer, and A. Y. Ng, "Self-taught learning: Transfer learning from unlabeled data," in ICML '07: Proceedings of the 24th international conference on Machine learning, 2007.
[5] Y. Bengio, P. Lamblin, D. Popovici, and H. Larochelle, "Greedy layer-wise training of deep networks," in Advances in Neural Information Processing Systems 19, B. Schölkopf, J. Platt, and T. Hoffman, Eds. Cambridge, MA: MIT Press, 2007, pp. 153–160.
[6] E. Rietsch, "The maximum entropy approach to inverse problems," Journal of Geophysics, vol. 42, pp. 489–506, 1977.
[7] G. Besnerais, J. Bercher, and G. Demoment, "A new look at entropy for solving linear inverse problems," IEEE Trans. on Information Theory, vol. 45, no. 5, pp. 1565–1578, July 1999.
[8] A. Banerjee, S. Merugu, I. S. Dhillon, and J. Ghosh, "Clustering with Bregman divergences," Journal of Machine Learning Research, vol. 6, pp. 1705–1749, 2005.
[9] M. Brand, "Pattern discovery via entropy minimization," in AISTATS 99, 1999.
[10] M. Shashanka, B. Raj, and P. Smaragdis, "Sparse overcomplete latent variable decomposition of counts data," in NIPS, 2007.
[11] J. Kivinen and M. Warmuth, "Exponentiated gradient versus gradient descent for linear predictors," Information and Computation, pp. 1–63, 1997.
[12] N. Cesa-Bianchi and G. Lugosi, Prediction, Learning, and Games. Cambridge University Press, 2006.
[13] R. Rifkin and R. Lippert, "Value regularization and Fenchel duality," The Journal of Machine Learning Research, vol. 8, pp. 441–479, 2007.
[14] D. Widder, Advanced Calculus, 2nd ed. Dover Publications, 1989.
[15] R. Duda, P. Hart, and D. Stork, Pattern classification. Wiley New York, 2001.
[16] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, "Gradient-based learning applied to document recognition," Proceedings of the IEEE, vol. 86, no. 11, pp. 2278–2324, 1998.
[17] K. Nigam, J. Lafferty, and A. McCallum, "Using maximum entropy for text classification," 1999. [Online]. Available: citeseer.ist.psu.edu/article/nigam99using.html
[18] P. Y. Simard, D. Steinkraus, and J. C. Platt, "Best practices for convolutional neural networks applied to visual document analysis," in ICDAR '03: Proceedings of the Seventh International Conference on Document Analysis and Recognition. Washington, DC, USA: IEEE Computer Society, 2003, p. 958.
[19] D. M. Blei and J. D. McAuliffe, "Supervised topic models," in NIPS 19, 2007.
[20] B. Pang and L. Lee, "Seeing stars: Exploiting class relationships for sentiment categorization with respect to rating scales," in Proceedings of the ACL, 2005, pp. 115–124.
[21] R. Salakhutdinov and G. Hinton, "Semantic hashing," in SIGIR workshop on Information Retrieval and applications of Graphical Models, 2007.