Stochastic Relational Models for Discriminative Link Prediction

Wei Chu, CCLS, Columbia University, New York, NY 10115
Kai Yu, NEC Laboratories America, Cupertino, CA 95014
Shipeng Yu, Volker Tresp, Zhao Xu, Siemens AG, Corporate Research & Technology, 81739 Munich, Germany

Abstract

We introduce a Gaussian process (GP) framework, stochastic relational models (SRM), for learning social, physical, and other relational phenomena where interactions between entities are observed. The key idea is to model the stochastic structure of entity relationships (i.e., links) via a tensor interaction of multiple GPs, each defined on one type of entities. These models in fact define a set of nonparametric priors on infinite dimensional tensor matrices, where each element represents a relationship between a tuple of entities. By maximizing the marginalized likelihood, information is exchanged between the participating GPs through the entire relational network, so that the dependency structure of links is messaged to the dependency of entities, reflected by the adapted GP kernels. The framework offers a discriminative approach to link prediction, namely, predicting the existence, strength, or type of relationships based on the partially observed linkage network as well as the attributes of entities (if given). We discuss properties and variants of SRM and derive an efficient learning algorithm. Very encouraging experimental results are achieved on a toy problem and a user-movie preference link prediction task. In the end we discuss extensions of SRM to general relational learning tasks.

1 Introduction

Relational learning concerns the modeling of physical, social, or other phenomena in which rich types of entities interact via complex relational structures. Compared to traditional machine learning settings, the entity relationships provide additional structural information. A simple example of a relational setting is a user-movie rating database, which contains user entities with user attributes (e.g., age, gender, education), movie entities with movie attributes (e.g., year, genre, director), and ratings (i.e., relations between users and movies). A typical relational learning problem is entity classification, for example, segmenting users into groups. One may apply the usual clustering or classification methods based on a flat data structure, where each user is associated with a vector of user attributes. However, a sound model should additionally exploit the user-movie relations and even the movie attributes, since like-minded users tend to give similar ratings to the same movie and may like (or dislike) movies with similar genres. Relational learning addresses this and similar situations where it is not natural to transform the data into a flat structure.

Entity classification in a relational setting has gained considerable attention, for example webpage classification using both textual contents and hyperlinks. On other occasions, however, the relationships themselves are of central interest. For example, one may want to predict protein-protein interactions or, in another application, user ratings for products. This family of problems has been called link prediction¹, which is the primary topic of this paper. In general, link prediction includes link existence prediction (i.e., does a link exist?), link classification (i.e., what type of relationship is it?), and link regression (i.e., how does the user rate the item?).

¹ We will use "link" and "relationship" interchangeably throughout this paper.
In this paper we propose a family of stochastic relational models (SRM) for link prediction and other relational learning tasks. The key idea of SRM is a stochastic link-wise process induced by a tensor interplay of multiple entity-wise Gaussian processes (GPs). These models in fact define a set of nonparametric priors on an infinite dimensional tensor matrix, where each element represents a relationship between a tuple of entities. By maximizing the marginalized likelihood, information is exchanged between the participating GPs through the entire relational network, so that the dependency structure of links is messaged to the dependency of entities, reflected by the learned entity-wise GP kernels (i.e., GP covariance functions). SRM is discriminative because training is performed on a conditional model of links. We present various models of SRM and address the computational issue, which is crucial in link prediction because the number of potential links grows exponentially with the number of entities. SRM has shown encouraging results in our experiments.

This paper is organized as follows. We introduce the stochastic relational models in Sec. 2 and describe the algorithms for inference and parameter estimation in Sec. 3 and Sec. 4, followed by Sec. 5 on implementation details. We then discuss related work in Sec. 6 and report experimental results in Sec. 7, followed by conclusions and extensions in Sec. 8.

2 Stochastic Relational Models

We first consider pairwise asymmetric links r between entities u ∈ U and v ∈ V. The two sets U and V can be identical or different. We use u or v to represent the attribute vectors of entities, or their identities when entity attributes are unavailable. Note that r_{i,n} ≡ r(u_i, v_n) does not have to be identical to r_{n,i} when U = V, i.e., relationships can be asymmetric. Extensions involving more than two entity sets, multi-way relations (i.e., links connecting more than two entities), and symmetric links are straightforward and will be briefly discussed in Sec. 8.

We assume that the observable links r are derived as local measurements of a real-valued latent relational function t : U × V → R, and that each link r_{i,n} depends solely on its latent value t_{i,n}, as modeled by the likelihood p(r_{i,n} | t_{i,n}). The focus of this paper is a family of stochastic relational processes acting on U × V, the space of entity pairs, to generate the latent relational function t via a tensor interaction of two independent entity-specific GPs, one acting on U and the other on V. We call them processes because U and V can both encompass an infinite number of entities. Let the relational processes be characterized by hyperparameters θ = {θ_Σ, θ_Ω}, where θ_Σ parameterizes the GP kernel function on U and θ_Ω the GP kernel function on V; an SRM thus defines a Bayesian prior p(t|θ) for the latent variables t. Let I be the index set of entity pairs with observed links. The marginal likelihood (also called evidence) under such a prior is

    p(R_I | θ) = ∫ ∏_{(i,n)∈I} p(r_{i,n} | t_{i,n}) p(t|θ) dt,   θ = {θ_Σ, θ_Ω}    (1)

where R_I = {r_{i,n}}_{(i,n)∈I}. We estimate the hyperparameters θ by maximizing the evidence, which is an empirical Bayesian approach to learning the relational structure of data. Once θ is determined, we can predict the link for a new pair of entities via marginalization over the posterior p(t | R_I, θ).

2.1 Choices for the Prior p(t|θ)

In order to represent a rich class of link structures, p(t|θ) should be sufficiently expressive.
In the following subsections we present three cases of p(t|θ), from specific to general, by gradually extending conventional GP models.

2.1.1 A Brief Introduction to Gaussian Processes

A GP defines a nonparametric prior distribution over functions in Bayesian inference. A random real-valued function f : X → R follows a GP prior, denoted by GP(μ, κ), if for every finite set {x_i}_{i=1}^N, f = {f(x_i)}_{i=1}^N follows a multivariate Gaussian distribution with mean μ = {μ(x_i)}_{i=1}^N and covariance (or kernel) K = {κ(x_i, x_j; θ_κ)}_{i,j=1}^N with parameters θ_κ. Given D = {x_i, y_i}_{i=1}^N, where y_i is a measurement of f(x_i) corrupted by Gaussian noise, one can make predictions via the marginal likelihood p(y|x, D, θ_κ) = ∫ p(y|f, x) p(f|D, θ_κ) df. For non-Gaussian measurement processes, as in classification models, the integral cannot be solved analytically and approximate inference is required. A comprehensive coverage of GP models can be found in [9].

2.1.2 Hierarchical Gaussian Processes

By observing the relational data collectively, one may notice that two entities u_i and u_j in U demonstrate correlated relationships to entities in V. For example, two users often show opposite or close opinions on movies, or two hub web pages are co-linked by a set of other authority web pages. In this case, the dependency structure of links can be reduced to a dependency structure of entities in U. A hierarchical GP (HGP) model [13], originally proposed for multi-task learning, can be conveniently applied in such a situation. The model assumes that, for every v ∈ V, its relational function t(·, v) : U → R is an i.i.d. sample drawn from a common entity-wise GP with covariance Σ : U × U → R. This provides one case of p(t|θ) in an SRM, where θ = θ_Σ determines the GP kernel function Σ. Optimizing the GP kernel Σ via evidence maximization amounts to learning the dependency of entities in U, summarized over all the entities v ∈ V.

There is a drawback in applying HGP to link prediction: the model only learns a one-sided structure and ignores the dependency in V. In particular, the attributes of entities v cannot be incorporated even when they are available. Yet for the applications mentioned above it also makes sense to explore the dependency between movies, or the dependency between authority web pages.

2.1.3 Tensor Gaussian Processes

To overcome the shortcoming of HGP, we consider a more complex SRM that employs two GP kernel functions Σ : U × U → R and Ω : V × V → R. The model explains the relational dependency through the entity dependencies of both V and U. Let θ = {θ_Σ, θ_Ω}; we describe a stochastic relational process p(t|θ) as follows:

Definition 2.1 (Tensor Gaussian Processes). Given two sets U and V, a collection of random variables {t(u, v) | t : U × V → R} follows a tensor Gaussian process (TGP) if, for all finite sets {u_1, ..., u_N} and {v_1, ..., v_M}, the random variables T = {t(u_i, v_n)}, organized into an N × M matrix, have the matrix-variate normal distribution

    N_{N×M}(T | B, Σ, Ω) = (2π)^{-MN/2} |Σ|^{-M/2} |Ω|^{-N/2} etr[ -(1/2) Σ^{-1} (T - B) Ω^{-1} (T - B)^T ]

characterized by mean B = {b(u_i, v_n)} and positive definite covariance matrices Σ = {Σ(u_i, u_j; θ_Σ)} and Ω = {Ω(v_n, v_m; θ_Ω)}. This random process is denoted TGP(b, Σ, Ω).²

In the above definition etr[·] is a shorthand for exp[trace(·)]. It is easy to see that the model reduces to the HGP model if Ω = I.

² Hereafter we always assume b(u, v) = 0 in TGP for simplicity.
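To make the tensor structure concrete, the following NumPy sketch (our own illustration, not part of the paper; all function names are made up) draws samples T ~ N_{N×M}(0, Σ, Ω) via the standard construction T = L Z Rᵀ with Σ = LLᵀ and Ω = RRᵀ, and checks empirically that Cov(t_{i,n}, t_{j,m}) ≈ Σ_{i,j} Ω_{n,m}:

    import numpy as np

    # Minimal sketch (ours): sampling from the matrix-variate Gaussian
    # N_{NxM}(0, Sigma, Omega) of Definition 2.1. If Z has i.i.d. standard
    # normal entries and Sigma = L L^T, Omega = R R^T (Cholesky factors),
    # then T = L Z R^T has the required distribution.
    def sample_tgp(Sigma, Omega, rng):
        L = np.linalg.cholesky(Sigma)
        R = np.linalg.cholesky(Omega)
        Z = rng.standard_normal((Sigma.shape[0], Omega.shape[0]))
        return L @ Z @ R.T

    rng = np.random.default_rng(0)
    N, M = 4, 3
    A = rng.standard_normal((N, N)); Sigma = A @ A.T + np.eye(N)
    B = rng.standard_normal((M, M)); Omega = B @ B.T + np.eye(M)

    # Empirical check of Cov(t_{i,n}, t_{j,m}) = Sigma[i,j] * Omega[n,m]:
    samples = np.stack([sample_tgp(Sigma, Omega, rng) for _ in range(100000)])
    i, n, j, m = 0, 1, 2, 2
    print(np.mean(samples[:, i, n] * samples[:, j, m]), Sigma[i, j] * Omega[n, m])

The product form of the covariance verified here is exactly the tensor-product kernel derived in the next subsection.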
As a key difference, the new model treats the relational function t as a whole sample from a TGP, instead of composing it from i.i.d. functions as in the HGP model. Let vec(T) = [t_{1,1}, t_{1,2}, ..., t_{1,M}, t_{2,1}, ..., t_{2,M}, ..., t_{N,M}]^T. If T ∼ N_{N×M}(T | 0, Σ, Ω), then vec(T) ∼ N(0, Γ), where the covariance Γ = Σ ⊗ Ω is the Kronecker product of the two covariance matrices [6]. In other words, a TGP is in fact a GP for the relational function t, where the kernel function Υ : (U × V) × (U × V) → R is defined via a tensor product of two GP kernels:

    Cov(t_{i,n}, t_{j,m}) = Υ[(u_i, v_n), (u_j, v_m)] = Σ(u_i, u_j) Ω(v_n, v_m).

The model explains the dependence structure of links by the dependence structure of the participating entities. It is well known that a linear predictive model has a GP interpretation if its linear weights follow a Gaussian prior. A similar connection exists for TGP.

Theorem 2.2. Let U ⊂ R^P, V ⊂ R^Q, and W ∼ N_{P×Q}(0, I_P, I_Q), where I_P denotes a P × P identity matrix and ⟨·,·⟩ denotes the inner product. Then the bilinear function t(u, v) = u^T W v follows TGP(0, Σ, Ω) with Σ(u_i, u_j) = ⟨u_i, u_j⟩ and Ω(v_n, v_m) = ⟨v_n, v_m⟩.

The proof is straightforward via Cov[t(u_i, v_n), t(u_j, v_m)] = ⟨u_i, u_j⟩⟨v_n, v_m⟩ and E[t(u_i, v_n)] = 0, where E[·] denotes expectation. In practice, the linear model helps to provide an efficient discriminative approach to link prediction.

It appears that link prediction using TGP is almost the same as normal GP regression or classification, except that the hyperparameters θ now have two parts, θ_Σ for Σ and θ_Ω for Ω. Unfortunately, TGP inference suffers from a serious computational problem: it does not scale well even for a small number of entities. For example, if there is a fixed portion of missing data among the pairwise relationships between N u-entities and M v-entities, the number of observations scales as O(NM). Since GP inference has complexity cubic in the size of the data, the O(N³M³) complexity of TGP is computationally prohibitive.

2.1.4 A Family of Stochastic Processes for Entity Relationships

To improve the scalability of SRM, while describing the relational dependency in a way similar to TGP, we propose a family of stochastic processes p(t|θ) for entity relationships.

Definition 2.3 (Stochastic Relational Processes). A relational function t : U × V → R is said to follow a stochastic relational process (SRP) if

    t(u, v) = (1/√d) Σ_{k=1}^d f_k(u) g_k(v),   f_k(u) ~iid GP(0, Σ),   g_k(v) ~iid GP(0, Ω).

We denote t ∼ SRP_d(0, Σ, Ω), where d is the degrees of freedom.

Interestingly, there exists an intimate connection between SRP and TGP:

Theorem 2.4. SRP_d(0, Σ, Ω) converges to TGP(0, Σ, Ω) in the limit d → ∞.

Proof. By the central limit theorem, for every (u_i, v_n), t_{i,n} ≡ t(u_i, v_n) converges to a Gaussian random variable. It remains to show that E[t_{i,n}] = 0 and

    Cov(t_{i,n}, t_{j,m}) = E[t_{i,n} t_{j,m}]
    = (1/d) { Σ_{k=1}^d E[f_k(u_i) f_k(u_j) g_k(v_n) g_k(v_m)] + Σ_{k≠ℓ} E[f_k(u_i) f_ℓ(u_j) g_k(v_n) g_ℓ(v_m)] }
    = (1/d) Σ_{k=1}^d E[f_k(u_i) f_k(u_j)] E[g_k(v_n) g_k(v_m)] = Σ(u_i, u_j) Ω(v_n, v_m).

The theorem suggests a constructive definition of TGP, in which relationships are formed via interactions between infinitely many samples from two GPs. Moreover, for sufficiently large d, SRP provides a close approximation to TGP. SRP is a general family of priors for modeling the relationships between entities, with HGP and TGP as special cases.
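The following sketch (again ours, not from the paper) illustrates Definition 2.3 and Theorem 2.4 numerically. Note from the proof that the covariance of an SRP sample already matches the TGP covariance for every d; what the limit d → ∞ adds is Gaussianity of the marginals, visible here as vanishing excess kurtosis:

    import numpy as np

    # Sketch (ours) of Definition 2.3 / Theorem 2.4. For any d,
    # Cov(t_{i,n}, t_{j,m}) = Sigma(u_i,u_j) * Omega(v_n,v_m); the limit
    # d -> infinity only adds Gaussianity of the marginals.
    def sample_srp(Sigma, Omega, d, rng):
        F = np.linalg.cholesky(Sigma) @ rng.standard_normal((len(Sigma), d))
        G = np.linalg.cholesky(Omega) @ rng.standard_normal((len(Omega), d))
        return (F @ G.T) / np.sqrt(d)    # t(u_i, v_n) = sum_k f_ik g_nk / sqrt(d)

    rng = np.random.default_rng(1)
    Sigma = np.array([[1.0, 0.5], [0.5, 1.0]])
    Omega = np.array([[1.0, 0.3], [0.3, 1.0]])
    for d in (1, 5, 100):
        T = np.stack([sample_srp(Sigma, Omega, d, rng) for _ in range(50000)])
        cov = np.mean(T[:, 0, 0] * T[:, 1, 1])    # -> 0.5 * 0.3 = 0.15 for any d
        z = T[:, 0, 0]
        excess_kurt = np.mean(z**4) / np.mean(z**2)**2 - 3.0    # -> 0 as d grows
        print(f"d={d:4d}  cov={cov:.3f}  excess kurtosis={excess_kurt:.2f}")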
The generalization offers several advantages: (1) SRP can model symmetric links between the same set of entities. If we build a generative process with U = V, Σ = Ω and f_k = g_k, then on every finite set {u_i}_{i=1}^N, T = {t(u_i, u_j)} is always a symmetric matrix. (2) For a fixed d, inference with SRP has a much reduced complexity. In Sec. 3 we introduce an inference algorithm that scales as O[d(N³ + M³)], which is much less than O(N³M³).

2.2 Choices for the Likelihood p(r_{i,n} | t_{i,n})

The likelihood term describes the dependency of the observable relationships on the latent variables. It should be tailored to the type of observations to be modeled. Here we list three possible situations:

- Binary classification: Relationships may take categorical states, e.g., "cure" or "no cure" in disease-treatment relationship prediction. It is popular to consider binary classification and employ the probit function to model the Bernoulli distribution over class labels, i.e., p(r_{i,n} | t_{i,n}) = Φ(r_{i,n} t_{i,n}), where Φ(·) is the cumulative normal function and r_{i,n} ∈ {-1, +1}.

- Regression: In this case r_{i,n} takes continuous values. For example, one may want to predict the rating of user u for movie v. The corresponding likelihood function is essentially defined by a noise model, e.g., univariate Gaussian noise with variance σ² and zero mean.

- One-class problem: Sometimes one only observes the presence of links between entities, for example the hyperlinks between web pages. Under the open-world assumption, if a web page does not link to another, this does not mean that they are unrelated. Therefore we employ the likelihood p(r_{i,n} | t_{i,n}) = Φ(r_{i,n} t_{i,n} - b) for the observed links r_{i,n} = 1, where b is an offset.

3 Inference with Laplacian Approximation

We have described the relational model under an SRP prior, of which HGP and TGP are special cases. We now develop the inference algorithm to compute the sufficient statistics of the posterior distribution of the latent variables. Let F = {f_{i,k}}, G = {g_{n,k}}, f_k = [f_{1,k}, ..., f_{N,k}]^T and g_k = [g_{1,k}, ..., g_{M,k}]^T, where f_{i,k} = f_k(u_i) and g_{n,k} = g_k(v_n). Then the posterior distribution p(F, G | R_I, θ) is proportional to the joint distribution of the complete data:

    p(R_I, F, G | θ) ∝ ∏_{(i,n)∈I} p(r_{i,n} | (1/√d) Σ_{k=1}^d f_{i,k} g_{n,k}) · exp[ -(1/2) Σ_{k=1}^d ( f_k^T Σ^{-1} f_k + g_k^T Ω^{-1} g_k ) ]

Exact inference is intractable due to the coupling between f_{i,k} and g_{n,k} in the likelihood term. In this paper we apply the Laplacian approximation, which approximates p(F, G | R_I, θ) by a multivariate normal distribution q(F, G | η) with sufficient statistics η. In the first step, we compute the means by finding the mode of the posterior,

    {F*, G*} = arg min_{F,G} J(F, G) = -log p(R_I, F, G | θ)    (2)

We solve the minimization by the conjugate gradient method. The gradients are calculated as

    ∂J(F, G)/∂F = -(1/√d) S G + Σ^{-1} F,    ∂J(F, G)/∂G = -(1/√d) S^T F + Ω^{-1} G,

where S ∈ R^{N×M} has elements s_{i,n} = -∂[-log p(r_{i,n} | t_{i,n})]/∂t_{i,n} with t_{i,n} = Σ_{k=1}^d f_{i,k} g_{n,k}/√d if (i, n) ∈ I, and s_{i,n} = 0 otherwise. In the second step we calculate the covariance as C = H^{-1}, where H is the Hessian, i.e., the matrix of second-order derivatives of J(F, G) with respect to {F, G}. The inverse, however, is prohibitive because H is a huge matrix. To reduce the complexity, we assume that there exist disjoint groups of latent variables, each group being second-order independent of the others at the equilibrium. We factorize the approximating distribution as q(F, G | η) = ∏_{k=1}^d q(f_k | f̄_k, Λ_k) q(g_k | ḡ_k, Ψ_k),
where f̄_k and ḡ_k are the solutions of Eq. (2), and Λ_k, Ψ_k are the covariance matrices. This factorization rests on two facts: (1) each f_k (or g_k) interacts with the other f_ℓ (or g_ℓ), ℓ ≠ k, only indirectly through the sum Σ_{ℓ≠k} f_ℓ g_ℓ^T, indicating that the f_k (or g_k) across different k are only loosely dependent on each other, especially for large d; (2) the dependency between f_{i,k} and g_{n,k} is witnessed by at most one observation in R_I. Therefore we can compute the Hessian for each group separately and obtain the covariances:

    Λ_k = (Π^(k) + Σ^{-1})^{-1},  with  Π^(k)_{i,i} = Σ_{n:(i,n)∈I} π_{i,n} g²_{n,k} / d,  Π^(k)_{i,j} = 0 for i ≠ j    (3)
    Ψ_k = (Ξ^(k) + Ω^{-1})^{-1},  with  Ξ^(k)_{n,n} = Σ_{i:(i,n)∈I} π_{i,n} f²_{i,k} / d,  Ξ^(k)_{n,m} = 0 for n ≠ m    (4)

where π_{i,n} = ∂²[-log p(r_{i,n} | t_{i,n})]/∂t²_{i,n}. We thereby obtain the sufficient statistics F*, G*, {Λ_k} and {Ψ_k}. Finally we note that the posterior distribution of {F, G} has many modes (simply shuffling the order of the latent dimensions, or flipping the signs of both f_k and g_k, does not change the probability). However, each mode is equally good for constructing the relational function t.

4 Structural Learning by Hyperparameter Estimation

We assign a hyperprior p(θ|Θ) and estimate θ by maximizing a penalized marginal likelihood

    θ* = arg max_{θ={θ_Σ,θ_Ω}} [ log ∫∫ p(R_I, F, G|θ) dF dG + log p(θ|Θ) ]    (5)

So far the optimization (5) is quite general. In principle, it allows learning parametric forms of the kernel functions Σ(u_i, u_j; θ_Σ) and Ω(v_n, v_m; θ_Ω) that generalize to new entities. In this paper we particularly consider the situation where entity attributes are not fully informative or even absent. We therefore introduce a direct parameterization θ_Σ = Σ, θ_Ω = Ω, and assign conjugate inverse-Wishart priors Σ ∼ IW_N(τd, Σ₀) and Ω ∼ IW_M(τd, Ω₀), namely

    p(Σ | τd, Σ₀) ∝ det(Σ)^{-τd/2} etr[ -(τd/2) Σ^{-1} Σ₀ ],
    p(Ω | τd, Ω₀) ∝ det(Ω)^{-τd/2} etr[ -(τd/2) Ω^{-1} Ω₀ ],

where τ > 0, so that τd denotes the degrees of freedom, and Σ₀ and Ω₀ are the base kernels. We then apply an iterative expectation-maximization (EM) algorithm to solve problem (5). In the E-step, we follow Sec. 3 to compute q(F, G|η). In the M-step, we update the hyperparameters by maximizing the expected log-likelihood of the complete data,

    max_{Σ,Ω} E_q[log p(R_I, F, G | Σ, Ω)] + log p(Σ | τd, Σ₀) + log p(Ω | τd, Ω₀)

where E_q[·] is the expectation over q(F, G|η). Due to the conjugacy of the hyperprior, the maximization has an analytical solution:

    Σ = ( τΣ₀ + (1/d) Σ_{k=1}^d ( f̄_k f̄_k^T + Λ_k ) ) / (τ + 1),    Ω = ( τΩ₀ + (1/d) Σ_{k=1}^d ( ḡ_k ḡ_k^T + Ψ_k ) ) / (τ + 1).    (6)

5 Implementation Details

The parameters Σ₀, Ω₀, d and τ have to be pre-specified. We let the base kernels have the form Σ₀(u_i, u_j) = (1 - a) κ(u_i, u_j) + a δ_{i,j} and Ω₀(v_n, v_m) = (1 - ρ) ν(v_n, v_m) + ρ δ_{n,m}, where 1 ≥ a, ρ ≥ 0, δ is a Dirac delta kernel (δ_{i,j} = 1 if i = j, otherwise δ_{i,j} = 0), and κ(·,·) and ν(·,·) are kernel functions defined on the entity attributes, which reflect our prior notion of similarity between entities. We use a and ρ to penalize the effects of κ(·,·) and ν(·,·), respectively, when entity attributes are deficient. If the attributes are unavailable, we set a = ρ = 1. The dimensionality d should be properly chosen, since a too small d may deteriorate the modeling flexibility. We determine d and τ based on the prediction performance on a validation set of links. The learning algorithm iterates the E-step, Eqs. (2), (3), (4), and the M-step, Eq. (6), until convergence.
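As a compressed illustration of Secs. 3-5, the sketch below (our own simplification, not the authors' implementation; all names are hypothetical) runs the EM loop for the Gaussian-noise regression likelihood, using plain gradient descent in place of conjugate gradients for the MAP problem (2), and dropping the Laplace covariance terms Λ_k, Ψ_k from the M-step update (6) for brevity:

    import numpy as np

    # Minimal EM sketch (ours; a simplification, not the authors' code) for
    # SRM with Gaussian-noise regression links and a 0/1 mask of observed
    # entries. The Laplace covariances Lambda_k, Psi_k of Eq. (6) are dropped,
    # i.e. only the MAP factors F, G are used.
    def srm_em(R, mask, d=5, tau=1.0, sigma2=0.5, lr=0.01,
               em_iters=20, map_iters=200):
        N, M = R.shape
        rng = np.random.default_rng(0)
        F = rng.standard_normal((N, d)) * 0.1
        G = rng.standard_normal((M, d)) * 0.1
        Sigma0, Omega0 = np.eye(N), np.eye(M)    # noninformative base kernels
        Sigma, Omega = Sigma0.copy(), Omega0.copy()
        for _ in range(em_iters):
            Si, Oi = np.linalg.inv(Sigma), np.linalg.inv(Omega)
            for _ in range(map_iters):           # E-step: gradient descent on J(F, G)
                T = (F @ G.T) / np.sqrt(d)
                S = mask * (R - T) / sigma2      # s_{i,n} = -d[-log p]/dt
                F -= lr * (-(S @ G) / np.sqrt(d) + Si @ F)
                G -= lr * (-(S.T @ F) / np.sqrt(d) + Oi @ G)
            # M-step, Eq. (6) without Lambda_k / Psi_k:
            Sigma = (tau * Sigma0 + (F @ F.T) / d) / (tau + 1.0)
            Omega = (tau * Omega0 + (G @ G.T) / d) / (tau + 1.0)
        return F, G, Sigma, Omega

    # Toy usage: a 20 x 30 rating matrix with 50% of entries observed.
    rng = np.random.default_rng(2)
    R_true = rng.standard_normal((20, 5)) @ rng.standard_normal((5, 30))
    mask = (rng.random(R_true.shape) < 0.5).astype(float)
    F, G, Sigma, Omega = srm_em(R_true * mask, mask, d=5)
    T_hat = (F @ G.T) / np.sqrt(5)
    print("train RMSE:", np.sqrt(np.sum(mask * (R_true - T_hat)**2) / mask.sum()))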
In the experiments of this paper we use p(r_{i,n} | t̂_{i,n}) to make predictions, where t̂ is computed from F* and G*. In a longer version the predictive uncertainty of t_{i,n} will be considered.

6 Related Work

There is a history of probabilistic relational models (PRMs) [8] in machine learning. Getoor et al. [5] introduced link uncertainty and defined a generative model for both entity attributes and links. Recently, [12] and [7] independently introduced an infinite (hidden) relational model that avoids the difficulty of structural learning in PRMs by explaining links via a potentially infinite number of hidden entity states. Since discriminatively trained models generally outperform generative models in prediction tasks, Taskar et al. proposed relational Markov networks (RMNs) for link prediction [11], describing a conditional distribution of links given entity attributes and other links. An RMN has to define a class of potential functions on cliques of random variables based on the observed relational structure. Compared to RMNs, SRM is nonparametric because the structural information (e.g., cliques as well as the classes of potential functions) is not pre-defined but learned from data. Very recently a GP model was developed to learn from undirected graphs [4]; it turns out to be a special rank-one case of SRM with d = 1, Σ = Ω, and f_k = g_k. In another work [1], an SVM using a tensor kernel based on user and item attributes was used to predict user ratings on items; this is similar to our TGP case and suffers from a scalability problem. Moreover, when attributes are deficient or unavailable that model does not work well, whereas SRM can learn informative kernels purely from links (see Sec. 7). SRM is also interestingly related to the recent fast maximum-margin matrix factorization (MMMF) of [10]. If we fix Σ and Ω as uninformative Dirac kernels, the mode of our Laplacian approximation is equivalent to the solution of Eq. (5) in [10] with the loss function l(r_{i,n}, t_{i,n}) = -log p(r_{i,n} | t_{i,n}). However, SRM significantly differs from MMMF in two important aspects: (1) SRM is a supervised predictive model because entity attributes enter the model by forming informative priors (Σ, Ω) and hyperpriors (Σ₀, Ω₀); (2) more importantly, SRM deals with structural learning by adapting the kernels and marginalizing out the latent relational function, while MMMF only estimates the mode of the latent relational function with fixed Dirac kernels.

Figure 1: Link prediction on synthetic data: (a) training data, where a black entry means a positive link, white a negative link, and gray a missing one; (b) prediction of MMMF (classification rate 0.906); (c) prediction of SRM with noninformative prior (classification rate 0.942); (d) prediction of SRM with informative prior (classification rate 0.965); (e-f) informative Σ₀ and Ω₀; (g-h) learned Σ and Ω with noninformative prior; (i-j) learned Σ and Ω with informative prior.

7 Experiments

Synthetic data: We generated two sets of entities U = {u_i}_{i=1}^{20} and V = {v_n}_{n=1}^{30} on the real line such that u_i = 0.1i and v_n = 0.1n. The positions of the entities were used to compute two RBF kernels that serve as informative Σ₀ and Ω₀. We then deformed the real line to form 2 clusters in U and 3 clusters in V.
The RBF kernels computed on the deformed scale give two kernel matrices Σ and Ω whose diagonal block structure reflects the clusters. Binary links between U and V were obtained by taking the sign of a function sampled from TGP(0, Σ, Ω). We randomly withdrew 50% of the links for training and left the remainder for testing (see Fig. 1(a)). We ran two variants of SRM, one with informative Σ₀ and Ω₀ (see Fig. 1(e,f)) and the other with noninformative Dirac kernels Σ₀ = Ω₀ = I, and compared them with MMMF [10]. In all cases we set d = 20. The classification accuracy rates of the two SRMs, 0.942 and 0.965, are both better than the 0.906 of MMMF. As shown in Fig. 1, the block structures of the learned kernels indicate that both SRMs can learn the cluster structure of the entities from links. The structural kernel optimization enables SRM to outperform MMMF even with a completely noninformative prior. Note that the informative prior indeed helps SRM to achieve the best accuracy.

EachMovie data: We tested our algorithms on a data set from [3], a subset of the EachMovie data containing 5000 users' ratings, i.e., 1, 2, 3, 4, 5, or 6, on 1623 movies. We selected the first 1000 users and organized the data into a 1000 × 1623 table with 63,592 observed ratings. We compared SRM with MMMF in a regression task to predict the "rating link" between users and movies. In SRM we set Σ₀ = Ω₀ = I. For both methods the dimensionality was chosen as d = 20. In MMMF we used the squared error loss. We repeated the experiment 10 times; each time we randomly withdrew 70% of the ratings for training and left the remainder for testing. Root-mean-square error (RMSE) and mean absolute error (MAE) were used to evaluate the accuracy. The results of all repeats, as well as the means and standard deviations, are shown in Table 1 and Table 2. Compared to MMMF, SRM significantly reduces the prediction error, by over 12% in terms of both RMSE and MAE.

Table 1: User-movie rating prediction error measured by RMSE

  Repeat |     1     2     3     4     5     6     7     8     9    10 | mean ± std.
  MMMF   | 1.366 1.367 1.372 1.377 1.363 1.368 1.356 1.380 1.358 1.373 | 1.368 ± 0.008
  SRM    | 1.195 1.199 1.192 1.200 1.198 1.209 1.204 1.208 1.189 1.209 | 1.200 ± 0.007

Table 2: User-movie rating prediction error measured by MAE

  Repeat |     1     2     3     4     5     6     7     8     9    10 | mean ± std.
  MMMF   | 1.067 1.066 1.074 1.076 1.066 1.073 1.060 1.074 1.062 1.072 | 1.069 ± 0.006
  SRM    | 0.924 0.928 0.924 0.923 0.924 0.934 0.931 0.932 0.918 0.933 | 0.927 ± 0.005

8 Conclusions and Future Extensions

In this paper we proposed a stochastic relational model (SRM) for learning relational data. Entity relationships are modeled by a tensor interaction of multiple Gaussian processes (GPs). We proposed a family of relational processes and showed its convergence to a tensor Gaussian process as the degrees of freedom go to infinity. The process imposes an effective prior on the entity relationships and leads to a discriminative link prediction model. We demonstrated the excellent results of SRM on a synthetic data set and a user-movie rating prediction problem. Though the current work focused on the application of link prediction, the model can be used for general relational learning purposes. There are several directions in which to extend the current model: (1) SRM can describe a joint distribution of entity links and entity classes conditioned on entity-wise GP kernels.
Therefore entity classification can be solved in a relational context; (2) one can extend SRM to model multi-way relations where more than two entities participate in a single relationship; (3) SRM can also be applied to model pairwise relations between multiple entity sets, where kernel updates amount to propagating information through the entire relational network; (4) as discussed in Sec. 2.1.2, SRM is a natural extension of hierarchical Bayesian multi-task models that explicitly models the dependency over tasks. In a recent work [2] a tensor GP for multi-task learning was independently suggested; (5) finally, it is extremely important to make the algorithm scalable to very large relational data, like the Netflix problem, containing about 480,000 users and 17,000 movies.

Acknowledgement

The authors thank Andreas Krause, Chris Williams, Shenghuo Zhu, and Wei Xu for fruitful discussions.

References

[1] J. Basilico and T. Hofmann. Unifying collaborative and content-based filtering. In Proceedings of the 21st International Conference on Machine Learning (ICML), 2004.
[2] E. V. Bonilla, F. V. Agakov, and C. K. I. Williams. Kernel multi-task learning using task-specific features. In Proceedings of the 11th International Conference on Artificial Intelligence and Statistics (AISTATS), 2007. To appear.
[3] J. S. Breese, D. Heckerman, and C. Kadie. Empirical analysis of predictive algorithms for collaborative filtering. In Proceedings of the 14th Conference on Uncertainty in Artificial Intelligence (UAI), 1998.
[4] W. Chu, V. Sindhwani, Z. Ghahramani, and S. S. Keerthi. Relational learning with Gaussian processes. In Neural Information Processing Systems (NIPS), 2007. To appear.
[5] L. Getoor, E. Segal, B. Taskar, and D. Koller. Probabilistic models of text and link structure for hypertext classification. In Proceedings of the IJCAI Workshop on Text Learning: Beyond Supervision, 2001.
[6] A. K. Gupta and D. K. Nagar. Matrix Variate Distributions. Chapman & Hall/CRC, 1999.
[7] C. Kemp, J. B. Tenenbaum, T. L. Griffiths, T. Yamada, and N. Ueda. Learning systems of concepts with an infinite relational model. In Proceedings of the 21st National Conference on Artificial Intelligence (AAAI), 2006.
[8] D. Koller and A. Pfeffer. Probabilistic frame-based systems. In Proceedings of the National Conference on Artificial Intelligence (AAAI), 1998.
[9] C. E. Rasmussen and C. K. I. Williams. Gaussian Processes for Machine Learning. MIT Press, 2006.
[10] J. D. M. Rennie and N. Srebro. Fast maximum margin matrix factorization for collaborative prediction. In Proceedings of the 22nd International Conference on Machine Learning (ICML), 2005.
[11] B. Taskar, M. F. Wong, P. Abbeel, and D. Koller. Link prediction in relational data. In Neural Information Processing Systems (NIPS), 2004.
[12] Z. Xu, V. Tresp, K. Yu, and H.-P. Kriegel. Infinite hidden relational models. In Proceedings of the 22nd Conference on Uncertainty in Artificial Intelligence (UAI), 2006.
[13] K. Yu, V. Tresp, and A. Schwaighofer. Learning Gaussian processes from multiple tasks. In Proceedings of the 22nd International Conference on Machine Learning (ICML), 2005.
Computation of Similarity Measures for Sequential Data using Generalized Suffix Trees

Konrad Rieck, Fraunhofer FIRST.IDA, Kekuléstr. 7, 12489 Berlin, Germany ([email protected])
Pavel Laskov, Fraunhofer FIRST.IDA, Kekuléstr. 7, 12489 Berlin, Germany ([email protected])
Sören Sonnenburg, Fraunhofer FIRST.IDA, Kekuléstr. 7, 12489 Berlin, Germany ([email protected])

Abstract

We propose a generic algorithm for computation of similarity measures for sequential data. The algorithm uses generalized suffix trees for efficient calculation of various kernel, distance and non-metric similarity functions. Its worst-case run-time is linear in the length of sequences and independent of the underlying embedding language, which can cover words, k-grams or all contained subsequences. Experiments with network intrusion detection, DNA analysis and text processing applications demonstrate the utility of distances and similarity coefficients for sequences as alternatives to classical kernel functions.

1 Introduction

The ability to operate on sequential data is a vital prerequisite for the application of machine learning techniques in many challenging domains. Examples of such applications are natural language processing (text documents), bioinformatics (DNA and protein sequences) and computer security (byte streams or system call traces). A key instrument for handling such data is the efficient computation of pairwise similarity between sequences. Similarity measures can be seen as an abstraction between the particular structure of the data and learning theory.

One of the most successful similarity measures thoroughly studied in recent years is the kernel function [e.g. 1-3]. Various kernels have been developed for sequential data, starting from the original ideas of Watkins [4] and Haussler [5] and extending to application-specific kernels such as the ones for text and natural language processing [e.g. 6-8], bioinformatics [e.g. 9-14], spam filtering [15] and computer security [e.g. 16; 17]. Although kernel-based learning has gained a major focus in machine learning research, a kernel function is obviously only one of various possibilities for measuring similarity between objects. The choice of a similarity measure is essentially determined by (a) the understanding of a problem and (b) the properties of the learning algorithm to be applied. Some algorithms operate in vector spaces, others in inner product, metric or even non-metric feature spaces. Investigation of techniques for learning in spaces other than an RKHS is currently one of the active research fields in machine learning [e.g. 18-21].

The focus of this contribution lies on general similarity measures for sequential data, especially on efficient algorithms for their computation. A large number of such similarity measures can be expressed in a generic form, so that a simple linear-time algorithm can be applied for computation of a wide class of similarity measures. This algorithm enables the investigation of alternative representations of problem domain knowledge other than kernel functions. As an example, two applications are presented for which replacement of a kernel, or equivalently the Euclidean distance, with a different similarity measure yields a significant improvement of accuracy in an unsupervised learning scenario.

The rest of the paper is organized as follows. Section 2 provides a brief review of common similarity measures for sequential data and introduces a generic form in which a large variety of them can be cast.
The generalized suffix tree and a corresponding algorithm for linear-time computation of similarity measures are presented in Section 3. Finally, the experiments in Section 4 demonstrate the efficiency and utility of the proposed algorithm on real-world applications: network intrusion detection, DNA sequence analysis and text processing.

2 Similarity measures for sequences

2.1 Embedding of sequences

A common way to define similarity measures for sequential data is via explicit embedding into a high-dimensional feature space. A sequence x is defined as a concatenation of symbols from a finite alphabet Σ. To model the content of a sequence, we consider a language L ⊆ Σ* comprising subsequences w ∈ L. We refer to these subsequences as words, even though they may not correspond to a natural language. Typical examples for L are a "bag of words" [e.g. 22], the set of all sequences of fixed length (k-grams or k-mers) [e.g. 10; 23] or the set of all contained subsequences [e.g. 8; 24].

Given a language L, a sequence x can be mapped into an |L|-dimensional feature space by calculating an embedding function φ_w(x) for every w ∈ L appearing in x. The function φ_w is defined as follows:

    φ_w : Σ* → R₊ ∪ {0},   φ_w(x) := Φ(occ(w, x)) · W_w    (1)

where occ(w, x) is the number of occurrences of w in x, Φ is a numerical transformation, e.g. a conversion to frequencies, and W_w is a weighting assigned to individual words, e.g. length-dependent or position-dependent weights [cf. 3; 24]. By employing the feature space induced through L and φ, one can adapt many vectorial similarity measures to operate on sequences.

The feature space defined via explicit embedding is sparse, since the number of non-zero dimensions for each feature vector is bounded by the sequence length. Thus the essential parameter for measuring the complexity of computation is the sequence length, denoted hereinafter by n. Furthermore, the length of a word |w|, or in the case of a set of words the maximum length, is denoted by k.

2.2 Vectorial similarity measures

Several vectorial kernel and distance functions can be applied to the proposed embedding of sequential data. A list of common functions in terms of L and φ is given in Table 1.

Table 1: Kernels and distances for sequential data

  Kernel function k(x, y)
    Linear       Σ_{w∈L} φ_w(x) φ_w(y)
    Polynomial   (Σ_{w∈L} φ_w(x) φ_w(y) + θ)^p
    RBF          exp(-d(x, y)² / σ)

  Distance function d(x, y)
    Manhattan    Σ_{w∈L} |φ_w(x) - φ_w(y)|
    Canberra     Σ_{w∈L} |φ_w(x) - φ_w(y)| / (φ_w(x) + φ_w(y))
    Minkowski    (Σ_{w∈L} |φ_w(x) - φ_w(y)|^k)^{1/k}
    Hamming      Σ_{w∈L} sgn|φ_w(x) - φ_w(y)|
    Chebyshev    max_{w∈L} |φ_w(x) - φ_w(y)|

Beside kernel and distance functions, a set of rather exotic similarity coefficients is also suitable for application to sequential data [25]. The coefficients are constructed from three summation variables a, b and c, which in the case of binary vectors correspond to the number of matching component pairs (1-1), left mismatching pairs (0-1) and right mismatching pairs (1-0) [cf. 26; 27]. Common similarity coefficients are given in Table 2.

Table 2: Similarity coefficients for sequential data

  Similarity coefficient s(x, y)
    Simpson                     a / min(a+b, a+c)
    Jaccard                     a / (a+b+c)
    Braun-Blanquet              a / max(a+b, a+c)
    Czekanowski, Sorensen-Dice  2a / (2a+b+c)
    Sokal-Sneath, Anderberg     a / (a + 2(b+c))
    Kulczynski (1st)            a / (b+c)
    Kulczynski (2nd)            (1/2) (a/(a+b) + a/(a+c))
    Otsuka, Ochiai              a / √((a+b)(a+c))

For application to non-binary data these summation variables can be extended as proposed in [25]:

    a = Σ_{w∈L} min(φ_w(x), φ_w(y))
    b = Σ_{w∈L} [φ_w(x) - min(φ_w(x), φ_w(y))]
    c = Σ_{w∈L} [φ_w(y) - min(φ_w(x), φ_w(y))]
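For concreteness, the following Python sketch (ours, not the paper's implementation; all helper names are made up) computes the k-gram embedding (1) with identity transformation Φ and unit weights W_w, derives the summation variables a, b and c, and evaluates two coefficients from Table 2:

    from collections import Counter

    # Sketch (ours) of the k-gram embedding phi_w(x) = occ(w, x), i.e. identity
    # transformation Phi and unit weights W_w, plus the extended summation
    # variables a, b, c of Sec. 2.2.
    def kgram_embedding(x: str, k: int) -> Counter:
        return Counter(x[i:i + k] for i in range(len(x) - k + 1))

    def summation_variables(px: Counter, py: Counter):
        words = set(px) | set(py)            # union of embedded words
        a = sum(min(px[w], py[w]) for w in words)
        b = sum(px[w] - min(px[w], py[w]) for w in words)
        c = sum(py[w] - min(px[w], py[w]) for w in words)
        return a, b, c

    px = kgram_embedding("abrakadabra", 3)
    py = kgram_embedding("abrakadabrakad", 3)
    a, b, c = summation_variables(px, py)
    print("Jaccard:         ", a / (a + b + c))
    print("Kulczynski (2nd):", 0.5 * (a / (a + b) + a / (a + c)))

Every other coefficient in Table 2 follows from the same three variables.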
2.3 A generic representation

One can easily see that the presented similarity measures can be cast in a generic form consisting of an outer function ⊕ and an inner function m:

    s(x, y) = ⊕_{w∈L} m(φ_w(x), φ_w(y))    (2)

Given this definition, the kernel and distance functions presented in Table 1 can be re-formulated in terms of ⊕ and m. Adapting the similarity coefficients to the generic form (2) involves a reformulation of the summation variables a, b and c. The particular definitions of the outer and inner functions for the presented similarity measures are given in Table 3. The polynomial and RBF kernels are not shown, since they can be expressed in terms of a linear kernel or a distance, respectively.

Table 3: Generalized formulation of similarity measures

  Kernel function     ⊕     m(x, y)
    Linear            +     x · y

  Distance function   ⊕     m(x, y)
    Manhattan         +     |x - y|
    Canberra          +     |x - y| / (x + y)
    Minkowski         +     |x - y|^k
    Hamming           +     sgn|x - y|
    Chebyshev         max   |x - y|

  Similarity coef.    ⊕     m(x, y)
    Variable a        +     min(x, y)
    Variable b        +     x - min(x, y)
    Variable c        +     y - min(x, y)

3 Generalized suffix trees for comparison of sequences

The key to efficient comparison of two sequences lies in considering only the minimum set of words necessary for computation of the generic form (2) of similarity measures. In the case of kernels, only the intersection of the words in both sequences needs to be considered, while the union of words is needed for distances and non-metric similarity coefficients. A simple and well-known approach to such a comparison is representing the words of each sequence in a sorted list. For words of maximum length k such a list can be constructed in O(kn log n) using general sorting, or in O(kn) using radix sort. If the length of words k is unbounded, sorted lists are no longer an option, as the sorting time becomes quadratic. Thus, special data structures are needed for efficient comparison of sequences. Two data structures previously used for computation of kernels are tries [28; 29] and suffix trees [30]. Both have been applied for computation of a variety of kernel functions in O(kn) [3; 10] and also in O(n) run-time using matching statistics [24]. In this contribution we argue that a generalized suffix tree is suitable for computation of all similarity measures of the form (2) in O(n) run-time.

A generalized suffix tree (GST) is a tree containing all suffixes of a set of strings x_1, ..., x_l [31]. The simplest way to construct a generalized suffix tree is to extend each string x_i with a delimiter $_i and to apply a suffix tree construction algorithm [e.g. 32] to the concatenation of strings x_1$_1 ... x_l$_l. In the remaining part we restrict ourselves to the case of two strings x and y delimited by # and $; computation of an entire similarity matrix using a single GST for a set of strings is a straightforward extension. An example of a generalized suffix tree for the strings "aab#" and "babab$" is shown in Fig. 1(a).

Figure 1: Generalized suffix tree for "aab#" and "babab$" and a snapshot of its traversal: (a) generalized suffix tree (GST), (b) traversal of a GST.

Once a generalized suffix tree is constructed, it remains to determine the number of occurrences occ(w, x) and occ(w, y) of each word w present in the sequences x and y.
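As a baseline for the traversal described next, a naive dictionary-based rendering of the generic form (2) over k-grams is instructive (our sketch, not the paper's code; it costs O(kn) per pair, which is exactly the dependence on k that the GST algorithm removes). The (outer, inner) pairs are taken from Table 3:

    from collections import Counter
    from math import fsum

    # Sketch (ours): s(x, y) = OPLUS_{w in L} m(phi_w(x), phi_w(y)) over the
    # union of k-grams, with outer/inner functions plugged in from Table 3.
    def generic_similarity(x, y, k, outer, inner):
        px = Counter(x[i:i + k] for i in range(len(x) - k + 1))
        py = Counter(y[i:i + k] for i in range(len(y) - k + 1))
        return outer(inner(px[w], py[w]) for w in set(px) | set(py))

    manhattan = lambda x, y, k: generic_similarity(x, y, k, fsum, lambda p, q: abs(p - q))
    chebyshev = lambda x, y, k: generic_similarity(x, y, k, max, lambda p, q: abs(p - q))
    linear    = lambda x, y, k: generic_similarity(x, y, k, fsum, lambda p, q: p * q)

    print(manhattan("aab", "babab", 2))    # 4
    print(chebyshev("aab", "babab", 2))    # 2
    print(linear("aab", "babab", 2))       # 2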
Unlike the case for kernels, for which only nodes corresponding to both sequences need to be considered [24], the contributions must be correctly computed for all nodes in the generalized suffix tree. The following simple recursive algorithm computes a generic similarity measure between the sequences x and y in one depth-first traversal of the generalized suffix tree (cf. Algorithm 1). The algorithm exploits the fact that a leaf in a GST representing a suffix of x contributes exactly 1 to occ(w, x) if w is a prefix of this suffix, and similarly for y and occ(w, y). As the GST contains all suffixes of x and y, every word w in x and y is represented by at least one leaf. Whether a leaf contributes to x or y can be determined by considering the edge at the leaf: due to the uniqueness of the delimiter #, no branching nodes can occur below an edge containing #, so a leaf node at an edge starting before the index of # must contain a suffix of x; otherwise it contains a suffix of y. The contributions of all leaves are aggregated in two variables x and y during a post-order traversal. At each node the inner function m of (2) is calculated using Φ(x) and Φ(y), according to the embedding φ in (1). A snapshot of the traversal procedure is illustrated in Fig. 1(b). To account for implicit nodes along the edges of the GST and to support weighted embeddings φ, the weighting function WEIGHT introduced in [24] is employed. At a node v the function takes the beginning (begin[v]) and end (end[v]) of the incoming edge and the depth of the node (depth[v]) as arguments to determine how much the node and edge contribute to the similarity measure; e.g., for k-gram models only nodes up to a path depth of k need to be considered.

Algorithm 1 Suffix tree comparison
 1: function COMPARE(x, y)
 2:   S ← SUFFIXTREE(x # y $)
 3:   (x, y, s) ← MATCH(root[S])
 4:   return s
 5:
 6: function MATCH(v)
 7:   if v is leaf then
 8:     s ← 0
 9:     if begin[v] ≤ index_# then
10:       (x, y) ← (1, 0)              ▷ Leaf of a suffix of x
11:       j ← index_# − 1
12:     else
13:       (x, y) ← (0, 1)              ▷ Leaf of a suffix of y
14:       j ← index_$ − 1
15:   else
16:     (x, y, s) ← (0, 0, 0)
17:     for all c in children[v] do
18:       (x̃, ỹ, s̃) ← MATCH(c)        ▷ Traverse GST
19:       (x, y, s) ← (x + x̃, y + ỹ, s ⊕ s̃)
20:     j ← end[v]
21:   W ← WEIGHT(begin[v], j, depth[v])
22:   s ← s ⊕ m(Φ(x)·W, Φ(y)·W)        ▷ Cf. definitions in (1) and (2)
23:   return (x, y, s)

Similarly to the extension of string kernels proposed in [33], the GST traversal can be performed on an enhanced suffix array [34] for further run-time and space reduction.

To prove the correctness of our algorithm, a different approach must be taken than the one in [24]. We cannot claim that the computed similarity value is equivalent to the one returned by the matching-statistics algorithm, since the latter is restricted to kernel functions. Instead we show that at each recursive call to the MATCH function the correct numbers of occurrences are maintained.

Theorem 1. A word w occurs occ(w, x) and occ(w, y) times in x and y if and only if MATCH(w̄) returns x = occ(w, x) and y = occ(w, y), where w̄ is the node at the end of the path from the root spelling out w in the generalized suffix tree of x and y.

Proof. If w occurs m times in x, there exist exactly m suffixes of x with w as a prefix. Since w corresponds to a path from the root of the GST to a node w̄, all m suffixes must pass w̄. Due to the unique delimiter #, each suffix of x corresponds to one leaf node in the GST whose incoming edge contains #.
Hence m equals occ(w, x) and is exactly the aggregated quantity x returned by MATCH(w̄). Likewise, occ(w, y) is the number of suffixes beginning after # and having prefix w, which is computed by y.

4 Experimental Results

4.1 Run-time experiments

In order to illustrate the efficiency of the proposed algorithm, we conducted run-time experiments on three benchmark data sets for sequential data: network connection payloads from the DARPA 1999 IDS evaluation [35], news articles from the Reuters-21578 data set [36] and DNA sequences from the human genome [14]. Table 4 gives an overview of the data sets and their specific properties.

Table 4: Sequential data sets

  Name | Type                       | Alphabet | Min. length | Max. length
  DNA  | Human genome sequences     |        4 |        2400 |        2400
  NIDS | TCP connection payloads    |      108 |          53 |      132753
  TEXT | Reuters newswire articles  |       93 |          43 |       10002

We compared the run-time of the generalized suffix tree algorithm with a recent trie-based method supporting the computation of distances. Tries yield better or equal run-time complexity for computation of similarity measures over k-grams than algorithms using indexed arrays and hash tables. A detailed description of the trie-based approach is given in [25]. Note that in all of the following experiments the tries were generated in a pre-processing step, and the reported run-time corresponds to the comparison procedure only.

For each of the three data sets, we implemented the following experimental protocol: the Manhattan distances were calculated for 1000 pairs of randomly selected sequences using k-grams as an embedding language. The procedure was repeated 10 times for various values of k, and the run-time was averaged over all runs. Fig. 2 compares the run-time of sequence comparison algorithms using generalized suffix trees and tries. On all three data sets the trie-based comparison has a low run-time for small values of k but grows linearly with k. The algorithm using a generalized suffix tree is independent of the complexity of the embedding language, although this comes at the price of higher constants due to a more complex data structure. Clearly, the generalized suffix tree is the algorithm of choice for higher values of k.

Figure 2: Run-time performance for varying k-gram lengths: mean run-time per 1000 comparisons for trie- and GST-based comparison on (a) the NIDS, (b) the TEXT and (c) the DNA data set.

4.2 Applications

As the second part of our evaluation, we show that the ability of our approach to compute diverse similarity measures pays off in real applications, especially in an unsupervised learning scenario. The experiments were performed for (a) intrusion detection in real network traffic and (b) transcription start site (TSS) recognition in DNA sequences.

For the first application, network data was generated by members of our laboratory using virtual network servers, and recent attacks were injected by a penetration-testing expert. The distance-based anomaly detection method Zeta [17] was applied to 5-grams extracted from byte sequences of TCP connections using different similarity measures: the linear kernel, the Manhattan distance and the Kulczynski coefficient.
The results on network data from the HTTP protocol are shown in Fig. 3(a). Application of the Kulczynski coefficient yields the highest detection accuracy: over 78% of all attacks are identified with no false positives in an unsupervised setup. In comparison, the linear kernel yields roughly 30% lower detection rates.

The second application focused on TSS recognition in DNA sequences. The data set comprises fixed-length DNA sequences that either cover the TSS of protein-coding genes or have been extracted randomly from the interior of genes [14]. We evaluated three methods on this data: an unsupervised k-nearest-neighbor (kNN) classifier, a supervised and bagged kNN classifier and a Support Vector Machine (SVM). Each method was trained and tested using the linear kernel and the Manhattan distance as similarity measures over 4-grams. Fig. 3(b) shows the performance achieved by the unsupervised and supervised versions of the kNN classifier; results for the SVM are similar to the supervised kNN and have been omitted. Even though the linear kernel and the Manhattan distance yield similar accuracy in a supervised setup, their performance differs significantly in the unsupervised application. In the absence of prior knowledge of labels, the Manhattan distance expresses better discriminative properties for TSS recognition than the linear kernel. For the supervised application the classification performance is bounded for both similarity measures, since only some of the discriminative features for TSS recognition are encapsulated in n-gram models [14].

Figure 3: Comparison of similarity measures on the network and DNA data: ROC curves for (a) intrusion detection in HTTP and (b) transcription start site recognition.

5 Conclusions

Kernel functions for sequences have recently gained strong attention in many applications of machine learning, especially in bioinformatics and natural language processing. In this contribution we have shown that other similarity measures, such as metric distances or non-metric similarity coefficients, can be computed with the same run-time complexity as kernel functions. The proposed algorithm is based on a post-order traversal of a generalized suffix tree of two or more sequences. During the traversal, the counts of matching and mismatching words from an embedding language are computed in time linear in the sequence length, regardless of the particular kind of chosen language: words, k-grams or even all consecutive subsequences. By using a generic representation of the considered similarity measures based on an outer and an inner function, the same algorithm can be applied for various kernel, distance and similarity functions on sequential data. Our experiments demonstrate that the use of general similarity measures can bring a significant improvement in learning accuracy, in our case observed for unsupervised learning, and emphasize the importance of further investigation of distance- and similarity-based learning algorithms.
Acknowledgments

The authors gratefully acknowledge the funding from Bundesministerium für Bildung und Forschung under the project MIND (FKZ 01-SC40A) and would like to thank Klaus-Robert Müller and Mikio Braun for fruitful discussions and support.

References

[1] V. N. Vapnik. Statistical Learning Theory. Wiley, New York, 1998.
[2] B. Schölkopf and A. J. Smola. Learning with Kernels. MIT Press, Cambridge, MA, 2002.
[3] J. Shawe-Taylor and N. Cristianini. Kernel Methods for Pattern Analysis. Cambridge University Press, 2004.
[4] C. Watkins. Dynamic alignment kernels. In A. J. Smola, P. L. Bartlett, B. Schölkopf, and D. Schuurmans, editors, Advances in Large Margin Classifiers, pages 39-50, Cambridge, MA, 2000. MIT Press.
[5] D. Haussler. Convolution kernels on discrete structures. Technical Report UCSC-CRL-99-10, UC Santa Cruz, July 1999.
[6] T. Joachims. Text categorization with support vector machines: Learning with many relevant features. Technical Report 23, LS VIII, University of Dortmund, 1997.
[7] E. Leopold and J. Kindermann. Text categorization with support vector machines. How to represent texts in input space? Machine Learning, 46:423-444, 2002.
[8] H. Lodhi, C. Saunders, J. Shawe-Taylor, N. Cristianini, and C. Watkins. Text classification using string kernels. Journal of Machine Learning Research, 2:419-444, 2002.
[9] A. Zien, G. Rätsch, S. Mika, B. Schölkopf, T. Lengauer, and K.-R. Müller. Engineering support vector machine kernels that recognize translation initiation sites. Bioinformatics, 16(9):799-807, September 2000.
[10] C. Leslie, E. Eskin, and W. S. Noble. The spectrum kernel: A string kernel for SVM protein classification. In Proc. Pacific Symp. Biocomputing, pages 564-575, 2002.
[11] C. Leslie, E. Eskin, A. Cohen, J. Weston, and W. S. Noble. Mismatch string kernel for discriminative protein classification. Bioinformatics, 1(1):1-10, 2003.
[12] J. Rousu and J. Shawe-Taylor. Efficient computation of gapped substring kernels for large alphabets. Journal of Machine Learning Research, 6:1323-1344, 2005.
[13] G. Rätsch, S. Sonnenburg, and B. Schölkopf. RASE: recognition of alternatively spliced exons in C. elegans. Bioinformatics, 21:i369-i377, June 2005.
[14] S. Sonnenburg, A. Zien, and G. Rätsch. ARTS: Accurate recognition of transcription starts in human. Bioinformatics, 22(14):e472-e480, 2006.
[15] H. Drucker, D. Wu, and V. N. Vapnik. Support vector machines for spam categorization. IEEE Transactions on Neural Networks, 10(5):1048-1054, 1999.
[16] E. Eskin, A. Arnold, M. Prerau, L. Portnoy, and S. Stolfo. Applications of Data Mining in Computer Security, chapter A geometric framework for unsupervised anomaly detection: detecting intrusions in unlabeled data. Kluwer, 2002.
[17] K. Rieck and P. Laskov. Detecting unknown network attacks using language models. In Proc. DIMVA, pages 74-90, July 2006.
[18] T. Graepel, R. Herbrich, P. Bollmann-Sdorra, and K. Obermayer. Classification on pairwise proximity data. In M. S. Kearns, S. A. Solla, and D. A. Cohn, editors, Advances in Neural Information Processing Systems, volume 11, pages 438-444. MIT Press, 1999.
[19] V. Roth, J. Laub, M. Kawanabe, and J. M. Buhmann. Optimal cluster preserving embedding of non-metric proximity data. IEEE Trans. PAMI, 25:1540-1551, December 2003.
[20] J. Laub and K.-R. Müller. Feature discovery in non-metric pairwise data. Journal of Machine Learning Research, 5(Jul):801-818, July 2004.
[21] C. Ong, X. Mary, S. Canu, and A. J. Smola. Learning with non-positive kernels. In Proc. ICML, pages 639-646, 2004.
[22] G. Salton. Mathematics and information retrieval. Journal of Documentation, 35(1):1-29, 1979.
[23] M. Damashek. Gauging similarity with n-grams: Language-independent categorization of text. Science, 267(5199):843-848, 1995.
[24] S. V. N. Vishwanathan and A. J. Smola. Kernels and Bioinformatics, chapter Fast kernels for string and tree matching, pages 113-130. MIT Press, 2004.
[25] K. Rieck, P. Laskov, and K.-R. Müller. Efficient algorithms for similarity measures over sequential data: A look beyond kernels. In Proc. DAGM, pages 374-383, September 2006.
[26] R. R. Sokal and P. H. Sneath. Principles of Numerical Taxonomy. Freeman, San Francisco, CA, USA, 1963.
[27] M. R. Anderberg. Cluster Analysis for Applications. Academic Press, Inc., New York, NY, USA, 1973.
[28] E. Fredkin. Trie memory. Communications of the ACM, 3(9):490-499, 1960.
[29] D. Knuth. The Art of Computer Programming, volume 3. Addison-Wesley, 1973.
[30] P. Weiner. Linear pattern matching algorithms. In Proc. 14th Annual Symposium on Switching and Automata Theory, pages 1-11, 1973.
[31] D. Gusfield. Algorithms on Strings, Trees, and Sequences. Cambridge University Press, 1997.
[32] E. Ukkonen. Online construction of suffix trees. Algorithmica, 14(3):249-260, 1995.
[33] C. H. Teo and S. V. N. Vishwanathan. Fast and space efficient string kernels using suffix arrays. In Proceedings of the 23rd ICML, pages 929-936. ACM Press, 2006.
[34] M. I. Abouelhoda, S. Kurtz, and E. Ohlebusch. Replacing suffix trees with enhanced suffix arrays. Journal of Discrete Algorithms, 2(1):53-86, 2002.
[35] R. Lippmann, J. W. Haines, D. J. Fried, J. Korba, and K. Das. The 1999 DARPA off-line intrusion detection evaluation. Computer Networks, 34(4):579-595, 2000.
[36] D. D. Lewis. Reuters-21578 text categorization test collection. AT&T Labs Research, 1997.
Supervised Learning of Probability Distributions by Neural Networks

Eric B. Baum
Jet Propulsion Laboratory, Pasadena CA 91109

Frank Wilczek*
Department of Physics, Harvard University, Cambridge MA 02138
(* Permanent address: Institute for Theoretical Physics, University of California, Santa Barbara CA 93106)

Abstract: We propose that the back propagation algorithm for supervised learning can be generalized, put on a satisfactory conceptual footing, and very likely made more efficient by defining the values of the output and input neurons as probabilities and varying the synaptic weights in the gradient direction of the log likelihood, rather than the 'error'.

In the past thirty years many researchers have studied the question of supervised learning in 'neural'-like networks. Recently a learning algorithm called 'back propagation' 1-4 or the 'generalized delta-rule' has been applied to numerous problems including the mapping of text to phonemes 5, the diagnosis of illnesses 6 and the classification of sonar targets 7. In these applications, it would often be natural to consider imperfect, or probabilistic information. We believe that by considering supervised learning from this slightly larger perspective, one can not only place back propagation on a more rigorous and general basis, relating it to other well studied pattern recognition algorithms, but very likely improve its performance as well.

The problem of supervised learning is to model some mapping between input vectors and output vectors presented to us by some real world phenomena. To be specific, consider the question of medical diagnosis. The input vector corresponds to the symptoms of the patient; the i-th component is defined to be 1 if symptom i is present and 0 if symptom i is absent. The output vector corresponds to the illnesses, so that its j-th component is 1 if the j-th illness is present and 0 otherwise. Given a data base consisting of a number of diagnosed cases, the goal is to construct (learn) a mapping which accounts for these examples and can be applied to diagnose new patients in a reliable way. One could hope, for instance, that such a learning algorithm would yield an expert system to simulate the performance of doctors. Little expert advice would be required for its design, which is advantageous both because experts' time is valuable and because experts often have extraordinary difficulty in describing how they make decisions.

A feedforward neural network implements such a mapping between input vectors and output vectors. Such a network has a set of input nodes, one or several layers of intermediate nodes, and a layer of output nodes. The nodes are connected in a forward directed manner, so that the output of a node may be connected to the inputs of nodes in subsequent layers, but closed loops do not occur. See Figure 1. The output of each node is assumed to be a bounded semilinear function of its inputs. That is, if $v_j$ denotes the output of the j-th node and $w_{ij}$ denotes the weight associated with the connection of the output of the j-th node to the input of the i-th, then the i-th neuron takes value $v_i = g(\sum_j w_{ij} v_j)$, where $g$ is a bounded, differentiable function called the activation function. The logistic function $g(x) = 1/(1 + e^{-x})$ is frequently used.
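As a minimal concrete rendering of this definition (our sketch; the layer sizes and random weights are placeholders, not the networks studied in the paper), the forward pass is simply a loop of matrix products through the logistic activation:

```python
import numpy as np

def logistic(x):
    """The activation function g(x) = 1 / (1 + e^{-x})."""
    return 1.0 / (1.0 + np.exp(-x))

def forward(weights, inputs):
    """Feedforward pass: v_i = g(sum_j w_ij * v_j), layer by layer."""
    v = np.asarray(inputs, dtype=float)
    for W in weights:              # one weight matrix per layer
        v = logistic(W @ v)
    return v

rng = np.random.default_rng(0)
weights = [rng.normal(size=(4, 3)), rng.normal(size=(1, 4))]  # a 3-4-1 net
print(forward(weights, [1.0, 0.0, 1.0]))   # output in (0, 1)
```

Because the logistic squashes every node value into (0, 1), the outputs are already in a form that can be read as probabilities, which is exactly the interpretation pursued below.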
Given a fixed set of weights $\{w_{ij}\}$, we set the input node values to equal some input vector, compute the value of the nodes layer by layer until we compute the output nodes, and so generate an output vector.

[Figure 1: A 5 layer network. Note bottleneck at layer 3.]

Such networks have been studied because of analogies to neurobiology, because it may be easy to fabricate them in hardware, and because learning algorithms such as the Perceptron learning algorithm 8, Widrow-Hoff 9, and backpropagation have been able to choose weights $w_{ij}$ that solve interesting problems. Given a set of input vectors $\vec{s}^{\,\mu}$, together with associated target values $t_j^\mu$, back propagation attempts to adjust the weights so as to minimize the error $E$ in achieving these target values, defined as

$$E = \sum_\mu E^\mu = \sum_{\mu,j} (t_j^\mu - o_j^\mu)^2 \qquad (1)$$

where $o_j^\mu$ is the output of the j-th node when $\vec{s}^{\,\mu}$ is presented as input. Back propagation starts with randomly chosen $w_{ij}$ and then varies them in the gradient direction of $E$ until a local minimum is obtained. Although only a locally optimal set of weights is obtained, in a number of experiments the neural net so generated has performed surprisingly well not only on the training set but on subsequent data 4-6. This performance is probably the main reason for widespread interest in backpropagation.

It seems to us natural, in the context of the medical diagnosis problem, the other real world problems to which backpropagation has been applied, and indeed in any mapping problem where one desires to generalize from a limited and noisy set of examples, to interpret the output vector in probabilistic terms. Such an interpretation is standard in the literature on pattern classification 10. Indeed, the examples might even be probabilistic themselves. That is to say it might not be certain whether symptom i was present in case $\mu$ or not. Let $s_i^\mu$ represent the probability symptom i is present in case $\mu$, and let $t_j^\mu$ represent the probability disease j occurred in case $\mu$.

Consider for the moment the case where the $t_j^\mu$ are 1 or 0, so that the cases are in fact fully diagnosed. Let $f_i(\vec{s}, \omega)$ be our prediction of the probability of disease i given input vector $\vec{s}$, where $\omega$ is some set of parameters determined by our learning algorithm. In the neural network case, the $\omega$ are the connection weights and $f_i(\vec{s}^{\,\mu}, \{w_{ij}\}) = o_i^\mu$. Now lacking a priori knowledge of good $\omega$, the best one can do is to choose the parameters $\omega$ to maximize the likelihood that the given set of examples should have occurred 10. The formula for this likelihood, $p$, is immediate:

$$p = \prod_{\mu,j} f_j(\vec{s}^{\,\mu}, \omega)^{t_j^\mu} \left(1 - f_j(\vec{s}^{\,\mu}, \omega)\right)^{1 - t_j^\mu} \qquad (2)$$

or

$$\log(p) = \sum_{\mu,j} \left[ t_j^\mu \log f_j(\vec{s}^{\,\mu}, \omega) + (1 - t_j^\mu) \log\left(1 - f_j(\vec{s}^{\,\mu}, \omega)\right) \right] \qquad (3)$$

The extension of equation (2), and thus equation (3), to the case where the $t_j^\mu$ are probabilities, taking values in $[0, 1]$, is straightforward *1 and yields

$$\log(p) = \sum_{\mu,j} \left[ t_j^\mu \log f_j(\vec{s}^{\,\mu}, \omega) + (1 - t_j^\mu) \log\left(1 - f_j(\vec{s}^{\,\mu}, \omega)\right) \right] \qquad (4)$$

Expressions of this sort often arise in physics and information theory and are generally interpreted as an entropy 11. We may now vary the $\omega$ in the gradient direction of the entropy. The back propagation algorithm generalizes immediately from minimizing 'Error' or 'Energy' to maximizing entropy or log likelihood, or indeed any other function of the outputs and the inputs 12. Of course it remains true that the gradient can be computed by back propagation with essentially the same number of computations as are required to compute the output of the network.
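For illustration, both objectives and their gradients with respect to the outputs fit in a few lines (our sketch, written per single example; the worked example in the text quotes the derivative slightly differently). The numeric check mirrors the argument that follows: on an example the net already fits well the squared-error gradient vanishes, while a confidently wrong output keeps a large log-likelihood gradient through the 1/(1 - f_j) factor.

```python
import numpy as np

def squared_error(t, f):                 # eqn (1), one example
    return np.sum((t - f) ** 2)

def log_likelihood(t, f):                # eqn (4), one example
    return np.sum(t * np.log(f) + (1 - t) * np.log(1 - f))

def grad_squared_error(t, f):            # dE / df_j
    return -2 * (t - f)

def grad_log_likelihood(t, f):           # d log(p) / df_j
    return t / f - (1 - t) / (1 - f)

t, f = np.array([1.0]), np.array([0.999])    # an almost-learned example
print(grad_squared_error(t, f))    # ~ -0.002: 'Error' learning stalls
print(grad_log_likelihood(t, f))   # ~  1.001: gradient stays usable

t2, f2 = np.array([0.0]), np.array([0.999])  # a confidently wrong output
print(grad_log_likelihood(t2, f2)) # ~ -1000: the 1/(1 - f) factor blows up
```

Either gradient can be pushed through the same back-propagation machinery; only the derivative at the output layer changes.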
A backpropagation algorithm based on log-likelihood is not only more intuitively appealing than one based on an ad-hoc definition of error, but will make quite different and more accurate predictions as well. Consider e.g. training the net on an example which it already understands fairly well. Say $t_j = 1$ and $f_j(\vec{s}, \omega) = 1 - \epsilon$. Now, from eqn (1), $\partial E/\partial f_j = 2\epsilon$, so using 'Error' as a criterion the net learns very little from this example, whereas, using eqn (3), $\partial \log(p)/\partial f_j = 1/(1 - f_j)$, so the net continues to learn and can in fact converge to predict probabilities near 1. Indeed because back propagation using the standard 'Error' measure can not converge to generate outputs of 1 or 0, it has been customary in the literature 4 to round the target values so that a target of 1 would be presented in the learning algorithm as some ad hoc number such as .8, whereas a target of 0 would be presented as .2.

*1 We may see this by constructing an equivalent larger set of examples with the targets taking only values 0 or 1 with the appropriate frequency. Thus assume the $t_j^\mu$ are rational numbers with denominator $d_j$ and numerator $n_j$ and let $P = \prod_{\mu,j} d_j$. What we mean by the set of examples $\{t^\mu : \mu = 1, \ldots, M\}$ can be represented by considering a set of $N = MP$ examples $\{\hat{t}^{\,\nu}\}$ where for each $\mu$, $\hat{t}_j^\nu = 0$ for $P(\mu - 1) < \nu \le P\mu$ and $1 \le \nu \bmod d_j \le (d_j - n_j)$, and $\hat{t}_j^\nu = 1$ otherwise. Now applying equation (3) gives equation (4), up to an overall normalization.

In the context of our general discussion it is natural to ask whether using a feedforward network and varying the weights is in fact the most effective alternative. Anderson and Abrahams 13 have discussed this issue from a Bayesian viewpoint. From this point of view, fitting output to input using normal distributions and varying the means and covariance matrix may seem to be more logical. Feedforward networks do however have several advantages for complex problems. Experience with neural networks has shown the importance of including hidden units wherein the network can form an internal representation of the world. If one simply uses normal distributions, any hidden variables included will simply integrate out in calculating an output. It will thus be necessary to include at least third order correlations to implement useful hidden variables. Unfortunately, the number of possible third order correlations is very large, so that there may be practical obstacles to such an approach. Indeed it is well known folklore in curve fitting and pattern classification that the number of parameters must be small compared to the size of the data set if any generalization to future cases is expected 10.

In feedforward nets the question takes a different form. There can be bottlenecks to information flow. Specifically, if the net is constructed with an intermediate layer which is not bypassed by any connections (i.e. there are no connections from layers preceding to layers subsequent), and if furthermore the activation functions are chosen so that the values of each of the intermediate nodes tend towards either 1 or 0 *2, then this layer serves as a bottleneck to information flow. No matter how many input nodes, output nodes, or free parameters there are in the net, the output will be constrained to take on no more than $2^l$ different patterns, where $l$ is the number of nodes in the bottleneck layer. Thus if $l$ is small, some sort of 'generalization' must occur even if the number of weights is large.
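The $2^l$ counting argument is easy to check empirically. In this toy sketch (ours; the sizes and random weights are arbitrary), the hidden units are hard-thresholded to emulate saturation, and however the weights are drawn, all 256 possible binary inputs collapse onto at most $2^3 = 8$ hidden codes:

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(size=(3, 8))          # 8 inputs -> bottleneck of l = 3 units

codes = set()
for x in itertools.product([0, 1], repeat=8):      # all 256 binary inputs
    h = (W @ np.array(x) > 0).astype(int)          # saturated hidden values
    codes.add(tuple(h))
print(len(codes))    # at most 2**3 = 8 distinct patterns reach the output
```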
One plausible reason for the success of back propagation in adequately solving tasks, in spite of the fact that it finds only local minima, is its ability to vary a large number of parameters. This freedom may allow back propagation to escape from many putative traps and to find an acceptable solution.

A good expert system, say for medical diagnosis, should not only give a diagnosis based on the available information, but should be able to suggest, in questionable cases, which lab tests might be performed to clarify matters. Actually back propagation inherently has such a capability. Back propagation involves calculation of $\partial \log(p)/\partial w_{ij}$. This information allows one to compute immediately $\partial \log(p)/\partial s_j$. Those input nodes for which this partial derivative is large correspond to important experiments.

In conclusion, we propose that back propagation can be generalized, put on a satisfactory conceptual footing, and very likely made more efficient, by defining the values of the output and input neurons as probabilities, and replacing the 'Error' by the log-likelihood.

*2 Alternatively when necessary this can be enforced by adding an energy term to the log-likelihood to constrain the parameter variation so that the neuronal values are near either 1 or 0.

Acknowledgement: E. B. Baum was supported in part by DARPA through arrangement with NASA and by NSF grant DMB-840649, 802. F. Wilczek was supported in part by NSF grant PHY82-17853.

References
(1) Werbos, P., "Beyond Regression: New Tools for Prediction and Analysis in the Behavioral Sciences", Harvard University Dissertation (1974)
(2) Parker, D. B., "Learning Logic", MIT Tech Report TR-47, Center for Computational Research in Economics and Management Science, MIT, 1985
(3) Le Cun, Y., Proceedings of Cognitiva 85, p. 599-604, Paris (1985)
(4) Rumelhart, D. E., Hinton, G. E., Williams, R. J., "Learning Internal Representations by Error Propagation", in "Parallel Distributed Processing", vol. 1, eds. Rumelhart, D. E., McClelland, J. L., MIT Press, Cambridge MA (1986)
(5) Sejnowski, T. J., Rosenberg, C. R., Complex Systems, v. 1, pp. 145-168 (1987)
(6) LeCun, Y., Address at 1987 Snowbird Conference on Neural Networks
(7) Gorman, P., Sejnowski, T. J., "Learned Classification of Sonar Targets Using a Massively Parallel Network", in "Workshop on Neural Network Devices and Applications", JPL D-4406 (1987), pp. 224-237
(8) Rosenblatt, F., "Principles of Neurodynamics: Perceptrons and the Theory of Brain Mechanisms", Spartan Books, Washington DC (1962)
(9) Widrow, B., Hoff, M. E., 1960 IRE WESCON Conv. Record, Part 4, 96-104 (1960)
(10) Duda, R. O., Hart, P. E., "Pattern Classification and Scene Analysis", John Wiley and Sons, N.Y. (1973)
(11) Guiasu, S., "Information Theory with Applications", McGraw Hill, NY (1977)
(12) Baum, E. B., "Generalizing Back Propagation to Computation", in "Neural Networks for Computing", AIP Conf. Proc. 151, Snowbird UT (1986), pp. 47-53
(13) Anderson, C. H., Abrahams, E., "The Bayes Connection", Proceedings of the IEEE International Conference on Neural Networks, San Diego (1987)
A 'Neural' Network that Learns to Play Backgammon

G. Tesauro
Center for Complex Systems Research, University of Illinois at Urbana-Champaign, 508 S. Sixth St., Champaign, IL 61820

T. J. Sejnowski
Biophysics Dept., Johns Hopkins University, Baltimore, MD 21218

ABSTRACT

We describe a class of connectionist networks that have learned to play backgammon at an intermediate-to-advanced level. The networks were trained by a supervised learning procedure on a large set of sample positions evaluated by a human expert. In actual match play against humans and conventional computer programs, the networks demonstrate substantial ability to generalize on the basis of expert knowledge. Our study touches on some of the most important issues in network learning theory, including the development of efficient coding schemes and training procedures, scaling, generalization, the use of real-valued inputs and outputs, and techniques for escaping from local minima. Practical applications in games and other domains are also discussed.

INTRODUCTION

A potentially quite useful testing ground for studying issues of knowledge representation and learning in networks can be found in the domain of game playing. Board games such as chess, go, backgammon, and Othello entail considerable sophistication and complexity at the advanced level, and mastery of expert concepts and strategies often takes years of intense study and practice for humans. However, the complexities in board games are embedded in relatively "clean" structured tasks with well-defined rules of play, and well-defined criteria for success and failure. This makes them amenable to automated play, and in fact most of these games have been extensively studied with conventional computer science techniques. Thus, direct comparisons of the results of network learning can be made with more conventional approaches.

In this paper, we describe an application of network learning to the game of backgammon. Backgammon is a difficult board game which appears to be well-suited to neural networks, because the way in which moves are selected is primarily on the basis of pattern-recognition or "judgemental" reasoning, as opposed to explicit "look-ahead" or tree-search computations. This is due to the probabilistic dice rolls in backgammon, which greatly expand the branching factor at each ply in the search (to over 400 in typical positions).

Our learning procedure is a supervised one 1 that requires a database of positions and moves that have been evaluated by an expert "teacher." In contrast, in an unsupervised procedure 2-4 learning would be based on the consequences of a given move (e.g., whether it led to a won or lost position), and explicit teacher instructions would not be required. However, unsupervised learning procedures thus far have been much less efficient at reaching high levels of performance than supervised learning procedures. In part, this advantage of supervised learning can be traced to the higher quantity and quality of information available from the teacher.

Studying a problem of the scale and complexity of backgammon leads one to confront important general issues in network learning. Amongst the most important are scaling and generalization. Most of the problems that have been examined with connectionist learning algorithms are relatively small scale and it is not known how well they will perform on much larger problems.
Generalization is a key issue in learning to play backgammon since it is estimated that there are 10^20 possible board positions, which is far in excess of the number of examples that can be provided during training. In this respect our study is the most severe test of generalization in any connectionist network to date. We have also identified in this study a novel set of special techniques for training the network which were necessary to achieve good performance. A training set based on naturally occurring or random examples was not sufficient to bring the network to an advanced level of performance. Intelligent data-base design was necessary. Performance also improved when noise was added to the training procedure under some circumstances. Perhaps the most important factor in the success of the network was the method of encoding the input information. The best performance was achieved when the raw input information was encoded in a conceptually significant way, and a certain number of pre-computed features were added to the raw information. These lessons may also be useful when connectionist learning algorithms are applied to other difficult large-scale problems.

NETWORK AND DATA BASE SET-UP

Our network is trained to select moves (i.e. to produce a real-valued score for any given move), rather than to generate them. This avoids the difficulties of having to teach the network the concept of move legality. Instead, we envision our network operating in tandem with a preprocessor which would take the board position and roll as input, and produce all legal moves as output. The network would be trained to score each move, and the system would choose the move with the highest network score. Furthermore, the network is trained to produce relative scores for each move, rather than an absolute evaluation of each final position. This approach would have greater sensitivity in distinguishing between close alternatives, and corresponds more closely to the way humans actually evaluate moves.

The current data base contains a total of 3202 board positions, taken from various sources 5. For each position there is a dice roll and a set of legal moves of that roll from that position. The moves receive commentary from a human expert in the form of a relative score in the range [-100, +100], with +100 representing the best possible move and -100 representing the worst possible move. One of us (G.T.) is a strong backgammon player, and played the role of human expert in entering these scores. Most of the moves in the data base were not scored, because it is not feasible for a human expert to comment on all possible moves. (The handling of these unscored lines of data in the training procedure will be discussed in the following section.)

An important result of our study is that in order to achieve the best performance, the data base of examples must be intelligently designed, rather than haphazardly accumulated. If one simply accumulates positions which occur in actual game play, for example, one will find that certain principles of play will appear over and over again in these positions, while other important principles may be used only rarely. This causes problems for the network, as it tends to "overlearn" the commonly used principles, and not learn at all the rarely used principles.
Hence it is necessary to have both an intelligent selection mechanism to reduce the number of over-represented situations, and an intelligent design mechanism to enhance the number of examples which illustrate under-represented situations. This process is described in more detail elsewhere 5.

We use a deterministic, feed-forward network with an input layer, an output layer, and either one or two layers of hidden units, with full connectivity between adjacent layers. (We have tried a number of experiments with restricted receptive fields, and generally have not found them to be useful.) Since the desired output of the network is a single real value, only one output unit is required.

The coding of the input patterns is probably the most difficult and most important design issue. In its current configuration the input layer contains 459 input units. A location-based representation scheme is used, in which a certain number of input units are assigned to each of the 26 locations (24 basic plus White and Black bar) on the board. The input is inverted if necessary so that the network always sees a problem in which White is to play. An example of the coding scheme used until very recently is shown in Fig. 1. This is essentially a unary encoding of the number of men at each board location, with a few exceptions as indicated in the diagram. This representation scheme worked fairly well, but had one peculiar problem in that after training, the network tended to prefer piling large numbers of men on certain points, in particular White's 5 point (the 20 point in the 1-24 numbering scheme). Fig. 2 illustrates an example of this peculiar behavior. In this position White is to play 5-1. Most humans would play 4-5, 4-9 in this position; however, the network chose the move 4-9, 19-20. This is actually a bad move, because it reduces White's chances of making further points in his inner board. The fault lies not with the data base used to train the network, but rather with the representation scheme used. In Fig. 1a, notice that unit 12 is turned on whenever the final position is a point and the number of men is different from the initial position. For the 20 point in particular, this unit will develop strong excitatory weights due to cases in which the initial position is not a point (i.e., the move makes the point). The 20 point is such a valuable point to make that the excitation produced by turning unit 12 on might overwhelm the inhibition produced by the poor distribution of builders.

[Figure 1: Two schemes used to encode the raw position information in the network's input. Illustrated in each case is the encoding of two White men present before the move, and three White men present after the move. (a) An essentially unary coding of the number of men at a particular board location. Units 1-10 encode the initial position, units 11-16 encode the final position if there has been a change from the initial position. Units are turned on in the cases indicated, e.g., unit 1 is turned on if there are 5 or more Black men present, etc. (b) A superior coding scheme with more units used to characterize the type of transition from initial to final position. An up arrow indicates an increase in the number of men, a down arrow indicates a decrease. Units 11-15 have conceptual interpretations: 11 = "clearing," 12 = "slotting," 13 = "breaking," 14 = "making," 15 = "stripping" a point.]
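The flavor of the Fig. 1a scheme can be sketched in code. This is our reading of the figure, not a reproduction of the actual 459-unit layout: the thresholds for each unit and the exact assignment of the final-position units are assumptions, and positive counts stand for White men, negative for Black.

```python
def encode_location(before, after):
    """Roughly unary encoding of one board location, in the spirit of
    Fig. 1a. Units 1-10 fire on the count before the move; units 11-16
    fire on the count after the move, but only if the move changed it.
    Thresholds here are illustrative assumptions."""
    initial = [                     # units 1-10: before the move
        before <= -5, before == -4, before == -3, before <= -2, before == -1,
        before == 1, before >= 2, before == 3, before == 4, before >= 5,
    ]
    changed = after != before
    final = [                       # units 11-16: after the move, if changed
        changed and after == 1,
        changed and after >= 2,     # unit 12: final position is a point
        changed and after == 3,
        changed and after == 4,
        changed and after >= 5,
        changed and after == 0,
    ]
    return [int(u) for u in initial + final]

# Two White men before the move, three after (the example of Fig. 1):
print(encode_location(2, 3))
```

Note how unit 12 in this rendering fires both when a point is newly made and when an existing point merely changes its count, which is precisely the conflation the improved scheme of Fig. 1b removes.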
[Figure 2: A sample position illustrating a defect of the coding scheme in Fig. 1a. White is to play 5-1. With coding scheme (1a), the network prefers 4-9, 19-20. With coding scheme (1b), the network prefers 4-9, 4-5. The graphic display was generated on a Sun Microsystems workstation using the Gammontool program.]

In conceptual terms, humans would say that unit 12 participates in the representation of two different concepts: the concept of making a point, and the concept of changing the number of men occupying a made point. These two concepts are unrelated, and there is no point in representing them with a common input unit. A superior representation scheme in which these concepts are separated is shown in Fig. 1b: in this representation unit 13 is turned on only for moves which make the point. Other moves which change the number of men on an already-made point do not activate unit 13, and thus do not receive any undeserved excitation. With this representation scheme the network no longer tends to pile large numbers of men on certain points, and its overall performance is significantly better.

In addition to this representation of the raw board position, we also utilize a number of input units to represent certain "pre-computed" features of the raw input. The principal goal of this study has been to investigate network learning, rather than simply to obtain high performance, and thus we have resisted the temptation of including sophisticated hand-crafted features in the input encoding. However, we have found that a few simple features are needed in practice to obtain minimal standards of competent play. With only "raw" board information, the order of the desired computation (as defined by Minsky and Papert 6) is probably quite high, and the number of examples needed to learn such a difficult computation might be intractably large. By giving the network "hints" in the form of pre-computed features, this reduces the order of the computation, and thus might make more of the problem learnable in a tractable number of examples.

TRAINING AND TESTING PROCEDURES

To train the network, we have used the standard "back-propagation" learning algorithm 7-9 for modifying the connections in a multilayer feed-forward network. (A detailed discussion of learning parameters, etc., is provided elsewhere 5.) However, our procedure differs from the standard procedure due to the necessity of dealing with the large number of uncommented moves in the data base. One solution would be simply to avoid presenting these moves to the network. However, this would limit the variety of input patterns presented to the network in training, and certain types of inputs probably would be eliminated completely. The alternative procedure which we have adopted is to skip the uncommented moves most of the time (75% for ordinary rolls and 92% for double rolls), and the remainder of the time present the pattern to the network and generate a random teacher signal with a slight negative bias. This makes sense, because if a move has not received comment by the human expert, it is more likely to be a bad move than a good move. The random teacher signal is chosen uniformly from the interval [-65, +35].
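The sampling recipe just described is easy to state in code. This sketch is ours: the data layout (a list of tuples holding the encoded pattern, the expert score or None, and the roll type) is a hypothetical convention, and the back-propagation update itself is omitted.

```python
import random

SKIP_PROB = {"ordinary": 0.75, "double": 0.92}   # skip rates from the text

def training_stream(examples, rng=random.Random(0)):
    """Yield (pattern, teacher_score) pairs: uncommented moves are
    usually skipped; when kept, they receive a noisy teacher score
    drawn uniformly from [-65, +35] (a slight negative bias)."""
    for pattern, expert_score, roll_type in examples:
        if expert_score is None:                      # uncommented move
            if rng.random() < SKIP_PROB[roll_type]:
                continue                              # skip most of the time
            yield pattern, rng.uniform(-65.0, 35.0)   # noisy teacher signal
        else:
            yield pattern, expert_score               # expert commentary
```

The asymmetric interval [-65, +35] builds the "uncommented moves are probably bad" prior directly into the noise, rather than into the network architecture.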
We have used the following four measures to assess the network's performance after it has been trained: (i) performance on the training data, (ii) performance on a set of test data (1000 positions) which was not used to train the network, (iii) performance in actual game play against a conventional computer program (the program Gammontool of Sun Microsystems Inc.), and (iv) performance in game play against a human expert (G.T.). In the first two measures, we define the performance as the fraction of positions in which the network picks the correct move, i.e., those positions for which the move scored highest by the network agrees with the choice of the human expert. In the latter two measures, the performance is defined simply as the fraction of games won, without considering the complications of counting gammons or backgammons.

QUANTITATIVE RESULTS

A summary of our numerical results as measured by performance on the training set and against Gammontool is presented in Table 1. The best network that we have produced so far appears to defeat Gammontool nearly 60% of the time. Using this as a benchmark, we find that the most serious decrease in performance occurs by removing all pre-computed features from the input coding. This produces a network which wins at most about 41% of the time. The next most important effect is the removal of noise from the training procedure; this results in a network which wins 45% of the time. Next in importance is the presence of hidden units; a network without hidden units wins about 50% of the games against Gammontool. In contrast, effects such as varying the exact number of hidden units, the number of layers, or the size of the training set, result in only a few (1-3) percentage point decrease in the number of games won.

Also included in Table 1 is the result of an interesting experiment in which we removed our usual set of pre-computed features and substituted instead the individual terms of the Gammontool evaluation function. We found that the resulting network, after being trained on our expert training set, was able to defeat the Gammontool program by a small margin of 54 to 46 percent. The purpose of this experiment was to provide evidence of the usefulness of network learning as an adjunct to standard AI techniques for hand-crafting evaluation functions. Given a set of features to be used in an evaluation function which have been designed, for example, by interviewing a human expert, the problem remains as to how to "tune" these features, i.e., the relative weightings to associate to each feature, and at an advanced level, the context in which each feature is relevant. Little is known in general about how to approach this problem, and often the human programmer must resort to painstaking trial-and-error tuning by hand. We claim that network learning is a powerful, general-purpose, automated method of approaching this problem, and has the potential to produce a tuning which is superior to those produced by humans, given a data base of sufficiently high quality, and a suitable scheme for encoding the features. The result of our experiment provides evidence to support this claim, although it is not firmly established since we do not have highly accurate statistics, and we do not know how much human effort went into the tuning of the Gammontool evaluation function. More conclusive evidence would be provided if the experiment were repeated with a more sophisticated program such as Berliner's BKG 10, and similar results were obtained.
  Network size      Training cycles   Perf. on test set   Perf. vs. Gammontool   Comments
  (a) 459-24-24-1   20                .540                .59 +/- .03
  (b) 459-24-1      22                .542                .57 +/- .05
  (c) 459-24-1      24                .518                .58 +/- .05            1600 posn. D.B.
  (d) 459-12-1      10                .538                .54 +/- .05
  (e) 410-24-12-1   16                .493                .54 +/- .03            Gammontool features
  (f) 459-1         22                .485                .50 +/- .03            No hidden units
  (g) 459-24-12-1   10                .499                .45 +/- .03            No training noise
  (h) 393-24-12-1   12                .488                .41 +/- .02            No features

Table 1: Summary of performance statistics for various networks. (a) The best network we have produced, containing two layers of hidden units, with 24 units in each layer. (b) A network with only one layer of 24 hidden units. (c) A network with 24 hidden units in a single layer, trained on a training set half the normal size. (d) A network with half the number of hidden units as in (b). (e) A network with features from the Gammontool evaluation function substituted for the normal features. (f) A network without hidden units. (g) A network trained with no noise in the training procedure. (h) A network with only a raw board description as input.

QUALITATIVE RESULTS

Analysis of the weights produced by training a network is an exceedingly difficult problem, which we have only been able to approach qualitatively. In Fig. 3 we present a diagram showing the connection strengths in a network with 651 input units and no hidden units. The figure shows the weights from each input unit to the output unit. (For purposes of illustration, we have shown a coding scheme with more units than normal to explicitly represent the transition from initial to final position.) Since the weights go directly to the output, the corresponding input units can be clearly interpreted as having either an overall excitatory or inhibitory effect on the score produced by the network.

A great deal of columnar structure is apparent in Fig. 3. This indicates that the network has learned that a particular number of men at a given location, or a particular type of transition at a given location, is either good or bad independent of the exact location on the board where it occurs. Furthermore, we can see the importance of each of the pre-computed features in the input coding. The most significant features seem to be the number of points made in the network's inner board, and the total blot exposure.

[Figure 3: A Hinton diagram for a network with 651 input units and no hidden units. Small squares indicate weights from a particular input unit to the output unit. White squares indicate positive weights, and black squares indicate negative weights; the size of a square indicates the magnitude of the weight. The first 24 rows from the bottom up give raw board information. Letting x = number of men before the move and y = number of men after the move, the interpretations of columns are as follows: A: x<=-5; B: x=-4; C: x=-3; D: x<=-2; E: x=-1; F: x=1; G: x>=2; H: x=3; I: x=4; J: x>=5; K: x<1 & y=1; L: x<2 & y>=2; M: x<3 & y=3; N: x<4 & y=4; O: x<y & y>=5; P: x=1 & y=0; Q: x>=2 & y=0; R: x>=2 & y=1; S: x>=3 & y=2; T: x>=4 & y=3; U: x>=5 & y=4; V: x>y & y>=5; W: probability of a White blot at this location being hit (pre-computed feature). The next row encodes the number of men on the White and Black bars. The next 3 rows encode roll information. The remaining rows encode various pre-computed features.]

Much insight into the basis for the network's judgement of various moves has been gained by actually playing games against it.
In fact, one of the most revealing tests of what the network has and has not learned came from a 20-game match played by G.T. against one of the latest generation of networks with 48 hidden units. (A detailed description of the match is given in Ref. 11.) The surprising result of this match was that the network actually won, 11 games to 9. However, a detailed analysis of the moves played by the network during the match indicates that the network was extremely lucky to have won so many games, and could not reasonably be expected to continue to do so well over a large number of games. Out of the 20 games played, there were 11 in which the network did not make any serious mistakes. The network won 6 out of these 11 games, a result which is quite reasonable. However, in 9 of the 20 games, the network made one or more serious (i.e. potentially fatal) "blunders." The seriousness of these mistakes would be equivalent to dropping a piece in chess. Such a mistake is nearly always fatal in chess against a good opponent; however in backgammon there are still chances due to the element of luck involved. In the 9 games in which the network blundered, it did manage to survive and win 5 of them due to the element of luck. (We are assuming that the mistakes made by the human, if any, were only minor mistakes.) It is highly unlikely that this sort of result would be repeated. A much more likely result would be that the network would win only one or two of the games in which it made a serious error. This would put the network's expected performance against expert or near-expert humans at about the 35-40% level. (This has also been confirmed in play against other networks.)

We find that the network does act as if it has picked up many of the global concepts and strategies of advanced play. The network has also learned many important tactical elements of play at the advanced level. As for the specific kinds of mistakes made by the network, we find that they are not at all random, senseless mistakes, but instead fall into clear, well-defined conceptual categories, and furthermore, one can understand the reasons why these categories of mistakes are made. We do not have space here to describe these in detail, and refer the reader instead to Ref. 5.

To summarize, qualitative analysis of the network's play indicates that it has learned many important strategies and tactics of advanced backgammon. This gives the network very good overall performance in typical positions. However, the network's worst case performance leaves a great deal to be desired. The network is capable of making both serious, obvious "blunders," as well as more subtle mistakes, in many different types of positions. Worst case performance is important, because the network must make long sequences of moves throughout the course of a game without any serious mistakes in order to have a reasonable chance of winning against a skilled opponent. The prospects for improving the network's worst case performance appear to be mixed. It seems quite likely that many of the current "blunders" can be fixed with a reasonable number of hand-crafted examples added to the training set. However, many of the subtle mistakes are due to a lack of very sophisticated knowledge, such as the notion of timing. It is difficult to imagine that this kind of knowledge could be imparted to the network in only a few examples.
Probably what is required is either an intractably large number of examples, or a major overhaul in either the pre-computed features or the training paradigm.

DISCUSSION

We have seen from both quantitative and qualitative measures that the network has learned a great deal about the general principles of backgammon play, and has not simply memorized the individual positions in the training set. Quantitatively, the measure of game performance provides a clear indication of the network's ability to generalize, because apart from the first couple of moves at the start of each game, the network must operate entirely on generalization. Qualitatively, one can see after playing several games against the network that there are certain characteristic kinds of positions in which it does well, and other kinds of positions in which it systematically makes well-defined types of mistakes. Due to the network's frequent "blunders," its overall level of play is only intermediate level, although it probably is somewhat better than the average intermediate-level player. Against the intermediate-level program Gammontool, our best network wins almost 60% of the games. However, against a human expert the network would only win about 35-40% of the time. Thus while the network does not play at expert level, it is sufficiently good to give an expert a hard time, and with luck in its favor can actually win a match to a small number of games. Our simple supervised learning approach leaves out some very important sources of
Some of the results of our study have implications beyond backgammon to more general classes of difficult problems. One of the limitations we have found is that substantial human effort is required both in the design of the coding scheme and in the design of the training set. It is not sufficient to use a simple coding scheme and random training patterns, and let the automated network learning procedure take care of everything else. We expect this to be generally true when connectionist learning is applied to difficult problem domains.

On the positive side, we foresee a potential for combining connectionist learning techniques with conventional AI techniques for hand-crafting knowledge to make significant progress in the development of intelligent systems. From the practical point of view, network learning can be viewed as an "enhancer" of traditional techniques, which might produce systems with superior performance. For this particular application, the obvious way to combine the two approaches is in the use of pre-computed features in the input encoding. Any set of hand-crafted features used in a conventional evaluation function could be encoded as discrete or continuous activity levels of input units which represent the current board state along with the units representing the raw information (one possible form of such an encoding is sketched at the end of this section). Given a suitable encoding scheme for these features, and a training set of sufficient size and quality (i.e., the scores in the training set should be better than those of the original evaluation function), it seems possible that the resulting network could outperform the original evaluation function, as evidenced by our experiment with the Gammontool features.

Network learning might also hold promise as a means of achieving the long-sought goal of automated feature discovery (Ref. 2). Our network certainly appears to have learned a great deal of knowledge from the training set which goes far beyond the amount of knowledge that was explicitly encoded in the input features. Some of this knowledge (primarily the lowest-level components) is apparent from the weight diagram when there are no hidden units (Fig. 3). However, much of the network's knowledge remains inaccessible. What is needed now is a means of disentangling the novel features discovered by the network from either the patterns of activity in the hidden units, or from the massive number of connection strengths which characterize the network. This is one of our top priorities for future research, although techniques for such "reverse engineering" of parallel networks are only beginning to be developed (Ref. 12).
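As a hedged sketch of the combined encoding described above, the helper below concatenates raw board units with hand-crafted feature units; the function name, the array sizes, and the example features are hypothetical and chosen purely for illustration.

```python
import numpy as np

def encode_board(raw_board, feature_fns):
    """Hypothetical input encoding combining raw board information with
    hand-crafted features from a conventional evaluation function.

    raw_board   : 1-D array of unit activities representing the raw
                  board state.
    feature_fns : list of functions, each mapping the raw board to a
                  discrete or continuous feature value.
    Returns the concatenated input vector fed to the network.
    """
    features = np.array([fn(raw_board) for fn in feature_fns], dtype=float)
    return np.concatenate([np.asarray(raw_board, dtype=float), features])

# Example with made-up features (illustrative only):
board = np.zeros(459)
pip_count = lambda b: float(b.sum())
blot_count = lambda b: float((b == 1.0).sum())
x = encode_board(board, [pip_count, blot_count])
print(x.shape)   # raw units plus two feature units
```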
Samuel, "Some studies in machine learning using the game of checkers." IBM J. Res. Dev. 3, 210--229 (1959). 3. J. H. Holland, "Escaping brittleness: the possibilities of general-purpose learning algorithms applied to parallel rule-based systems." In: R. S. Michalski et aI. (eds.), Machine learning: an artificial ;ntelligence approach. Vol. II (Los Altos CA: Morgan-Kaufman, 1986). 4. R. S. Sutton, "Learning to predict by the methods of temporal differences," GTE Labs Tech. Repon TR87-509.1 (1987). 5. G. Tesauro and T. J. Sejnowski, "A parallel network that learns to play backgammon." Univ. of Illinois at Urbana-Champaign, Center for Complex Systems Research Technical Repon (1987). 6. M. Minsky and S. Papen, Perceptrons (Cambridge: MIT Press, 1969). 7. D. E. Rumelhart, G. E. Hinton, and R. J. Williams, "Learning representations by backpropagating errors." Nature 323,533--536 (1986). 8. Y. Le Cun, "A learning procedure for asymmetric network." Proceedings o/Cognitiva (Par;s) 85,599--604 (1985). 9. D. B. Parker, "Learning-logic." MIT Center for Computational Research in Economics and Management Science Tech. Repon TR-47 (1985). 10. H. Berliner, "Backgammon computer program beats world champion." Artificial Intelligence 14,205--220 (1980). 11. G. Tesauro, "Neural network defeats creator in backgammon match." Univ. of llIinois at Urbana-Champaign, Center for Complex Systems Research Technical Repon (1987). 12. C. R. Rosenberg, "Revealing the structure of NETtalk's internal representations." Proceedings of the Ninth Annual Conference of the Cognitive Science Society (Hillsdale, NJ: Lawrence Erlbaum Associates, 1987).
Multi-Layer Perceptrons with B-Spline Receptive Field Functions

Stephen H. Lane, Marshall G. Flax, David A. Handelman and Jack J. Gelfand
Human Information Processing Group, Department of Psychology
Princeton University, Princeton, New Jersey 08544

ABSTRACT

Multi-layer perceptrons are often slow to learn nonlinear functions with complex local structure due to the global nature of their function approximations. It is shown that standard multi-layer perceptrons are actually a special case of a more general network formulation that incorporates B-splines into the node computations. This allows novel spline network architectures to be developed that can combine the generalization capabilities and scaling properties of global multi-layer feedforward networks with the computational efficiency and learning speed of local computational paradigms. Simulation results are presented for the well-known spiral problem of Weiland and of Lang and Witbrock to show the effectiveness of the Spline Net approach.

1. INTRODUCTION

Recently, it has been shown that multi-layer feedforward neural networks, such as Multi-Layer Perceptrons (MLPs), are theoretically capable of representing arbitrary mappings, provided that a sufficient number of units are included in the hidden layers (Hornik et al., 1989). Since all network weights are updated with each training exemplar, these networks construct global approximations to multi-input/multi-output function data in a manner analogous to fitting a low-order polynomial through a set of data points. This is illustrated by the cubic polynomial "Global Fit" of the data points in Fig. 1.

Figure 1. Global vs. Local Function Approximation

Consequently, multi-layer perceptrons are capable of generalizing (extrapolating/interpolating) their response to regions of the input space where little or no training data is present, using a quantity of connection weights that typically scales quadratically with the number of hidden nodes. The global nature of the weight updating, however, tends to blur the details of local structures, slows the rate of learning, and makes the accuracy of the resulting function approximation sensitive to the order of presentation of the training data.

It is well known that many sensorimotor structures in the brain are organized using neurons that possess locally-tuned overlapping receptive fields (Hubel and Wiesel, 1962). Several neural network computational paradigms such as CMACs (Cerebellar Model Articulation Controllers) (Albus, 1975) and Radial Basis Functions (RBFs) (Moody and Darken, 1988) have been quite successful representing complex nonlinear functions using this same organizing principle. These networks construct local approximations to multi-input/multi-output function data that are analogous to fitting a least-squares spline through a set of data points using piecewise polynomials or other basis functions. This is illustrated as the cubic spline "Local Fit" in Fig. 1. The main benefits of using local approximation techniques to represent complex nonlinear functions include fast learning and reduced sensitivity to the order of presentation of training data. In many cases, however, in order to represent the function to the desired degree of smoothness, the number of basis functions required to adequately span the input space can scale exponentially with the number of inputs (Lane et al., 1991a,b).
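As a concrete illustration of the "Global Fit" versus "Local Fit" contrast in Fig. 1, here is a minimal Python sketch; the target function, noise level, and smoothing parameter are our own choices for illustration, not the paper's data.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0.0, 1.0, 40))
# A function with a sharp local feature on top of a smooth trend.
y = np.sin(2 * np.pi * x) + 0.5 * np.exp(-200 * (x - 0.7) ** 2)
y += 0.05 * rng.standard_normal(x.size)

# "Global fit": one cubic polynomial; every coefficient depends on
# every data point, so the local bump is blurred away.
poly = np.polyfit(x, y, deg=3)

# "Local fit": a least-squares cubic spline; each polynomial piece is
# controlled mainly by nearby data points, so the bump is preserved.
spline = UnivariateSpline(x, y, k=3, s=0.1)

xs = np.linspace(0.0, 1.0, 5)
print(np.polyval(poly, xs))
print(spline(xs))
```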
The work presented in this paper is part of a larger effort (Lane et al., 1991a) to develop a general neural network formulation that can combine the generalization capabilities and scaling properties of global multi-layer feedforward networks with the computational efficiency and learning speed of local network paradigms. It is shown in the sequel that this can be accomplished by incorporating B-Spline receptive fields into the node connection functions of Multi-Layer Perceptrons.

2. MULTI-LAYER PERCEPTRONS WITH B-SPLINE RECEPTIVE FIELD FUNCTIONS

Standard Multi-Layer Perceptrons (MLPs) can be represented using node equations of the form

$$y_i^L = \sigma\Big( \sum_{j=0}^{\eta_{L-1}} c_{ij}^L(y_j^{L-1}) \Big), \qquad (1)$$

where $\eta_L$ is the number of nodes in layer $L$ and the $c_{ij}^L$ are linear connection functions between nodes in layers $L$ and $L-1$ such that

$$c_{ij}^L(y_j^{L-1}) = w_{ij}^L \, y_j^{L-1}. \qquad (2)$$

Here $\sigma(\cdot)$ is the standard sigmoidal nonlinearity, $y_j^{L-1}$ is the output of a node in layer $L-1$, and the $w_{ij}^L$ are adjustable network weights. $y_0^{L-1} = 1$, so the connection function $c_{i0}^L$ corresponds to a threshold input. Some typical linear connection functions are shown in Fig. 2.

Figure 2. Typical MLP Node Connection Functions

Incorporating B-Spline receptive field functions (Lane et al., 1991a) into the node computations of eq. (1) allows more general connection functions (e.g. piecewise linear, quadratic, cubic, etc.) to be formulated. The corresponding B-Spline MLP (Spline Net) is derived by redefining the connection functions of eq. (2) such that

$$c_{ij}^L(y_j^{L-1}) = \sum_k w_{ijk}^L \, B_{nk}^G(y_j^{L-1}). \qquad (3)$$

This enables the construction of a more general neural network architecture that has node equations of the form

$$y_i^L = \sigma\Big( \sum_{j=0}^{\eta_{L-1}} \sum_k w_{ijk}^L \, B_{nk}^G(y_j^{L-1}) \Big). \qquad (4)$$

The $B_{nk}^G(y_j^{L-1})$ are B-spline receptive field functions (Lane et al., 1989, 1991a) of order $n$ and support $G$, while the $w_{ijk}^L$ are the spline network weights. The order, $n$, corresponds to the number of coefficients in the polynomial pieces. For example, linear splines are of order n=2, whereas cubic splines are of order n=4. The advantage of the more general B-Spline connection functions of eq. (3) is that varying degrees of "locality" can be added to the network computations, since the network weights are now the $w_{ijk}^L$. The weights are modified by back-propagating the output error only to the $G$ weights in each connection function associated with active (i.e. nonzero) receptive field functions, which are activated based on the value of $y_j^{L-1}$. The $L$th-layer weights are updated using the method of steepest-descent learning such that

$$w_{ijk}^L \leftarrow w_{ijk}^L + \beta \, e_i^L \, y_i^L (1 - y_i^L) \, B_{nk}^G(y_j^{L-1}), \qquad (5)$$

where $e_i^L$ is the output error back-propagated to the $i$th node in layer $L$ and $\beta$ is the learning rate (Lane et al., 1991a).

In the more general Spline Net formulation of eqs. (3-5), each node input has P+G-1 receptive fields and P+G-1 weights associated with it, but only G are active at any one time. P determines the number of partitions in the input space of the connection functions. Standard MLP networks are a degenerate case of the Spline Net architecture, as they can be realized with B-Spline receptive field functions of order n=2, with P=1 and G=2. Due to the connectivity of the B-Spline receptive field functions, for the case when P>1, the resulting network architecture corresponds to multiply-connected MLPs, where any given MLP is active within only one hypercube in the input space, but has weights that are shared with MLPs on the neighboring hypercubes.
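To make eqs. (3) and (5) concrete, here is a minimal Python sketch of a single Spline Net connection function with linear B-splines (n=2, G=2) on P uniform partitions of [0, 1]. The class name and array layout are our own, and the update assumes the sigmoid-derivative error term of eq. (5) is supplied by the caller.

```python
import numpy as np

class SplineConnection:
    """One connection function c(y) = sum_k w_k B_k(y) built from linear
    B-splines (n=2, G=2) on P uniform partitions of [0, 1]; a sketch of
    eq. (3). Only the G=2 weights whose receptive fields are active for
    a given input are ever read or updated.
    """

    def __init__(self, P, rng=None):
        self.P = P
        rng = rng or np.random.default_rng(0)
        # For linear splines on P partitions there are P+G-1 = P+1
        # weights, one per knot.
        self.w = 0.1 * rng.standard_normal(P + 1)

    def _active(self, y):
        # Partition index containing y and the two active hat-function
        # activations; for linear B-splines they sum to 1.
        j = min(int(y * self.P), self.P - 1)
        frac = y * self.P - j
        return j, 1.0 - frac, frac

    def forward(self, y):
        j, b0, b1 = self._active(y)
        return self.w[j] * b0 + self.w[j + 1] * b1

    def update(self, y, delta, beta):
        # Eq. (5): only the G active weights receive the back-propagated
        # error term delta = e * out * (1 - out) from the receiving node.
        j, b0, b1 = self._active(y)
        self.w[j] += beta * delta * b0
        self.w[j + 1] += beta * delta * b1
```

With P=1 this reduces to a single linear segment, recovering the standard MLP connection function noted above as the degenerate case.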
The amount of computation required in each layer of a Spline Net during both learning and function approximation is proportional to G, and independent of P. Formulating the connection functions of eq. (3) with linear (n=2) B-Splines allows connection functions such as those shown in Fig. 3 to be learned.

Figure 3. Spline Net Connection Functions Using Linear B-Splines (n=2)

The connection functions shown in Fig. 3 have P=4 partitions (5 knots) on the interval $y_j^{L-1} \in [0,1]$. The number of input partitions, P, determines the degree of locality of the resulting function approximation, since the local shape of the connection function is determined by the current node-input activation interval. Networks constructed using the Spline Net formulation are reminiscent of the form and function of Kolmogorov-Lorenz networks (Barron and Barron, 1988). A neurobiological interpretation of a Spline Net is that it is composed of neurons that have dendritic branches with synapses that operate as a function of the level of activation at a given node or network input. This is shown in the network architecture of Fig. 4b, where the standard three-layer MLP network of Fig. 4a has been redrawn using B-Spline receptive field functions with n=2, P=4 and G=2.

Figure 4. Three-Layer Spline Net Architecture, n=2, P=4, G=2

The horizontal arrows projecting from the right of each network node in Fig. 4b represent the node outputs. The overlapping triangles on the node output represent the receptive field functions of neurons in the next layer. These receptive field functions are summed with weighted connections in the dendritic branches to form the inputs to the next network layer. In the architecture shown in Fig. 4b, only two receptive fields are active for any given value of a node output. Therefore, for this single hidden-layer network architecture, given any value for the inputs $(x_1, x_2)$, at most

$$N_w = G\,\eta\,(s+1) = 30 \qquad (6)$$

weights will be active, where $s$ is the number of network inputs and $\eta$ is the number of nodes in the hidden layer, which for this case is $2s+1 = 5$.

3. SIMULATION RESULTS

In order to evaluate the impact of local computation on MLP performance, the well-known spiral problem of Weiland and of Lang and Witbrock (1988) was chosen as a benchmark. Simulations were conducted using a Spline Net architecture having one hidden layer with 5 hidden nodes and linear B-Splines with support G=2 (Fig. 4). All trials used the "vanilla" back-prop learning rule of eq. (5) with β = 1/(2P). The connection-function weights were initialized in each node such that the resulting connection functions were continuous linear functions with arbitrary slope. From previous experience (Lane et al., 1989), it was known that the number of receptive field partitions can drastically affect network learning and performance. Therefore, the connection-function partitions were bifurcated during training to see the effect on network generalization capability and learning speed. The bifurcation consisted of splitting every receptive field in half after increments of 100K (100,000) training points, each time doubling the number of connection-function partitions and weights in the network nodes. A more adaptive approach would monitor the slope of the learning curve to determine when to bifurcate. New weights were initialized such that the connection functions before and after the bifurcation retained the same shape, as in the sketch below.
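A minimal sketch of that bifurcation step follows, assuming linear B-splines whose weights coincide with the knot values, so that inserting midpoints doubles P without changing the learned function; the helper name is our own.

```python
import numpy as np

def bifurcate(weights):
    """Double the number of partitions of a linear-spline connection
    function while preserving its current shape. For linear B-splines
    the weights are the knot values, so inserting the midpoint between
    every pair of adjacent knots leaves the piecewise-linear function
    unchanged while doubling the trainable degrees of freedom.
    """
    w = np.asarray(weights, dtype=float)   # P+1 knot values
    mids = 0.5 * (w[:-1] + w[1:])          # P new midpoint knots
    out = np.empty(2 * len(w) - 1)         # 2P+1 knot values
    out[0::2] = w
    out[1::2] = mids
    return out

w = np.array([0.0, 1.0, 0.5])              # P=2 partitions
print(bifurcate(w))                        # P=4, same function shape
```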
All simulation results presented in Figs. 5-12 were generated using 800K training points. The left-most column of Fig. 5 represents the two learned connection functions that lead to each hidden node depicted in Fig. 4. The elements in the second column are the hidden node response to excitation over the unit square, while the plots in the third column are the connection functions from the hidden layer to the output node. The fourth column shows the hidden node outputs after being passed through their respective connection functions. The network output shown in the fifth column is the algebraic sum of the hidden node responses shown in the fourth column.

The Spline Net was initialized as a standard MLP with P=1. Figure 6 shows the evolution of the two connection functions to the third hidden node in Fig. 4 after every 100K training points. Around 400K (P=8) the connection functions start to take on a characteristic shape. For P>8, the creation of additional partitions has little effect on the shape of the connection functions. Figure 7 shows the associated learning curve, while Fig. 8 is an enlarged version of the network output. These results indicate that the bifurcation schedule introduces additional degrees of freedom (weights) to the network in such a way as to carve out coarse global features first, then incrementally capture finer and finer localized details later. This is in contrast to the results shown in Figs. 9 and 10, where the training (using the same 800K points as in Figs. 7 and 8) was begun on a network having P=128 initial partitions.

Figure 11 shows the Spline Net output after 800K training iterations using 112 discrete points located on the two spirals. Lang and Witbrock (1988) state that similar spiral results could only be obtained using an MLP network with 3 hidden layers (including jump connections) and 50,000,000 training iterations. The use of a Spline Net with a bifurcation schedule enabled the learning to be sped up by almost two orders of magnitude, indicating there is a significant performance advantage in trading off number of hidden layers for node complexity.

Figure 5. Spiral Learning with Bifurcation Schedule (columns: hidden node connection functions; hidden node response; output node connection functions; hidden node outputs after connection functions; output node response)
Figure 6. Evolution of Connection Functions to Third Hidden Node (P=1 through P=128)
Figure 7. Learning Curve with Bifurcation Schedule (Mean Square Error vs. Training Iteration)
Figure 8. Output Node Response with Bifurcation
Figure 9. Learning Curve without Bifurcation Schedule (P=128; Mean Square Error vs. Training Iteration)
Figure 10. Output Node Response
Figure 11. Learning Curve with Bifurcation Schedule (Mean Square Error vs. Training Iteration, 112 Discrete Points)
Figure 12. Output Node Response with Bifurcation (112 Discrete Points)

4. CONCLUSIONS

It was shown that the introduction of B-Splines into the node connection functions of Multi-Layer Perceptrons allows more general neural network architectures to be developed.
The resulting Spline Net architecture combines the fast learning and computational efficiency of strictly local neural network approaches with the scaling and generalization properties of the more established global MLP approach. Similarity to Kolmogorov-Lorenz networks can be used to suggest an initial number of hidden-layer nodes. The number of node connection-function partitions chosen affects both network generalization capability and learning performance. It was shown that use of a bifurcation schedule to determine the number of node input partitions speeds learning and improves network generalization. Results indicate that Spline Nets solve difficult learning problems by trading off number of hidden layers for node complexity.

Acknowledgements

Stephen H. Lane and David A. Handelman are also employed by Robicon Systems Inc., Princeton, NJ. This research has been supported through a grant from the James S. McDonnell Foundation and a contract from the DARPA Neural Network Program.

References

Albus, J. (1975) "A New Approach to Manipulator Control: The Cerebellar Model Articulation Controller (CMAC)," J. Dyn. Sys. Meas. Control, vol. 97, pp. 270-277.
Barron, A.R. and Barron, R.L. (1988) "Statistical Learning Networks: A Unifying View," Proc. 20th Symp. on the Interface - Computing and Statistics, pp. 192-203.
Hornik, K., Stinchcombe, M. and White, H. (1989) "Multi-layer Feedforward Networks are Universal Approximators," Neural Networks, vol. 2, pp. 359-366.
Hubel, D. and Wiesel, T.N. (1962) "Receptive Fields, Binocular Interaction and Functional Architecture in Cat's Visual Cortex," J. Physiology, vol. 160, no. 106.
Lane, S.H., Handelman, D.A. and Gelfand, J.J. (1989) "Development of Adaptive B-Splines Using CMAC Neural Networks," 1989 IJCNN, Washington, DC, June 1989.
Lane, S.H., Flax, M.B., Handelman, D.A. and Gelfand, J.J. (1991a) "Function Approximation in Multi-Layer Neural Networks with B-Spline Receptive Field Functions," Princeton University Cognitive Science Lab Report No. 42, in prep. for J. of Int'l Neural Network Society.
Lane, S.H., Handelman, D.A. and Gelfand, J.J. (1991b) "Higher-Order CMAC Neural Networks - Theory and Practice," to appear, Amer. Contr. Conf., Boston, MA, June 1991.
Lang, K.J. and Witbrock, M.J. (1988) "Learning to Tell Two Spirals Apart," Proc. 1988 Connectionist Models Summer School, D. Touretzky, G. Hinton, and T. Sejnowski, Eds.
Moody, J. and Darken, C. (1988) "Learning with Localized Receptive Fields," Proc. 1988 Connectionist Models Summer School, D. Touretzky, G. Hinton, and T. Sejnowski, Eds.
On the Relation Between Low Density Separation, Spectral Clustering and Graph Cuts

Hariharan Narayanan, Department of Computer Science, University of Chicago, Chicago IL 60637, [email protected]
Mikhail Belkin, Department of Computer Science and Engineering, The Ohio State University, Columbus, OH 43210, [email protected]
Partha Niyogi, Department of Computer Science, University of Chicago, Chicago IL 60637, [email protected]

Abstract

One of the intuitions underlying many graph-based methods for clustering and semi-supervised learning is that class or cluster boundaries pass through areas of low probability density. In this paper we provide some formal analysis of that notion for a probability distribution. We introduce a notion of weighted boundary volume, which measures the length of the class/cluster boundary weighted by the density of the underlying probability distribution. We show that the sizes of the cuts of certain commonly used data adjacency graphs converge to this continuous weighted volume of the boundary.

keywords: Clustering, Semi-Supervised Learning

1 Introduction

Consider the probability distribution with density p(x) depicted in Fig. 1, where darker color denotes higher probability density. Asked to cluster this probability distribution, we would probably separate it into two roughly Gaussian bumps as shown in the left panel. The same intuition applies to semi-supervised learning. Asked to point out more likely groups of data of the same type, we would be inclined to believe that these two bumps contain data points with the same labels. On the other hand, the class boundary shown in the right panel seems rather less likely. One way to state this basic intuition is the Low Density Separation assumption [5], saying that the class/cluster boundary tends to pass through regions of low density.

In this paper we propose a formal measure of the complexity of the boundary, which intuitively corresponds to the Low Density Separation assumption. We will show that, given a class boundary, this measure can be computed from a finite sample from the probability distribution. Moreover, we show this is done by computing the size of a cut for a partition of a certain standard adjacency graph, defined on that sample, and point out some interesting connections to spectral clustering.

To fix our intuitions, let us consider the question of what makes the cut in the left panel more intuitively acceptable than the cut in the right.

Figure 1: A likely cut and a less likely cut.

Two features of the left cut make it more pleasing: the cut is shorter in length and it passes through a low-density area. Note that a very jagged cut through a low-density area or a short cut through the middle of a high-density bump would be unsatisfactory. It therefore appears reasonable to take the length of the cut as a measure of its complexity, but to weight it depending on the density of the probability distribution p through which it passes. In other words, we propose the weighted length of the boundary, represented by the contour integral $\int_S p(s)\,ds$ along the boundary cut $S$, as a measure of the complexity of a cut. It is clear that the boundary in the left panel has a considerably lower weighted length than the boundary in the right panel of our Fig. 1.

To formalize this notion further, consider a (marginal) probability distribution with density p(x) supported on some domain or manifold M. This domain is partitioned into two disjoint clusters/parts.
Assuming that the boundary S is a smooth hypersurface, we define the weighted volume of the cut to be $\int_S p(s)\,ds$. Note that, just as in the example above, the integral is taken over the surface of the boundary. We will show how this quantity can be approximated given empirical data and establish connections with some popular graph-based methods. When p and S are known explicitly, the weighted volume can also be computed by direct quadrature, as in the sketch below.
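A minimal quadrature sketch, assuming a known density p and an explicit parameterization of the boundary curve in the plane; both the helper name and the example are our own.

```python
import numpy as np

def weighted_boundary_volume(p, curve, n=2000):
    """Estimate int_S p(s) ds for a boundary curve in the plane by the
    midpoint rule: sample the curve, form arclength elements ds, and
    accumulate p times ds.

    p     : density function p(x, y)
    curve : map from a parameter in [0, 1] to a boundary point (x, y)
    """
    ts = np.linspace(0.0, 1.0, n)
    pts = np.array([curve(u) for u in ts])
    ds = np.linalg.norm(np.diff(pts, axis=0), axis=1)
    mid = 0.5 * (pts[:-1] + pts[1:])
    return float(np.sum(p(mid[:, 0], mid[:, 1]) * ds))

# Example: uniform density on the unit square, vertical cut at x = 0.5.
p_uniform = lambda x, y: np.ones_like(x)
vertical_cut = lambda u: (0.5, u)
print(weighted_boundary_volume(p_uniform, vertical_cut))  # ~1.0
```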
2 Connections and related work

2.1 Spectral Clustering

Over the last two decades there has been considerable interest in various spectral clustering techniques (see, e.g., [6] for an overview). The idea of spectral clustering can be expressed very simply. Given a graph, we would often like to construct a balanced partitioning of the vertex set, i.e. a partitioning which minimizes the number (or total weight) of edges across the cut. This is generally an NP-hard optimization problem. It turns out, however, that a simple real-valued relaxation can be used to reduce it to standard linear algebra, typically to finding eigenvectors of a certain graph Laplacian. We note that the quality of a partition is usually measured in terms of the corresponding cut size. A critical question, when this notion is applied to general-purpose clustering in the context of machine learning, is how to construct the graph given the data points. A typical choice here is Gaussian weights (e.g., [14]). To summarize, a graph is obtained from a point cloud, using Gaussian or other weights, and partitioned using spectral clustering or a different algorithm, which attempts to approximate the smallest (balanced) cut. We note that while the intuition is that spectral clustering is an approximation to the minimum cut, and is closely related to random walks and diffusions on graphs and the underlying probability distributions ([13, 12]), existing results on the convergence of spectral clustering ([11]) do not provide a formal interpretation of the limiting partition or connect it to the size of the resulting cut.

2.2 Graph-based semi-supervised learning

Similarly to spectral clustering, graph-based semi-supervised learning constructs a graph from the data. In contrast to clustering, however, some of the data is labeled. The problem is typically either to label the unlabeled points (transduction) or, more generally, to build a classifier defined on the whole space. This may be done by trying to find the minimum cut, which respects the labels of the data directly ([3]), or by using the graph Laplacian as a penalty functional (e.g., [15, 1]).

Figure 2: Curves of small and high condition number respectively

One of the important intuitions of semi-supervised learning is the cluster assumption (e.g., [4]) or, more specifically, the low density separation assumption suggested in [5], which states that the class boundary passes through a low-density region. We argue that this intuition needs to be slightly modified by noting that cutting through a high-density region may be acceptable as long as the length of the cut is very short. For example, imagine two high-density round clusters connected by a very thin high-density thread. Cutting the thread is appropriate as long as the width of the thread is much smaller than the radii of the clusters.

2.3 Convergence of Manifold Laplacians

Another closely related line of research is the connections between point-cloud graph Laplacians and Laplace-Beltrami operators on manifolds, which have been explored recently in [9, 2, 10]. A typical result in that setting shows that, for a fixed function f and points sampled from a probability distribution on a manifold or a domain, the graph Laplacian applied to f converges to the manifold Laplace-Beltrami operator $\Delta_M f$. We note that the results of those papers cannot be directly applied in our situation, as for us f is the indicator function of a subset and is not differentiable. Even more importantly, this paper establishes an explicit connection between the point-cloud Laplacian applied to such characteristic functions (weighted graph cuts) and a geometric quantity, the weighted volume of the cut boundary. This geometric connection does not easily follow from the results of those papers, and the techniques used in the proof of our Theorem 3 are significantly different.

3 Summary of the Main Results

Let p be a probability density function on a domain $M \subset \mathbb{R}^d$. Let S be a smooth hypersurface that separates M into two parts, $S_1$ and $S_2$. The smoothness of S will be quantified by a condition number $1/\tau$, where $\tau$ is the radius of the largest ball that can be placed tangent to the manifold at any point while intersecting the manifold at only one point; it bounds the curvature of the manifold.

Definition 1 Let $K_t(x, y)$ be the heat kernel in $\mathbb{R}^d$,
$$K_t(x, y) := \frac{1}{(4\pi t)^{d/2}}\, e^{-\|x-y\|^2/4t},$$
and let $M_t := K_t(x, x) = \frac{1}{(4\pi t)^{d/2}}$.

Let $X := \{x_1, \ldots, x_N\}$ be a set of N points chosen independently at random from p. Consider the complete graph whose vertices are associated with the points in X, where the weight of the edge between $x_i$ and $x_j$, $i \neq j$, is $W_{ij} = K_t(x_i, x_j)$, and let W be the weight matrix. Let $X_1 = X \cap S_1$ and $X_2 = X \cap S_2$ be the data points which land in $S_1$ and $S_2$ respectively. Let D be the diagonal matrix whose entries are the row sums of W (the degrees of the corresponding vertices),
$$D_{ii} = \sum_j W_{ij}.$$
The normalized Laplacian associated to the data X (and parameter t) is the matrix
$$L(t, X) := I - D^{-1/2} W D^{-1/2}.$$
Let $f = (f_1, \ldots, f_N)$ be the indicator vector for $X_1$:
$$f_i = \begin{cases} 1 & \text{if } x_i \in X_1, \\ 0 & \text{otherwise.} \end{cases}$$

There are two quantities of interest:

1. $\int_S p(s)\,ds$, which measures the quality of the partition S in accordance with the weighted volume of the boundary.
2. $f^T L f$, which measures the quality of the empirical partition in terms of its cut size.

Our main theorem shows that, after an appropriate scaling, the empirical cut size converges to the volume of the boundary.

Theorem 1 Let the number of points $|X| = N$ tend to infinity, and let $\{t_N\}$ be a sequence of values of t tending to zero such that $t_N > \frac{1}{N^{1/(2d+2)}}$. Then, with probability 1,
$$\lim_{N \to \infty} \frac{\sqrt{\pi}}{N\sqrt{t_N}}\, f^T L(t_N, X) f = \int_S p(s)\,ds.$$
Further, for any $\delta \in (0, 1)$ and any $\alpha \in (0, 1/2)$, there exist a positive constant C and an integer $N_0$ (depending on $\delta$, $\alpha$ and certain generic invariants of p and S) such that with probability $1 - \delta$, for all $N > N_0$,
$$\left| \frac{\sqrt{\pi}}{N\sqrt{t_N}}\, f^T L(t_N, X) f - \int_S p(s)\,ds \right| < C\, t_N^{\alpha}.$$

This theorem is proved by first relating the empirical quantity $\frac{\sqrt{\pi}}{N\sqrt{t_N}} f^T L(t_N, X) f$ to a heat flow across the relevant cut (on the continuous domain), and then relating the heat flow to the measure of the cut. In order to state these results, we need the following notation.

Definition 2 Let
$$\varphi_t(x) = \frac{p(x)}{\sqrt{\int_M K_t(x, z)\, p(z)\,dz}}.$$
Let
$$\lambda(t, X) := \frac{\sqrt{\pi}}{N\sqrt{t}}\, f^T L(t, X) f
\qquad \text{and} \qquad
\mu(t) := \sqrt{\frac{\pi}{t}} \int_{S_1} \int_{S_2} K_t(x, y)\, \varphi_t(x)\, \varphi_t(y)\, dx\, dy.$$
Where t and X are clear from context, we abbreviate $\lambda(t, X)$ to $\lambda$ and $\mu(t)$ to $\mu$.
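The following Python sketch assembles these definitions into the scaled empirical cut $\lambda(t,X)$; the function name, the dense-matrix implementation, and the toy uniform-density check are our own. For the unit square cut by the line x = 0.5, the weighted boundary volume is exactly 1, so the printed values should drift toward 1 as N grows (edge effects of the finite domain and the slow rate make the agreement only approximate).

```python
import numpy as np

def scaled_cut(X1, X2, t):
    """sqrt(pi)/(N*sqrt(t)) * f^T L(t,X) f for the partition X = X1 u X2,
    with f the indicator vector of X1 and L the normalized graph
    Laplacian built from the heat kernel K_t (see Definitions 1 and 2).
    """
    X = np.vstack([X1, X2])
    N, d = X.shape
    sq = np.sum(X ** 2, axis=1)
    D2 = np.maximum(sq[:, None] + sq[None, :] - 2.0 * (X @ X.T), 0.0)
    W = np.exp(-D2 / (4.0 * t)) / (4.0 * np.pi * t) ** (d / 2.0)
    np.fill_diagonal(W, 0.0)                 # edge weights only for i != j
    d_inv_sqrt = 1.0 / np.sqrt(W.sum(axis=1))
    L = np.eye(N) - d_inv_sqrt[:, None] * W * d_inv_sqrt[None, :]
    f = np.zeros(N)
    f[: len(X1)] = 1.0                       # indicator of X1
    return np.sqrt(np.pi) * (f @ L @ f) / (N * np.sqrt(t))

# Toy check of the scaling: uniform p on the unit square, S the line
# x = 0.5, so int_S p(s) ds = 1.
rng = np.random.default_rng(0)
for N in (200, 800, 3200):
    X = rng.uniform(size=(N, 2))
    t = N ** (-1.0 / 6.0)                    # slowly shrinking t_N, d = 2
    print(N, scaled_cut(X[X[:, 0] < 0.5], X[X[:, 0] >= 0.5], t))
```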
In Theorem 2 we show that for a fixed t, as the number of points $|X| = N$ tends to infinity, $\lambda(t, X)$ tends to $\mu(t)$ with probability 1. In Theorem 3 we show that $\mu(t)$ can be made arbitrarily close to the weighted volume of the boundary by making t tend to 0.

Figure 3: Heat flow $\mu$ tends to $\int_S p(s)\,ds$

Theorem 2 Let $0 < \lambda < 1$ and let $u := 1/\sqrt{t^{2d+1} N^{1-\lambda}}$. Then there exist positive constants $C_1, C_2$, depending only on p and S, such that with probability greater than $1 - \exp(-C_1 N^{\lambda})$,
$$|\lambda(t, X) - \mu(t)| < C_2 \left(1 + t^{\frac{d+1}{2}}\right) u\, \mu(t). \qquad (1)$$

Theorem 3 For any $\alpha \in (0, \frac{1}{2})$, there exists a constant C such that for all t with $0 < t < \frac{\tau}{2d}\, e^{-1}$,
$$\left| \sqrt{\frac{\pi}{t}} \int_{S_1} \int_{S_2} K_t(x, y)\, \varphi_t(x)\, \varphi_t(y)\, dx\, dy - \int_S p(s)\,ds \right| < C\, t^{\alpha}. \qquad (2)$$

By letting $N \to \infty$ and $t_N \to 0$ at suitable rates and putting Theorems 2 and 3 together, we obtain the following theorem:

Theorem 4 Let the number of random data points $N \to \infty$ and $t_N \to 0$ at rates such that $u := 1/\sqrt{t_N^{2d+1} N^{1-\lambda}} \to 0$. Then, for any $\alpha \in (0, 1/2)$, there exist positive constants $C_1, C_2$, depending only on p and S, such that for any $N > 1$, with probability greater than $1 - \exp(-C_1 N^{\lambda})$,
$$\left| \lambda(t_N, X) - \int_S p(s)\,ds \right| < C_2 \left(t_N^{\alpha} + u\right). \qquad (3)$$

4 Outline of Proofs

Theorem 1 is a corollary of Theorem 4, obtained by setting u to $t^{\alpha}$ and setting $\lambda$ to $\frac{1-2\alpha}{2d+2}$; $N_0$ is chosen to be a large enough integer that an application of the union bound over all $N > N_0$ still gives a probability $\sum_{N > N_0} \exp(-C_1 N^{\lambda}) < \delta$ of the rate of convergence being worse than stated in Theorem 1. Theorem 4 is a direct consequence of Theorems 2 and 3.

Theorem 2: We prove Theorem 2 using a generalization of McDiarmid's inequality from [7, 8]. McDiarmid's inequality asserts that a function of a large number of independent random variables that is not very influenced by the value of any one of them takes a value close to its mean. In the generalization that we use, it is permitted that over a bad set that has a small probability mass, the function is highly influenced by some of the random variables. In our setting, it can be shown that our measure of a cut, $f^T L f$, is such a function of the independent random points in X, and so the result is applicable. There is another step involved, since the mean of $f^T L f$ is not $\mu$, the quantity to which we wish to prove convergence. Therefore we need to prove that the mean $E\left[\frac{\sqrt{\pi}}{N\sqrt{t}} f^T L(t, X) f\right]$ tends to $\mu(t)$ as N tends to infinity. Now,
$$\frac{\sqrt{\pi}}{N\sqrt{t}}\, f^T L(t, X) f = \frac{1}{N}\sqrt{\pi/t} \sum_{x \in X_1} \sum_{y \in X_2} \frac{K_t(x, y)}{\left\{ \left( \sum_{z \neq x} K_t(x, z) \right) \left( \sum_{z \neq y} K_t(y, z) \right) \right\}^{1/2}}.$$
If, instead, we had in the denominator of the right side
$$\sqrt{\int_M p(z) K_t(x, z)\,dz \int_M p(z) K_t(y, z)\,dz},$$
then, using the linearity of expectation,
$$E\left[ \frac{1}{N(N-1)}\sqrt{\pi/t} \sum_{x \in X_1} \sum_{y \in X_2} \frac{K_t(x, y)}{\sqrt{\int_M p(z) K_t(x, z)\,dz \int_M p(z) K_t(y, z)\,dz}} \right] = \mu(t).$$
Using Chernoff bounds, we can show that, with high probability, for all $x \in X$,
$$\frac{\sum_{z \neq x} K_t(x, z)}{N-1} \approx \int_M p(z) K_t(x, z)\,dz.$$
Putting the last two facts together and using the generalization of McDiarmid's inequality from [7, 8], the result follows. Since the exact details involve fairly technical calculations, we leave them to the journal version.

Theorem 3: The quantity
$$\mu := \sqrt{\frac{\pi}{t}} \int_{S_1} \int_{S_2} K_t(x, y)\, \varphi_t(x)\, \varphi_t(y)\, dx\, dy$$
is similar to the heat that would flow in time t from one part to the other if the first were heated proportionally to p. Intuitively, the heat that would flow from one part to the other in a small time interval ought to be related to the volume of the boundary between the two parts, which in our setting is $\int_S p(s)\,ds$.
To prove this relationship, we bound $\mu$ both above and below in terms of the weighted volume and the condition number of the boundary. These bounds are obtained by making comparisons with the "worst case" given condition number $1/\tau$, which is when S is a sphere of radius $\tau$. In order to obtain a lower bound on $\mu$, we observe that if $B_2$ is the ball of radius $\tau$ contained in $S_1$ that is nearest to a point P in $S_2$ lying within a small distance of $S_1$, then
$$\int_{S_1} K_t(x, P)\, \varphi_t(x)\, \varphi_t(P)\,dx \;\geq\; \int_{B_2} K_t(x, P)\, \varphi_t(x)\, \varphi_t(P)\,dx,$$
as in Figure 4. Similarly, to obtain an upper bound on $\mu$, we observe that if $B_1$ is a ball of radius $\tau$ in $S_2$, tangent to $B_2$ at the point of S nearest to P, then
$$\int_{S_1} K_t(x, P)\, \varphi_t(x)\, \varphi_t(P)\,dx \;\leq\; \int_{B_1^c} K_t(x, P)\, \varphi_t(x)\, \varphi_t(P)\,dx.$$
We now indicate how a lower bound is obtained for
$$\int_{B_2} K_t(x, P)\, \varphi_t(x)\, \varphi_t(P)\,dx.$$
A key observation is that, for $R = \sqrt{2dt \ln(1/t)}$,
$$\int_{\|x - P\| > R} K_t(x, P)\,dx \ll 1.$$
For this reason, only the portions of $B_2$ near P contribute to the integral $\int_{B_2} K_t(x, P)\, \varphi_t(x)\, \varphi_t(P)\,dx$. It turns out that a good lower bound can be obtained by considering the integral over $H_2$ instead, where $H_2$ is a halfspace whose boundary is at a distance $\tau - \frac{R^2}{2\tau}$ from the center, as in Figure 5.
Figure 4: The density of heat diffusing to point P from $S_1$ in the left panel is less than or equal to the density of heat diffusing to P from $B_2$ in the right panel.
Figure 5: The density of heat received by point P from $B_2$ in the left panel can be approximated by the density of heat received by P from the halfspace $H_2$ in the right panel.

An upper bound for
$$\int_{B_1^c} K_t(x, P)\, \varphi_t(x)\, \varphi_t(P)\,dx$$
is obtained along similar lines. The details of this proof will be presented in the journal version.

5 Conclusion

In this paper we take a step towards a probabilistic analysis of graph-based methods for clustering. The nodes of the graph are identified with data points drawn at random from an underlying probability distribution on a continuous domain. For a fixed partition, we show that the cut size of the graph partition converges to the weighted volume of the boundary separating the two regions of the domain. The rates of this convergence are analyzed. If one is able to generalize our result uniformly over all partitions, this allows us to relate ideas around graph-based partitioning to ideas surrounding Low Density Separation. The most important future direction would be to achieve similar results uniformly over balanced partitions.

References

[1] M. Belkin and P. Niyogi (2004). "Semi-supervised Learning on Riemannian Manifolds." Machine Learning 56, Special Issue on Clustering, 209-239.
[2] M. Belkin and P. Niyogi. "Toward a theoretical foundation for Laplacian-based manifold methods." COLT 2005.
[3] A. Blum and S. Chawla. "Learning from labeled and unlabeled data using graph mincuts." ICML 2001.
[4] O. Chapelle, J. Weston, and B. Scholkopf. "Cluster kernels for semi-supervised learning." NIPS 2002.
[5] O. Chapelle and A. Zien. "Semi-supervised Classification by Low Density Separation." AISTATS 2005.
[6] C. Ding. Spectral Clustering. ICML 2004 Tutorial.
2,206
3,001
Learning to be Bayesian without Supervision Martin Raphan Courant Inst. of Mathematical Sciences New York University [email protected] Eero P. Simoncelli Center for Neural Science, and Courant Inst. of Mathematical Sciences New York University [email protected] Bayesian estimators are defined in terms of the posterior distribution. Typically, this is written as the product of the likelihood function and a prior probability density, both of which are assumed to be known. But in many situations, the prior density is not known, and is difficult to learn from data since one does not have access to uncorrupted samples of the variable being estimated. We show that for a wide variety of observation models, the Bayes least squares (BLS) estimator may be formulated without explicit reference to the prior. Specifically, we derive a direct expression for the estimator, and a related expression for the mean squared estimation error, both in terms of the density of the observed measurements. Each of these prior-free formulations allows us to approximate the estimator given a sufficient amount of observed data. We use the first form to develop practical nonparametric approximations of BLS estimators for several different observation processes, and the second form to develop a parametric family of estimators for use in the additive Gaussian noise case. We examine the empirical performance of these estimators as a function of the amount of observed data. 1 Introduction Bayesian methods are widely used throughout engineering for estimating quantities from corrupted measurements. Those that minimize the mean squared error (known as Bayes least squares, or BLS) are particularly widespread. These estimators are usually derived assuming explicit knowledge of the observation process (expressed as the conditional density of the observation given the quantity to be estimated), and the prior density over that quantity. Despite its appeal, this approach is often criticized for the reliance on knowledge of the prior distribution, since the true prior is usually not known, and in many cases one does not have data drawn from this distribution with which to approximate it. In this case, it must be learned from the same observed measurements that are available in the estimation problem. In general, learning the prior distribution from the observed data presents a difficult, if not impossible task, even when the observation process is known. In the commonly used ?empirical Bayesian? approach [1], one assumes a parametric family of densities, whose parameters are obtained by fitting the data. This prior is then used to derive an estimator that may be applied to the data. If the true prior is not a member of the assumed parametric family, however, such estimators can perform quite poorly. An estimator may also be obtained in a supervised setting, in which one is provided with many pairs containing a corrupted observation along with the true value of the quantity to be estimated. In this case, selecting an estimator is a classic regression problem: find a function that best maps the observations to the correct values, in a least squares sense. Given a large enough number of training samples, this function will approach the BLS estimate, and should perform well on new samples drawn from the same distribution as the training samples. In many real-world situations, however, one does not have access to such training data. 
In this paper, we examine the BLS estimation problem in a setting that lies between the two cases described above. Specifically, we assume the observation process (but not the prior) is known, and we assume unsupervised training data, consisting only of corrupted observations (without the correct values). We show that for many observation processes, the BLS estimator may be written directly in terms of the observation density. We also show a dual formulation, in which the BLS estimator may be obtained by minimizing an expression for the mean squared error that is written only in terms of the observation density. A few special cases of the first formulation appear in the empirical Bayes literature [2], and of the second formulation in another branch of the statistical literature concerned with improvement of estimators [3, 4, 5]. Our work serves to unify these prior-free methods within a linear algebraic framework, and to generalize them to a wider range of cases. We develop practical nonparametric approximations of estimators for several different observation processes, demonstrating empirically that they converge to the BLS estimator as the amount of observed data increases. We also develop a parametric family of estimators for use in the additive Gaussian case, and examine their empirical convergence properties. We expect such BLS estimators, constructed from corrupted observations without explicit knowledge of, assumptions about, or samples from the prior, to prove useful in a variety of real-world estimation problems faced by machine or biological systems that must learn from examples.

2 Bayes least squares estimation

Suppose we make an observation, Y, that depends on a hidden variable X, where X and Y may be scalars or vectors. Given this observation, the BLS estimate of X is simply the conditional expectation of the posterior density, E{X|Y = y}. If the prior distribution on X is P_X, and the likelihood function is P_{Y|X}, then this can be written using Bayes' rule as

    E\{X|Y = y\} = \int x\, P_{X|Y}(x|y)\, dx = \frac{\int x\, P_{Y|X}(y|x)\, P_X(x)\, dx}{P_Y(y)},    (1)

where the denominator is the distribution of the observed data:

    P_Y(y) = \int P_{Y|X}(y|x)\, P_X(x)\, dx.    (2)

If we know P_X and P_{Y|X}, we can calculate this explicitly. Alternatively, if we do not know P_X or P_{Y|X}, but are given independent identically distributed (i.i.d.) samples (X_n, Y_n) drawn from the joint distribution of (X, Y), then we can solve for the estimator f(y) = E{X|Y = y} nonparametrically, or we could choose a parametric family of estimators \{f_\theta\}, and choose \theta to minimize the empirical squared error:

    \hat{\theta} = \arg\min_\theta \frac{1}{N} \sum_{n=1}^N |f_\theta(Y_n) - X_n|^2.

However, in many situations, one does not have access to P_X, or to samples drawn from P_X.

2.1 Prior-free reformulation of the BLS estimator

In many cases, the BLS estimate may be written without explicit reference to the prior distribution. We begin by noting that in Eq. (1), the prior appears only in the numerator N(y) = \int P_{Y|X}(y|x)\, x\, P_X(x)\, dx. This equation may be viewed as a composition of linear transformations of the function P_X(x),

    N(y) = (A \circ X)\{P_X\}(y),

where X\{f\}(x) = x f(x), and the operator A computes an inner product with the likelihood function

    A\{f\}(y) = \int P_{Y|X}(y|x)\, f(x)\, dx.

Similarly, Eq. (2) may be viewed as the linear transformation A applied to P_X(x). If the linear transformation A is 1-1, and we restrict P_Y to lie in the range of A, then we can write the numerator as a linear transformation on P_Y alone, without explicit reference to P_X:

    N(y) = (A \circ X \circ A^{-1})\{P_Y\}(y) = L\{P_Y\}(y).    (3)

In the discrete case, P_Y(y) and N(y) are each vectors, A is a matrix containing P_{Y|X}, X is a diagonal matrix containing values of x, and \circ is matrix multiplication. This allows us to write the BLS estimator as

    E\{X|Y = y\} = \frac{L\{P_Y\}(y)}{P_Y(y)}.    (4)

Note that if we wished to calculate E{X^n|Y}, then Eq. (3) would be replaced by (A \circ X^n \circ A^{-1})\{P_Y\} = (A \circ X \circ A^{-1})^n\{P_Y\} = L^n\{P_Y\}. By linearity of the conditional expectation, we may extend this to any polynomial function (and thus to any function that can be approximated with a polynomial):

    E\Big\{ \sum_{k=-N}^{M} c_k X^k \,\Big|\, Y = y \Big\} = \frac{\sum_{k=-N}^{M} c_k L^k\{P_Y\}(y)}{P_Y(y)}.

In the definition of the operator L, A^{-1} effectively inverts the observation process, recovering P_X from P_Y. In many situations, this operation will not be well-behaved. For example, in the case of additive Gaussian noise, A^{-1} is a deconvolution operation which is inherently unstable for high frequencies. The usefulness of Eq. (4) comes from the fact that in many cases, the composite operation L may be written explicitly, even when the inverse operation is poorly defined or unstable. In section 3, we develop examples of operators L for a variety of observation processes.

2.2 Prior-free reformulation of the mean squared error

In some cases, developing a stable nonparametric approximation of the ratio in Eq. (4) may be difficult. However, the linear operator formulation of the BLS estimator also leads to a dual expression for the mean squared error that does not depend explicitly on the prior, and this may be used to select an optimal estimator from a parametric family. Specifically, for any estimator f_\theta(Y) parameterized by \theta, the mean squared error may be decomposed into two orthogonal terms:

    E\{|f_\theta(Y) - X|^2\} = E\{|f_\theta(Y) - E(X|Y)|^2\} + E\{|E(X|Y) - X|^2\}.

The second term is the minimum possible MSE, obtained when using the optimal estimator. Since it does not depend on f_\theta, it is irrelevant for optimizing \theta. The first term may be expanded as

    E\{|f_\theta(Y) - E(X|Y)|^2\} = E\{|f_\theta(Y)|^2 - 2 f_\theta(Y) E(X|Y)\} + E\{|E(X|Y)|^2\}.

Again, the second expectation does not depend on f_\theta. Using the prior-free formulation of the previous section, the second component of the first expectation may be written as

    E\{f_\theta(Y) E(X|Y)\} = E\Big\{ f_\theta(Y) \frac{L\{P_Y\}(Y)}{P_Y(Y)} \Big\}
        = \int f_\theta(y) \frac{L\{P_Y\}(y)}{P_Y(y)} P_Y(y)\, dy
        = \int f_\theta(y)\, L\{P_Y\}(y)\, dy
        = \int L^*\{f_\theta\}(y)\, P_Y(y)\, dy
        = E\{L^*\{f_\theta\}(Y)\},

where L^* is the dual operator of L (in the discrete case, L^* is the matrix transpose of L). Combining all of the above, we have:

    \arg\min_\theta E\{|f_\theta(Y) - X|^2\} = \arg\min_\theta E\{|f_\theta(Y)|^2 - 2 L^*\{f_\theta\}(Y)\},    (5)

where the expectation on the right is over the observation variable, Y. In practice, we can solve for an optimal \theta by minimizing the sample mean of this quantity:

    \hat{\theta} = \arg\min_\theta \frac{1}{N} \sum_{n=1}^N \big[ |f_\theta(Y_n)|^2 - 2 L^*\{f_\theta\}(Y_n) \big],    (6)

where \{Y_n\} is a set of observed data. Again this does not require any knowledge of, or samples drawn from, the prior P_X.

3 Example estimators

In general, it can be difficult to obtain the operator L directly from the definition in Eq. (3), because inversion of the operator A could be unstable or undefined. Instead, a solution may often be obtained by noting that the definition implies that L \circ A = A \circ X, or, equivalently, L\{P_{Y|X}(y|x)\} = x P_{Y|X}(y|x). This is an eigenfunction equation: for each value of x, the conditional density P_{Y|X}(y|x) must be an eigenfunction (eigenvector, for discrete variables) of the operator L, with associated eigenvalue x.
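As a concrete illustration of Eqs. (3)-(4) in the discrete case, here is a minimal sketch (our own code, not from the paper) for a binary symmetric channel; the prior appears only to verify the identity, never inside the estimator:

```python
import numpy as np

# Discrete-case operator L = A X A^{-1} of Eqs. (3)-(4), for a binary
# symmetric channel with flip probability 0.25. All names are ours.

flip = 0.25
x_vals = np.array([0.0, 1.0])
A = np.array([[1 - flip, flip],      # A[y, x] = P(Y = y | X = x)
              [flip, 1 - flip]])
X = np.diag(x_vals)                  # multiplication by x
L = A @ X @ np.linalg.inv(A)         # composite operator of Eq. (3)

# For any prior P_X, the ratio L{P_Y}/P_Y reproduces E[X | Y = y]:
p_x = np.array([0.3, 0.7])           # an arbitrary test prior
p_y = A @ p_x                        # observation density, Eq. (2)
est = (L @ p_y) / p_y                # prior-free estimator, Eq. (4)

# Ground truth from Bayes' rule, for comparison:
post = A * p_x[None, :]
post /= post.sum(axis=1, keepdims=True)
print(est, post @ x_vals)            # the two should agree
```

In the unsupervised setting of Section 4, p_y above would instead be approximated by a histogram of observed samples; the test prior is used here only to check the identity.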
Consider a standard example, in which the variable of interest is corrupted by independent additive noise: Y = X + W . The conditional density is PY |X (y|x) = PW (y ? x). We wish to find an operator which when applied to this conditional density (viewed as a function of y) will give L{PW (y ? x)} = x PW (y ? x) (7) for all x. Subtracting y PW (y ? x) from both sides gives M {PW (y ? x)} = ?(y ? x) PW (y ? x). (8) where M {f } (y) = L{f }(y) ? y f (y) is a linear shift-invariant operator (acting in y). Taking Fourier transforms and using the convolution and differentiation properties gives: P M(?) W (?) = ?(yPW )(?) ?i?? P W (?), (9) ?? P W (?) M(?) = ?i PW (?)  = ?i?? ln P W (?) . (10)     L{f }(y) = y f (y) ? F ?1 i?? ln P W (?) f (?) (y), (11) = so that This gives us the linear operator where F ?1 denotes the inverse Fourier transform. Note that throughout this discussion X and W played symmetric roles. Thus, in cases with known prior density and unknown additive noise density, one can formulate the estimator entirely in terms of the prior. Our prior-free estimator methodology is quite general, and can often be applied to more complicated observation processes. In order to give some sense of the diversity of forms that can arise, Table 1 provides additional examples. References for the specific cases that we have found in the statistics literature are provided in table. Obs. process Obs. density: PY |X (y|x) Numerator: N (y) = L{PY }(y) A (A ? X ? A?1 )PY (y)    yPY ? F ?1 i?? ln P W (?) PY (?) Discrete PW (y ? x) Gen. add. Add. [6]/[4]* Gaussian Add. Poisson exp{ Add. uniform Add. random # of components ? |2??|  ?k e?? Add. Laplacian Add. Cauchy } ?1 T ?1 (y?x??) 2 (y?x??) ? k! ?(y ? x ? ks) 1 ?|(y?x)/?| 2? e 1 ? ? ( (?(y?x))2 +1 ) 1 2a , 0, |y ? x| ? a |y ? x| > a PW (y ? x), where: K W ? k=0 Wk , Wk i.i.d. (Pc ), K ? Poiss(?) (y ? ?)PY (y) + ??y PY (y) yPY (y) ? ?sPY (y ? s)  yPY (y) + 2?2 {PW  PY }(y) 1 yPY (y) ? { 2??y  PY }(y)  yPY (y)+ a sgn(k)PY (y ? ak) ? 12 PY (? y )sgn(y ? y?)d? y yPY (y) ? ?{(yPc )  PY }(y) Disc. exp. [2]/[5]* h(x)g(n)xn g(n) g(n+1) PY (n + 1) Disc. inv. exp. [5]* h(x)g(n)x?n g(n) g(n?1) PY (n ? 1) Cnt. exp. [2]/[3]* h(x)g(y)eT (y)x Cnt. inv. exp. [3]* h(x)g(y)eT (y)/x xn e?x n! Poisson [7]/[5]* Lapl. scale mixture ?1 e 2?x y 1 ?x ; x, y xe g(y) y y) T  (? y ) PY ?? g(? (? y )d? y (n + 1)PY (n + 1) 2 ? y2x Gauss. scale mixture g(y) d PY (y) T  (y) dy { g(y) } >0 ?EY {Y ; Y < y} PY {Y > y} Table 1: Prior-free estimation formulas. Functions written with hats or in terms of ? represent multiplication in the Fourier Domain. n replaces y for discrete distributions. Bracketed numbers are references for operators L, with * denoting references for the parametric (dual) operator, L? . 4 4.1 Simulations Non-parametric examples Since each of the prior-free estimators discussed above relies on approximating values from the observed data, the behavior of such estimators should approach the BLS estimator as the number of data samples grows. In Fig. 1, we examine the behavior of three non-parametric prior-free estimators based on Eq. (4). The first case corresponds to data drawn independently from a binary source, which are observed through a process in which bits are switched with probability 14 . The estimator does not know the binary distribution of the source (which was a ?fair coin? for our simulation), but does know the bit-switching probability. 
For this estimator we approximate P_Y using a simple histogram, and then use the matrix version of the linear operator in Eq. (3). We characterize the behavior of this estimator as a function of the number of data points, N, by running many Monte Carlo simulations for each N and indicating the mean improvement in MSE (compared with the ML estimator, which is the identity function), the mean improvement using the conventional BLS estimation function, E{X|Y = y}, assuming the prior density is known, and the standard deviations of the improvements taken over our simulations. Figure 1b shows similar results for additive Gaussian noise, with SNR replacing MSE. The signal density is a generalized Gaussian with exponent 0.5. In this case, we compute Eq. (4) using a more sophisticated approximation method, as described in [8]. We fit a local exponential model similar to that used in [9] to the data in bins, with binwidth adaptively selected so that the product of the number of points in the bin and the squared binwidth is constant. This binwidth selection procedure, analogous to adaptive binning procedures developed in the density estimation literature [10], provides a reasonable tradeoff between bias and variance, and converges to the correct answer for any well-behaved density [8]. Note that in this case, convergence is substantially slower than for the binary case, as might be expected given that we are dealing with a continuous density rather than a single scalar probability. But the variance of the estimates is very low. Figure 1c shows the case of estimating a randomly varying rate parameter that governs an inhomogeneous Poisson process. The prior on the rate (unknown to the estimator) is exponential. The observed values Y are the (integer) values drawn from the Poisson process. In this case the histogram of observed data was used to obtain a naive approximation of P_Y(n). It should be noted that improved performance for this estimator is expected if we were to use a more sophisticated approximation of the ratio of densities.

[Fig. 1: Empirical convergence of the prior-free estimator to the optimal BLS solution, as a function of the number of observed samples of Y. For each number of observations, each estimator is simulated many times. Black dashed lines show the improvement of the prior-free estimator, averaged over simulations, relative to the ML estimator. The white line shows the mean improvement using the conventional BLS solution, E{X|Y = y}, assuming the prior density is known. Gray regions denote plus or minus one standard deviation. (a) Binary noise (10,000 simulations for each number of observations); (b) additive Gaussian noise (1,000 simulations); (c) Poisson noise (1,000 simulations).]

4.2 Parametric examples

In this section we discuss the empirical behavior of the parametric approach applied to the additive Gaussian case. From the derivation in section 3, and restricting to the scalar case, we have L^* = y - \sigma^2 \frac{d}{dy}. In this particular case, it is easier to represent the estimator as f(y) = y + g(y). Substituting into Eq. (5) gives E\{|f(Y) - X|^2\} = E\{|g(Y)|^2 + 2\sigma^2 g'(Y)\} + const, where the constant does not depend on g. Therefore, if we have a parametric family \{g_\theta\} of such g, and are given data \{Y_n\}, we can try to minimize

    \frac{1}{N} \sum_{n=1}^N \big[ |g_\theta(Y_n)|^2 + 2\sigma^2 g_\theta'(Y_n) \big].    (12)

This expression, known as Stein's unbiased risk estimator (SURE) [4], favors estimators g_\theta that have small amplitude, and highly negative derivatives at the data values. This is intuitively sensible: the resulting estimators will "shrink" the data toward regions of high probability. Recently, an expression similar to Eq. (12) was used as a criterion for density estimation in cases where the normalizing constant, or partition function, is difficult to obtain [11]. The prior-free approach we are discussing provides an interpretation for this procedure: the optimal density is the one which, when converted into an estimator using the formula in Table 1 for the additive Gaussian case, gives the best MSE. This may be extended to any of the linear operators in Table 1. As an example, we parametrize g as a linear combination of nonlinear "bump" functions

    g_\theta(y) = \sum_k \theta_k g_k(y)    (13)

where the functions g_k are of the form

    g_k(y) = y \cos\Big( \frac{\pi}{2} \big[ \mathrm{sgn}(y) \log_2(|y|/\sigma + 1) - k \big] \Big)  for  |\mathrm{sgn}(y) \log_2(|y|/\sigma + 1) - k| \le 1,  and zero otherwise,

as illustrated in Fig. 2. Recently, linear parameterizations have been used in conjunction with Eq. (12) for image denoising in the wavelet domain [12]. We can substitute Eq. (13) into Eq. (12) and pick the coefficients \{\theta_k\} to minimize this criterion, which is a quadratic function of the coefficients.

[Fig. 2: Example bump functions, used for linear parameterization of estimators in Figs. 3(a) and 3(b).]

For our simulations, we used a generalized Gaussian prior, with exponent 0.5. Figure 3 shows the empirical behavior of these "SURE-bump" estimators when using three bumps (Fig. 3a) and fifteen bumps (Fig. 3b), illustrating the bias-variance tradeoff inherent in the fixed parameterization. The three-bump estimator behaves fairly well, though its asymptotic behavior for large amounts of data is biased and thus falls short of ideal. The fifteen-bump estimator asymptotes correctly but has very large variance for small amounts of data (overfitting). For comparison purposes, we have included the behavior of SURE thresholding [13], in which Eq. (12) is used to choose an optimal threshold, \lambda, for the function f_\lambda(y) = \mathrm{sgn}(y)(|y| - \lambda)_+. As can be seen, SURE thresholding shows significant asymptotic bias although its variance behavior is nearly ideal.

[Fig. 3: Empirical convergence of the parametric prior-free method to the optimal BLS solution, as a function of the number of data observations, for three different parameterized estimators. (a) 3 bumps; (b) 15 bumps; (c) soft thresholding. All cases use a generalized Gaussian prior (exponent 0.5), and assume additive Gaussian noise.]

5 Discussion

We have reformulated the Bayes least squares estimation problem for a setting in which one knows the observation process, and has access to many observations. We do not assume the prior density is known, nor do we assume access to samples from the prior. Our formulation thus acts as a bridge between a conventional Bayesian setting, in which one derives the optimal estimator from known prior and likelihood functions, and a data-oriented regression setting, in which one learns the optimal estimator from samples of the prior paired with corrupted observations of those samples. In many cases, the prior-free estimator can be written explicitly, and we have shown a number of examples to illustrate the diversity of estimators that can arise under different observation processes.
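Because Eq. (12) is quadratic in the coefficients, the "SURE-bump" fit has a closed-form solution. The following sketch is our own: it substitutes simple Gaussian bumps for the paper's log-domain bumps, and all constants are illustrative:

```python
import numpy as np

# Minimize Eq. (12) over a linear family g_theta(y) = sum_k theta_k g_k(y).
# The objective is theta' M theta + 2 sigma^2 b' theta, so the minimizer
# is theta = -sigma^2 M^{-1} b. Gaussian bumps stand in for the paper's
# bump functions; centers and widths are illustrative choices.

def bumps(y, centers, width):
    d = (y[:, None] - centers[None, :]) / width
    return np.exp(-0.5 * d**2)                 # g_k(y)

def bumps_deriv(y, centers, width):
    d = (y[:, None] - centers[None, :]) / width
    return np.exp(-0.5 * d**2) * (-d / width)  # g_k'(y)

def sure_fit(y_obs, sigma, centers, width):
    G = bumps(y_obs, centers, width)           # N x K design matrix
    Gp = bumps_deriv(y_obs, centers, width)
    M = G.T @ G                                # quadratic term of Eq. (12)
    b = Gp.sum(axis=0)                         # linear term (times sigma^2)
    return np.linalg.solve(M, -sigma**2 * b)

# Usage: x from a sparse prior, y = x + Gaussian noise; denoise with y + g(y).
rng = np.random.default_rng(0)
x = rng.laplace(scale=1.0, size=5000)
sigma = 0.5
y = x + sigma * rng.normal(size=x.size)
centers = np.linspace(-4, 4, 9)
theta = sure_fit(y, sigma, centers, width=1.0)
x_hat = y + bumps(y, centers, 1.0) @ theta
print(np.mean((y - x)**2), np.mean((x_hat - x)**2))  # MSE should drop
```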
For three simple cases, we developed implementations and demonstrated that these converge to optimal BLS estimators as the amount of data grows. We also derived a prior-free formulation of the MSE, which allows selection of an estimator from a parametric family, and we showed simulations for a linear family of estimators in the additive Gaussian case. These simulations serve to demonstrate the potential of this approach, which holds particular appeal for real-world systems (machine or biological) that must learn the priors from environmental observations. Both methods can be enhanced by using data-adaptive parameterizations or fitting procedures in order to properly trade off bias and variance (see, for example, [8]). It is of particular interest to develop incremental implementations, which would update the estimator based on incoming observations. This would further enhance the applicability of this approach for systems that must learn to do optimal estimation from corrupted observations.

Acknowledgments

This work was partially funded by the Howard Hughes Medical Institute, and by New York University through a McCracken Fellowship to MR.

References

[1] G. Casella, "An introduction to empirical Bayes data analysis," Amer. Statist., vol. 39, pp. 83-87, 1985.
[2] J. S. Maritz and T. Lwin, Empirical Bayes Methods. Chapman & Hall, 2nd ed., 1989.
[3] J. Berger, "Improving on inadmissible estimators in continuous exponential families with applications to simultaneous estimation of gamma scale parameters," The Annals of Statistics, vol. 8, pp. 545-571, 1980.
[4] C. M. Stein, "Estimation of the mean of a multivariate normal distribution," Annals of Statistics, vol. 9, pp. 1135-1151, November 1981.
[5] J. T. Hwang, "Improving upon standard estimators in discrete exponential families with applications to Poisson and negative binomial cases," The Annals of Statistics, vol. 10, pp. 857-867, 1982.
[6] K. Miyasawa, "An empirical Bayes estimator of the mean of a normal population," Bull. Inst. Internat. Statist., vol. 38, pp. 181-188, 1956.
[7] H. Robbins, "An empirical Bayes approach to statistics," Proc. Third Berkeley Symposium on Mathematical Statistics, vol. 1, pp. 157-163, 1956.
[8] M. Raphan and E. P. Simoncelli, "Empirical Bayes least squares estimation without an explicit prior." NYU Courant Inst. Tech. Report, 2007.
[9] C. R. Loader, "Local likelihood density estimation," Annals of Statistics, vol. 24, no. 4, pp. 1602-1618, 1996.
[10] D. W. Scott, Multivariate Density Estimation: Theory, Practice, and Visualization. John Wiley, 1992.
[11] A. Hyvarinen, "Estimation of non-normalized statistical models by score matching," Journal of Machine Learning Research, vol. 6, pp. 695-709, 2005.
[12] F. Luisier, T. Blu, and M. Unser, "SURE-based wavelet thresholding integrating inter-scale dependencies," in Proc. IEEE Int'l Conf. on Image Processing, (Atlanta, GA, USA), pp. 1457-1460, October 2006.
[13] D. Donoho and I. Johnstone, "Adapting to unknown smoothness via wavelet shrinkage," J. American Stat. Assoc., vol. 90, December 1995.
2,207
3,002
Linearly-solvable Markov decision problems

Emanuel Todorov
Department of Cognitive Science, University of California San Diego
[email protected]

Abstract

We introduce a class of MDPs which greatly simplify Reinforcement Learning. They have discrete state spaces and continuous control spaces. The controls have the effect of rescaling the transition probabilities of an underlying Markov chain. A control cost penalizing KL divergence between controlled and uncontrolled transition probabilities makes the minimization problem convex, and allows analytical computation of the optimal controls given the optimal value function. An exponential transformation of the optimal value function makes the minimized Bellman equation linear. Apart from their theoretical significance, the new MDPs enable efficient approximations to traditional MDPs. Shortest path problems are approximated to arbitrary precision with largest-eigenvalue problems, yielding an O(n) algorithm. Accurate approximations to generic MDPs are obtained via continuous embedding reminiscent of LP relaxation in integer programming. Off-policy learning of the optimal value function is possible without need for state-action values; the new algorithm (Z-learning) outperforms Q-learning. This work was supported by NSF grant ECS-0524761.

1 Introduction

In recent years many hard problems have been transformed into easier problems that can be solved efficiently via linear methods [1] or convex optimization [2]. One area where these trends have not yet had a significant impact is Reinforcement Learning. Indeed the discrete and unstructured nature of traditional MDPs seems incompatible with simplifying features such as linearity and convexity. This motivates the search for more tractable problem formulations. Here we construct the first MDP family where the minimization over the control space is convex and analytically tractable, and where the Bellman equation can be exactly transformed into a linear equation. The new formalism enables efficient numerical methods which could not previously be applied in Reinforcement Learning. It also yields accurate approximations to traditional MDPs.

Before introducing our new family of MDPs, we recall the standard formalism. Throughout the paper S is a finite set of states, U(i) is a set of admissible controls at state i \in S, \ell(i,u) \ge 0 is a cost for being in state i and choosing control u \in U(i), and P(u) is a stochastic matrix whose element p_{ij}(u) is the transition probability from state i to state j under control u. We focus on problems where a non-empty subset A \subseteq S of states are absorbing and incur zero cost: p_{ij}(u) = \delta_{ij} and \ell(i,u) = 0 whenever i \in A. Results for other formulations will be summarized later. If A can be reached with non-zero probability in a finite number of steps from any state, then the undiscounted infinite-horizon optimal value function is finite and is the unique solution [3] to the Bellman equation

    v(i) = \min_{u \in U(i)} \Big\{ \ell(i,u) + \sum_j p_{ij}(u)\, v(j) \Big\}    (1)

For generic MDPs this equation is about as far as one can get analytically.

2 A class of more tractable MDPs

In our new class of MDPs the control u \in R^{|S|} is a real-valued vector with dimensionality equal to the number of discrete states. The elements u_j of u have the effect of directly modifying the transition probabilities of an uncontrolled Markov chain. In particular, given an uncontrolled transition probability matrix P with elements p_{ij}, we define the controlled transition probabilities as

    p_{ij}(u) = p_{ij} \exp(u_j)    (2)

Note that P(0) = P.
In some sense this is the most general notion of "control" one can imagine ? we are allowing the controller to rescale the underlying transition probabilities in any way it wishes. However there are two constraints implicit in (2). First, pij = 0 implies pij (u) = 0. In this case uj has no effect and so we set it to 0 for concreteness. Second, P (u) must have row-sums equal to 1. Thus the admissible controls are n o X U (i) = u 2 RjSj ; pij exp (uj ) = 1; pij = 0 =) uj = 0 (3) j Real-valued controls make it possible to de ne a natural control cost. Since the control vector acts directly on the transition probabilities, it makes sense to measure its magnitude in terms of the difference between the controlled and uncontrolled transition probabilities. Differences between probability distributions are most naturally measured using KL divergence, suggesting the following de nition. Let pi (u) denote the i-th row-vector of the matrix P (u), that is, the vector of transition probabilities from state i to all other states under control u. The control cost is de ned as r (i; u) = KL (pi (u) jjpi (0)) = X j:pij 6=0 pij (u) log From the properties of KL divergence it follows that r (i; u) Substituting (2) in (4) and simplifying, the control cost becomes X r (i; u) = pij (u) uj pij (u) pij (0) (4) 0, and r (i; u) = 0 iff u = 0. (5) j This has an interesting interpretation. The Markov chain likes to behave according to P but can be paid to behave according to P (u). Before each transition the controller speci es the price uj it is willing to pay (or collect, if uj < 0) for every possible next state j. When the actual transition occurs, say to state k, the controller pays the price uk it promised. Then r (i; u) is the price the controller expects to pay before observing the transition. Coming back to the MDP construction, we allow an arbitrary state cost q (i) above control cost: ` (i; u) = q (i) + r (i; u) 0 in addition to the (6) We require q (i) = 0 for absorbing states i 2 A so that the process can continue inde nitely without incurring extra costs. Substituting (5, 6) in (1), the Bellman equation for our MDP is n o X v (i) = min q (i) + pij exp (uj ) (uj + v (j)) (7) j u2U (i) We can now exploit the bene ts of this unusual construction. The minimization in (7) subject to the constraint (3) can be performed in closed form using Lagrange multipliers, as follows. For each i de ne the Lagrangian X X L (u; i ) = pij exp (uj ) (uj + v (j)) + i pij exp (uj ) 1 (8) j j The necessary condition for an extremum with respect to uj is 0= @L = pij exp (uj ) (uj + v (j) + @uj i + 1) (9) When pij 6= 0 the only solution is uj (i) = v (j) i 1 (10) Taking another derivative yields @2L @uj @uj = pij exp uj (i) > 0 (11) uj =uj (i) and therefore (10) is a minimum. The Lagrange multiplier i can be found by applying the constraint (3) to the optimal control (10). The result is X pij exp ( v (j)) 1 (12) i = log j and therefore the optimal control law is uj (i) = v (j) log X k pik exp ( v (k)) (13) Thus we have expressed the optimal control law in closed form given the optimal value function. Note that the only in uence of the current state i is through the second term, which serves to normalize the transition probability distribution pi (u ) and is identical for all next states j. Thus the optimal controller is a high-level controller: it tells the Markov chain to go to good states without specifying how to get there. 
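To make the closed-form law (13) concrete, here is a small sketch (our own code; the names are assumptions) that maps a value function to the optimal controls and to the controlled transition probabilities derived next:

```python
import numpy as np

# Given uncontrolled transitions P (rows sum to 1) and a value function v,
# Eq. (13) gives the optimal controls in closed form, and normalizing
# p_ij exp(-v(j)) gives the controlled transitions of Eq. (14).

def optimal_controls(P, v):
    w = P * np.exp(-v)[None, :]            # p_ij exp(-v(j))
    norm = w.sum(axis=1, keepdims=True)    # sum_k p_ik exp(-v(k))
    # u*_j(i) = -v(j) - log sum_k p_ik exp(-v(k)); u_j = 0 where p_ij = 0
    U = np.where(P > 0, -v[None, :] - np.log(norm), 0.0)
    P_star = w / norm                      # controlled transitions
    return U, P_star
```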
The details of the trajectory emerge from the interaction of this controller and the uncontrolled stochastic dynamics. In particular, the optimally-controlled transition probabilities are pij exp ( v (j)) pij (u (i)) = P (14) k pik exp ( v (k)) These probabilities are proportional to the product of two terms: the uncontrolled transition probabilities pij which do not depend on the costs or values, and the (exponentiated) next-state values v (j) which do not depend on the current state. In the special case pij = consti the transition probabilities (14) correspond to a Gibbs distribution where the optimal value function plays the role of an energy function. Substituting the optimal control (13) in the Bellman equation (7) and dropping the min operator, X v (i) = q (i) + pij (u (i)) uj (i) + v (j) (15) j X = q (i) + pij (u (i)) ( i 1) j = q (i) i = q (i) log 1 X j pij exp ( v (j)) Rearranging terms and exponentiating both sides of (15) yields X exp ( v (i)) = exp ( q (i)) pij exp ( v (j)) j (16) We now introduce the exponential transformation z (i) = exp ( v (i)) (17) which makes the minimized Bellman equation linear: z (i) = exp ( q (i)) X j pij z (j) (18) De ning the vector z with elements z (i), and the diagonal matrix G with elements exp ( q (i)) along its main diagonal, (18) becomes z = GP z Thus our class of optimal control problems has been reduced to a linear eigenvalue problem. (19) 2.1 Iterative solution and convergence analysis From (19) it follows that z is an eigenvector of GP with eigenvalue 1. Furthermore z (i) > 0 for all i 2 S and z (i) = 1 for i 2 A. Is there a vector z with these properties and is it unique? The answer to both questions is af rmative, because the Bellman equation has a unique solution and v is a solution to the Bellman equation iff z = exp ( v) is an admissible solution to (19). The only remaining question then is how to nd the unique solution z. The obvious iterative method is zk+1 = GP zk ; (20) z0 = 1 This iteration always converges to the unique solution, for the following reasons. A stochastic matrix P has spectral radius 1. Multiplication by G scales down some of the rows of P , therefore GP has spectral radius at most 1. But we are guaranteed than an eigenvector z with eigenvalue 1 exists, therefore GP has spectral radius 1 and z is a largest eigenvector. Iteration (20) is equivalent to the power method (without the rescaling which is unnecessary here) so it converges to a largest eigenvector. The additional constraints on z are clearly satis ed at all stages of the iteration. In particular, for i 2 A the i-th row of GP has elements ji , and so the i-th element of zk remains equal to 1 for all k. We now analyze the rate of convergence. Let m = jAj and n = jSj. The states can be permuted so that GP is in canonical form: T1 T 2 GP = (21) 0 I where the absorbing states are last, T1 is (n m) by (n m), and T2 is (n m) by m. The reason we have the identity matrix in the lower-right corner, despite multiplication by G, is that q (i) = 0 for i 2 A. From (21) we have GP k = T1k 0 T1k 1 + + T1 + I T2 I = T1k 0 I T1k (I T1 ) I 1 T2 (22) A stochastic matrix P with m absorbing states has m eigenvalues 1, and all other eigenvalues are smaller than 1 in absolute value. Since the diagonal elements of G are no greater than 1, all eigenvalues of T1 are smaller than 1 and so limk!1 T1k = 0. Therefore iteration (20) converges exponentially as k where < 1 is the largest eigenvalue of T1 . Faster convergence is obtained for smaller . 
The factors that can make this dominant eigenvalue small are: (i) large state costs q(i), resulting in small terms \exp(-q(i)) along the diagonal of G; (ii) small transition probabilities among non-absorbing states (and large transition probabilities from non-absorbing to absorbing states). Convergence is independent of problem size because the eigenvalue has no reason to increase as the dimensionality of T_1 increases. Indeed, numerical simulations on randomly generated MDPs have shown that problem size does not systematically affect the number of iterations needed to reach a given convergence criterion. Thus the average running time scales linearly with the number of non-zero elements in P.

2.2 Alternative problem formulations

While the focus of this paper is on infinite-horizon total-cost problems with absorbing states, we have obtained similar results for all other problem formulations commonly used in Reinforcement Learning. Here we summarize these results. In finite-horizon problems equation (19) becomes

    z(t) = G(t)\, P(t)\, z(t+1)    (23)

where z(t_{final}) is initialized from a given final cost function. In infinite-horizon average-cost-per-stage problems equation (19) becomes

    \lambda z = G P z    (24)

where \lambda is the largest eigenvalue of GP, z is a differential value function, and the average cost-per-stage turns out to be -\log(\lambda). In infinite-horizon discounted-cost problems equation (19) becomes

    z = G P z^{\alpha}    (25)

where \alpha < 1 is the discount factor and z^{\alpha} is defined element-wise. Even though the latter equation is nonlinear, we have observed that the analog of iteration (20) still converges rapidly.

[Figure 1A, 1B: grid worlds showing the shortest-path values recovered by the eigenvalue method of Section 3, for \rho = 1 (panel A) and \rho = 50 (panel B); see the discussion around Eq. (30).]

3 Shortest paths as an eigenvalue problem

Suppose the state space S of our MDP corresponds to the vertex set of a directed graph, and let D be the graph adjacency matrix whose element d_{ij} indicates the presence (d_{ij} = 1) or absence (d_{ij} = 0) of a directed edge from vertex i to vertex j. Let A \subseteq S be a non-empty set of destination vertices. Our goal is to find the length s(i) of the shortest path from every i \in S to some vertex in A. For i \in A we have s(i) = 0 and d_{ij} = \delta_{ij}. We now show how the shortest path lengths s(i) can be obtained from our MDP. Define the elements of the stochastic matrix P as

    p_{ij} = d_{ij} / \sum_k d_{ik}    (26)

corresponding to a random walk on the graph. Next choose \rho > 0 and define the state costs

    q(i) = \rho when i \notin A;  q(i) = 0 when i \in A    (27)

This cost model means that we pay a price \rho whenever the current state is not in A. Let v(i) denote the optimal value function for the MDP defined by (26, 27). If the control costs were 0 then the shortest paths would simply be s(i) = v(i)/\rho. Here the control costs are not 0; however, they are bounded. This can be shown using

    p_{ij}(u) = p_{ij} \exp(u_j) \le 1    (28)

which implies that for p_{ij} \ne 0 we have u_j \le -\log p_{ij}. Since r(i,u) is a convex combination of the elements of u, the following bound holds:

    r(i,u) \le \max_j(u_j) \le -\log \min_{j: p_{ij} \ne 0} p_{ij}    (29)

The control costs are bounded and we are free to choose \rho arbitrarily large, so we can make the state costs dominate the optimal value function. This yields the following result:

    s(i) = \lim_{\rho \to \infty} v(i)/\rho    (30)

Thus we have reduced the shortest path problem to an eigenvalue problem. In spectral graph theory many problems have previously been related to eigenvalues of the graph Laplacian [4], but the shortest path problem was not among them until now.
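A minimal sketch of this reduction, combining the power iteration (20) with the cost model (26)-(27); the graph, the value of \rho, and the iteration count are illustrative choices of ours:

```python
import numpy as np

# Approximate shortest path lengths by iterating z <- G P z (Eq. (20))
# for the MDP of Eqs. (26)-(27), then taking s ~ v / rho per Eq. (30).

def shortest_paths_eig(D, dest, rho=50.0, iters=2000):
    n = D.shape[0]
    P = D / D.sum(axis=1, keepdims=True)       # random walk, Eq. (26)
    g = np.where(np.isin(np.arange(n), dest), 1.0, np.exp(-rho))
    z = np.ones(n)
    for _ in range(iters):                     # power iteration, Eq. (20)
        z = g * (P @ z)
        z[dest] = 1.0                          # absorbing states keep z = 1
                                               # (redundant with the self-loop)
    return -np.log(z) / rho                    # s(i) ~ v(i)/rho, Eq. (30)

# Ring graph with one destination, compared against the true distances.
n = 8
D = np.zeros((n, n))
for i in range(n):
    D[i, (i + 1) % n] = D[i, (i - 1) % n] = 1.0
D[0] = 0.0
D[0, 0] = 1.0                                  # vertex 0 is absorbing
dist = shortest_paths_eig(D, dest=[0])
print(np.round(dist, 2))                       # approximately [0,1,2,3,4,3,2,1]
```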
Currently the most widely used algorithm is Dijkstra's algorithm. In sparse graphs its running time is O (n log (n)). In contrast, algorithms for nding largest eigenpairs have running time O (n) for sparse matrices. Of course (30) involves a limit and so we cannot obtain the exact shortest paths by solving a single eigenvalue problem. However we can obtain a good approximation by setting large enough ? but not too large because exp ( ) may become numerically indistinguishable from 0. Fig 1 illustrates the solution obtained from (30) and rounded down to the nearest integer, for = 1 in 1A and = 50 in 1B. Transitions are allowed to all neighbors. The result in 1B matches the exact shortest paths. Although the solution for = 1 is numerically larger, it is basically a scaled-up version of the correct solution. Indeed the R2 between the two solutions before rounding was 0:997. 4 Approximating discrete MDPs via continuous embedding In the previous section we replaced the shortest path problem with a continuous MDP and obtained an excellent approximation. Here we obtain approximations of similar quality in more general settings, using an approach reminiscent of LP-relaxation in integer programming. As in LP-relaxation, theoretical results are hard to derive but empirically the method works well. We construct an embedding which associates the controls in the discrete MDP with speci c control vectors of a continuous MDP, making sure that for these control vectors the continuous MDP has the same costs and transition probabilities as the discrete MDP. This turns out to be possible under mild and reasonable assumptions, as follows. Consider a discrete MDP with transition probabilities and e De ne the matrix B (i) of all controlled transition probabilities from state i. costs denoted pe and `. This matrix has elements baj (i) = peij (a) ; a 2 U (i) (31) We need two assumptions to guarantee the existence of an exact embedding: for all i 2 S the matrix B (i) must have full row-rank, and if any element of B (i) is 0 then the entire column must be 0. If the latter assumption does not hold, we can replace the problematic 0 elements of B (i) with a small and renormalize. Let N (i) denote the set of possible next states, i.e. states j for which peij (a) > 0 for any/all a 2 U (i). Remove the zero-columns of B (i) and restrict j 2 N (i). The rst step in the construction is to compute the real-valued control vectors ua corresponding to the discrete actions a. This is accomplished by matching the transition probabilities of the discrete and continuous MDPs: pij exp uaj = peij (a) ; 8 i 2 S; j 2 N (i) ; a 2 U (i) (32) These constraints are satis ed iff the elements of the vector u are a uaj = log (e pij (a)) (33) log pij The second step is to compute the uncontrolled transition probabilities pij and state costs q (i) in the continuous MDP so as to match the costs in the discrete MDP. This yields the set of constraints q (i) + r (i; ua ) = `e(i; a) ; 8 i 2 S; a 2 U (i) For the control vector given by (33) the KL-divergence cost is X X r (i; ua ) = pij exp uaj uaj = h (i; a) peij (a) log pij j j where h (i; a) is the entropy of the transition probability distribution in the discrete MDP: X h (i; a) = peij (a) log (e pij (a)) j The constraints (34) are then equivalent to X q (i) baj (i) log pij = `e(i; a) j h (i; a) (34) (35) (36) (37) De ne the vector y (i) with elements `e(i; a) h (i; a), and the vector x (i) with elements log pij . 
The dimensionality of y (i) is jU (i)j while the dimensionality of x (i) is jN (i)j jU (i)j. The latter inequality follows from the assumption that B (i) has full row-rank. Suppressing the dependence on the current state i, the constraints (34) can be written in matrix notation as q1 (38) Bx = y Since the probabilities pij must sum up to 1, the vector x must satisfy the additional constraint X exp (xj ) = 1 (39) j b be any vector such that We are given B; y and need to compute q; x satisfying (38, 39). Let x b = B y y where y denotes the Moore-Penrose pseudoinverse. Since B is Bb x = y, for example x a stochastic matrix we have B1 = 1, and so q1 B (b x + q1) = Bb x=y (40) Fig 2A Fig 2B Fig 2C 40 * * 30 20 R 2 = 0.986 10 * 0 * 0 20 40 60 value in discrete MDP b + q1 satis es (38) for all q, and we can adjust q to also satisfy (39), namely Therefore x = x X q = log exp (b xj ) (41) j This completes the embedding. If the above q turns out to be negative, we can either choose another b by adding an element from the null-space of B, or scale all costs `e(i; a) by a positive constant. x Such scaling does not affect the optimal control law for the discrete MDP, but it makes the elements of B y y more negative and thus q becomes more positive. We now illustrate this construction with the example in Fig 2. The grid world has a number of obstacles (black squares) and two absorbing states (white stars). The possible next states are the immediate neighbors including the current state. Thus jN (i)j is at most 9. The discrete MDP has jN (i)j 1 actions corresponding to stochastic transitions to each of the neighbors. For each action, the transition probability to the "desired" state is 0:8 and the remaining 0:2 is equally distributed among the other states. The costs `e(i; a) are random numbers between 1 and 10 ? which is why the optimal value function shown in grayscale appears irregular. Fig 2A shows the optimal value function for the discrete MDP. Fig 2B shows the optimal value function for the corresponding continuous MDP. The scatterplot in Fig 2C shows the optimal values in the discrete and continuous MDP (each dot is a state). The values in the continuous MDP are numerically smaller ? which is to be expected since the control space is larger. Nevertheless, the correlation between the optimal values in the discrete and continuous MDPs is excellent. We have observed similar performance in a number of randomly-generated problems. 5 Z-learning So far we assumed that a model of the continuous MDP is available. We now turn to stochastic approximations of the optimal value function which can be used when a model is not available. All we have access to are samples (ik ; jk ; qk ) where ik is the current state, jk is the next state, qk is the state cost incurred at ik , and k is the sample number. Equation (18) can be rewritten as X z (i) = exp ( q (i)) pij z (j) = exp ( q (i)) EP [z (j)] (42) j This suggests an obvious stochastic approximation zb to the function z, namely zb (ik ) b (ik ) k) z (1 + k exp ( qk ) zb (jk ) (43) where the sequence of learning rates k is appropriately decreased as k increases. The approximation to v (i) is simply log (b z (i)). We will call this algorithm Z-learning. Let us now compare (43) to the Q-learning algorithm applicable to discrete MDPs. Here we have samples (ik ; jk ; `k ; uk ). The difference is that `k is now a total cost rather than a state cost, and we have a control uk generated by some control policy. 
The update equation for Q-learning is

    \hat{Q}(i_k, u_k) \leftarrow (1 - \eta_k)\, \hat{Q}(i_k, u_k) + \eta_k \Big[ \ell_k + \min_{u' \in U(j_k)} \hat{Q}(j_k, u') \Big]    (44)

[Figure 3A, 3B: approximation error (45) of Z-learning and Q-learning versus the number of state transitions, for a smaller grid world (A) and a larger grid world (B).]

To compare the two algorithms, we first constructed continuous MDPs with q(i) = 1 and transitions to the immediate neighbors in the grid worlds shown in Fig 3. For each state we found the optimal transition probabilities (14). We then constructed a discrete MDP which had one action (per state) that caused the same transition probabilities, and the corresponding cost was the same as in the continuous MDP. We then added |N(i)| - 1 other actions by permuting the transition probabilities. Thus the discrete and continuous MDPs were guaranteed to have identical optimal value functions. Note that the goal here is no longer to approximate discrete with continuous MDPs, but to construct pairs of problems with identical solutions allowing fair comparison of Z-learning and Q-learning.

We ran both algorithms with the same random policy. The learning rates decayed as \eta_k = c/(c + t(k)), where the constant c was optimized separately for each algorithm and t(k) is the run to which sample k belongs. When the MDP reaches an absorbing state a new run is started from a random initial state. The approximation error plotted in Fig 3 is defined as

    \max_i |v(i) - \hat{v}(i)| / \max_i v(i)    (45)

and is computed at the end of each run. For small problems (Fig 3A) the two algorithms had identical convergence; however, for larger problems (Fig 3B) the new Z-learning algorithm was clearly faster. This is not surprising: even though Z-learning is as model-free as Q-learning, it benefits from the analytical developments in this paper, and in particular it does not need a maximization operator or state-action values. The performance of Q-learning can be improved by using a non-random (say, greedy) policy. If we combine Z-learning with importance sampling, the performance of Z-learning can also be improved by using such a policy.

6 Summary

We introduced a new class of MDPs which have a number of remarkable properties, can be solved efficiently, and yield accurate approximations to traditional MDPs. In general, no single approach is likely to be a magic wand which simplifies all optimal control problems. Nevertheless the results so far are very encouraging. While the limitations remain to be clarified, our approach appears to have great potential and should be thoroughly investigated.

References

[1] B. Scholkopf and A. Smola, Learning with Kernels. MIT Press (2002)
[2] S. Boyd and L. Vandenberghe, Convex Optimization. Cambridge University Press (2004)
[3] D. Bertsekas, Dynamic Programming and Optimal Control (2nd ed). Athena Scientific (2000)
[4] F. Chung, Spectral Graph Theory. CBMS Regional Conference Series in Mathematics (1997)
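For concreteness, the two updates compared in Section 5 can be written in a few lines; this is our own minimal tabular sketch, not the paper's experimental code:

```python
import numpy as np

# Tabular forms of the Z-learning update (43) and the Q-learning
# update (44). The scalar learning rate eta is a simplification.

def z_update(z, i, j, q, eta):
    # z(i) <- (1 - eta) z(i) + eta exp(-q) z(j), Eq. (43);
    # needs only passive samples (i, j, q) from the uncontrolled chain.
    z[i] = (1.0 - eta) * z[i] + eta * np.exp(-q) * z[j]

def q_update(Q, i, a, j, cost, eta, absorbing):
    # Q(i,a) <- (1 - eta) Q(i,a) + eta [cost + min_a' Q(j,a')], Eq. (44)
    target = cost + (0.0 if absorbing[j] else Q[j].min())
    Q[i, a] = (1.0 - eta) * Q[i, a] + eta * target

# Z-learning recovers v(i) = -log z(i) without actions or a min operator;
# Q-learning must store and minimize over state-action values.
```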
Efficient Structure Learning of Markov Networks using L1-Regularization

Su-In Lee, Varun Ganapathi, Daphne Koller
Department of Computer Science, Stanford University, Stanford, CA 94305-9010
{silee,varung,koller}@cs.stanford.edu

Abstract

Markov networks are commonly used in a wide variety of applications, ranging from computer vision, to natural language, to computational biology. In most current applications, even those that rely heavily on learned models, the structure of the Markov network is constructed by hand, due to the lack of effective algorithms for learning Markov network structure from data. In this paper, we provide a computationally efficient method for learning Markov network structure from data. Our method is based on the use of L1 regularization on the weights of the log-linear model, which has the effect of biasing the model towards solutions where many of the parameters are zero. This formulation converts the Markov network learning problem into a convex optimization problem in a continuous space, which can be solved using efficient gradient methods. A key issue in this setting is the (unavoidable) use of approximate inference, which can lead to errors in the gradient computation when the network structure is dense. Thus, we explore the use of different feature introduction schemes and compare their performance. We provide results for our method on synthetic data, and on two real-world data sets: pixel values in the MNIST data, and genetic sequence variations in the human HapMap data. We show that our L1-based method achieves considerably higher generalization performance than the more standard L2-based method (a Gaussian parameter prior) or pure maximum-likelihood learning. We also show that we can learn MRF network structure at a computational cost that is not much greater than learning parameters alone, demonstrating the existence of a feasible method for this important problem.

1 Introduction

Undirected graphical models, such as Markov networks or log-linear models, have been used in an ever-growing variety of applications, including computer vision, natural language, computational biology, and more. However, as this modeling framework is used in increasingly more complex and less well-understood domains, the problem of selecting from the exponentially large space of possible network structures becomes of great importance. Including all of the possibly relevant interactions in the model generally leads to overfitting, and can also lead to difficulties in running inference over the network. Moreover, learning a "good" structure can be an important task in its own right, as it can provide insight about the underlying structure in the domain. Unfortunately, the problem of learning Markov networks remains a challenge. The key difficulty is that the maximum likelihood (ML) parameters of these networks have no analytic closed form; finding these parameters requires an iterative procedure (such as conjugate gradient [15] or BFGS [5]), where each iteration runs inference over the current model. This type of procedure is computationally expensive even for models where inference is tractable. The problem of structure learning is considerably harder. The dominant type of solution to this problem uses greedy local heuristic search, which incrementally modifies the model by adding and possibly deleting features. One approach [6, 14] adds features so as to greedily improve the model likelihood; once a feature is added, it is never removed.
As the feature addition step is heuristic and greedy, this can lead to the inclusion of unnecessary features, and thereby to overly complex structures and overfitting. An alternative approach [1, 7] explicitly searches over the space of low-treewidth models, but the utility of such models in practice is unclear; indeed, hand-designed models for real-world problems generally do not have low tree-width. Moreover, in all of the greedy heuristic search methods, the learned network is (at best) a local optimum of a penalized likelihood score.

In this paper, we propose a different approach for learning the structure of a log-linear graphical model (or a Markov network). Rather than viewing it as a combinatorial search problem, we embed the structure selection step within the problem of parameter estimation: features that have weight zero (in log-space) are degenerate, and do not induce dependencies between the variables they involve. To appropriately bias the model towards sparsity, we use a technique that has become increasingly popular in the context of supervised learning, in problems that involve a large number of features, many of which may be irrelevant. It has long been known [21] that using L1-regularization over the model parameters, optimizing a joint objective that trades off fit to data with the sum of the absolute values of the parameters, tends to lead to sparse models, where many weights have value 0. More recently, Dudik et al. (2004) showed that density estimation in log-linear models using L1-regularized likelihood has sample complexity that grows only logarithmically in the number of features of the log-linear model; Ng (2004) shows a similar result for L1-regularized logistic regression. These results show that this approach is useful for selecting the relevant features from a large number of irrelevant ones. Other recent work proposes effective algorithms for L1-regularized generalized linear models (e.g., [18, 10, 9]), support vector machines (e.g., [25]), and feature selection in log-linear models encoding natural language grammars [19].

Surprisingly, the use of L1-regularization has not been proposed for the purpose of structure learning in general Markov networks. In this paper, we explore this approach, and discuss issues that are important to its effective application to large problems. A key point is that, for a given L1-based model score and given candidate feature set F̄, we have a fixed convex optimization problem that admits a unique optimal solution. Due to the properties of the L1 score, in this solution many features will have weight 0, generally leading to a sparse network structure. However, it is generally impractical to simply initialize the model to include all possible features: exact inference in such a model is almost invariably intractable, and approximate inference methods such as loopy belief propagation [17] are likely to give highly inaccurate estimates of the gradient, leading to poorly learned models. Thus, we propose an algorithm schema that gradually introduces features into the model, and lets the L1-regularization scheme eliminate them via the optimization process. We explore the use of different approaches for feature introduction, one based on the gain-based method of Della Pietra, Della Pietra and Lafferty [6] and one on the grafting method of Perkins, Lacker and Theiler [18]. We provide a sound termination condition for the algorithm based on the criterion proposed by Perkins et al.
[18]; given correct estimates of the gradient, this algorithm is guaranteed to terminate only at the unique global optimum, for any reasonable feature introduction method. We test our method on synthetic data generated from known MRFs and on two real-world tasks: modeling the joint distribution of pixel values in the MNIST data [12], and modeling the joint distribution of genetic sequence variations, single-nucleotide polymorphisms (SNPs), in the human HapMap data [3]. Our results show that L1-regularization out-performs other approaches, and provides an effective method for learning MRF structure even in large, complex domains.

2 Preliminaries

We focus our presentation on the framework of log-linear models, which forms a convenient basis for a discussion of learning. Let X = {X_1, ..., X_n} be a set of discrete-valued random variables. A log-linear model is a compact representation of a probability distribution over assignments to X. The log-linear model is defined in terms of a set of feature functions f_k(X_k), each of which is a function that defines a numerical value for each assignment x_k to some subset X_k ⊆ X. Given a set of feature functions F = {f_k}, the parameters of the log-linear model are weights θ = {θ_k : f_k ∈ F}. The overall distribution is then defined as:

    P_θ(x) = (1/Z(θ)) exp(Σ_{f_k ∈ F} θ_k f_k(x_k)),

where x_k is the assignment to X_k within x, and Z(θ) is the partition function that ensures that the distribution is normalized (so that all entries sum to 1). Note that this definition of features encompasses both "standard" features that relate unobserved network variables (e.g., the part of speech of a word in a sentence) to observed elements in the data (e.g., the word itself), and structural features that encode the interaction between hidden variables in the model.

A log-linear model induces a Markov network over X, where there is an edge between every pair of variables X_i, X_j that appear together in some feature f_k(X_k) (X_i, X_j ∈ X_k). The clique potentials are constructed from the log-linear features in the obvious way. Conversely, every Markov network can be encoded as a log-linear model by defining a feature which is an indicator function for every assignment of variables x_c to a clique X_c. The mapping to Markov networks is useful, as most inference algorithms, such as belief propagation [17, 16], operate on the graph structure of the Markov network.

The standard learning problem for MRFs is formulated as follows. We are given a set of IID training instances D = {x[1], ..., x[M]}, each consisting of a full assignment to the variables in X. Our goal is to output a log-linear model M over X, which consists of a set of features F and a parameter vector θ that specifies a weight for each f_k ∈ F. The log-likelihood function log P(D | M) has the following form:

    ℓ(M : D) = Σ_{f_k ∈ F} θ_k f_k(D) − M log Z(θ) = θᵀ f(D) − M log Z(θ),    (1)

where f_k(D) = Σ_{m=1}^{M} f_k(x_k[m]) is the sum of the feature values over the entire data set, f(D) is the vector where all of these aggregate features have been arranged in the same order as the parameter vector, and θᵀ f(D) is a vector dot-product operation. There is no closed-form solution for the parameters that maximize Eq. (1), but the objective is concave, and can therefore be optimized using numerical optimization procedures such as conjugate gradient [15] or BFGS [5]. The gradient of the log-likelihood is:

    ∂ℓ(M : D)/∂θ_k = f_k(D) − M E_{x∼P_θ}[f_k(x)].    (2)

This expression has a particularly intuitive form: the gradient attempts to make the feature counts in the empirical data equal to their expected counts relative to the learned model. Note that, to compute the expected feature counts, we must perform inference relative to the current model. This inference step must be performed at every iteration of the gradient process.
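To make Eq. (2) concrete, a minimal sketch that computes the gradient with the model expectations obtained by brute-force enumeration of the partition function; this is only feasible for a handful of binary variables and stands in for the (approximate) inference step used in practice.

```python
import itertools
import numpy as np

def loglik_gradient(features, theta, data):
    """Gradient (2): grad_k = f_k(D) - M * E_{x ~ P_theta}[f_k(x)].

    features: list of functions f_k(x) -> float over full assignments x
              (each f_k looks only at its own subset of variables);
    theta:    weight vector, one entry per feature;
    data:     list of full assignments (tuples of {0,1} values).
    """
    n, M = len(data[0]), len(data)
    # empirical aggregate counts f_k(D)
    emp = np.array([sum(f(x) for x in data) for f in features], dtype=float)
    # model expectations via explicit enumeration of all 2^n assignments
    states = list(itertools.product([0, 1], repeat=n))
    scores = np.array([sum(t * f(x) for t, f in zip(theta, features))
                       for x in states])
    p = np.exp(scores - scores.max())
    p /= p.sum()                                   # P_theta(x) for every x
    exp_counts = np.array([sum(pi * f(x) for pi, x in zip(p, states))
                           for f in features])
    return emp - M * exp_counts
```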
3 L1-Regularized Structure Learning

We formulate our structure learning problem as follows. We assume that there is a (possibly very large) set of candidate features F̄, from which we wish to select a subset F ⊆ F̄ for inclusion in the model. This problem is generally solved using a heuristic search over the combinatorial space of possible feature subsets. Our approach addresses it as a search over the possible parameter vectors θ ∈ R^{|F̄|}. Specifically, rather than optimizing the log-likelihood itself, we introduce a Laplacian parameter prior for each feature f_k, which takes the form P(θ_k) = (β_k/2) exp(−β_k |θ_k|). We define P(θ) = Π_k P(θ_k). We now optimize the posterior probability P(D, θ) = P(D | θ)P(θ) rather than the likelihood. Taking the logarithm and eliminating constant terms, we obtain the following objective:

    max_θ [θᵀ f(D) − M log Z(θ) − Σ_k β_k |θ_k|].    (3)

In most cases, the same prior is used for all features, so we have β_k = β for all k. This objective is convex, and can be optimized efficiently using methods such as conjugate gradient or BFGS, although care needs to be taken with the discontinuity of the derivative at 0. Thus, in principle, we can simply optimize this objective to obtain its globally optimal parameter assignment.
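A minimal sketch of evaluating the objective (3) together with one valid (super)gradient direction; exp_counts_fn and log_Z_fn are assumed callbacks supplied by an inference routine, and taking the minimum-norm subgradient at θ_k = 0 is one standard way to handle the discontinuity, not necessarily the optimization used in this paper.

```python
import numpy as np

def l1_objective_and_subgrad(theta, emp_counts, exp_counts_fn, log_Z_fn, M, beta):
    """Objective (3) and a subgradient for (sub)gradient ascent.

    emp_counts:     vector of aggregate counts f(D)
    exp_counts_fn:  theta -> E_{P_theta}[f]  (in practice via BP)
    log_Z_fn:       theta -> log Z(theta)
    Returns (value, subgradient).
    """
    value = theta @ emp_counts - M * log_Z_fn(theta) - beta * np.abs(theta).sum()
    grad_ll = emp_counts - M * exp_counts_fn(theta)   # Eq. (2)
    sub = grad_ll - beta * np.sign(theta)             # where theta_k != 0
    zero = theta == 0
    # at theta_k = 0 the minimum-norm choice is zero whenever |grad_ll| <= beta,
    # which is exactly the activation test used for feature introduction below
    sub[zero] = np.sign(grad_ll[zero]) * np.maximum(np.abs(grad_ll[zero]) - beta, 0.0)
    return value, sub
```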
The objective of Eq. (3) should be contrasted with the one obtained for the more standard parameter prior used for log-linear models: the mean-zero Gaussian prior P(θ_k) ∝ exp(−θ_k²/2σ²). The Gaussian prior induces a regularization term that is quadratic in θ_k, which penalizes large features much more than smaller ones. Conversely, L1-regularization still penalizes small terms strongly, thereby forcing parameters to 0. Overall, it is known that, empirically, optimizing an L1-regularized objective leads to a sparse representation with a relatively small number of non-zero parameters. Aside from this intuitive argument, recent theoretical results also provide a formal justification for the use of L1-regularization over other approaches: the analysis of Dudik et al. (2004) and Ng (2004) suggests that this form of regularization is effective at identifying relevant features even with a relatively small number of samples. Building directly on the results of Dudik et al. (2004), we can show the following result:

Corollary 3.1: Let X = {X_1, ..., X_n} be a set of variables, each of domain size d, and let P*(X) be a distribution. Let F̄ be the set of indicator features over all subsets of variables X_k ⊆ X of cardinality at most c, and let δ, ε, B > 0. Let θ*_B be the parameterization over F̄ that optimizes

    θ*_B = arg max_{θ : ‖θ‖₁ ≤ B} E_{ξ∼P*}[ℓ(ξ : θ)].

For a data set D, let θ̂_D be the assignment that optimizes Eq. (3), with regularization parameter β_k = β = √(c ln(2nd/δ)/(2m)) for all k. Then, with probability at least 1 − δ, for a data set D of IID instances from P* of size

    m ≥ (2cB²/ε²) ln(2nd/δ),

we have that

    E_{ξ∼P*}[ℓ(ξ : θ̂_D)] ≥ E_{ξ∼P*}[ℓ(ξ : θ*_B)] − ε.

In words, using the L1-regularized log-likelihood objective, we can learn a Markov network with a maximal clique size c whose expected log-likelihood relative to the true underlying distribution is at most ε worse than the log-likelihood of the optimal Markov network in this class whose L1-norm is at most B. The number of samples required grows logarithmically in the number of nodes in the network, and polynomially in B. The dependence on B is quite natural, indicating that more samples are required to learn networks containing more "strong" interactions. Note, however, that if we bound the magnitude of each potential in the Markov network, then B = O((nd)^c), so that a polynomial number of data instances suffices.

4 Incremental Feature Introduction

The above discussion implicitly assumed that we can find the global optimum of Eq. (3) by simple convex optimization. However, we cannot simply include all of the features in the model in advance, and use only parameter optimization to prune away the irrelevant ones. Recall that computing the gradient requires performing inference in the resulting model. If we have too many features, the model may be too densely connected to allow effective inference. Even approximate inference algorithms, such as belief propagation, tend to degrade as the density of the network increases; for example, BP algorithms are less likely to converge, and the answers they return are typically much less accurate. Thus, our approach also contains a feature introduction component, which gradually selects features to add into the model, allowing the optimization process to search for the optimal values for their parameters.

More precisely, our algorithm maintains a set of active features F ⊆ F̄. An inactive feature f_k has its parameter θ_k set to 0; the parameters of active features are free to be changed when optimizing the objective Eq. (3). In addition to various simple baseline methods, we explore two feature introduction methods, both of which are greedy and myopic, in that they compute some heuristic estimate of the likely benefit to be gained from introducing a single feature into the active set. The grafting procedure of Perkins et al. [18], which was developed for feature selection in standard classification tasks, selects features based on the gradient of these parameters: we first optimize the objective relative to the current active features F and their weights, so that, at convergence, the gradient relative to these features is zero. Then, for each inactive feature f, we compute the partial derivative of the objective Eq. (3) relative to θ_f, and select the one whose gradient is largest. A more conservative estimate is obtained from the gain-based method of Della Pietra et al. [6]. This method was designed for the log-likelihood objective. It begins by optimizing the parameters relative to the current active set F. Then, for each inactive feature f, it computes the log-likelihood gain of adding that feature, assuming that we could optimize its feature weight arbitrarily, but that the weights of all other features are held constant. It then introduces the feature with the greatest gain. Della Pietra et al. show that the gain is a concave objective that can be computed efficiently using a one-dimensional line search. For the restricted case of binary-valued features, they provide a closed-form solution for the gain. Our task is to compute not the optimal gain in log-likelihood, but rather the optimal gain of Eq. (3).
It is not difficult to see that the gain in this objective, which differs from the log-likelihood in only a linear term, is also a concave function that can be optimized using line search. Moreover, for the case of binary-valued features, we can also provide a closed-form solution for the gain. The change in the objective function for introducing a feature f_k is:

    Δ_{L1} = θ_k f_k(D) − β|θ_k| − M log[exp(θ_k) P_Θ(f_k) + P_Θ(¬f_k)],

where M is the number of training instances. If we take the derivative of the above function and set it to zero, we also get a closed-form solution:

    θ_k = log[ (f_k(D) − β sign(θ_k)) P_Θ(¬f_k) / ((M − f_k(D) + β sign(θ_k)) P_Θ(f_k)) ].

Both methods are heuristic, in that they consider only the potential gain of adding a single feature in isolation, assuming all other weights are held constant. However, the grafting method is more optimistic, in that it estimates the value of adding a single feature via the slope of adding it, whereas the gain-based approach also considers, intuitively, how far one can go in that direction before the gain "peaks out". The gain-based heuristic is, in fact, a lower bound on the actual gain obtained from adding this feature, while allowing the other features to also adapt. Overall, the gain-based heuristic provides a better estimate of the value of adding the feature, albeit at slightly greater computational cost (except in the limited cases where a closed-form solution can be found).

As observed by Perkins et al. [18], the use of the L1-regularized objective also provides us with a sound stopping criterion for any incremental feature-induction algorithm. If we have that, for every inactive f_k ∉ F, the gradient of the log-likelihood satisfies |∂ℓ(M : D)/∂θ_k| ≤ β, then the gradient of the objective in any direction is non-positive, and the objective is at a stationary point. Importantly, as the overall objective is a concave function, it has a unique global maximum. Hence, once the termination condition is achieved, we are guaranteed that we are at the global maximum, regardless of the feature introduction method used. Thus, assuming the algorithm is run until the convergence criterion is satisfied, there is no impact of the feature introduction heuristic on the final outcome, but only on the computational complexity.

Finally, constantly evaluating all of the non-active candidate features can be computationally prohibitive when many features are possible. Even in pairwise Markov networks, when the number of nodes is large, a quadratic number of candidate edges can become unmanageable. In this case, we must generally pre-select a smaller set of candidate features, and ignore the others entirely. One very natural method for pre-selecting edges is to train an L1-regularized logistic regression classifier for each variable given all of the others, and then use only the edges that are used in these individual classifiers. This approach is similar to the work of Wainwright et al. [22] (done in parallel with our work), who proposed the use of L1-regularized pseudo-likelihood for asymptotically learning a Markov network structure.
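Putting the pieces together, a minimal sketch of a grafting-style outer loop with the stopping test above; optimize_active and objective_grad stand in for the inner convex solver and the inference-backed gradient, and the candidate pre-selection is assumed to have happened already. This is a sketch of the schema, not the paper's implementation.

```python
def learn_structure(candidates, objective_grad, optimize_active, beta,
                    max_features=500):
    """Grafting-style incremental feature introduction for objective (3).

    candidates:      list of candidate feature ids (e.g., pre-selected edges)
    objective_grad:  (f_id, theta) -> d(log-likelihood)/d(theta_f) at theta_f = 0
    optimize_active: active -> dict {f_id: weight} optimizing (3) over the
                     active set (all other weights held at zero)
    Stops when |grad| <= beta for every inactive feature (Perkins et al.),
    which at a concave objective certifies the global optimum.
    """
    candidates = list(candidates)
    active, theta = [], {}
    while len(active) < max_features:
        theta = optimize_active(active)              # inner convex optimization
        inactive = [f for f in candidates if f not in active]
        if not inactive:
            break
        grads = {f: objective_grad(f, theta) for f in inactive}
        best = max(grads, key=lambda f: abs(grads[f]))
        if abs(grads[best]) <= beta:
            break                                    # stationary point of Eq. (3)
        active.append(best)                          # introduce the feature
    return theta
```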
5 The Use of Approximate Inference

All of the steps in the above algorithm rely on the use of inference for computing key quantities: the gradient is needed for the parameter optimization, for the grafting method, and for the termination condition, and the expression for the gradient requires the computation of marginal probabilities relative to our current model. Similarly, the computation of the gain also requires inference. As we discussed above, in most of the networks that are useful models for real applications, exact inference is intractable. Therefore, we must resort to approximate inference, which results in errors in the gradient. While many approximate inference methods have been proposed, one of the most commonly used is the general class of loopy belief propagation (BP) algorithms [17, 16, 24]. The use of an approximate inference algorithm such as BP raises several important points.

One important issue relates to the computation of the gradient or the gain for features that are currently inactive. The belief propagation algorithm, when executed on a particular network with a set of active features F, creates a cluster for every subset of variables X_k that appears as the scope of a feature f_k(X_k). The output of the BP inference process is a set of marginal probabilities over all of the clusters; thus, it returns the necessary information for computing the expected sufficient statistics in the gradient of the objective (see Eq. (2)). However, for features f_k(X_k) that are currently inactive, there is no corresponding cluster in the induced Markov network, and hence, in most cases, the necessary marginal probabilities over X_k will not be computed by BP. We can approximate this marginal probability by extracting a subtree of the calibrated loopy graph that contains all of the variables in X_k. At convergence of the BP algorithm, every subtree of the loopy graph is calibrated, in that all of the belief potentials must agree [23]. Thus, we can view the subtree as a calibrated clique tree, and use standard dynamic programming methods over the tree (see, e.g., [4]) to extract an approximate joint distribution over X_k. We note that this computation is exact for tree-structured cluster graphs, but approximate otherwise, and that the choice of tree is not obvious and affects the accuracy of the answers.

A second key issue is that the performance of BP algorithms generally degrades significantly as the density of the network increases: they are less likely to converge, and the answers they return are typically much less accurate. Moreover, non-convergence of the inference is more common when the network parameters are allowed to take larger, more extreme values; see, for example, [20, 11, 13] for some theoretical results supporting this empirical phenomenon. Thus, it is important to keep the model amenable to approximate inference, and thereby continue to improve, for as long as possible. This observation has two important consequences. First, while different feature introduction schemes achieve the same results when using exact inference, their outcomes can vary greatly when using approximate inference, due to differences in the structure of the networks arising during the learning process. Thus, as we shall see, better feature introduction methods, which introduce the more relevant features first, work much better in practice. Second, in order to keep the inference feasible for as long as possible, we utilize an annealing schedule for the regularization parameter β, beginning with large values of β, leading to greater sparsification of the structure, and then gradually reducing β, allowing additional (weaker) features to be introduced. This method allows a greater part of the learning to be executed with a more robust model.
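A minimal sketch of the annealing wrapper; the geometric decay factor and the warm start from the previous stage's weights are illustrative assumptions, since the text does not specify the schedule.

```python
def learn_with_annealing(learn_at_beta, beta_start=10.0, beta_end=1.0, decay=0.7):
    """Anneal the L1 penalty: start with strong regularization (sparse,
    BP-friendly models), then relax it so weaker features can enter,
    warm-starting each stage from the previous solution."""
    theta, beta = None, beta_start
    while beta >= beta_end:
        theta = learn_at_beta(beta, warm_start=theta)
        beta *= decay
    return theta
```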
6 Results

In our experiments, we focus on binary pairwise Markov networks, where each feature function is an indicator function for a certain assignment to a pair of nodes. As computing the exact log-likelihood is intractable in most networks, we use the conditional marginal log-likelihood (CMLL) as our evaluation metric on the learned network. To calculate CMLL, we first divide the variables into two groups: X_hidden and X_observed. Then, for any test instance X[m], we compute

    CMLL(X[m]) = Σ_{X_h ∈ X_hidden[m]} log P(X_h | X_observed[m]).

In practice, we divide the variables into four groups and calculate the average CMLL when observing only one group and hiding the rest. Note that the CMLL is defined only with respect to the marginals (but not the global partition function Z(θ)), which are empirically thought to be more accurate.
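A minimal sketch of the CMLL computation, assuming a conditional_marginals routine (e.g., BP run with the observed variables clamped) that returns P(X_h = 1 | observed) for each hidden variable; the group-splitting scheme follows the description above.

```python
import numpy as np

def cmll(instance, groups, conditional_marginals):
    """Average conditional marginal log-likelihood over group splits.

    instance:              full assignment, numpy array of {0,1}
    groups:                list of index arrays partitioning the variables;
                           each split observes one group and hides the rest
    conditional_marginals: (hidden_idx, observed_idx, observed_vals) ->
                           array of P(X_h = 1 | observed) per hidden variable
    """
    scores = []
    for g in groups:
        observed = np.asarray(g)
        hidden = np.setdiff1d(np.arange(len(instance)), observed)
        p1 = conditional_marginals(hidden, observed, instance[observed])
        vals = instance[hidden]
        scores.append(np.sum(np.log(np.where(vals == 1, p1, 1.0 - p1))))
    return float(np.mean(scores))
```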
We considered three feature induction schemes: (a) Gain: based on the estimated change of gain, (b) Grad: using grafting, and (c) Simple: based on pairwise similarity. Under the Simple scheme, the score of a pairwise feature between X_i and X_j is the mutual information between X_i and X_j. For each scheme, we varied the regularization method: (a) None: no regularization, (b) L1: L1 regularization, and (c) L2: L2 regularization. We note that Gain and Grad performed similarly for L1 and None. Moreover, we used only Grad for L2, because L2 regularization does not admit a closed-form solution for the approximate gain.

Experiments on Synthetically Generated Data. We generated synthetic data through Gibbs sampling on a synthetic network. A network structure with N nodes was generated by treating each possible edge as a Bernoulli random variable and sampling the edges. We chose the parameter of the Bernoulli distribution so that each node had K neighbors on average. In order to analyze the dependence of the performance on the size and connectivity of a network, we varied N and K.

[Figure 1: Results from the experiments on the synthetic data (see text for details).]

We compare our algorithm using L1 regularization against no regularization and L2 regularization in three different ways. Figure 1 summarizes our results on this data set, and includes information about the synthetic networks used for each experiment. The method labeled "True" simply learns the parameters given the true model. In Figure 1(a), we measure performance using CMLL and reconstruction error as the number of training examples increases. As expected, L1 produces the biggest improvement when the number of training instances is small, whereas L2 and None are more prone to overfitting. This effect is much more pronounced when measuring the Hamming distance, the number of disagreeing edges between the learned structure and the true structure. The figure shows that L2 and None learn many spurious edges. Not surprisingly, L1 shows a sparser distribution of the weights, and thereby has a smaller number of edges with non-negligible weights; the structures from None and L2 tend to have many edges with small values. In Figure 1(b), we plot performance as a function of the density of the synthetic network. As the synthetic network gets denser, L1 increasingly outperforms the other algorithms. This may be because, as the graph gets more dense, each node is indirectly correlated with more other nodes. Therefore, the feature induction algorithm is more likely to introduce a spurious edge, which L1 may later remove, whereas None and L2 do not. In Figure 1(c), we measure the wall-clock time as a function of the size of the synthetic network. Figure 1(c) shows that the computational cost of learning the structure of the network using Gain-L1 is not much more than that of learning the parameters alone. Moreover, L1 increasingly outperforms other regularization methods as the number of nodes increases.

Experiments on MNIST Digit Dataset. Moving to real data, we applied our algorithm to handwritten digits. The MNIST training set consists of 32 × 32 binary images of handwritten digits. In order to speed up inference and learning, we resized the images to 16 × 16. We trained the model, in which each pixel is a variable, for each digit separately, using a data set consisting of 189 to 195 images per digit. For each digit, we used 50 images as training instances and the remainder as test instances. Figure 2(a) compares the CMLL of the different methods. To save space, we show the digits on which the relative difference in performance of L1 compared to the next best competitor is the lowest (digit 5) and highest (digit 0), as well as the average performance.

[Figure 2: Results from the experiments on the MNIST dataset.]

As mentioned earlier, the performance of the regularized algorithm should be insensitive to the feature induction method, assuming inference is exact. However, in practice, because inference is approximate, an induction algorithm that introduces spurious features will affect the quality of inference, and therefore the performance of the algorithm. This effect is substantiated by the poor performance of the Simple-L1 and Simple-L2 methods that introduce features based on mutual information rather than gradient (Grad-) or approximate gain (Gain-). Nevertheless, L1 still outperforms None and L2, regardless of the feature induction algorithm with which it is paired. Figure 2(b) shows a visualization of the MRF learned when modeling digits 4 and 7. Of course, one would expect many short-range interactions, such as the associativity between neighboring pixels, and the algorithm does indeed capture these relationships. (They are not shown in the graph to simplify the analysis of the relationships.) Interestingly, the algorithm picks up long-range interactions, which presumably allow the algorithm to model the variations in the size and shape of hand-written digits.

Experiments on Human Genetic Variation Data. The Human HapMap data set¹ represents the genetic variation over human individuals. Six data sets contain the genotype values over 614 to 1,052 genetic markers (SNPs) from 120 individuals. For each data set, we learned the structure of the Markov network whose nodes are binary-valued SNPs, such that it captures the structure of the human genetic variation. Figure 2(c) compares CMLLs among the three methods for these data sets. For all data sets, L1 shows better performance than L2 and None.

7 Discussion and Future Work

We have presented a simple and effective method for learning the structure of Markov networks. We view the structure learning problem as an L1-regularized parameter estimation task, allowing it to be solved using convex optimization techniques. We show that the computational cost of our method is not considerably greater than pure parameter estimation for a fixed structure, suggesting that MRF structure learning is a feasible option for many applications.

¹ The Human HapMap data are available at: http://www.hapmap.org.

There are some important directions in which our work can be extended.
Currently, our method handles each feature in the log-linear model independently, with no attempt to bias the learning towards sparsity in the structure of the induced Markov network. We can extend our approach to introduce such a bias by using a variant of L1 regularization that penalizes blocks of parameters together, such as the block-L1-norm of [2]. From a theoretical perspective, it would be interesting to show that, at the large sample limit, redundant features are eventually eliminated, so that the learning eventually converges to a minimal structure consistent with the underlying distribution. Similar results were shown by Donoho [8], and can perhaps be adapted to this case. A key limiting factor in MRF learning, and in our approach, is the fact that it requires inference over the model. While our experiments suggest that approximate inference is a viable solution, as the network structure becomes dense, its performance does degrade, especially as the approximate gradient does not always move the parameters to 0, diminishing the sparsifying effect of the L1 regularization, and rendering the inference even less precise. It would be interesting to explore inference methods whose goal is correctly estimating the direction (even if not the magnitude) of the gradient. Finally, it would be interesting to explore the viability of the learned network structures in real-world applications, both for density estimation and for knowledge discovery, for example, in the context of the HapMap data.

References
[1] F. Bach and M. Jordan. Thin junction trees. In NIPS 14, 2002.
[2] F.R. Bach, G.R.G. Lanckriet, and M.I. Jordan. Multiple kernel learning, conic duality, and the SMO algorithm. In Proc. ICML, 2004.
[3] The International HapMap Consortium. The international HapMap project. Nature, 426:789-796, 2003.
[4] Robert G. Cowell and David J. Spiegelhalter. Probabilistic Networks and Expert Systems. Springer-Verlag New York, Inc., Secaucus, NJ, USA, 1999.
[5] H. Daumé III. Notes on CG and LM-BFGS optimization of logistic regression. August 2004.
[6] S. Della Pietra, V.J. Della Pietra, and J.D. Lafferty. Inducing features of random fields. IEEE Transactions on Pattern Analysis and Machine Intelligence, 19(4):380-393, 1997.
[7] A. Deshpande, M.N. Garofalakis, and M.I. Jordan. Efficient stepwise selection in decomposable models. In Proc. UAI, pages 128-135, 2001.
[8] D. Donoho and X. Huo. Uncertainty principles and ideal atomic decomposition, 1999.
[9] A. Genkin, D.D. Lewis, and D. Madigan. Large-scale Bayesian logistic regression for text categorization. 2004.
[10] J. Goodman. Exponential priors for maximum entropy models. In North American ACL, 2005.
[11] Alexander T. Ihler, John W. Fischer III, and Alan S. Willsky. Loopy belief propagation: Convergence and effects of message errors. J. Mach. Learn. Res., 6:905-936, 2005.
[12] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278-2324, November 1998.
[13] Martijn A.R. Leisink and Hilbert J. Kappen. General lower bounds based on computer generated higher order expansions. In UAI, pages 293-300, 2002.
[14] A. McCallum. Efficiently inducing features of conditional random fields. In Proc. UAI, 2003.
[15] Thomas P. Minka. Algorithms for maximum-likelihood logistic regression. 2001.
[16] K.P. Murphy, Y. Weiss, and M.I. Jordan. Loopy belief propagation for approximate inference: an empirical study. In Proc. UAI, pages 467-475, 1999.
[17] J. Pearl. Probabilistic Reasoning in Intelligent Systems. Morgan Kaufmann, 1988.
[18] S. Perkins, K. Lacker, and J. Theiler. Grafting: Fast, incremental feature selection by gradient descent in function space. Journal of Machine Learning Research, 3:1333-1356, 2003.
[19] Stefan Riezler and Alexander Vasserman. Incremental feature selection and L1 regularization for relaxed maximum-entropy modeling. In Proceedings of EMNLP, 2004.
[20] Sekhar Tatikonda and Michael I. Jordan. Loopy belief propagation and Gibbs measures. In UAI, pages 493-500, 2002.
[21] R. Tibshirani. Regression shrinkage and selection via the lasso. J. Royal Statist. Soc. B, 1996.
[22] Martin J. Wainwright, Pradeep Ravikumar, and John Lafferty. Inferring graphical model structure using ℓ1-regularized pseudo-likelihood. In Advances in Neural Information Processing Systems 19, 2007.
[23] Martin J. Wainwright, Erik B. Sudderth, and Alan S. Willsky. Tree-based modeling and estimation of Gaussian processes on graphs with cycles. In Todd K. Leen, Thomas G. Dietterich, and Volker Tresp, editors, Advances in Neural Information Processing Systems 13, pages 661-667. MIT Press, 2001.
[24] Jonathan S. Yedidia, William T. Freeman, and Yair Weiss. Generalized belief propagation. In Advances in Neural Information Processing Systems 13. MIT Press, 2001.
[25] J. Zhu, S. Rosset, T. Hastie, and R. Tibshirani. 1-norm support vector machines. In Proc. NIPS, 2003.
Towards a general independent subspace analysis

Fabian J. Theis
Max Planck Institute for Dynamics and Self-Organisation & Bernstein Center for Computational Neuroscience
Bunsenstr. 10, 37073 Göttingen, Germany
fabian@theis.name

Abstract

The increasingly popular independent component analysis (ICA) may only be applied to data following the generative ICA model in order to guarantee algorithm-independent and theoretically valid results. Subspace ICA models generalize the assumption of component independence to independence between groups of components. They are attractive candidates for dimensionality reduction methods, however they are currently limited by the assumption of equal group sizes or less general semi-parametric models. By introducing the concept of irreducible independent subspaces or components, we present a generalization to a parameter-free mixture model. Moreover, we relieve the condition of at-most-one-Gaussian by including previous results on non-Gaussian component analysis. After introducing this general model, we discuss joint block diagonalization with unknown block sizes, on which we base a simple extension of JADE to algorithmically perform the subspace analysis. Simulations confirm the feasibility of the algorithm. In the end we discuss extensions to general relational learning tasks.

1 Independent subspace analysis

A random vector Y is called an independent component of the random vector X if there exists an invertible matrix A and a decomposition X = A(Y, Z) such that Y and Z are stochastically independent. The goal of a general independent subspace analysis (ISA) or multidimensional independent component analysis is the decomposition of an arbitrary random vector X into independent components. If X is to be decomposed into one-dimensional components, this coincides with ordinary independent component analysis (ICA). Similarly, if the independent components are required to be of the same dimension k, then this is denoted by multidimensional ICA of fixed group size k, or simply k-ISA. So 1-ISA is equivalent to ICA.

1.1 Why extend ICA?

An important structural aspect in the search for decompositions is the knowledge of the number of solutions, i.e. the indeterminacies of the problem. Without it, the result of any ICA or ISA algorithm cannot be compared with other solutions, so for instance blind source separation (BSS) would be impossible. Clearly, given an ISA solution, invertible transforms in each component (scaling matrices L) as well as permutations of components of the same dimension (permutation matrices P) give again an ISA of X. And indeed, in the special case of ICA, scaling and permutation are already all indeterminacies, given that at most one Gaussian is contained in X [6]. This is one of the key theoretical results in ICA, allowing the usage of ICA for solving BSS problems and hence stimulating many applications. It has been shown that also for k-ISA, scalings and permutations as above are the only indeterminacies [11], given some additional rather weak restrictions to the model. However, a serious drawback of k-ISA (and hence of ICA) lies in the fact that the requirement of a fixed group size k does not allow us to apply this analysis to an arbitrary random vector.

[Figure 1: Applying ICA to a random vector X = AS that does not fulfill the ICA model; here S is chosen to consist of a two-dimensional and a one-dimensional irreducible component. Shown are the statistics over 100 runs of the Amari error (y-axis: crosstalking error, range 0 to 4) of the random original and the reconstructed mixing matrix using the three ICA algorithms FastICA, JADE and Extended Infomax. Clearly, the original mixing matrix could not be reconstructed in any of the experiments. However, interestingly, the latter two algorithms do indeed find an ISA up to permutation, which will be explained in section 3.]
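A minimal sketch of the misfit experiment behind Figure 1, under stated assumptions: the ring-shaped two-dimensional source is just one convenient irreducible example, and the amari_error normalization shown is one common variant of the index; the text specifies neither.

```python
import numpy as np

def amari_error(P):
    """Amari performance index of P = W_est @ A: zero iff P is a scaled
    permutation. One common normalization; stated here as an assumption."""
    P = np.abs(P)
    row = (P / P.max(axis=1, keepdims=True)).sum(axis=1) - 1
    col = (P / P.max(axis=0, keepdims=True)).sum(axis=0) - 1
    return (row.sum() + col.sum()) / (2 * len(P))

rng = np.random.default_rng(0)
n = 10000
# 2-d irreducible component: uniform on a noisy ring (dependent within the
# pair, hence not further decomposable into independent 1-d components)
phi = rng.uniform(0, 2 * np.pi, n)
r = 1.0 + 0.1 * rng.standard_normal(n)
s12 = np.vstack([r * np.cos(phi), r * np.sin(phi)])
s3 = rng.laplace(size=(1, n))        # 1-d non-Gaussian component
S = np.vstack([s12, s3])
A = rng.standard_normal((3, 3))      # random mixing matrix
X = A @ S
# Feeding X to an ICA algorithm (e.g., FastICA) to obtain W and inspecting
# amari_error(W @ A) should show the index staying away from zero, as in Fig 1.
```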
Indeed, theoretically speaking, k-ISA may only be applied to random vectors following the k-ISA blind source separation model, which means that they have to be mixtures of a random vector that consists of independent groups of size k. If this is the case, uniqueness up to permutation and scaling holds as noted above; however, if k-ISA is applied to any random vector, a decomposition into groups that are only "as independent as possible" cannot be unique and depends on the contrast and the algorithm. In the literature, ICA is often applied to find representations fulfilling the independence condition as well as possible; however, care has to be taken: the strong uniqueness result is no longer valid, and the results may depend on the algorithm, as illustrated in figure 1. This work aims at finding an ISA model that allows applicability to any random vector. After reviewing previous approaches, we will provide such a model together with a corresponding uniqueness result and a preliminary algorithm.

1.2 Previous approaches to ISA for dependent component analysis

Generalizations of the ICA model that are to include dependencies of multiple one-dimensional components have been studied for quite some time. ISA in the terminology of multidimensional ICA was first introduced by Cardoso [4] using geometrical motivations. His model, as well as the related but independently proposed factorization of multivariate function classes [9], is quite general; however, no identifiability results were presented, and applicability to an arbitrary random vector was unclear. Later, in the special case of equal group sizes (k-ISA), uniqueness results were extended from the ICA theory [11]. Algorithmic enhancements in this setting have recently been studied by [10]. Moreover, if the observations contain additional structures such as spatial or temporal structures, these may be used for the multidimensional separation [13].

Hyvärinen and Hoyer presented a special case of k-ISA by combining it with invariant feature subspace analysis [7]. They model the dependence within a k-tuple explicitly and are therefore able to propose more efficient algorithms without having to resort to the problematic multidimensional density estimation. A related relaxation of the ICA assumption is given by topographic ICA [8], where dependencies between all components are assumed and modelled along a topographic structure (e.g. a 2-dimensional grid). Bach and Jordan [2] formulate ISA as a component clustering problem, which necessitates a model for inter-cluster independence and intra-cluster dependence. For the latter, they propose to use a tree structure as employed by their tree-dependent component analysis. Together with inter-cluster independence, this implies a search for a transformation of the mixtures into a forest, i.e. a set of disjoint trees. However, the above models are all semi-parametric and hence not fully blind. In the following, no additional structures are necessary for the separation.

1.3 General ISA

Definition 1.1. A random vector S is said to be irreducible if it contains no lower-dimensional independent component.
An invertible matrix W is called a (general) independent subspace analysis of X if WX = (S1, ..., Sk) with pairwise independent, irreducible random vectors Si. Note that in this case, the Si are independent components of X. The idea behind this definition is that, in contrast to ICA and k-ISA, we do not fix the size of the groups Si in advance. Of course, some restriction is necessary, otherwise no decomposition would be enforced at all. This restriction is realized by allowing only irreducible components. The advantage of this formulation is that it can clearly be applied to any random vector, although of course a trivial decomposition might be the result in the case of an irreducible random vector. Obvious indeterminacies of an ISA of X are, as mentioned above, scalings, i.e. invertible transformations within each Si, and permutations of Si of the same dimension¹. These are already all indeterminacies, as shown by the following theorem, which extends previous results in the case of ICA [6] and k-ISA [11], where the additional slight assumption of square-integrability, i.e. of existing covariance, has also been made.

Theorem 1.2. Given a random vector X with existing covariance and no Gaussian independent component, an ISA of X exists and is unique except for scaling and permutation.

Existence holds trivially, but uniqueness is not obvious. Due to the limited space, we only give a short sketch of the proof in the following. The uniqueness result can easily be formulated as a subspace extraction problem, and theorem 1.2 follows readily from

Lemma 1.3. Let S = (S1, ..., Sk) be a square-integrable decomposition of S into irreducible independent components Si. If X is an irreducible component of S, then X ~ Si for some i.

Here the equivalence relation ~ denotes equality except for an invertible transformation. The following two lemmata each give a simplification of lemma 1.3 by ordering the components Si according to their dimensions. Some care has to be taken when showing that lemma 1.5 implies lemma 1.4.

Lemma 1.4. Let S and X be defined as in lemma 1.3. In addition assume that dim Si = dim X for i <= l and dim Si < dim X for i > l. Then X ~ Si for some i <= l.

Lemma 1.5. Let S and X be defined as in lemma 1.4, and let l = 1 and k = 2. Then X ~ S1.

In order to prove lemma 1.5 (and hence the theorem), it is sufficient to show the following lemma:

Lemma 1.6. Let S = (S1, S2) with S1 irreducible and m := dim S1 > dim S2 =: n. If X = AS is again irreducible for some m x (m + n)-matrix A, then (i) the left m x m-submatrix of A is invertible, and (ii) if X is an independent component of S, the right m x n-submatrix of A vanishes.

(i) follows after some linear algebra, and is necessary to show the more difficult part (ii). For this, we follow the ideas presented in [12] using factorization of the joint characteristic function of S.

1.4 Dealing with Gaussians

In the previous section, Gaussians had to be excluded (or at most one was allowed) in order to avoid additional indeterminacies. Indeed, any orthogonal transformation of two decorrelated and hence independent Gaussians is again independent, so clearly such a strong identification result would not be possible. Recently, a general decomposition model dealing with Gaussians was proposed in the form of the so-called non-Gaussian subspace analysis (NGSA) [3]. It tries to detect a whole non-Gaussian subspace within the data, and no assumption of independence within the subspace is made.
More precisely, given a random vector X, a factorization X = AS with an invertible matrix A, S = (S_N, S_G), and S_N a square-integrable m-dimensional random vector is called an m-decomposition of X if S_N and S_G are stochastically independent and S_G is Gaussian. In this case, X is said to be m-decomposable. X is said to be minimally n-decomposable if X is not (n - 1)-decomposable. According to our previous notation, S_N and S_G are independent components of X. It has been shown that the subspaces of such decompositions are unique [12]:

Theorem 1.7 (Uniqueness of NGSA). The mixing matrix A of a minimal decomposition is unique except for transformations in each of the two subspaces.

Moreover, explicit algorithms can be constructed for identifying the subspaces [3]. This result enables us to generalize theorem 1.2 and to obtain a general decomposition theorem, which characterizes solutions of ISA.

Theorem 1.8 (Existence and Uniqueness of ISA). Given a random vector X with existing covariance, an ISA of X exists and is unique except for permutation of components of the same dimension and invertible transformations within each independent component and within the Gaussian part.

Proof. Existence is obvious. Uniqueness follows after first applying theorem 1.7 to X and then theorem 1.2 to the non-Gaussian part.

[Footnote 1: Note that scaling here implies a basis change in the component Si, so for example in the case of a two-dimensional source component, this might be rotation and shearing. In the example later in figure 3, these indeterminacies can easily be seen by comparing true and estimated sources.]

2 Joint block diagonalization with unknown block-sizes

Joint diagonalization has become an important tool in ICA-based BSS (used for example in JADE) and in BSS relying on second-order temporal decorrelation. The task of (real) joint diagonalization (JD) of a set of symmetric real n x n matrices $\mathcal M := \{M_1, \ldots, M_K\}$ is to find an orthogonal matrix E such that $E^\top M_k E$ is diagonal for all k = 1, ..., K, i.e. to minimize
$$f(E) := \sum_{k=1}^{K} \left\| E^\top M_k E - \mathrm{diagm}(E^\top M_k E) \right\|_F^2$$
with respect to the orthogonal matrix E, where diagm(M) produces a matrix in which all off-diagonal elements of M have been set to zero, and $\|M\|_F^2 := \mathrm{tr}(M M^\top)$ denotes the squared Frobenius norm. The Frobenius norm is invariant under conjugation by an orthogonal matrix, so minimizing f is equivalent to maximizing $g(E) := \sum_{k=1}^{K} \| \mathrm{diag}(E^\top M_k E) \|^2$, where now diag(M) := (m_ii)_i denotes the diagonal of M. For the actual minimization of f, respectively maximization of g, we will use the common approach of Jacobi-like optimization by iterative application of Givens rotations in two coordinates [5].

2.1 Generalization to blocks

In the following we will use a generalization of JD in order to solve ISA problems. Instead of fully diagonalizing all n x n matrices $M_k \in \mathcal M$, in joint block diagonalization (JBD) of $\mathcal M$ we want to determine E such that $E^\top M_k E$ is block-diagonal. Depending on the application, we fix the block structure in advance or try to determine it from $\mathcal M$. We are not interested in the order of the blocks, so the block structure is uniquely specified by fixing a partition of n, i.e. a way of writing n as a sum of positive integers, where the order of the addends is not significant. So let² n = m_1 + ... + m_r with m_1 <= m_2 <= ... <= m_r, and set m := (m_1, ..., m_r) in N^r. An n x n matrix is said to be m-block diagonal if it is of the form
$$\begin{pmatrix} D_1 & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & D_r \end{pmatrix}$$
with arbitrary m_i x m_i matrices D_i.

[Footnote 2: We do not use the convention from Ferrers graphs of specifying partitions in decreasing order, as a visualization of increasing block-sizes seems to be preferable in our setting.]
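To make these objectives concrete, the following NumPy sketch (ours, not from the paper; function and variable names are chosen for illustration) evaluates the JD cost f as well as the off-block cost that the m-block generalization, formalized next, minimizes.

```python
import numpy as np

def jd_cost(E, Ms):
    """JD cost f(E) = sum_k ||E^T M_k E - diagm(E^T M_k E)||_F^2,
    i.e. the total off-diagonal energy after conjugation by E."""
    total = 0.0
    for M in Ms:
        T = E.T @ M @ E
        total += np.sum(T ** 2) - np.sum(np.diag(T) ** 2)
    return total

def jbd_cost(E, Ms, m):
    """m-block variant: only energy outside the m-blocks is penalized."""
    bounds = np.cumsum([0] + list(m))
    inside = np.zeros((bounds[-1], bounds[-1]), dtype=bool)
    for a, b in zip(bounds[:-1], bounds[1:]):
        inside[a:b, a:b] = True        # entries inside a diagonal block
    return sum(np.sum((E.T @ M @ E)[~inside] ** 2) for M in Ms)
```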
As a generalization of JD in the case of a known block structure, we can formulate the joint m-block diagonalization (m-JBD) problem as the minimization of
$$f^{\mathbf m}(E) := \sum_{k=1}^{K} \left\| E^\top M_k E - \mathrm{diagm}^{\mathbf m}(E^\top M_k E) \right\|_F^2$$
with respect to the orthogonal matrix E, where diagm^m(M) produces an m-block diagonal matrix by setting all other elements of M to zero. In practice, due to estimation errors, such an E will not exist, so we speak of approximate JBD and imply minimizing some error measure of non-block-diagonality. Indeterminacies of any m-JBD are m-scaling, i.e. multiplication by an m-block diagonal matrix from the right, and m-permutation, defined by a permutation matrix that only swaps blocks of the same size. Finally, we speak of general JBD if we search for a JBD but no block structure is given; instead it is to be determined from the matrix set. For this it is necessary to require a block structure of maximal length, otherwise trivial or "in-between" solutions could exist (and would obviously exhibit high indeterminacies). Formally, E is said to be a (general) JBD of $\mathcal M$ if $(E, \mathbf m) = \mathrm{argmax}_{\{\mathbf m \,\mid\, \exists E:\, f^{\mathbf m}(E) = 0\}} |\mathbf m|$. In practice, due to errors, a true JBD would always result in the trivial decomposition m = (n), so we define an approximate general JBD by requiring $f^{\mathbf m}(E) < \theta$ for some fixed constant $\theta > 0$ instead of $f^{\mathbf m}(E) = 0$.

2.2 JBD by JD

A few algorithms to actually perform JBD have been proposed; see [1] and references therein. In the following we will simply perform joint diagonalization and then permute the columns of E to achieve block-diagonality; in experiments this turns out to be an efficient solution to JBD [1]. This idea has been formulated as a conjecture [1], essentially claiming that a minimum of the JD cost function f is already a JBD, i.e. a minimum of the function f^m up to a permutation matrix. In the conjecture it is required to use the Jacobi-update algorithm from [5], but this is not necessary, and we can prove the conjecture partially: we want to show that JD implies JBD up to permutation, i.e. if E is a minimum of f, then there exists a permutation P such that f^m(EP) = 0 (given existence of a JBD of $\mathcal M$). But of course f(EP) = f(E), so we will instead show why (certain) JBD solutions are minima of f. However, JD might have additional minima. First note that clearly not every JBD minimizes f, only those such that in each block of size m_k, f(E) restricted to the block is maximal over E in O(m_k). We will call such a JBD block-optimal in the following.

Theorem 2.1. Any block-optimal JBD of $\mathcal M$ (zero of f^m) is a local minimum of f.

Proof. Let E in O(n) be block-optimal with f^m(E) = 0. We have to show that E is a local minimum of f, or equivalently a local maximum of the squared diagonal sum g. After substituting each M_k by $E^\top M_k E$, we may already assume that each M_k is m-block diagonal, so we have to show that E = I is a local maximum of g. Consider the elementary Givens rotation $G_{ij}(\varepsilon)$, defined for i < j and $\varepsilon \in (-1, 1)$ as the orthogonal matrix in which all diagonal elements are 1 except for the two elements $\sqrt{1 - \varepsilon^2}$ in rows i and j, and all off-diagonal elements equal 0 except for the two elements $\varepsilon$ and $-\varepsilon$ at (i, j) and (j, i), respectively. It can be used to construct local coordinates of the d := n(n - 1)/2-dimensional manifold O(n) at I, simply by $\varphi(\varepsilon_{12}, \varepsilon_{13}, \ldots, \varepsilon_{n-1,n}) := \prod_{i<j} G_{ij}(\varepsilon_{ij})$.
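As an aside before completing the proof: the rotation just defined is easy to realize numerically. The sketch below (ours, not the paper's) follows the definition literally.

```python
import numpy as np

def givens(n, i, j, eps):
    """Elementary Givens rotation G_ij(eps) for i < j and eps in (-1, 1):
    diagonal entries sqrt(1 - eps^2) at (i, i) and (j, j), off-diagonal
    entries eps at (i, j) and -eps at (j, i), identity elsewhere."""
    G = np.eye(n)
    c = np.sqrt(1.0 - eps ** 2)
    G[i, i] = G[j, j] = c
    G[i, j], G[j, i] = eps, -eps
    return G
```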
The map $\varphi$ is an embedding, and $\varphi(0) = I$, so we only have to show that $h(\varepsilon) := g(\varphi(\varepsilon))$ has a local maximum at $\varepsilon = 0$. We do this by considering h partially in each coordinate. Let i < j. If i, j are in the same block of m, then h is locally maximal, i.e. negative semi-definite at 0, in the direction $\varepsilon_{ij}$ because of block-optimality. Now assume i and j are from different blocks. After a possible permutation, we may assume that j = i + 1, so that each matrix $M_k \in \mathcal M$ has $(M_k)_{ij} = (M_k)_{ji} = 0$, and we set $a_k := (M_k)_{ii}$, $b_k := (M_k)_{jj}$. Then $G_{ij}(\varepsilon)^\top M_k G_{ij}(\varepsilon)$ can easily be calculated at coordinates (i, i) to (j, j), and indeed entries on the diagonal other than at indices (i, i) and (j, j) are not changed, so
$$\|\mathrm{diag}(G_{ij}(\varepsilon)^\top M_k G_{ij}(\varepsilon))\|^2 - \|\mathrm{diag}(M_k)\|^2 = -2a_k(a_k - b_k)\varepsilon^2 + 2b_k(a_k - b_k)\varepsilon^2 + 2(a_k - b_k)^2\varepsilon^4 = -2(a_k - b_k)^2\varepsilon^2 + 2(a_k - b_k)^2\varepsilon^4.$$
Hence $h(0, \ldots, 0, \varepsilon_{ij}, 0, \ldots, 0) - h(0) = -c\,\varepsilon_{ij}^2 + d\,\varepsilon_{ij}^4$ with $c = d = 2\sum_{k=1}^{K} (a_k - b_k)^2$. Now either c = 0; then also d = 0 and h is constant zero in the direction $\varepsilon_{ij}$. Or, more interestingly, c is non-zero; then c > 0 and therefore h is negative definite in the direction $\varepsilon_{ij}$. Altogether we get a negative definite h at 0 except for "trivial directions", and hence a local maximum at 0.

2.3 Recovering the permutation

In order to perform JBD, we therefore only have to find a JD E of $\mathcal M$. What is left, according to the above theorem, is to find a permutation matrix P such that EP block-diagonalizes $\mathcal M$. In the case of a known block order m, we can employ similar techniques as used in [1, 10], which essentially find P by some combinatorial optimization.

[Figure 2: Performance of the proposed general JBD algorithm in the case of the (unknown) block partition 40 = 1 + 2 + 2 + 3 + 3 + 5 + 6 + 6 + 6 + 6 in the presence of noise with an SNR of 5 dB. Panels: (a) the (unknown) block-diagonal M1; (b) the product of the inverse of the estimated block diagonalizer with the original one, without recovered permutation; (c) the same product after permutation recovery. The product is an m-block diagonal matrix except for permutation within groups of the same sizes, as claimed in section 2.2.]

In the case of an unknown block size, we propose to use the following simple permutation-recovery algorithm: consider the mean diagonalized matrix $D := K^{-1} \sum_{k=1}^{K} E^\top M_k E$. Due to the assumption that $\mathcal M$ is m-block-diagonalizable (with unknown m), each $E^\top M_k E$ and hence also D must be m-block-diagonal except for a permutation P, so it must have the corresponding number of zeros in each column and row. In the approximate JBD case, thresholding with a threshold $\theta$ is necessary, and its choice is non-trivial. We propose using algorithm 1 to recover the permutation; we denote the resulting permuted matrix by P(D) when the algorithm is applied to the input D. P(D) is constructed from the (possibly thresholded) D by iteratively permuting columns and rows in order to guarantee that all non-zeros of D are clustered along the diagonal as closely as possible. This recovers the permutation as well as the partition m of n.

Algorithm 1: Block-diagonality permutation finder
Input: (n x n)-matrix D
Output: block-diagonal matrix P(D) := D' such that D' = P D P^T for a permutation matrix P
D' <- D
for i <- 1 to n do
    repeat
        if (j0 <- min{j | j >= i and d'_ij = 0 and d'_ji = 0}) exists then
            if (k0 <- min{k | k > j0 and (d'_ik != 0 or d'_ki != 0)}) exists then
                swap column j0 of D' with column k0
                swap row j0 of D' with row k0
    until no swap has occurred
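A possible NumPy realization of Algorithm 1 is sketched below (ours, not the authors' code; entries with magnitude at most the threshold are treated as zero).

```python
import numpy as np

def permute_to_block_diagonal(D, theta=0.1):
    """Sketch of Algorithm 1: symmetric row/column swaps (D' = P D P^T)
    pull non-zeros toward the diagonal, exposing the partition m."""
    Dp = D.copy()
    n = Dp.shape[0]
    nonzero = lambda x: abs(x) > theta
    for i in range(n):
        swapped = True
        while swapped:
            swapped = False
            # first j0 >= i with zero entries at (i, j0) and (j0, i)
            j0 = next((j for j in range(i, n)
                       if not nonzero(Dp[i, j]) and not nonzero(Dp[j, i])), None)
            if j0 is None:
                continue            # row i already has no zero slot
            # first k0 > j0 with a non-zero at (i, k0) or (k0, i)
            k0 = next((k for k in range(j0 + 1, n)
                       if nonzero(Dp[i, k]) or nonzero(Dp[k, i])), None)
            if k0 is None:
                continue            # nothing left to pull forward
            Dp[:, [j0, k0]] = Dp[:, [k0, j0]]   # swap columns j0 and k0
            Dp[[j0, k0], :] = Dp[[k0, j0], :]   # swap rows j0 and k0
            swapped = True
    return Dp
```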
We illustrate the performance of the proposed JBD algorithm as follows: we generate a set of K = 100 m-block-diagonal matrices D_k of dimension 40 x 40 with m = (1, 2, 2, 3, 3, 5, 6, 6, 6, 6). They have been generated in blocks of size m with coefficients chosen uniformly at random from [-1, 1], and symmetrized by $D_k \leftarrow (D_k + D_k^\top)/2$. After that, they have been mixed by a random orthogonal mixing matrix E in O(40), i.e. $M_k := E D_k E^\top + N$, where N is a noise matrix with independent Gaussian entries such that the resulting signal-to-noise ratio is 5 dB. Application of the JBD algorithm from above to $\{M_1, \ldots, M_K\}$ with threshold $\theta = 0.1$ correctly recovers the block sizes, and the estimated block diagonalizer equals E up to m-scaling and permutation, as illustrated in figure 2.

3 SJADE - a simple algorithm for general ISA

As usual, by preprocessing the observations X by whitening we may assume that Cov(X) = I. The indeterminacies allow scaling transformations in the sources, so without loss of generality let also Cov(S) = I. Then $I = \mathrm{Cov}(X) = A\, \mathrm{Cov}(S)\, A^\top = A A^\top$, so A is orthogonal. Due to the ISA assumptions, the fourth-order cross cumulants of the sources have to be trivial between different groups and within the Gaussians. In order to find transformations of the mixtures fulfilling this property, we follow the idea of the JADE algorithm, but now in the ISA setting. We perform JBD of the (whitened) contracted quadricovariance matrices defined by
$$C_{ij}(X) := \mathrm{E}\!\left[ X^\top E_{ij} X \; X X^\top \right] - E_{ij} - E_{ij}^\top - \mathrm{tr}(E_{ij})\, I.$$
Here $R_X := \mathrm{Cov}(X)$, and the $E_{ij}$ form a set of eigen-matrices of $C_{ij}$, 1 <= i, j <= n. One simple choice is to use the n^2 matrices $E_{ij}$ with zeros everywhere except a 1 at index (i, j). More elaborate choices of eigen-matrices (with only n(n + 1)/2 or even n entries) are possible. The resulting algorithm, subspace-JADE (SJADE), not only performs NGCA by grouping Gaussians as one-dimensional components with trivial $C_{ii}$'s, but also automatically finds the subspace partition m using the general JBD algorithm from section 2.3.
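A possible NumPy realization of these cumulant matrices, under the convention that X holds whitened samples in its columns, is sketched below (ours, not the authors' code).

```python
import numpy as np

def quadricovariance_matrices(X):
    """Contracted quadricovariance matrices for whitened data.
    X has shape (n, T): n signals, T samples.
    C_ij = E[(x^T E_ij x) x x^T] - E_ij - E_ij^T - tr(E_ij) I,
    with E_ij the canonical matrix having a single 1 at index (i, j)."""
    n, T = X.shape
    I = np.eye(n)
    Cs = []
    for i in range(n):
        for j in range(n):
            w = X[i] * X[j]            # x^T E_ij x = x_i x_j per sample
            C = (X * w) @ X.T / T      # sample estimate of E[(x_i x_j) x x^T]
            E_ij = np.zeros((n, n))
            E_ij[i, j] = 1.0
            Cs.append(C - E_ij - E_ij.T - (I if i == j else 0.0))
    return Cs
```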
[Figure 3: Example application of general ISA for unknown sizes m = (1, 2, 2, 2, 3). Shown are scatter plots, i.e. densities, of the source components and of the mixing-separating map: panels (a)-(d) show the sources S2-S5, (e) shows the composed mixing-separating system, and (f)-(j) show the recovered sources (pairwise scatter of the first two, a histogram of the third, and the remaining components).]

4 Experimental results

In a first example, we consider a general ISA problem in dimension n = 10 with the unknown partition m = (1, 2, 2, 2, 3). In order to generate 2- and 3-dimensional irreducible random vectors, we decided to follow the nice visual ideas from [10] and to draw samples from a density following a known shape, in our case 2d letters or 3d geometrical shapes. The chosen source densities are shown in figure 3(a-d). Another 1-dimensional source following a uniform distribution was constructed. Altogether 10^4 samples were used. The sources S were mixed by a mixing matrix A with coefficients sampled uniformly at random from [-1, 1] to give mixtures X = AS. The mixing matrix was then estimated using the above block-JADE algorithm with unknown block size; we observed that the method is quite sensitive to the choice of the threshold (here $\theta = 0.015$). Figure 3(e) shows the composed mixing-separating system; clearly the matrices are equal except for block permutation and scaling, which experimentally confirms theorem 1.8. The algorithm found a partition (1, 1, 1, 2, 2, 3), so one 2d source was misinterpreted as two 1d sources, but by using previous knowledge the combination of the correct two 1d sources yields the original 2d source. The resulting recovered sources, figures 3(f-j), then equal the original sources except for permutation and scaling within the sources; in the higher-dimensional cases this implies transformations such as rotation of the underlying images or shapes. When applying ICA (1-ISA) to the above mixtures, we cannot expect to recover the original sources, as explained in figure 1; however, some algorithms might recover the sources up to permutation. Indeed, SJADE equals JADE with additional permutation recovery, because the joint block diagonalization is performed using joint diagonalization. This explains why JADE retrieves meaningful components even in this non-ICA setting, as observed in [4].

In a second example, we illustrate how the algorithm deals with Gaussian sources, i.e. how the subspace JADE also includes NGCA. For this we consider the case n = 5, m = (1, 1, 1, 2) and sources with two Gaussians, one uniform and a 2-dimensional irreducible component as before; 10^5 samples were drawn. We perform 100 Monte-Carlo simulations with a random mixing matrix A, and apply SJADE with $\theta = 0.01$. The recovered mixing matrix is compared with A by taking the ad-hoc measure $\delta(P) := \sum_{i=1}^{3}\sum_{j=1}^{2} (p_{ij}^2 + p_{ji}^2)$ for the composed system P. Indeed, we get nearly perfect recovery in 99 out of 100 runs; the median of $\delta(P)$ is very low at 0.0083. A single run diverges with $\delta(P) = 3.48$. In order to show that the algorithm really separates the Gaussian part from the other components, we compare the recovered source kurtoses. The median kurtoses are -0.0006 +/- 0.02, -0.003 +/- 0.3, -1.2 +/- 0.3, -1.2 +/- 0.2 and -1.6 +/- 0.2. The first two components have kurtoses close to zero, so they are the two Gaussians, whereas the third component has a kurtosis of around -1.2, which equals the kurtosis of a uniform density. This confirms the applicability of the algorithm in the general, noisy ISA setting.

5 Conclusion

Previous approaches for independent subspace analysis were restricted either to fixed group sizes or to semi-parametric models. In neither case was general applicability to any kind of mixture data set guaranteed, so blind source separation might fail. In the present contribution we introduce the concept of irreducible independent components and give an identifiability result for this general, parameter-free model, together with a novel arbitrary-subspace-size algorithm based on joint block diagonalization. As in ICA, the main uniqueness theorem is an asymptotic result (but includes the noisy case via NGCA). In practice, however, in the finite-sample case, the general joint block diagonality only holds approximately due to estimation errors. Our simple solution in this contribution was to choose appropriate thresholds. But this choice is non-trivial, and adaptive methods are to be developed in future work.
References

[1] K. Abed-Meraim and A. Belouchrani. Algorithms for joint block diagonalization. In Proc. EUSIPCO 2004, pages 209-212, Vienna, Austria, 2004.
[2] F. R. Bach and M. I. Jordan. Finding clusters in independent component analysis. In Proc. ICA 2003, pages 891-896, 2003.
[3] G. Blanchard, M. Kawanabe, M. Sugiyama, V. Spokoiny, and K.-R. Müller. In search of non-Gaussian components of a high-dimensional distribution. JMLR, 7:247-282, 2006.
[4] J.-F. Cardoso. Multidimensional independent component analysis. In Proc. ICASSP '98, Seattle, 1998.
[5] J.-F. Cardoso and A. Souloumiac. Jacobi angles for simultaneous diagonalization. SIAM J. Mat. Anal. Appl., 17(1):161-164, January 1995.
[6] P. Comon. Independent component analysis - a new concept? Signal Processing, 36:287-314, 1994.
[7] A. Hyvärinen and P. O. Hoyer. Emergence of phase and shift invariant features by decomposition of natural images into independent feature subspaces. Neural Computation, 12(7):1705-1720, 2000.
[8] A. Hyvärinen, P. O. Hoyer, and M. Inki. Topographic independent component analysis. Neural Computation, 13(7):1525-1558, 2001.
[9] J. K. Lin. Factorizing multivariate function classes. In Advances in Neural Information Processing Systems, volume 10, pages 563-569, 1998.
[10] B. Póczos and A. Lőrincz. Independent subspace analysis using k-nearest neighborhood distances. In Proc. ICANN 2005, volume 3696 of LNCS, pages 163-168, Warsaw, Poland, 2005. Springer.
[11] F. J. Theis. Uniqueness of complex and multidimensional independent component analysis. Signal Processing, 84(5):951-956, 2004.
[12] F. J. Theis and M. Kawanabe. Uniqueness of non-Gaussian subspace analysis. In Proc. ICA 2006, pages 917-925, Charleston, USA, 2006.
[13] R. Vollgraf and K. Obermayer. Multi-dimensional ICA to separate correlated sources. In Proc. NIPS 2001, pages 993-1000, 2001.
Large-Scale Sparsified Manifold Regularization

Ivor W. Tsang    James T. Kwok
Department of Computer Science and Engineering
The Hong Kong University of Science and Technology
Clear Water Bay, Kowloon, Hong Kong
{ivor,jamesk}@cse.ust.hk

Abstract

Semi-supervised learning is more powerful than supervised learning by using both labeled and unlabeled data. In particular, the manifold regularization framework, together with kernel methods, leads to the Laplacian SVM (LapSVM), which has demonstrated state-of-the-art performance. However, the LapSVM solution typically involves kernel expansions of all the labeled and unlabeled examples, and is slow on testing. Moreover, existing semi-supervised learning methods, including the LapSVM, can only handle a small number of unlabeled examples. In this paper, we integrate manifold regularization with the core vector machine, which has been used for large-scale supervised and unsupervised learning. By using a sparsified manifold regularizer and formulating the problem as a center-constrained minimum enclosing ball problem, the proposed method produces sparse solutions with low time and space complexities. Experimental results show that it is much faster than the LapSVM, and can handle a million unlabeled examples on a standard PC, while the LapSVM can only handle several thousand patterns.

1 Introduction

In many real-world applications, the collection of labeled data is both time-consuming and expensive. On the other hand, a large amount of unlabeled data is often readily available. While traditional supervised learning methods can only learn from the limited amount of labeled data, semi-supervised learning [2] aims at improving the generalization performance by utilizing both the labeled and the unlabeled data. The label dependencies among patterns are captured by exploiting the intrinsic geometric structure of the data. The underlying smoothness assumption is that two nearby patterns in a high-density region should share similar labels [2]. When the data lie on a manifold, it is common to approximate this manifold by a weighted graph, leading to graph-based semi-supervised learning methods. However, many of these are designed for transductive learning, and thus cannot be easily extended to out-of-sample patterns. Recently, attention has been drawn to the development of inductive methods, such as harmonic mixtures [15] and Nyström-based methods [3]. In this paper, we focus on the manifold regularization framework proposed in [1]. By defining a data-dependent reproducing kernel Hilbert space (RKHS), manifold regularization incorporates an additional regularizer to ensure that the learned function is smooth on the manifold. Kernel methods, which have been highly successful in supervised learning, can then be integrated with this RKHS. The resultant Laplacian SVM (LapSVM) demonstrates state-of-the-art semi-supervised learning performance [10]. However, a deficiency of the LapSVM is that its solution, unlike that of the SVM, is not sparse and so is much slower on testing. Moreover, while the original motivation of semi-supervised learning is to utilize the large amount of unlabeled data available, existing algorithms are only capable of handling a small to moderate amount of unlabeled data. Recently, attempts have been made to scale up these methods. Sindhwani et al. [9] speeded up manifold regularization by restricting it to linear models, which, however, may not be flexible enough for complicated target functions. Garcke and Griebel [5] proposed to use discretization with a sparse grid.
Though it scales linearly with the sample size, its time complexity grows exponentially with the data dimensionality. As reported in a recent survey [14], most semi-supervised learning methods can only handle 100 to 10,000 unlabeled examples. More recently, Gärtner et al. [6] presented a solution in the more restrictive transductive setting. The largest graph they worked with involves 75,888 labeled and unlabeled examples. Thus, no one has ever experimented on massive data sets with, say, one million unlabeled examples.

On the other hand, the Core Vector Machine (CVM) has recently been proposed for scaling up kernel methods in both supervised (including classification [12] and regression [13]) and unsupervised learning (e.g., novelty detection). Its main idea is to formulate the learning problem as a minimum enclosing ball (MEB) problem in computational geometry, and then use a (1 + ε)-approximation algorithm to obtain a close-to-optimal solution efficiently. Given m samples, the CVM has an asymptotic time complexity that is only linear in m and a space complexity that is even independent of m for a fixed ε. Experimental results on real-world data sets with millions of patterns demonstrated that the CVM is much faster than existing SVM implementations and can handle much larger data sets. In this paper, we extend the CVM to semi-supervised learning. To restore the sparsity of the LapSVM solution, we first introduce a sparsified manifold regularizer based on the ε-insensitive loss. Then, we incorporate manifold regularization into the CVM. It turns out that the resultant QP can be cast as the center-constrained MEB problem introduced in [13]. The rest of this paper is organized as follows. In Section 2, we first give a brief review of manifold regularization. Section 3 then describes the proposed algorithm for semi-supervised classification and regression. Experimental results on very large data sets are presented in Section 4, and the last section gives some concluding remarks.

2 Manifold Regularization

Given a training set $\{(x_i, y_i)\}_{i=1}^m$ with input $x_i \in \mathcal X$ and output $y_i \in \mathbb R$, the regularized risk functional is the sum of the empirical risk (corresponding to a loss function $\ell$) and a regularizer $\Omega$. Given a kernel k and its RKHS $\mathcal H_k$, we minimize the regularized risk over functions f in $\mathcal H_k$:

$$\min_{f \in \mathcal H_k} \; \frac{1}{m} \sum_{i=1}^m \ell(x_i, y_i, f(x_i)) + \lambda\, \Omega(\|f\|_{\mathcal H_k}). \qquad (1)$$

Here, $\|\cdot\|_{\mathcal H_k}$ denotes the RKHS norm and $\lambda > 0$ is a regularization parameter. By the representer theorem, the minimizer f admits the representation $f(x) = \sum_{i=1}^m \alpha_i k(x_i, x)$, where $\alpha_i \in \mathbb R$. Therefore, the problem is reduced to an optimization over the finite-dimensional space of the $\alpha_i$'s. In semi-supervised learning, we have both labeled examples $\{(x_i, y_i)\}_{i=1}^m$ and unlabeled examples $\{x_i\}_{i=m+1}^{m+n}$. Manifold regularization uses an additional regularizer $\|f\|_I^2$ to ensure that the function f is smooth on the intrinsic structure of the input. The objective function in (1) is then modified to:

$$\frac{1}{m} \sum_{i=1}^m \ell(x_i, y_i, f(x_i)) + \lambda\, \Omega(\|f\|_{\mathcal H_k}) + \lambda_I \|f\|_I^2, \qquad (2)$$

where $\lambda_I$ is another tradeoff parameter. It can be shown that the minimizer f is of the form $f(x) = \sum_{i=1}^m \alpha_i k(x_i, x) + \int_{\mathcal M} \alpha(x') k(x, x')\, dP_X(x')$, where $\mathcal M$ is the support of the marginal distribution $P_X$ of X [1]. In practice, we do not have access to $P_X$. Now, assume that the support $\mathcal M$ of $P_X$ is a compact submanifold, and take $\|f\|_I^2 = \int_{\mathcal M} \langle \nabla f, \nabla f \rangle$, where $\nabla$ is the gradient of f. It is common to approximate this manifold by a weighted graph defined on all the labeled and unlabeled data, as G = (V, E), with V and E being the sets of vertices and edges respectively. Denote the weight function w and the degree $d(u) = \sum_{v \sim u} w(u, v)$, where $v \sim u$ means that u and v are adjacent. Then, $\|f\|_I^2$ is approximated as¹

$$\|f\|_I^2 = \sum_{e \in E} w(u_e, v_e) \left( \frac{f(x_{u_e})}{s(u_e)} - \frac{f(x_{v_e})}{s(v_e)} \right)^2, \qquad (3)$$

where $u_e$ and $v_e$ are the vertices of the edge e, and $s(u) = \sqrt{d(u)}$ when the normalized graph Laplacian is used, while s(u) = 1 with the unnormalized one.

[Footnote 1: When the set of labeled and unlabeled data is small, a function that is smooth on this small set may not be interesting. However, this is not an issue here as our focus is on massive data sets.]
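For reference, the graph quantities in (3) can be assembled with standard sparse-matrix tools. The sketch below (ours, with illustrative names; scipy assumed available) builds the Laplacian L so that the quadratic form f^T L f equals the right-hand side of (3).

```python
import numpy as np
import scipy.sparse as sp

def graph_laplacian(n, edges, weights, normalized=False):
    """Build L so that f @ (L @ f) equals the regularizer in Eq. (3).
    edges is an array of shape (|E|, 2) listing each undirected edge once."""
    W = sp.coo_matrix((weights, (edges[:, 0], edges[:, 1])), shape=(n, n))
    W = (W + W.T).tocsr()                    # symmetric weight matrix
    d = np.asarray(W.sum(axis=1)).ravel()    # degrees d(u)
    L = sp.diags(d) - W                      # unnormalized Laplacian: s(u) = 1
    if normalized:                           # s(u) = sqrt(d(u)) case
        Dinv = sp.diags(1.0 / np.sqrt(np.maximum(d, 1e-12)))
        L = Dinv @ L @ Dinv
    return L
```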
It is common to approximate this manifold by a weighted graph defined on all the labeled and unlabeled data, as G = (V, E) with V and EP being the sets of vertices and edges respectively. Denote the weight function w and degree d(u) = v?u w(u, v). Here, v ? u means that u, v are adjacent. Then, kf k2I is approximated as1   2 X p f (xue ) f (xve ) 2 (3) kf kI = w(ue , ve ) s(ue ) ? s(ve ) , e?E 1 When the set of labeled and unlabeled data is small, a function that is smooth on this small set may not be interesting. However, this is not an issue here as our focus is on massive data sets. p where ue and ve are vertices of the edge e, and s(u) = d(u) when the normalized graph Laplacian is used, and s(u) = 1 with the unnormalized one. As shown in [1], the minimizer of (2) becomes Pm+n f (x) = i=1 ?i k(xi , x), which depends on both labeled and unlabeled examples. 2.1 Laplacian SVM Using the hinge loss `(xi , yi , f (xi )) = max(0, 1 ? yi f (xi )) in (2), we obtain the Laplacian SVM (LapSVM) [1]. Its training involves two steps. First, solve the quadratic program (QP) 1 1 to obtain the ? ? . Here, ? = [?1 , . . . , ?m ]0 , 1 = max ? 0 1 ? 12 ? 0 Q? : ? 0 y = 0, 0 ? ? ? m 0 ?1 0 [1, . . . , 1] , Qm?m = YJK(2?I + 2?I LK) J Y, Ym?m is the diagonal matrix with Yii = yi , K(m+n)?(m+n) is the kernel matrix over both the labeled and unlabeled data, L(m+n)?(m+n) is the graph Laplacian, and Jm?(m+n) with Jij = 1 if i = j and xi is a labeled example, and Jij = 0 otherwise. The optimal ? = [?1 , . . . , ?m+n ]0 solution is then obtained by solving the linear system: ?? = (2?I + 2?I LK)?1 J0 Y? ? . Note that the matrix 2?I + 2?I LK is of size (m + n) ? (m + n), and so its inversion can be very expensive when n is large. Moreover, unlike the standard SVM, the ?? obtained is not sparse and so evaluation of f (x) is slow. 3 Proposed Algorithm 3.1 Sparsified Manifold Regularizer To restore sparsity of the LapSVM solution, we replace the square function in the manifold regularizer (3) by the -insensitive loss function2 , as   2 X p w(ue , ve ) f (xue ) ? f (xve ) , kf k2I = (4) s(ue ) s(ve ) ?? e?E where |z|?? = 0 if |z| ? ??; and |z| ? ?? otherwise. Obviously, it reduces to (3) when ?? = 0. As will be shown in Section 3.3, the ? solution obtained will be sparse. Substituting (4) into (2), we have: ( m   2 ) X p f (xue ) f (xve ) 1X `(xi , yi , f (xi ))+?I min w(ue , ve ) s(ue ) ? s(ve ) +??(kf kHk ). f ?Hk m ?? i=1 e?E By treating the terms inside the braces as the ?loss function?, this can be regarded as regularized risk minimization and, using the standard representer theorem, the minimizer f then admits the form Pm+n f (x) = i=1 ?i k(xi , x), same as that of the original manifold regularization. P 2 0 Moreover, putting f(x) = w0 ?(x) + b into (4), we obtain kfk2I = e?E |w  ? e + b?e |??, where p p ?(x ) ?(x ) ? e = w(ue , ve ) s(uuee) ? s(vvee) , and ?e = w(ue , ve ) s(u1 e ) ? s(v1e ) . The primal of the LapSVM can then be formulated as: m min C X 2 C? X 2 kwk + b + ?i + 2C ?? + (?e + ?e? 2 ) m? i=1 |E|? 2 2 (5) e?E s.t. yi (w0 ?(xi ) + b) ? 1 ? ?? ? ?i , i = 1, . . . , m, ?(w0 ? e + b?e ) ? ?? + ?e , w0 ? e + b?e ? ?? + ?e? , e ? E. (6) (7) Here, |E| is the number of edges in the graph, ?i is the slack variable for the error, ?e , ?e? are slack variables for edge e, and C, ?, ? are user-defined parameters. As in previous CVM formulations 2 ?2 [12, 13], the bias b is penalized and the two-norm errors (?i2 , ?ij and ?ij ) are used. Moreover, the ? constraints ?i , ?ij , ?ij , ?? ? 
When $\bar\varepsilon = 0$, (5) reduces to the original LapSVM (using two-norm errors). When $\nu$ is also zero, it becomes the Lagrangian SVM. The dual can be easily obtained as the following QP:

$$\max \; \frac{2}{C} [\alpha^\top\ \beta^\top\ \beta^{*\top}] \begin{bmatrix} \mathbf 1 \\ \mathbf 0 \\ \mathbf 0 \end{bmatrix} - [\alpha^\top\ \beta^\top\ \beta^{*\top}]\, \tilde K \begin{bmatrix} \alpha \\ \beta \\ \beta^* \end{bmatrix} : \quad [\alpha^\top\ \beta^\top\ \beta^{*\top}] \mathbf 1 = 1, \; \alpha, \beta, \beta^* \ge 0, \qquad (8)$$

where $\alpha = [\alpha_1, \ldots, \alpha_m]^\top$, $\beta = [\beta_1, \ldots, \beta_{|E|}]^\top$, $\beta^* = [\beta_1^*, \ldots, \beta_{|E|}^*]^\top$ are the dual variables, and

$$\tilde K = \begin{pmatrix} \left(K_\ell + \mathbf 1 \mathbf 1^\top + \frac{\mu m}{C} I\right) \odot y y^\top & V & -V \\ V^\top & U + \frac{|E|\mu}{C\nu} I & -U \\ -V^\top & -U & U + \frac{|E|\mu}{C\nu} I \end{pmatrix}$$

is the transformed "kernel matrix". Here, $K_\ell$ is the kernel matrix defined using kernel k on the m labeled examples, $U_{|E| \times |E|} = [\psi_e^\top \psi_f + \phi_e \phi_f]$, and $V_{m \times |E|} = [y_i (\varphi(x_i)^\top \psi_e + \phi_e)]$. Note that while each entry of the matrix Q in the LapSVM (Section 2.1) requires O((m + n)^2) kernel evaluations $k(x_i, x_j)$, each entry in $\tilde K$ here takes only O(1) kernel evaluations. This is particularly favorable to decomposition methods such as SMO, as most of the CPU computations are typically dominated by kernel evaluations. Moreover, it can be shown that $\nu$ is a parameter that controls the size of $\bar\varepsilon$, analogous to the $\nu$ parameter in $\nu$-SVR. Hence, only $\nu$, but not $\bar\varepsilon$, appears in (8). Moreover, the primal variables can be easily recovered from the dual variables by the KKT conditions. In particular, $w = C \big( \sum_{i=1}^m \alpha_i y_i \varphi(x_i) + \sum_{e \in E} (\beta_e - \beta_e^*)\, \psi_e \big)$ and $b = C \big( \sum_{i=1}^m \alpha_i y_i + \sum_{e \in E} (\beta_e - \beta_e^*)\, \phi_e \big)$. Subsequently, the decision function $f(x) = w^\top \varphi(x) + b$ is a linear combination of the $k(x_i, x)$'s defined on both the labeled and unlabeled examples, as in standard manifold regularization.

3.2 Transforming to a MEB Problem

We now show that the CVM can be used for solving the possibly very large QP in (8). In particular, we will transform this QP to the dual of a center-constrained MEB problem [13], which is of the form:

$$\max_{\alpha} \; \alpha^\top (\mathrm{diag}(K) + \Delta - \eta \mathbf 1) - \alpha^\top K \alpha : \quad \alpha \ge 0, \; \alpha^\top \mathbf 1 = 1, \qquad (9)$$

for some $0 \le \Delta \in \mathbb R^m$ and $\eta \in \mathbb R$. From the variables in (8), define $\bar\alpha = [\alpha^\top\ \beta^\top\ \beta^{*\top}]^\top$ and $\Delta = -\mathrm{diag}(\tilde K) + \eta \mathbf 1 + \frac{2}{C} [\mathbf 1^\top\ \mathbf 0^\top\ \mathbf 0^\top]^\top$ such that $\Delta \ge 0$ for some sufficiently large $\eta$. (8) can then be written as $\max_{\bar\alpha} \; \bar\alpha^\top (\mathrm{diag}(\tilde K) + \Delta - \eta \mathbf 1) - \bar\alpha^\top \tilde K \bar\alpha : \bar\alpha \ge 0, \bar\alpha^\top \mathbf 1 = 1$, which is of the form in (9). The above formulation can be easily extended to the regression case, with the pattern outputs changed from +/-1 to $y_i \in \mathbb R$, and the hinge loss replaced by the ε-insensitive loss. Converting the resultant QP to the form in (9) is also straightforward.
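The shift used in this reduction is mechanical; the following sketch (ours) computes Delta and eta for a generic QP of the form max a^T x - x^T K x with x >= 0 and 1^T x = 1, mirroring the construction above. Since 1^T x = 1, adding the constant eta does not change the maximizer.

```python
import numpy as np

def to_meb_form(K_tilde, a):
    """Recast  max a^T x - x^T K x  (x >= 0, 1^T x = 1)  in the
    center-constrained MEB form (9): the linear term becomes
    diag(K) + Delta - eta*1 with Delta >= 0 componentwise."""
    d = np.diag(K_tilde) - a
    eta = max(float(d.max()), 0.0)   # large enough to make Delta nonnegative
    Delta = eta - d
    return Delta, eta
```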
3.3 Sparsity

In Section 3.3.1, we first explain why a sparse solution can be obtained, using the KKT conditions. Alternatively, building on [7], we show in Section 3.3.2 that the ε-insensitive loss achieves a similar effect as the L1 penalty in LASSO [11], which is known to produce sparse approximations.

3.3.1 KKT Perspective

Basically, this follows from the standard argument for sparse solutions with the ε-insensitive loss in SVR. From the KKT condition associated with (6): $\alpha_i \big( y_i (w^\top \varphi(x_i) + b) - 1 + \bar\varepsilon + \xi_i \big) = 0$. As for the SVM, most patterns are expected to lie outside the margin (i.e. $y_i (w^\top \varphi(x_i) + b) > 1 - \bar\varepsilon$), and so most $\alpha_i$'s are zero. Similarly, manifold regularization finds an f that is locally smooth. Hence, from the definition of $\psi_e$ and $\phi_e$, many of the values $(w^\top \psi_e + b\phi_e)$ will be inside the $\bar\varepsilon$-tube. Using the KKT conditions associated with (7), the corresponding $\beta_e$'s and $\beta_e^*$'s are zero. As f(x) is a linear combination of the $k(x_i, x)$'s weighted by $\alpha_i$ and $\beta_e - \beta_e^*$ (Section 3.1), f is thus sparse.

3.3.2 LASSO Perspective

Our exposition follows the line pioneered by Girosi [7], who established a connection between the ε-insensitive loss in SVR and sparse approximation. Given a predictor $f(x) = \sum_{i=1}^m \alpha_i k(x_i, x)$, i.e. $\mathbf f = K\alpha$, we consider minimizing the error between $\mathbf f = [f(x_1), \ldots, f(x_m)]^\top$ and $y = [y_1, \ldots, y_m]^\top$. While sparse approximation techniques such as basis pursuit typically use the L2 norm for the error, Girosi argued that the norm of the RKHS $\mathcal H_k$ is a better measure of smoothness. However, the RKHS norm operates on functions, while here we have vectors $\mathbf f$ and y w.r.t. $x_1, \ldots, x_m$. Hence, we will use the kernel PCA map, with $\|y - \mathbf f\|_K^2 \equiv (y - \mathbf f)^\top K^{-1} (y - \mathbf f)$. First, consider the simpler case where the manifold regularizer is replaced by the simple regularizer $\|\alpha\|_2^2$. As in LASSO, we also add an L1 penalty on $\alpha$. The optimization problem is formulated as:

$$\min \; \|y - \mathbf f\|_K^2 + \frac{\lambda m}{C} \alpha^\top \alpha : \quad \|\alpha\|_1 = C, \qquad (10)$$

where C and $\lambda$ are constants. As in [7], we decompose $\alpha$ as $\alpha - \alpha^*$, where $\alpha, \alpha^* \ge 0$ and $\alpha_i \alpha_i^* = 0$. Then, (10) can be rewritten as:

$$\max \; [\alpha^\top\ \alpha^{*\top}] [2y^\top\ -2y^\top]^\top - [\alpha^\top\ \alpha^{*\top}]\, \tilde K\, [\alpha^\top\ \alpha^{*\top}]^\top : \quad \alpha, \alpha^* \ge 0, \; \alpha^\top \mathbf 1 + \alpha^{*\top} \mathbf 1 = C, \qquad (11)$$

where³ $\tilde K = \begin{pmatrix} K + \frac{\lambda m}{C} I & -K \\ -K & K + \frac{\lambda m}{C} I \end{pmatrix}$. On the other hand, consider the following variant of SVR using the ε-insensitive loss:

$$\min \; \|w\|^2 + \frac{C}{m\mu} \sum_{i=1}^m (\xi_i^2 + \xi_i^{*2}) + 2C\bar\varepsilon : \quad y_i - w^\top \varphi(x_i) \le \bar\varepsilon + \xi_i, \; w^\top \varphi(x_i) - y_i \le \bar\varepsilon + \xi_i^*. \qquad (12)$$

It can be shown that its dual is identical to (11), with $\alpha, \alpha^*$ as dual variables. Moreover, the LASSO penalty (i.e., the equality constraint in (11)) is induced by the $\bar\varepsilon$ in (12). Hence, the ε-insensitive loss in SVR achieves a similar effect as using the error $\|y - \mathbf f\|_K^2$ together with the LASSO penalty. We now add back the manifold regularizer. The derivation is similar, though more involved, and so the details are skipped. As above, the key steps are replacing the L2 norm by the kernel PCA map, and adding an L1 penalty on the variables. It can then be shown that the sparsified manifold regularizer (based on the ε-insensitive loss) can again be recovered by using the LASSO penalty.

3.4 Complexities

As the proposed algorithm is an extension of the CVM, its properties are analogous to those in [12]. For example, its approximation ratio is (1 + ε)^2, and so the approximate solution obtained is very close to the exact optimal solution. As for the computational complexities, it can be shown that the SLapCVM only takes O(1/ε^8) time and O(1/ε^2) space when probabilistic speedup is used. (Here, we ignore the O(m + |E|) space required for storing the m training patterns and the 2|E| edge constraints, as these may be stored outside the core memory.) They are thus independent of the numbers of labeled and unlabeled examples for a fixed ε. On the contrary, the LapSVM involves an expensive inversion of an (m + n) x (m + n) matrix and requires O((m + n)^3) time and O((m + n)^2) space.

3.5 Remarks

The reduced SVM [8] has been used to scale up the standard SVM. Hence, another natural alternative is to extend it for the LapSVM. This "reduced LapSVM" solves a smaller optimization problem that involves a random r x (m + n) rectangular subset of the kernel matrix, where the r patterns are chosen from both the labeled and unlabeled data. It can be easily shown that it requires O((m + n)^2 r) time and O((m + n) r) space. Experimental comparisons based on this will be made in Section 4. Note that the CVM [12] is in many aspects similar to the column generation technique [4] commonly used in large-scale linear or integer programs.
Both start with only a small number of nonzero variables, and the restricted master problem in column generation corresponds to the inner QP that is solved at each CVM iteration. Moreover, both can be regarded as primal methods that maintain primal⁴ feasibility and work towards dual feasibility. Also, as is typical in column generation, the dual variable whose KKT condition is most violated is added at each iteration. The key difference⁵, however, is that the CVM exploits "approximateness", as in other approximation algorithms.

[Footnote 3: For simplicity, here we have only considered the case where f does not have a bias. In the presence of a bias, it can be easily shown that K (in the expression of the transformed kernel matrix) has to be replaced by K + 11^T.]
[Footnote 4: By convention, column generation takes the optimization problem to be solved as the primal. Hence, in this section, we also regard the QP to be solved as the CVM's primal, and the MEB problem as its dual. Note that each dual variable then corresponds to a training pattern.]
[Footnote 5: Another difference is that an entire column is added at each iteration of column generation. However, in the CVM, the dual variable added is just a pattern, and the extra space required for the QP is much smaller. Besides, there are other implementation tricks (such as probabilistic speedup) that further improve the speed of the CVM.]
To better illustrate the scaling behavior, we vary the number of unlabeled patterns used for training (from 1, 000 up to a maximum of 1 million). Following [1], the width of the Gaussian kernel is set to ? = 0.25. For the reduced LapSVM implementation, we fix r = 200. 4 3 SLapCVM LapSVM Reduced LapSVM 2 10 number of kernel expansions CPU time (in seconds) 10 1 10 0 10 ?1 10 (a) Data distribution. (b) Typical decision boundary obtained by SLapCVM. 3 10 4 5 10 10 number of unlabeled points (c) CPU time. 6 10 10 SLapCVM core?set Size LapSVM Reduced LapSVM 3 10 2 10 3 10 4 5 10 10 number of unlabeled points (d) #kernel sions. 6 10 expan- Figure 1: Results on the two-moons data set (some abscissas and ordinates are in log scale). The two labeled examples are labeled in red in Figure 1(a). Results are shown in Figure 1. Both the LapSVM and SLapCVM always attain 100% accuracy on the test set, even with only two labeled examples (Figure 1(b)). However, SLapCVM is faster than LapSVM (Figure 1(c)). Moreover, as mentioned in Section 2.1, the LapSVM solution is non-sparse 6 7 Both the USPS and MIT face data sets are downloaded from http://www.cs.ust.hk/?ivor/cvm.html. http://manifold.cs.uchicago.edu/manifold regularization/. and all the labeled and unlabeled examples are involved in the solution (Figure 1(d))). On the other hand, SLapCVM uses only a small fraction of the examples. As can be seen from Figures 1(c) and 1(d), both the time and space required by the SLapCVM are almost constant, even when the unlabeled data set gets very large. The reduced LapSVM, though also fast, is slightly inferior to both the SLapCVM and LapSVM. Moreover, note that both the standard and reduced LapSVMs cannot be run on the full data set on our PC because of their large memory requirements. 4.2 Extended USPS Data Set The second experiment is performed on the USPS data from [12]. One labeled example is randomly sampled from each class for training. To achieve comparable accuracy, we use r = 2, 000 for the reduced LapSVM. For comparison, we also train a standard SVM with the two labeled examples. Results are shown in Figure 2. As can be seen, the SLapCVM is again faster (Figures 2(a)) and produces a sparser solution than LapSVM (Figure 2(b)). For the SLapCVM, both the time required and number of kernel expansions involved grow only sublinearly with the number of unlabeled examples. Figure 2(c) demonstrates that semi-supervised learning (using either the LapSVMs or SLapCVM) can have much better generalization performance than supervised learning using the labeled examples only. Note that although the use of the 2-norm error in SLapCVM could in theory be less robust than the use of the 1-norm error in LapSVM, the SLapCVM solution is indeed always more accurate than that of LapSVM. On the other hand, the reduced LapSVM has comparable speed with the SLapCVM, but its performance is inferior and cannot handle large data sets. 5 5 CPU time (in seconds) number of kernel expansions 4 10 80 10 SLapCVM LapSVM Reduced LapSVM 3 10 2 10 1 10 SLapCVM core?set Size LapSVM Reduced LapSVM SLapCVM LapSVM Reduced LapSVM SVM (#labeled = 2) 70 60 4 10 error rate (in %) 10 3 10 50 40 30 20 10 0 10 3 10 2 4 5 10 10 number of unlabeled points 6 10 (a) CPU time. 10 3 10 4 5 10 10 number of unlabeled points (b) #kernel expansions. 6 10 0 3 10 4 5 10 10 number of unlabeled points 6 10 (c) Test error. Figure 2: Results on the extended USPS data set (some abscissas and ordinates are in log scale). 
4.3 Extended MIT Face Data Set In this section, we perform face detection using the extended MIT face database in [12]. Five labeled example are randomly sampled from each class and used in training. Because of the imbalanced nature of the test set (Table 1), the classification error is inappropriate for performance evaluation here. Instead, we will use the area under the ROC curve (AUC) and the balanced loss 1 ? (TP + TN)/2, where TP and TN are the true positive and negative rates respectively. Here, faces are treated as positives while non-faces as negatives. For the reduced LapSVM, we again use r = 2, 000. For comparison, we also train two SVMs: one uses the 10 labeled examples only while the other uses all the labeled examples (a total of 889,986) in the original training set of [12]. Figure 3 shows the results. Again, the SLapCVM is faster and produces a sparser solution than LapSVM. Note that the SLapCVM, using only 10 labeled examples, can attain comparable AUC and even better balanced loss than the SVM trained on the original, massive training set (Figures 3(c) and 3(d)). This clearly demonstrates the usefulness of semi-supervised learning when a large amount of unlabeled data can be utilized. On the other hand, note that the LapSVM again cannot be run with more than 3,000 unlabeled examples on our PC because of its high space requirement. The reduced LapSVM performs very poorly here, possibly because this data set is highly imbalanced. 5 Conclusion In this paper, we addressed two issues associated with the Laplacian SVM: 1) How to obtain a sparse solution for fast testing? 2) How to handle data sets with millions of unlabeled examples? For the number of kernel expansions CPU time (in seconds) 10 3 10 2 10 1 10 0 10 3 10 50 SLapCVM core?set Size LapSVM Reduced LapSVM 45 4 10 3 10 2 4 10 number of unlabeled points (a) CPU time. 5 10 10 3 10 4 10 number of unlabeled points (b) #kernel expansions. 5 10 40 35 1.1 SLapCVM LapSVM Reduced LapSVM SVM (#labeled = 10) CVM (w/ all training labels) 1.05 1 0.95 AUC 5 10 SLapCVM LapSVM Reduced LapSVM 4 balanced loss (in %) 5 10 30 25 0.9 0.85 20 0.8 15 0.75 10 3 10 SLapCVM LapSVM Reduced LapSVM SVM (#labeled = 10) CVM (w/ all training labels) 4 10 number of unlabeled points (c) Balanced loss. 5 10 0.7 3 10 4 10 number of unlabeled points 5 10 (d) AUC. Figure 3: Results on the extended MIT face data (some abscissas and ordinates are in log scale). first issue, we introduce a sparsified manifold regularizer based on the -insensitive loss. For the second issue, we integrate manifold regularization with the CVM. The resultant algorithm has low time and space complexities. Moreover, by avoiding the underlying matrix inversion in the original LapSVM, a sparse solution can also be recovered. Experiments on a number of massive data sets show that the SLapCVM is much faster than the LapSVM. Moreover, while the LapSVM can only handle several thousand unlabeled examples, the SLapCVM can handle one million unlabeled examples on the same machine. On one data set, this produces comparable or even better performance than the (supervised) CVM trained on 900K labeled examples. This clearly demonstrates the usefulness of semi-supervised learning when a large amount of unlabeled data can be utilized. References [1] M. Belkin, P. Niyogi, and V. Sindhwani. Manifold regularization: A geometric framework for learning from labeled and unlabeled examples. Journal of Machine Learning Research, 7:2399?2434, 2006. [2] O. Chapelle, B. Sch?olkopf, and A. Zien. 
References

[1] M. Belkin, P. Niyogi, and V. Sindhwani. Manifold regularization: A geometric framework for learning from labeled and unlabeled examples. Journal of Machine Learning Research, 7:2399-2434, 2006.
[2] O. Chapelle, B. Schölkopf, and A. Zien. Semi-Supervised Learning. MIT Press, Cambridge, MA, USA, 2006.
[3] O. Delalleau, Y. Bengio, and N. L. Roux. Efficient non-parametric function induction in semi-supervised learning. In Proceedings of the Tenth International Workshop on Artificial Intelligence and Statistics, Barbados, January 2005.
[4] G. Desaulniers, J. Desrosiers, and M.M. Solomon. Column Generation. Springer, 2005.
[5] J. Garcke and M. Griebel. Semi-supervised learning with sparse grids. In Proceedings of the ICML Workshop on Learning with Partially Classified Training Data, Bonn, Germany, August 2005.
[6] T. Gärtner, Q.V. Le, S. Burton, A. Smola, and S.V.N. Vishwanathan. Large-scale multiclass transduction. In Y. Weiss, B. Schölkopf, and J. Platt, editors, Advances in Neural Information Processing Systems 18. MIT Press, Cambridge, MA, 2006.
[7] F. Girosi. An equivalence between sparse approximation and support vector machines. Neural Computation, 10(6):1455-1480, 1998.
[8] Y.-J. Lee and O.L. Mangasarian. RSVM: Reduced support vector machines. In Proceedings of the First SIAM International Conference on Data Mining, 2001.
[9] V. Sindhwani, M. Belkin, and P. Niyogi. The geometric basis of semi-supervised learning. In Semi-Supervised Learning. MIT Press, 2005.
[10] V. Sindhwani, P. Niyogi, and M. Belkin. Beyond the point cloud: from transductive to semi-supervised learning. In Proceedings of the Twenty-Second International Conference on Machine Learning, pages 825-832, Bonn, Germany, August 2005.
[11] R. Tibshirani. Regression shrinkage and selection via the Lasso. Journal of the Royal Statistical Society: Series B, 58:267-288, 1996.
[12] I. W. Tsang, J. T. Kwok, and P.-M. Cheung. Core vector machines: Fast SVM training on very large data sets. Journal of Machine Learning Research, 6:363-392, 2005.
[13] I. W. Tsang, J. T. Kwok, and K. T. Lai. Core vector regression for very large regression problems. In Proceedings of the Twenty-Second International Conference on Machine Learning, pages 913-920, Bonn, Germany, August 2005.
[14] X. Zhu. Semi-supervised learning literature survey. Technical Report 1530, Department of Computer Sciences, University of Wisconsin-Madison, 2005.
[15] X. Zhu and J. Lafferty. Harmonic mixtures: Combining mixture models and graph-based methods. In Proceedings of the Twenty-Second International Conference on Machine Learning, Bonn, Germany, August 2005.
Optimal Change-Detection and Spiking Neurons

Angela J. Yu
CSBMB, Princeton University
Princeton, NJ 08540
[email protected]

Abstract

Survival in a non-stationary, potentially adversarial environment requires animals to detect sensory changes rapidly yet accurately, two oft competing desiderata. Neurons subserving such detections are faced with the corresponding challenge to discern "real" changes in inputs as quickly as possible, while ignoring noisy fluctuations. Mathematically, this is an example of a change-detection problem that is actively researched in the controlled stochastic processes community. In this paper, we utilize sophisticated tools developed in that community to formalize an instantiation of the problem faced by the nervous system, and characterize the Bayes-optimal decision policy under certain assumptions. We will derive from this optimal strategy an information accumulation and decision process that remarkably resembles the dynamics of a leaky integrate-and-fire neuron. This correspondence suggests that neurons are optimized for tracking input changes, and sheds new light on the computational import of intracellular properties such as resting membrane potential, voltage-dependent conductance, and post-spike reset voltage. We also explore the influence that factors such as timing, uncertainty, neuromodulation, and reward should and do have on neuronal dynamics and sensitivity, as the optimal decision strategy depends critically on these factors.

1 Introduction

Animals interacting with a changeable, potentially adversarial environment need to excel in the detection of changes in their sensory inputs. This detection, however, is riddled by the inherently competing goals of accuracy and speed. Due to the noisy and incomplete nature of sensory inputs, the animal can generally achieve more accurate detection by waiting for more sensory inputs. However, gathering this extra information incurs an opportunity cost, as the extra time can be used to gather more food, attract a mate, or escape a predator. Neurons subserving the detection process face a similar speed-accuracy trade-off. In this work, we aim to understand the computations performed by a neuron at the time-scale of single spikes. How sensitive a neuron is to each input spike should depend on the relative probabilities of the input representing noise and useful information, and the relative costs of mis-interpretation. We formulate the problem as an example of change-detection, and characterize the optimal decision policy in this context.

The formal tools we utilize to formalize the change-detection problem are built upon work in the area of controlled stochastic processes. Controlled stochastic processes refer to decision-making in environments plagued not only by inferential uncertainty about the state of the world, but also uncertainty associated with the consequences of an action or decision on the world itself. Finding optimal decision policies for such processes is an actively researched problem in financial mathematics and operations research. As we will discuss below, neuronal change-detection is a prime example of such a problem. In Sec. 2, we introduce the general framework of change-detection. In Sec. 3, we apply the framework to a specific scenario similar to that faced by the neuron and characterize the optimal solution; we then demonstrate that the optimal information accumulation and decision process has dynamics remarkably resembling that of a spiking neuron.
We examine the computational import of certain intracellular properties, characterize the input-output firing rate relationship, and extend the framework to multi-source detection. In Sec. 4, we explore the behavioral consequences of optimal change-detection and examine issues such as the speed-accuracy trade-off, temporal and spatial cueing, and neuromodulation.

2 A Bayesian Formulation of the Change-Detection Problem

The Generative Model. Suppose we have sequential inputs x_1, x_2, ..., which are generated iid by a distribution f_0(x) before time θ ∈ {0, 1, ...}, and by a distribution f_1(x) afterwards, where the random variable (r.v.) θ denotes the sudden, hidden change time. θ has an initial probability P(θ = 0) = q_0, and a geometric distribution thereafter: P(θ = t) = (1 − q_0)(1 − q)^{t−1} q, for t > 0. The change-detection problem is concerned with finding the optimal decision policy for reporting the change from f_0 to f_1 as early as possible while minimizing false alarms [1]. A decision policy π is a mapping, possibly stochastic, from all observations made so far to the control (or action) set, π(x_t, {x_1, ..., x_t}) ↦ {a_1, a_2}. The action a_1 terminates the observation process and reports θ ≤ t, and a_2 continues the observation for another time step. Every unique decision policy is identified by a corresponding r.v. of stopping times τ ∈ {0, 1, ...}. In the following, we will use π and τ interchangeably to refer to a policy.

The Loss Function. Following convention [2], we assume a loss function linear in false alarms and detection delay:

    l_π(θ, τ) = 1{τ < θ; π} + 1{τ ≥ θ; π} c(τ − θ)    (1)

where 1 is the indicator function, and c > 0 is a constant that specifies the relative importance of speed and accuracy. The total loss is the expectation of this loss function over θ and τ:

    L_π ≜ ⟨l_π(θ, τ); π⟩ = Σ_{θ=0}^{∞} ( Σ_{τ=0}^{θ−1} P(θ, τ) + Σ_{τ=θ}^{∞} c(τ − θ) P(θ, τ) ) = P(τ < θ) + c⟨(τ − θ)⁺⟩    (2)

An optimal policy π minimizes L_π. Due to the linear loss in detection delay, the expected loss blows up for all policies that do not stop almost surely (a.s.; probability = 1) in finite time; therefore, we restrict the optimization problem in the following to the class of almost-surely finite-time policies. Using the notation P_t ≜ P(θ ≤ t | x_t), where x_t stands for the observations {x_1, ..., x_t} made up to time t, we have the following:

    P(θ > τ) = Σ_{t=0}^{∞} P(τ = t, θ > t) = Σ_{t=0}^{∞} ∫ P(θ > t | x_t) P(τ = t | x_t) p(x_t) dx_t = ⟨1 − P_τ⟩_{τ, x_τ}

    ⟨(τ − θ)⁺⟩ = Σ_{t=0}^{∞} ⟨1{τ > t} · 1{θ ≤ t}⟩_{θ, τ} = Σ_{τ=0}^{∞} P(τ) Σ_{t=0}^{τ−1} ⟨P_t⟩_{x_t} = ⟨Σ_{t=0}^{τ−1} P_t⟩_{x_τ, τ}

The cumulative posterior probability P_τ at the detection time τ, therefore, is the critical factor in loss evaluation and policy optimization:

    L_π = ⟨c Σ_{k=0}^{τ−1} P_k + (1 − P_τ)⟩    (3)

Bayes' Rule gives us the iterative update rule for the cumulative posterior P_t ≜ P(θ ≤ t | x_t):

    P_{t+1} = (P_t + (1 − P_t)q) f_1(x_{t+1}) / [ (P_t + (1 − P_t)q) f_1(x_{t+1}) + (1 − P_t)(1 − q) f_0(x_{t+1}) ],    P_0 = q_0.    (4)

P_{t+1} is a deterministic function of P_t and x_{t+1}, but appears to take a stochastic trajectory since x_{t+1} is an i.i.d.-distributed r.v. The expectation ⟨P_{t+1} | x_t⟩ is P_t + (1 − P_t)q. We also define the monotonically related posterior ratio φ_t = P_t/(1 − P_t), which has the update rule

    φ_{t+1} = f_1(x_{t+1})(φ_t + q) / [ f_0(x_{t+1})(1 − q) ],    φ_0 = q_0/(1 − q_0).    (5)
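The recursions (4) and (5) are easy to run on simulated data. The following minimal sketch (ours, with arbitrary parameter values) updates the cumulative posterior P_t and the ratio φ_t in parallel for Bernoulli sources, and checks that φ_t = P_t/(1 − P_t) is maintained at every step.

```python
import numpy as np

def run_posterior(xs, lam0, lam1, q, q0):
    """Iterate Eq. (4) for P_t and Eq. (5) for phi_t on a binary stream xs,
    with Bernoulli likelihoods f_i(1) = lam_i, f_i(0) = 1 - lam_i."""
    P, phi = q0, q0 / (1.0 - q0)
    for x in xs:
        f0 = lam0 if x else 1.0 - lam0
        f1 = lam1 if x else 1.0 - lam1
        num = (P + (1 - P) * q) * f1                     # numerator of Eq. (4)
        P = num / (num + (1 - P) * (1 - q) * f0)
        phi = f1 * (phi + q) / (f0 * (1 - q))            # Eq. (5)
        assert abs(P / (1 - P) - phi) < 1e-6 * (1 + phi)  # phi_t = P_t/(1 - P_t)
    return P, phi

# simulate a change at theta = 50 (illustrative parameters)
rng = np.random.default_rng(1)
lam0, lam1, q, q0, theta = 0.13, 0.17, 0.0125, 0.05, 50
xs = np.concatenate([rng.random(theta) < lam0, rng.random(150) < lam1])
print(run_posterior(xs, lam0, lam1, q, q0))
```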
Optimal Policy: Threshold Crossing. In order to optimize over the space of all possible stopping rules (policies), we define the following: (1) the conditional termination cost C_t associated with stopping at time t after observing x_t: C_t ≜ c Σ_{i=0}^{t−1} P_i + (1 − P_t); (2) the minimal conditional cost ρ_t to be expected after observation x_t: ρ_t ≜ ess inf_τ ⟨C_τ | x_t⟩, where τ ranges over all stopping rules that terminate no earlier than t, and the expectation is taken over all future observations (which can be a function of the decision taken at every time step); (3) ess inf, the largest (a.s.) r.v. less than (a.s.) every r.v. X_n, n ∈ N. As an example of Bellman's Equation, ρ_t satisfies the dynamic programming equation ρ_t = min{C_t, ⟨ρ_{t+1} | x_t⟩}, and the stationary, deterministic stopping rule τ* = min{t ≥ 1 : ρ_t = C_t} achieves optimality (Eq. 2). This implies that the optimal policy consists of a stopping region S ⊆ [0, 1] and a continuation region C = [0, 1] \ S, such that π(P_t : P_t ∈ S) = a_1 and π(P_t : P_t ∈ C) = a_2. We will state and prove a useful theorem below, which will imply that C and S neatly fall into two contiguous blocks, such that the optimal policy requires the termination action as soon as P_t exceeds some fixed threshold B*; i.e., the optimal policy is a first-passage process in P_t!

Before we present the theorem, we first introduce the method of truncation. The difficulty of solving the dynamic equation for ρ_t lies in its infinite recursiveness. If we can impose a finite horizon T on τ, then the finitely recursive relation ρ_t^T = min{C_t, ⟨ρ_{t+1}^T | x_t⟩} has a corresponding finite-horizon optimal policy τ_T, where ρ_T^T = C_T. Taking the infinite limit ρ_t^∞ ≜ lim_{T→∞} ρ_t^T, it has been shown [2] that when the expected loss is finite (which is the case here, since the expression in Eq. 2 is finite for all decision policies that stop a.s. in finite time), ρ_t = ρ_t^∞, and τ_T converges to the infinite-horizon optimal policy τ*. We also note the following self-evident lemma.

Lemma. Suppose {g_i(t)}_{i∈I} is a family of decreasing functions in t, and h(t) = Σ_i g_i(t) w_i(t), where Σ_i w_i(t) = 1 for all t. If g_i(t) ≥ g_j(t) implies w_i′(t) ≤ w_j′(t), then h(t) decreases with t.

Theorem. C_t − ⟨ρ_{t+1}^T | x_t⟩ is a decreasing function of P_t.

Proof. C_{T−1} − ⟨ρ_T^T | x_{T−1}⟩ decreases with P_{T−1}. Assume that the theorem holds for t + 1, and note:

    C_t − ⟨ρ_{t+1}^T | x_t⟩ = −(c + q)P_t + q + Σ_i g_i w_i

where g_i ≜ max(0, l_i), l_i ≜ C_{t+1} − ⟨ρ_{t+2}^T | x_t, x_{t+1} = i⟩, and w_i ≜ P(x_{t+1} = i | x_t). g_i decreases with P_t for each i, since l_i decreases with P_{t+1} by the inductive hypothesis, and P_{t+1} increases with P_t by Eq. 4. Suppose i, j are such that f_1(i) − f_0(i) > f_1(j) − f_0(j); then φ_{t+1}(i) > φ_{t+1}(j), and P_{t+1}(i) > P_{t+1}(j), for any given x_t. The inductive hypothesis implies g_i ≤ g_j. Also note dw_k/dP_t = (f_1(k) − f_0(k))(1 − q), so dw_i/dP_t ≥ dw_j/dP_t. Thus, C_t − ⟨ρ_{t+1}^T | x_t⟩ decreases with P_t.

This theorem states that the cost of stopping at time t relative to continuing gets smaller when it is more certain that θ ≤ t. This is true for any finite stopping time T and therefore also for the infinite-horizon limit. If C_t − ⟨ρ_{t+1} | x_t⟩ is negative for some value of P_t, then the optimal policy is to select action a_1; this is also true for any larger values of P_t. Define B* ∈ [0, 1] as the lower bound of all such P_t; then the stopping and continuation regions have the form [B*, 1] and [0, B*), respectively.

Ideally, we would like to have an exact solution for the optimal policy as a function of the generative and cost parameters of the change-detection problem as defined in Sec. 1. While the explicit form of B* is not known, the theorem allows us to find the optimal policy numerically by evaluating and minimizing the empirical loss as a function of the decision threshold B ∈ [0, 1].
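For concreteness, that numerical search can be done by straightforward Monte-Carlo simulation. The sketch below is our illustrative code, not the paper's: it uses the Bernoulli sources introduced in the next section and the parameter values of Fig. 1, draws θ from its prior, runs the first-passage policy, and averages the loss (2) over trials.

```python
import numpy as np

def empirical_loss(B, lam0, lam1, q, q0, c, n_trials=1000, t_max=5000, seed=0):
    """Monte-Carlo estimate of L = P(tau < theta) + c <(tau - theta)^+> for
    the first-passage policy that stops as soon as P_t >= B."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(n_trials):
        theta = 0 if rng.random() < q0 else rng.geometric(q)  # prior on theta
        P, t = q0, 0
        while P < B and t < t_max:
            t += 1
            lam = lam1 if t >= theta else lam0    # f0 before the change, f1 after
            x = rng.random() < lam
            f0 = lam0 if x else 1 - lam0
            f1 = lam1 if x else 1 - lam1
            num = (P + (1 - P) * q) * f1
            P = num / (num + (1 - P) * (1 - q) * f0)   # Eq. (4)
        total += 1.0 if t < theta else c * (t - theta)
    return total / n_trials

for B in (0.55, 0.65, 0.75, 0.85):                 # coarse scan as in Fig. 1(a)
    print(B, empirical_loss(B, 0.13, 0.17, 0.0125, 0.05, 0.0005))
```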
3 Neuronal change-detection

In the following, we focus on the specific case where f_0 and f_1 are Bernoulli processes with respective rate parameters λ_0 and λ_1. This case resembles the problem faced by neurons, which receive sequential binary inputs (spike = 1, no spike = 0) with approximately Poisson statistics. The Bernoulli process is a discrete-time analog of the Poisson process, and obviates the problematic assumption (made by the Poisson model) that spikes could be fired infinitely close to one another. For now, we assume that the generative parameters λ_1, λ_0, q_0, q and the cost parameter c are known. We also assume, without loss of generality, that λ_1 > λ_0 (rate increases), since otherwise we can just redefine the inputs (0 or 1). When the parameters satisfy c ≥ (λ_1 − λ_0 − q(1 − λ_0))/(1 − λ_1), we have the explicit solution B* = q/(q + c), or equivalently φ* = q/c (proof omitted). This corresponds to the one-step look-ahead policy, and is optimal when the cost of detection is large or when the probability of the change taking place is very high. This turns out not to be a very interesting case, as the detection process is driven to cross the threshold even in the absence of any input spikes.

Although we do not have an explicit solution for the optimal detection threshold B* in general, we can numerically compare different values of B for any specific problem. Fig. 1(a) shows the empirical cost averaged over 1000 trials for different threshold values. For these particular parameters, the minimum is around B = 0.65, although the cost function is quite shallow for a large range of values of B around the optimum, implying that performance is not particularly sensitive to relatively large perturbations around the optimal value.

Repeated Change-Detection and Firing Rate. From the problem formulation in Sec. 2, it might seem like the framework only applies to detecting a single change, or multiple unrelated changes. However, the same policy formulation can apply to the case of repeated detection of changes, one after another, in a temporally contiguous fashion. As long as each detection event is generated from the same model parameters (q, q_0, f_1, f_0), and the cost parameter (c) remains constant, the threshold-crossing policy is still optimal in minimizing the empirical expected loss over these repeated events. The only generative parameter affected by the repetition is q_0, which represents the probability of the inputs already being generated from f_1 before the current observation process began. In this repeated detection scenario, q_0 should in general be high if the detection threshold B* is high, and low if B* is low. However, the strength of this coupling is tempered by (i) whether each detection termination resets the generative process, as happens when visual detection leads to saccades and thus the resetting of input statistics, and (ii) the amount of time elapsed during the refractory period after a detection spike. Fortunately, while q_0 is influenced by the detection policy, the optimization of the policy is not influenced by q_0, since it consists of comparing C_t and ⟨ρ_{t+1} | x_t⟩ at every time step.
This comparison does not depend on q_0, which simply adds a linear factor to both terms. In this repeated firing scenario, where the number of spikes is relatively high relative to the frequency of changes, the loss function of Eq. 2 can be rewritten as L_π = p_0 r_0 + c/r_1, where r_i is the mean firing rate when the inputs are generated from f_i, and p_0 is the fraction of time when f_0 is applicable (as opposed to f_1). In other words, if the rate of change is slow compared to neuronal firing rates, then optimal processing amounts to minimizing the "spontaneous" firing rate during f_0 and maximizing the "stimulus-evoked" firing rate during f_1.

Optimality and Dynamics of Leaky Integrate-and-Fire. Fig. 1(b) illustrates this concept of repeated firing. The top panel shows an example tracing of the dynamical variable φ_t in the repeated optimal change-detection process. Whenever φ_t reaches the threshold 0.65/(1 − 0.65) (or equivalently, when P_t reaches 0.65, the optimal threshold as determined in the last section), a change is reported and the whole process resets to φ_0. The dynamics of φ_t is remarkably similar to that of a leaky integrate-and-fire neuron. The bottom panel shows a raster plot of input and output spikes over 25 trials, and again the resemblance to spiking neurons is remarkable. Closer inspection indicates that the update rule for the posterior ratio in Eq. 5 indeed approximates the dynamics of a leaky integrate-and-fire neuron [3]. Letting a ≜ f_1(x_t)/[(1 − q) f_0(x_t)], we can rewrite Eq. 5 as

    φ_t = a(φ_{t−1} + q)    (6)

When x_t = 1, a = λ_1/[(1 − q)λ_0] > 1, φ_t increases, and the rate of increase is larger when φ_t itself is larger. This is reminiscent of the near-threshold dynamics of the Hodgkin-Huxley model, in which the voltage-dependent activation of sodium conductance drives the neuron to fire [4]. When x_t = 0, φ_t converges to φ_0^∞ = f_1 q/(f_0(1 − q) − f_1) (by Eq. 5), which is greater than 0 when f_0(0)/f_1(0) ≥ 1/(1 − q). We can think of φ_0^∞ as the resting membrane potential. Since φ_0^∞ increases with q, it implies that the resting potential should be higher and closer to the firing threshold, making the neuron more sensitive to synaptic inputs, when there is a stronger expectation that a change is imminent.

Relationship Between Input and Output Firing Rates. We can also look at the input-output relationship at the firing-rate level. The state-dependent rate parameter a has the expected values:

    a_0 ≜ ⟨a | f_0⟩ = 1/(1 − q),    a_1 ≜ ⟨a | f_1⟩ = (1/(1 − q)) · (λ_1² + λ_0 − 2λ_0λ_1)/(λ_0 − λ_0²).

Given Eqs. 5 and 6, we can write down an approximate, explicit expression for ⟨φ_t | f_i⟩:

    ⟨φ_t | f_i⟩ ≈ a_i(⟨φ_{t−1}⟩ + q) = a_i^t ⟨φ_0⟩ + q Σ_{k=1}^{t} a_i^k = a_i^t ⟨φ_0⟩ + a_i q(1 − a_i^t)/(1 − a_i) ≈ a_i^t (φ_0 + q/(a_i − 1)).    (7)

[Figure 1: Optimal change-detection and dynamics. (a) The empirical average cost (over 1000 trials) has a single shallow minimum at B = 0.65. λ_0 = 0.13, λ_1 = 0.17, q = 0.0125, q_0 = 0.05, c = 0.0005; these parameters apply for the remainder of the paper unless otherwise specified. (b) Top panel: a typical example of the dynamics of φ_t over time. Superimposed on φ_t are the spikes, which are arbitrarily set to a fixed high value. Black bars near the bottom indicate input spikes. The green line indicates the time of the actual change. In this example, a chance flurry of input spikes near the start causes the optimal change-detector to fire; after the change, the increased input firing rate induces the change-detector to fire much more frequently. Note that φ_t decreases whenever there is a lull in input spikes. Bottom panel: raster plot of input (blue) and output (red) spikes; both are more frequent after the change indicated by the green line. (c) Output spikes (bottom) increase in frequency quickly after the increase in input spikes (top).]
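The correspondence is easy to see in simulation. The sketch below (our illustrative code, parameters as in Fig. 1) iterates Eq. (6) on a Bernoulli stream and resets φ to φ_0 at each threshold crossing, reproducing the spike-like trajectories of Fig. 1(b).

```python
import numpy as np

def phi_trace(xs, lam0, lam1, q, q0, B=0.65):
    """Eq. (6): phi_t = a (phi_{t-1} + q) with a = f1(x_t) / ((1-q) f0(x_t));
    emit a spike and reset phi whenever phi >= B / (1 - B)."""
    thresh, phi0 = B / (1 - B), q0 / (1 - q0)
    phi, trace, spikes = phi0, [], []
    for t, x in enumerate(xs):
        a = (lam1 if x else 1 - lam1) / ((1 - q) * (lam0 if x else 1 - lam0))
        phi = a * (phi + q)
        if phi >= thresh:          # threshold crossing: output spike
            spikes.append(t)
            phi = phi0             # post-spike reset
        trace.append(phi)
    return np.array(trace), spikes

rng = np.random.default_rng(2)
xs = np.concatenate([rng.random(250) < 0.13, rng.random(250) < 0.17])
trace, spikes = phi_trace(xs, 0.13, 0.17, 0.0125, 0.05)
print(f"{len(spikes)} spikes at times {spikes}")  # typically denser after t = 250
```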
Given the decision threshold B, ⟨φ_{t_0} | f_0⟩ = ⟨φ_{t_1} | f_1⟩ = B/(1 − B), where t_i is the average number of time steps it takes to reach the threshold for inputs generated from f_i, and can be assumed to be ≫ 1 (it takes many time steps of input integration to reach the threshold). We therefore have

    a_0^{t_0}(φ_0 + q/(a_0 − 1)) = a_1^{t_1}(φ_0 + q/(a_1 − 1))  ⟹  a_1 = a_0^{t_0/t_1} [ (q/(a_0 − 1) + φ_0) / (q/(a_1 − 1) + φ_0) ]^{1/t_1} ≈ a_0^{t_0/t_1}.    (8)

And therefore the ratio of the output firing rates, r_i ≜ 1/t_i for i = 0, 1, is

    r_1/r_0 = t_0/t_1 = log a_1 / log a_0 = [ log(1/(1 − q)) + log((λ_1² + λ_0 − 2λ_0λ_1)/(λ_0 − λ_0²)) ] / log(1/(1 − q)) = 1 + log((λ_1² + λ_0 − 2λ_0λ_1)/(λ_0 − λ_0²)) / log(1/(1 − q)).    (9)

Since the arguments of the log in both the denominator and numerator are greater than 1, r_1/r_0 > 1. Therefore, when the input rates are such that λ_1 > λ_0, then the respective output rates are also such that r_1 > r_0. To see exactly how the output firing rate ratio changes as a function of the input rates, we define the function g(λ_0, λ_1) ≜ (λ_1² + λ_0 − 2λ_0λ_1)/(λ_0 − λ_0²), and take its partial derivatives with respect to λ_0 and λ_1. Then we see that the output firing ratio of Eq. 9 also increases with λ_1 and decreases with λ_0, consistent with intuitions. Fig. 1(c) shows the average detection/firing rate over time: the rise in output firing rate closely follows that in the input, despite the small change in the input firing rates.

Multi-source change-detection. So far, we have only considered the case of the Bernoulli inputs uniformly changing from one rate to another. However, sometimes the problem at hand is one of multi-source change-detection. For instance, a visual neuron detecting the onset of a stimulus might get inputs from up-stream neurons sensitive to stimuli with different properties (different colors, orientations, depths of view, etc.). Here, we extend our framework to the case of two independent sources of inputs, using an approach similar to that taken in [5]. The source f^i, i ∈ {1, 2}, emits observations x^i_1, x^i_2, ... from a Bernoulli process that changes from rate λ^i_0 to λ^i_1 at an unknown time θ^i, where θ^i is generated by a geometric distribution with parameter q^i, and the prior probability P(θ^i = 0) is q^i_0. The objective is to detect θ ≜ min(θ^1, θ^2) with the cost function specified as before (Eqs. 1-2). Defining the individual posteriors P_t^i ≜ P(θ^i ≤ t | x_t^i), where x_t^i ≜ x^i_1, ..., x^i_t, we have the following:

    P_t ≜ P(min(θ^1, θ^2) ≤ t | x_t^1, x_t^2) = 1 − (1 − P_t^1)(1 − P_t^2) = P_t^1 + P_t^2 − P_t^1 P_t^2.    (10)

We can also define the corresponding overall posterior ratio

    φ_t ≜ P_t/(1 − P_t) = φ_t^1 + φ_t^2 + φ_t^1 φ_t^2    (11)

as a function of the individual posterior ratios φ_t^i ≜ P_t^i/(1 − P_t^i).

[Figure 2: Effect of cueing on change-detection. (a) Distribution of first spikes for the optimal stopping policy; spikes aligned to time 0, when the actual change takes place. (b) This distribution is significantly tightened, with its mean brought closer to the actual change, when there is extra temporal information about an imminent change (q = .02). (c) The distribution of spikes is also slightly tightened and brought closer to the actual change time when there is a stronger prior probability of a stimulus appearing (q_0 = .1), as during spatial cueing. The effect is smaller because the higher prior leads to false alarms as well as reducing detection delay.]
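The combination rule (11) is a one-liner. Anticipating the joint threshold (q^1 + q^2 − q^1q^2)/c derived just below, the sketch (ours, with made-up numbers) contrasts the joint detector with the naive OR of two single-source detectors, reproducing the counterexample discussed in the text.

```python
def combined_ratio(phi1, phi2):
    """Eq. (11): posterior ratio for min(theta_1, theta_2) given the
    individual posterior ratios of two independent sources."""
    return phi1 + phi2 + phi1 * phi2

q1, q2, c = 0.01, 0.02, 0.0005
joint_thresh = (q1 + q2 - q1 * q2) / c     # joint detection threshold (see text)
phi1, phi2 = q1 / c, 1e-6                  # detector 1 alone is at its threshold
print("naive OR fires:", phi1 >= q1 / c)                                    # True
print("joint detector fires:", combined_ratio(phi1, phi2) >= joint_thresh)  # False
```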
Following reasoning very close to that of Sec. 2, we can show that if the generative and cost parameters are such that φ_t is lower-bounded by φ_0^∞ for t ≫ 1, then the optimal stopping/detection policy is to terminate at the smallest t such that φ_t = φ_t^1 + φ_t^2 + φ_t^1 φ_t^2 ≥ (q^1 + q^2 − q^1q^2)/c. Despite the generative independence of the two Bernoulli processes, we note that the optimal policy is different from the naive strategy of running two single-source change-detectors and reporting a change as soon as one of them reports a change. To see this, consider the case when φ_t^1 = q^1/c but φ_t^2 ≈ 0, so that φ_t ≈ φ_t^1 = q^1/c < (q^1 + q^2(1 − q^1))/c. Therefore, the individual detector for process 1 would have reported a change, but the overall detector would not.

4 Optimal Change-Detection and Neuromodulation

A sizeable body of behavioral studies suggests that stimulus processing is influenced by cognitive factors such as knowledge about the timing of stimulus onset, or whether or not a stimulus would appear in a particular location. There is evidence that the neuromodulators norepinephrine [6] and acetylcholine [7] are respectively involved in those two aspects of stimulus processing. Separately, there is a rich literature on the effects of these various neuromodulators at the single-cell level [8]. Since we have here an explicit model of neuronal dynamics as a function of the statistical properties associated with the stimulus, we are ideally positioned to examine how these properties should affect the cellular properties, and whether the known behavioral consequences of neuromodulation are consistent with their observed effects at the cellular level.

If the system has some prior knowledge about the onset time of a stimulus, we can model the information accumulation process as starting shortly before the mean change-time, with a tight distribution over the random variable θ. Making q larger achieves both effects in our model. Fig. 2A shows the distribution of first spikes for 1000 trials; Fig. 2B shows that this distribution is more tightly clustered immediately after the actual change time θ for larger q. Experimentally, it has been observed that norepinephrine makes sensory neurons fire more vigorously to bottom-up sensory inputs [8]. It is also known from behavioral studies that a temporal cue improves detection performance, and that noradrenergic depletion diminishes this advantage [6]. If there is some prior knowledge about the stimulus being in a particular location, we can model this with a higher prior probability q_0 of the stimulus being present. This also has the effect of increasing the responsiveness of the change-detection process to input spikes (Fig. 2C), as well as making the detection (spiking) process more sensitive. It has been shown experimentally that a (correct) spatial cue improves stimulus detection, that acetylcholine is implicated in this process [7], and that acetylcholine potentiates neurons and increases their responsiveness to sensory inputs [8].
5 Discussion

Responding accurately and rapidly to changes in the environment is a problem confronted by the brain at every level, from single neurons to behavior. In this work, we have presented a formal treatment of the change-detection problem and obtained important properties of the optimal policy: for a broad class of problems, the optimal detection algorithm is a threshold-crossing process based on the posterior probability of the change having taken place, which can be iteratively updated using Bayes' Rule. Applying these ideas to the case of neurons that must rapidly and accurately detect changes in input spike statistics, we saw that the optimal algorithm yields dynamics remarkably similar to the intracellular dynamics of spiking neurons. This suggests that neurons are optimized for tracking discrete, abrupt changes in the inputs. The model yields insight into the computational import of cellular properties such as resting membrane potential, post-spike reset potential, voltage-dependent conductances, and the input-output spiking relationship. The basic framework was extended to examine the case of multi-source change-detection, a problem faced by a neuron tasked with detecting a stimulus when it could be one of two possible sub-categories. We also explored the computational consequences of spatial and temporal cueing on stimulus detection, and saw that the behavioral and biophysical effects of neuromodulation (e.g. by acetylcholine and norepinephrine) are consistent within the framework.

This novel framework for modeling single-neuron computations is attractive, as it suggests explicit design principles underlying neuronal dynamics, and does not merely provide a descriptive model. Since the computational objects are well-specified at the outset, it provides a natural theoretical link between cellular properties and behavioral constraints. It is also appealing as a self-consistent and elegantly simple model of the computations taking place in single neurons. Every neuron in this scheme simply detects changes in its synaptic inputs, on a spike-to-spike time scale, and propagates its knowledge according to its own speed-accuracy trade-off. All that a down-stream neuron needs from this neuron for its own change-detection computations are this neuron's average firing rate in different states, the rate of change among these states, and the prior probability of this neuron being in one of those states; all of these quantities can be learned over a longer time-scale. In particular, the down-stream neuron does not need to know about this neuron's inputs, its internal dynamics, its decision policy, its objective function, its model of the world, etc. In this scheme, more sophisticated computations can be achieved by pooling together the outputs of different neurons in various configurations; we explored this briefly with the example of multi-source change-detection. Another advantage of this framework is that it eliminates the boundary between inference and decision. In this scheme, neurons make inferences about their inputs and make decisions at every level of processing. It therefore obviates the problem of where in a hierarchical nervous system the nature of the computation changes from input-processing to decision-making.

While the incorporation of formal tools from controlled stochastic processes into the modeling of single-cell computations is a novel approach, this work is related to several other theoretical works.
The idea of neurons processing and representing probabilistic information has received much attention in recent years, with most work focusing on the level of neuronal populations [9-12]. Theoretical work on the representation and processing of probabilistic information in single neurons is comparatively more rare. It has been suggested [13] that certain decision-making neurons may accumulate probabilistic information and spike when the evidence exceeds a certain threshold. However, it was typically assumed that the neurons already receive continuously-valued inputs that represent probabilistic information. Moreover, the tasks considered in these earlier works involved stationary discrimination, such that there was no explicit non-stationarity in the state of the world/inputs. We note that our framework is a generalization of the commonly studied 2AFC task, which is equivalent to setting the change probability q to 0 in our model. Consistent with this characterization, our optimal policy is a generalization of the SPRT algorithm, which is known to be optimal for stationary 2AFC discrimination [14]. One closely related piece of work proposed that single neurons track the log posterior ratio of the state of an underlying binary variable, and spike when the new inputs imply a value for this log posterior ratio that is sufficiently different from the neuron's current estimate based on previous inputs [15]. The key difference at the conceptual level is that this previous work focused on the explicit propagation of probabilistic information across neurons, thus introducing complications into processing and learning that are necessary to make this probabilistic knowledge consistent across neurons. Also, there was no explicit analysis of the optimality of the output spike generation process: how much of a discrepancy merits a spike, and how this depends on the relevant statistical and cost parameters. At the mechanistic level, having the membrane potential represent the log posterior ratio, as opposed to the posterior ratio, requires the dynamical update rule to involve exponentiation. While it was shown in that work that the dynamics is approximately leaky integrate-and-fire during steady state, this does not help in the most interesting case, when the world is rapidly changing and the linear approximation is most detrimental. We showed in this work that there are good reasons for neurons not to integrate inputs linearly. The amount of new evidence provided by each input (spike or no spike) at every time step is state-dependent, and should be so according to optimal information integration. This work suggests that the particular types of nonlinearity we see in neuronal dynamics are desirable from a computational point of view.

One important assumption we made in our model is that the cost of detection delay is linear in time, parameterized by the constant c. Without this assumption, the controlled dynamic process framework would not apply, as the decision policy would not only depend on a state variable, but on time in an explicit way. However, in general, there might not be a fixed c that relates the trade-off between false alarms and detection delay. Intuitively, c should be related to how much reward could be obtained per unit of time if the system were not engaged in prolonging the current observation process. In particular, if a new "trial" begins as soon as the current "trial" terminates, regardless of detection accuracy, then c should be set to P(τ ≥ θ)/⟨τ⟩,
which also places the two cost terms in the same dimension. If we had analytical expressions for P(τ ≥ θ) and ⟨τ⟩ as functions of the decision threshold B, then we could solve the optimization problem through the self-consistency constraint placed on the optimal threshold B* through its dependence on c. Unfortunately, there are no known analytical expressions for P(τ ≥ θ; B) and ⟨τ; B⟩. Alternatively, one might still numerically obtain a value for a fixed detection threshold that incurs the lowest cost among all thresholds. There is no guarantee, however, that the optimal policy lives in this parameterized family of policies. It may be that the best fixed-threshold policy is still far from optimal detection.

There are several important and exciting directions in which we plan to extend the current work. One is the consideration of more complex state transitions. In this work, we assumed that the state transition is always from f_0 to f_1. But in more general scenarios, the inputs are likely to revert back to f_0 before another transition into f_1, and so on. Thus, we need at least two populations of detectors, one that detects the onset (f_0 to f_1), and one that detects the offset (f_1 to f_0). Intuitively, there ought to be recurrent connections between them, to propagate and aggregate the total information about what states the inputs are in. A related problem is when the inputs can be in multiple (> 2) possible states, or even a continuous range of states, with complex transitions among these states. Another interesting question is what happens when we have a different or more complex distribution for the change variable θ. We know, for instance, that animals are capable of utilizing independent temporal information about the mean and variance of the stimulus onset. In the geometric model we assumed, these two variables are coupled. Finally, we note that the formal framework we presented, that of optimal detection of changes in input statistics, is not only applicable to the level of single neurons, but also to systems- and cognitive-level problems. For example, certain problems in reinforcement learning, such as reversal learning and exploration versus exploitation in general, are also amenable to analysis by a similar approach. We intend to explore some of these problems in the future using similar formal tools from controlled dynamic processes.

Acknowledgments
We thank Bill Bialek, Peter Dayan, Savas Dayanik, and Sophie Deneve for helpful discussions.

References
[1] Shiryaev, A N (1978). Optimal Stopping Rules, Springer-Verlag, New York.
[2] Chow, Y S et al (1971). Great Expectations: The Theory of Optimal Stopping, Houghton Mifflin, Boston.
[3] Dayan, P & Abbott, L F (2001). Theoretical Neuroscience, MIT Press, Boston.
[4] Hodgkin, A L & Huxley, A F (1952). J. Physiology 117: 500-44.
[5] Bayraktar, E & Poor, H V (2005). 44th IEEE Conf. on Decision and Control and Eur. Control Conference.
[6] Witte, E A & Marrocco, R T (1997). Psychopharmacology 132: 315-23.
[7] Phillips, J M, McAlonan, K, Robb, W G K & Brown, V (2000). Psychopharmacology 150: 112-6.
[8] Gu, Q (2002). Neuroscience 111: 815-35.
[9] Zemel, R S, Dayan, P, & Pouget, A (1998). Neural Computation 10: 403-30.
[10] Sahani, M & Dayan, P (2003). Neural Computation 15: 2255-79.
[11] Rao, R P (2004). Neural Computation 16: 1-38.
[12] Yu, A J & Dayan, P (2005). Advances in Neural Information Processing Systems 17.
[13] Gold, J I & Shadlen, M N (2002). Neuron 36: 299-308.
[14] Wald, A & Wolfowitz, J (1948). Ann. Math. Statist. 19: 326-39.
[15] Deneve, S (2004). Advances in Neural Information Processing Systems 16.
Attribute-efficient learning of decision lists and linear threshold functions under unconcentrated distributions

Philip M. Long
Google
Mountain View, CA
[email protected]

Rocco A. Servedio
Department of Computer Science
Columbia University
New York, NY
[email protected]

Abstract

We consider the well-studied problem of learning decision lists using few examples when many irrelevant features are present. We show that smooth boosting algorithms such as MadaBoost can efficiently learn decision lists of length k over n Boolean variables using poly(k, log n) many examples, provided that the marginal distribution over the relevant variables is "not too concentrated" in an L_2-norm sense. Using a recent result of Håstad, we extend the analysis to obtain a similar (though quantitatively weaker) result for learning arbitrary linear threshold functions with k nonzero coefficients. Experimental results indicate that the use of a smooth boosting algorithm, which plays a crucial role in our analysis, has an impact on the actual performance of the algorithm.

1 Introduction

A decision list is a Boolean function defined over n Boolean inputs of the following form:

    if ℓ_1 then b_1 else if ℓ_2 then b_2 ... else if ℓ_k then b_k else b_{k+1}.

Here ℓ_1, ..., ℓ_k are literals defined over the n Boolean variables and b_1, ..., b_{k+1} are Boolean values. Since the work of Rivest [24], decision lists have been widely studied in learning theory and machine learning. A question that has received much attention is whether it is possible to attribute-efficiently learn decision lists, i.e. to learn decision lists of length k over n variables using only poly(k, log n) many examples. This question was first asked by Blum in 1990 [3] and has since been re-posed numerous times [4, 5, 6, 29]; as we now briefly describe, a range of partial results have been obtained along different lines. Several authors [4, 29] have noted that Littlestone's Winnow algorithm [17] can learn decision lists of length k using 2^{O(k)} log n examples in time 2^{O(k)} n log n. Valiant [29] and Nevo and El-Yaniv [21] sharpened the analysis of Winnow in the special case where the decision list has only a bounded number of alternations in the sequence of output bits b_1, ..., b_{k+1}. It is well known that the "halving algorithm" (see [1, 2, 19]) can learn length-k decision lists using only O(k log n) examples, but the running time of the algorithm is n^k. Klivans and Servedio [16] used polynomial threshold functions together with Winnow to obtain a tradeoff between running time and the number of examples required, by giving an algorithm that runs in time n^{Õ(k^{1/3})} and uses 2^{Õ(k^{1/3})} log n examples.

In this work we take a different approach by relaxing the requirement that the algorithm work under any distribution on examples or in the mistake-bound model. This relaxation in fact allows us to handle not just decision lists, but arbitrary linear threshold functions with k nonzero coefficients. (Recall that a linear threshold function f : {−1, 1}^n → {−1, 1} is a function f(x) = sgn(Σ_{i=1}^n w_i x_i − θ), where the w_i and θ are real numbers and the sgn function outputs the ±1 numerical sign of its argument.)

The approach and results. We will analyze a smooth boosting algorithm (see Section 2) together with a weak learner that exhaustively considers all 2n possible literals x_i, −x_i as weak hypotheses. The algorithm, which we call Algorithm A, is described in more detail in Section 6.
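For concreteness, a length-k decision list is evaluated by scanning its literals in order and outputting the first matching bit; the tiny sketch below (our encoding choice, not the paper's notation) makes this explicit for inputs in {−1, 1}^n.

```python
def eval_decision_list(rules, default, x):
    """rules: list of ((i, s), b) pairs; literal (i, s) fires on x in
    {-1,+1}^n exactly when x[i] == s, in which case bit b is output."""
    for (i, s), b in rules:
        if x[i] == s:
            return b
    return default

# "if x_3 = 1 then -1 else if x_0 = -1 then +1 else -1"
rules = [((3, +1), -1), ((0, -1), +1)]
print(eval_decision_list(rules, -1, [-1, -1, +1, -1, +1]))   # -> +1
```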
The algorithm's performance can be bounded in terms of the L_2-norm of the distribution over examples. Recall that the L_2-norm of a distribution D over a finite set X is ‖D‖_2 := (Σ_{x∈X} D(x)^2)^{1/2}. The L_2 norm can be used to evaluate the "spread" of a probability distribution: if the probability is concentrated on a constant number of elements of the domain then the L_2 norm is constant, whereas if the probability mass is spread uniformly over a domain of size N then the L_2 norm is 1/√N.

Our main results are as follows. Let D be a distribution over {−1, 1}^n. Suppose the target function f has k relevant variables. Let D_rel denote the marginal distribution over {−1, 1}^k induced by the relevant variables of f (i.e. if the relevant variables are x_{i_1}, ..., x_{i_k}, then the value that D_rel puts on an input (z_1, ..., z_k) is Pr_{x∼D}[x_{i_1} ... x_{i_k} = z_1 ... z_k]). Let U_k be the uniform distribution over {−1, 1}^k and suppose that ‖D_rel‖_2/‖U_k‖_2 = β. (Note that for any D we have β ≥ 1, since U_k has minimal L_2-norm among all distributions over {−1, 1}^k.) Then we have:

Theorem 1 Suppose the target function is an arbitrary decision list in the setting described above. Then given poly(log n, 1/ε, β, log(1/δ)) examples, Algorithm A runs in poly(n, β, 1/ε, log(1/δ)) time and with probability 1 − δ constructs a hypothesis h that is ε-accurate with respect to D.

Theorem 2 Suppose the target function is an arbitrary linear threshold function in the setting described above. Then given poly(k, log n, 2^{Õ((β/ε)^2)}, log(1/δ)) examples, Algorithm A runs in poly(n, 2^{Õ((β/ε)^2)}, log(1/δ)) time and with probability 1 − δ constructs a hypothesis h that is ε-accurate with respect to D.

Relation to Previous Work. Jackson and Craven [14] considered a similar approach of using Boolean literals as weak hypotheses for a boosting algorithm (in their case, AdaBoost). Jackson and Craven proved that for any distribution over examples, the resulting algorithm requires poly(K, log n) examples to learn any weight-K linear threshold function, i.e. any function of the form sgn(Σ_{i=1}^n w_i x_i − θ) over Boolean variables where all weights w_i are integers and Σ_{i=1}^n |w_i| ≤ K (this clearly implies that there are at most K relevant variables). It is well known [12, 18] that general decision lists of length k can only be expressed by linear threshold functions of weight 2^{Ω(k)}, and thus the result of [14] does not give an attribute-efficient learning algorithm for decision lists. More recently, Servedio [27] considered essentially the same algorithm we analyze in this work by specifically studying smooth boosting algorithms with the "best-single-variable" weak learner. He considered a general linear threshold learning problem (with no assumption that there are few relevant variables) and showed that if the distribution satisfies a margin condition then the algorithm has some level of resilience to malicious noise. The analysis of this paper is different from that of [27]; to the best of our knowledge, ours is the first analysis in which the smoothness property of boosting is exploited for attribute-efficient learning.

2 Boosting and Smooth Boosting

Fix a target function f : {−1, 1}^n → {−1, 1} and a distribution D over {−1, 1}^n. A hypothesis function h : {−1, 1}^n → {−1, 1} is a γ-weak hypothesis for f with respect to D if E_D[fh] ≥ γ. We sometimes refer to E_D[fh] as the advantage of h with respect to f.
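The weak learner inside Algorithm A can be implemented as a single pass computing the empirical advantage of every variable; the sketch below (our names and estimator, not the paper's pseudocode) returns the best hypothesis among {x_1, −x_1, ..., x_n, −x_n, 1, −1} under a weighting of the sample.

```python
import numpy as np

def best_literal(X, y, w):
    """Scan h in {x_1, -x_1, ..., x_n, -x_n, 1, -1} and return the one
    maximizing the empirical advantage sum_j w_j y_j h(x_j); X in {-1,+1}."""
    w = w / w.sum()
    adv = X.T @ (w * y)                    # advantage of each variable x_i
    const = float(np.dot(w, y))            # advantage of the constant +1
    best_i = int(np.argmax(np.abs(adv)))   # -x_i has advantage -adv[i]
    if abs(adv[best_i]) >= abs(const):
        sign = 1 if adv[best_i] > 0 else -1
        return ("literal", best_i, sign, abs(adv[best_i]))
    return ("constant", None, 1 if const > 0 else -1, abs(const))

rng = np.random.default_rng(3)
X = rng.choice([-1, 1], size=(500, 20))
y = X[:, 7]                                # target is the literal x_7
print(best_literal(X, y, np.ones(500)))    # -> ('literal', 7, 1, 1.0)
```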
2 Boosting and Smooth Boosting

Fix a target function $f : \{-1,1\}^n \to \{-1,1\}$ and a distribution $D$ over $\{-1,1\}^n$. A hypothesis function $h : \{-1,1\}^n \to \{-1,1\}$ is a $\gamma$-weak hypothesis for $f$ with respect to $D$ if $E_D[fh] \ge \gamma$. We sometimes refer to $E_D[fh]$ as the advantage of $h$ with respect to $f$.

We remind the reader that a boosting algorithm is an algorithm which operates in a sequence of stages and at each stage $t$ maintains a distribution $D_t$ over $\{-1,1\}^n$. At stage $t$ the boosting algorithm is given a $\gamma$-weak hypothesis $h_t$ for $f$ with respect to $D_t$; the boosting algorithm then uses this to construct the next distribution $D_{t+1}$ over $\{-1,1\}^n$. After $T$ such stages the boosting algorithm constructs a final hypothesis $h$ based on the weak hypotheses $h_1, \dots, h_T$ that is guaranteed to have high accuracy with respect to the initial distribution $D$. See [25] for more details.

Let $D_1, D_2$ be two distributions. For $\kappa \ge 1$ we say that $D_1$ is $\kappa$-smooth with respect to $D_2$ if for all $x \in \{-1,1\}^n$, $D_1(x)/D_2(x) \le \kappa$. Following [15], we say that a boosting algorithm $B$ is $\kappa(\epsilon,\gamma)$-smooth if for any initial distribution $D$ and any distribution $D_t$ that is generated starting from $D$ when $B$ is used to boost to $\epsilon$-accuracy with $\gamma$-weak hypotheses at each stage, $D_t$ is $\kappa(\epsilon,\gamma)$-smooth w.r.t. $D$. It is known that there are algorithms that are $\kappa$-smooth for $\kappa = O(\frac{1}{\epsilon})$ with no dependence on $\gamma$, see e.g. [8]. For the rest of the paper $B$ will denote such a smooth boosting algorithm.

It is easy to see that every distribution $D$ which is $\frac{1}{\epsilon}$-smooth w.r.t. the uniform distribution $\mathcal U$ satisfies $\|D\|_2/\|\mathcal U\|_2 \le \sqrt{1/\epsilon}$. On the other hand, there are distributions $D$ that are highly non-smooth relative to $\mathcal U$ but which still have $\|D\|_2/\|\mathcal U\|_2$ small. For instance, the distribution $D$ over $\{-1,1\}^k$ which puts weight $2^{-k/2}$ on a single point and distributes the remaining weight uniformly on the other $2^k - 1$ points is only $2^{k/2}$-smooth (i.e. very non-smooth) but satisfies $\|D\|_2/\|\mathcal U_k\|_2 = \Theta(1)$. Thus the $L_2$-norm condition we consider in this paper is a weaker condition than smoothness with respect to the uniform distribution.

3 Total variation distance and $L_2$-norm of distributions

The total variation distance between two probability distributions $D_1, D_2$ over a finite set $X$ is
$$d_{TV}(D_1,D_2) := \max_{S \subseteq X}\, D_1(S) - D_2(S) = \frac{1}{2}\sum_{x\in X}|D_1(x) - D_2(x)|.$$
It is easy to see that the total variation distance between any two distributions is at most 1, and equals 1 if and only if the supports of the distributions are disjoint. The following is immediate:

Lemma 1 For any two distributions $D_1$ and $D_2$ over a finite domain $X$, we have $d_{TV}(D_1,D_2) = 1 - \sum_{x\in X}\min\{D_1(x), D_2(x)\}$.

We can bound the total variation distance between a distribution $D$ and the uniform distribution in terms of the ratio $\|D\|_2/\|\mathcal U\|_2$ of the $L_2$-norms as follows:

Lemma 2 For any distribution $D$ over a finite domain $X$, if $\mathcal U$ is the uniform distribution over $X$, we have $d_{TV}(D,\mathcal U) \le 1 - \frac{\|\mathcal U\|_2^2}{4\|D\|_2^2}$.

Proof: Let $M = \frac{\|D\|_2}{\|\mathcal U\|_2}$. Since $\|D\|_2^2 = E_{x\sim D}[D(x)]$, we have $E_{x\sim D}[D(x)] = M^2\|\mathcal U\|_2^2 = \frac{M^2}{|X|}$. By Markov's inequality,
$$\Pr_{x\sim D}\left[D(x) \ge 2M^2\,\mathcal U(x)\right] = \Pr_{x\sim D}\left[D(x) \ge \frac{2M^2}{|X|}\right] \le 1/2. \qquad(1)$$
By Lemma 1, we have
$$1 - d_{TV}(D,\mathcal U) = \sum_x \min\{D(x),\mathcal U(x)\} \ge \sum_{x:\,D(x)\le 2M^2\mathcal U(x)}\min\{D(x),\mathcal U(x)\} \ge \sum_{x:\,D(x)\le 2M^2\mathcal U(x)}\frac{D(x)}{2M^2} \ge \frac{1}{4M^2},$$
where the second inequality uses the fact that $M \ge 1$ (so $D(x)/2M^2 < D(x)$) and the third inequality uses (1). Using the definition of $M$ and solving for $d_{TV}(D,\mathcal U)$ completes the proof.

4 Weak hypotheses for decision lists

Let $f$ be any decision list that depends on $k$ variables:

if $\ell_1$ then output $b_1$ else $\cdots$ else if $\ell_k$ then output $b_k$ else output $b_{k+1}$,  (2)

where each $\ell_i$ is either "$(x_i = 1)$" or "$(x_i = -1)$". The following folklore lemma can be proved by an easy induction (see, e.g.,
[12, 26] for proofs of essentially equivalent claims):

Lemma 3 The decision list $f$ can be represented by a linear threshold function of the form $f(x) = \mathrm{sgn}(c_1 x_1 + \cdots + c_k x_k - \theta)$ where each $c_i = \pm 2^{k-i}$ and $\theta$ is an even integer in the range $[-2^k, 2^k]$.

It is easy to see that for any fixed $c_1,\dots,c_k$ as in the lemma, as $x = (x_1,\dots,x_k)$ varies over $\{-1,1\}^k$ the linear form $c_1x_1 + \cdots + c_kx_k$ will assume each odd integer value in the range $[-2^k, 2^k]$ exactly once. Now we can prove:

Lemma 4 Let $f$ be any decision list of length $k$ over the $n$ Boolean variables $x_1,\dots,x_n$. Let $D$ be any distribution over $\{-1,1\}^n$, and let $D^{rel}$ denote the marginal distribution over $\{-1,1\}^k$ induced by the $k$ relevant variables of $f$. Suppose that $d_{TV}(D^{rel},\mathcal U_k) \le 1 - \rho$. Then there is some weak hypothesis $h \in \{x_1,-x_1,\dots,x_n,-x_n,1,-1\}$ which satisfies $E_{D^{rel}}[fh] \ge \frac{\rho^2}{16}$.

Proof: We first observe that by Lemma 3 and the well-known "discriminator lemma" of [23, 11], under any distribution $D$ some weak hypothesis $h$ from $\{x_1,-x_1,\dots,x_n,-x_n,1,-1\}$ must have $E_D[fh] \ge \frac{1}{2^k}$. This immediately establishes the lemma for all $\rho \le \frac{4}{2^{k/2}}$, and thus we may suppose w.l.o.g. that $\rho > \frac{4}{2^{k/2}}$.

We may assume w.l.o.g. that $f$ is the decision list (2), that is, that the first literal concerns $x_1$, the second concerns $x_2$, and so on. Let $L(x)$ denote the linear form $c_1x_1 + \cdots + c_kx_k - \theta$ from Lemma 3, so $f(x) = \mathrm{sgn}(L(x))$. If $x$ is drawn uniformly from $\{-1,1\}^k$, then $L(x)$ is distributed uniformly over the $2^k$ odd integers in the interval $[-2^k - \theta,\, 2^k - \theta]$, as $c_1x_1$ is uniform over $\pm 2^{k-1}$, $c_2x_2$ over $\pm 2^{k-2}$, and so on.

Let $S$ denote the set of those $x \in \{-1,1\}^k$ that satisfy $|L(x)| \le \frac{\rho}{4}2^k$. Note that there are at most $\frac{\rho}{4}2^k + 1$ elements in $S$, corresponding to $L(x) = \pm 1, \pm 3, \dots, \pm(2j-1)$, where $j$ is the greatest integer such that $2j - 1 \le \frac{\rho}{4}2^k$. Since $\rho > \frac{4}{2^{k/2}}$, certainly $|S| \le 1 + \frac{\rho}{4}2^k \le \frac{\rho}{2}2^k$. We thus have $\Pr_{\mathcal U_k}[|L(x)| > \frac{\rho}{4}2^k] \ge 1 - \rho/2$. It follows that $\Pr_{D^{rel}}[|L(x)| > \frac{\rho}{4}2^k] \ge \frac{\rho}{2}$ (for otherwise we would have $d_{TV}(D^{rel},\mathcal U_k) > 1 - \rho$), and consequently we have $E_{D^{rel}}[|L(x)|] \ge \frac{\rho^2}{8}2^k$.

Now we follow the simple argument used to prove the "discriminator lemma" [23, 11]. We have
$$E_{D^{rel}}[|L(x)|] = E_{D^{rel}}[f(x)L(x)] = c_1E[f(x)x_1] + \cdots + c_kE[f(x)x_k] - \theta E[f(x)] \ge \frac{\rho^2}{8}2^k. \qquad(3)$$
Recalling that each $|c_i| = 2^{k-i}$, it follows that some $h \in \{x_1,-x_1,\dots,x_n,-x_n,1,-1\}$ must satisfy $E_{D^{rel}}[fh] \ge \big(\frac{\rho^2}{8}2^k\big)/\big(2^{k-1} + \cdots + 2^0 + |\theta|\big)$. Since $|\theta| \le 2^k$ this is at least $\frac{\rho^2}{16}$, and the proof is complete.
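The claim following Lemma 3 is easy to check numerically; a quick sketch (the value of $k$ and the all-positive sign pattern are arbitrary choices of ours):

```python
import itertools
import numpy as np

k = 5
c = np.array([2 ** (k - i) for i in range(1, k + 1)])  # |c_i| = 2^{k-i}; all signs +1 here
theta = 0  # any even integer in [-2^k, 2^k] would do
vals = sorted(int(c @ np.array(x)) - theta
              for x in itertools.product([-1, 1], repeat=k))
# L(x) = c.x - theta hits each odd integer in [-2^k - theta, 2^k - theta] exactly once
assert vals == list(range(-2 ** k + 1, 2 ** k, 2))
```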
5 Weak hypotheses for linear threshold functions

Now we consider the more general setting of arbitrary linear threshold functions. Though there are additional technical complications, the basic idea is as in the previous section. We will use the following fact due to Håstad:

Fact 3 (Håstad) (see [28], Theorem 9) Let $f : \{-1,1\}^k \to \{-1,1\}$ be any linear threshold function that depends on all $k$ variables $x_1,\dots,x_k$. There is a representation $\mathrm{sgn}(\sum_{i=1}^k w_ix_i - \theta)$ for $f$ which is such that (assuming the weights $w_1,\dots,w_k$ are ordered by decreasing magnitude, $1 = |w_1| \ge |w_2| \ge \cdots \ge |w_k| > 0$) we have $|w_i| \ge \frac{1}{i!\,(k+1)}$ for all $i = 2,\dots,k$.

The main result of this section is the following lemma. The proof uses ideas from the proof of Theorem 2 in [28].

Lemma 5 Let $f : \{-1,1\}^n \to \{-1,1\}$ be any linear threshold function that depends on $k$ variables. Let $D$ be any distribution over $\{-1,1\}^n$, and let $D^{rel}$ denote the marginal distribution over $\{-1,1\}^k$ induced by the $k$ relevant variables of $f$. Suppose that $d_{TV}(D^{rel},\mathcal U_k) \le 1 - \rho$. Then there is some weak hypothesis $h \in \{x_1,-x_1,\dots,x_n,-x_n,1,-1\}$ which satisfies $E_{D^{rel}}[fh] \ge 1/\big(k^2\,2^{\tilde O(1/\rho^2)}\big)$.

Proof sketch: We may assume that $f(x) = \mathrm{sgn}(L(x))$ where $L(x) = w_1x_1 + \cdots + w_kx_k - \theta$ with $w_1,\dots,w_k$ as described in Fact 3.

Let $\ell := \tilde O(1/\rho^2) = O\big((1/\rho^2)\,\mathrm{poly}(\log(1/\rho))\big)$. (We will specify $\ell$ in more detail later.) Suppose first that $\ell \ge k$. By a well-known result of Muroga et al. [20], every linear threshold function $f$ that depends on $k$ variables can be represented using integer weights each of magnitude $2^{O(k\log k)}$. Now the discriminator lemma [11] implies that for any distribution $P$, for some $h \in \{x_1,-x_1,\dots,x_n,-x_n,1,-1\}$ we have $E_P[fh] \ge 1/2^{O(k\log k)}$. If $\ell \ge k$ and $\ell = O((1/\rho^2)\,\mathrm{poly}(\log(1/\rho)))$, we have $k\log k = \tilde O(1/\rho^2)$. Thus, in this case, $E_P[fh] \ge 1/2^{\tilde O(1/\rho^2)}$, so the lemma holds if $\ell \ge k$. Thus we henceforth assume that $\ell < k$.

It remains only to show that
$$E_{D^{rel}}[|L(x)|] \ge 1/\big(k\,2^{\tilde O(1/\rho^2)}\big); \qquad(4)$$
once we have this, following (3) we get
$$E_{D^{rel}}[|L(x)|] = E_{D^{rel}}[fL] = w_1E[f(x)x_1] + \cdots + w_kE[f(x)x_k] - \theta E[f(x)] \ge 1/\big(k\,2^{\tilde O(1/\rho^2)}\big),$$
and now since each $|w_i| \le 1$ (and w.l.o.g. $|\theta| \le k$) this implies that some $h$ satisfies $E_{D^{rel}}[fh] \ge 1/\big(k^2\,2^{\tilde O(1/\rho^2)}\big)$ as desired.

Similar to [28] we consider two cases (which are slightly different from the cases in [28]).

Case I: For all $1 \le i \le \ell$ we have $w_i^2/\big(\sum_{j=i}^k w_j^2\big) > \rho^2/576$.

Let $\sigma := \sqrt{2\big(\sum_{j=\ell+1}^k w_j^2\big)\ln(8/\rho)}$. Recall the following version of Hoeffding's bound: for any $0 \ne w \in \mathbb R^k$ and any $\lambda > 0$, we have $\Pr_{x\in\{-1,1\}^k}[|w\cdot x| \ge \lambda\|w\|] \le 2e^{-\lambda^2/2}$ (where we write $\|w\|$ to denote $\sqrt{\sum_{i=1}^k w_i^2}$). This bound directly gives us that
$$\Pr_{x\sim\mathcal U_k}\big[|w_{\ell+1}x_{\ell+1} + \cdots + w_kx_k| \ge \sigma\big] \le 2e^{-2\ln(8/\rho)/2} = \frac{\rho}{4}. \qquad(5)$$
Moreover, the argument in [28] that establishes equation (4) of [28] also yields
$$\Pr_{x\sim\mathcal U_k}\big[|w_1x_1 + \cdots + w_\ell x_\ell - \theta| \le 2\sigma\big] \le \frac{\rho}{4} \qquad(6)$$
in our current setting. (The only change that needs to be made to the argument of [28] is adjusting various constant factors in the definition of $\ell$.) Equations (5) and (6) together yield $\Pr_{x\sim\mathcal U_k}[|w_1x_1 + \cdots + w_kx_k - \theta| \ge \sigma] \ge 1 - \frac{\rho}{2}$. Now as before, taken together with the $d_{TV}$ bound this yields $\Pr_{D^{rel}}[|L(x)| \ge \sigma] \ge \frac{\rho}{2}$ and hence we have $E_{D^{rel}}[|L(x)|] \ge \sigma\rho/2$. Since $\sigma > w_{\ell+1}$ and $w_{\ell+1} \ge 1/\big((k+1)(\ell+1)!\big)$ by Fact 3, we have established (4) in Case I.

Case II: For some value $J \le \ell$ we have $w_J^2/\big(\sum_{i=J}^k w_i^2\big) \le \rho^2/576$.

Let us fix any setting $z \in \{-1,1\}^{J-1}$ of the variables $x_1,\dots,x_{J-1}$. By an inequality due to Petrov [22] (see [28], Theorem 4) we have
$$\Pr_{x_J,\dots,x_k\sim\mathcal U_{k-J+1}}\big[|w_1z_1 + \cdots + w_{J-1}z_{J-1} + w_Jx_J + \cdots + w_kx_k - \theta| \le w_J\big] \le \frac{6w_J}{\sqrt{\sum_{i=J}^k w_i^2}} \le \frac{6\rho}{24} = \frac{\rho}{4}.$$
Thus for each $z \in \{-1,1\}^{J-1}$ we have $\Pr_{x\sim\mathcal U_k}\big[|L(x)| \le w_J \mid x_1\cdots x_{J-1} = z_1\cdots z_{J-1}\big] \le \frac{\rho}{4}$. This immediately yields $\Pr_{x\sim\mathcal U_k}[|L(x)| > w_J] \ge 1 - \frac{\rho}{4}$, which in turn gives $\Pr_{x\sim D^{rel}}[|L(x)| > w_J] \ge \frac{3\rho}{4}$ by our usual arguments, and hence $E_{D^{rel}}[|L(x)|] \ge \frac{3\rho w_J}{4}$. Now (4) follows using Fact 3 and $J \le \ell$.

6 Putting it all together

Algorithm A works by running an $O(\frac{1}{\epsilon})$-smooth boosting-by-filtering algorithm; for concreteness we use the MadaBoost algorithm of Domingo and Watanabe [8]. At the $t$-th stage of boosting, when MadaBoost simulates the distribution $D_t$, the weak learning algorithm works as follows:
$O\!\big(\frac{\log n + \log(1/\delta')}{\gamma^2}\big)$ many examples are drawn from the simulated distribution $D_t$, and these examples are used to obtain an empirical estimate of $E_{D_t}[fh]$ for each $h \in \{x_1,-x_1,\dots,x_n,-x_n,-1,1\}$. (Here $\gamma$ is an upper bound on the advantage $E_{D_t}[fh]$ of the weak hypotheses used at each stage; we discuss this more below.) The weak hypothesis used at this stage is the one with the highest observed empirical estimate. The algorithm is run for $T = O\big(\frac{1}{\epsilon\gamma^2}\big)$ stages of boosting.

Consider any fixed stage $t$ of the algorithm's execution. As shown in [8], at most $O(\frac{1}{\epsilon})$ draws from the original distribution $D$ are required for MadaBoost to simulate a draw from the distribution $D_t$. (This is a direct consequence of the fact that MadaBoost is $O(\frac{1}{\epsilon})$-smooth; the distribution $D_t$ is simulated using rejection sampling from $D$.) Standard tail bounds show that if the best hypothesis $h$ has $E[fh] \ge \gamma$ then with probability $1-\delta'$ the hypothesis selected will have $E[fh] \ge \gamma/2$. In [8] it is shown that if MadaBoost always has an $\Omega(\gamma)$-accurate weak hypothesis at each stage, then after at most $T = O\big(\frac{1}{\epsilon\gamma^2}\big)$ stages the algorithm will construct a hypothesis which has error at most $\epsilon$. Thus it suffices to take $\delta' = O(\epsilon\gamma^2\delta)$. The overall number of examples used by Algorithm A is $O\!\big(\frac{\log n + \log(1/\delta')}{\epsilon^2\gamma^4}\big)$.

Thus to establish Theorems 1 and 2, it remains only to show that for any initial distribution $D$ with $\|D^{rel}\|_2/\|\mathcal U_k\|_2 = \tau$, the distributions $D_t$ that arise in the course of boosting are always such that the best weak hypothesis $h \in \{x_1,-x_1,\dots,x_n,-x_n,-1,1\}$ has sufficiently large advantage. Suppose $f$ is a target function that depends on some set of $k$ (out of $n$) variables. Consider what happens if we run a $\frac{1}{\epsilon}$-smooth boosting algorithm, where the initial distribution $D$ satisfies $\|D^{rel}\|/\|\mathcal U_k\| = \tau$. At each stage we will have $D_t^{rel}(x) \le \frac{1}{\epsilon}D^{rel}(x)$ for all $x \in \{-1,1\}^k$, and consequently we will have
$$\|D_t^{rel}\|_2^2 = \sum_{x\in\{-1,1\}^k} D_t^{rel}(x)^2 \le \frac{1}{\epsilon^2}\sum_{x\in\{-1,1\}^k} D^{rel}(x)^2 \le \frac{\tau^2}{\epsilon^2}\sum_{x\in\{-1,1\}^k}\mathcal U_k(x)^2.$$
Thus, by Lemma 2 each distribution $D_t$ will satisfy $d_{TV}(D_t^{rel},\mathcal U_k) \le 1 - \epsilon^2/(4\tau^2)$. Now Lemmas 4 and 5 imply that in both cases (decision lists and LTFs) the best weak hypothesis $h$ does indeed have the required advantage.

7 Experiments

The smoothness property enabled the analysis of this paper. Is smoothness really helpful for learning decision lists with respect to diffuse distributions? Is it critical? This section is aimed at addressing these questions experimentally. We compared the accuracy of the classifiers output by a number of smooth boosters from the literature with AdaBoost (which is known not to be a smooth booster in general, see e.g. Section 4.2 of [7]) on synthetic data in which the examples were distributed uniformly, and the class designations were determined by applying a randomly generated decision list. The number of relevant variables was fixed at 10. The decision list was determined by picking $\ell_1,\dots,\ell_{10}$ and $b_1,\dots,b_{11}$ from (2) independently uniformly at random from among the possibilities. We evaluated the following algorithms: (a) AdaBoost [9], (b) MadaBoost [8], (c) SmoothBoost [27], and (d) a smooth booster proposed by Gavinsky [10]. Due to space constraints, we cannot describe each of these in detail.¹ Each booster was used to reweight the training data, and in each round, the literal which minimized the weighted training error was chosen.
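The boosters compared here differ mainly in their reweighting rules. As a rough batch-mode illustration of the capped-weight idea (this is not the exact MadaBoost of [8]: we omit its boosting-by-filtering/rejection-sampling machinery and its stopping rule, and the voting weights below are the usual AdaBoost ones):

```python
import numpy as np

def best_literal(X, y, w):
    """Weak learner: among {x_i, -x_i, +1, -1}, return the hypothesis h
    maximizing the weighted advantage sum_j w_j * f(x_j) * h(x_j)."""
    adv = (w * y) @ X                 # advantage of each literal x_i
    c = float(np.dot(w, y))           # advantage of the constant +1 hypothesis
    cands = np.concatenate([adv, -adv, [c, -c]])
    j = int(np.argmax(cands))
    n_feats = X.shape[1]
    if j < n_feats:
        return lambda Z, i=j: Z[:, i]
    if j < 2 * n_feats:
        return lambda Z, i=j - n_feats: -Z[:, i]
    s = 1.0 if j == 2 * n_feats else -1.0
    return lambda Z, s=s: np.full(len(Z), s)

def capped_weight_boost(X, y, rounds=100):
    """Capped ('smooth') reweighting in the spirit of MadaBoost: an example's
    weight is min(1, exp(-margin)) before normalization, so no example's
    weight ever exceeds its weight under the initial distribution."""
    margins = np.zeros(len(y))        # cumulative y_j * H_t(x_j)
    hyps, alphas = [], []
    for _ in range(rounds):
        w = np.minimum(1.0, np.exp(-margins))
        w /= w.sum()
        h = best_literal(X, y, w)
        gamma = float(np.dot(w, y * h(X)))      # weighted advantage
        gamma = min(gamma, 1.0 - 1e-9)          # guard against log of infinity
        if gamma <= 0:
            break
        alpha = 0.5 * np.log((1 + gamma) / (1 - gamma))
        hyps.append(h)
        alphas.append(alpha)
        margins += alpha * y * h(X)
    return lambda Z: np.where(
        sum((a * h(Z) for a, h in zip(alphas, hyps)), np.zeros(len(Z))) >= 0, 1, -1)
```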
Some of the algorithms choose the number of rounds of boosting as a function of the desired accuracy; instead, we ran all algorithms for 100 rounds. All boosters reweighted the data by normalizing some function that assigns weight to examples based on how well previously chosen base classifiers are doing at classifying them correctly. The booster proposed by Gavinsky might set all of these weights to zero: in such cases, it was terminated. For each choice of the number of examples $m$ and the number of features $n$, we repeated the following steps: (a) generate a random target, (b) generate $m$ random examples, (c) split them into a training set with 2/3 of the examples and a test set with the remaining 1/3, (d) apply all the algorithms on the training set, and (e) apply all the resulting classifiers on the test set. We repeated the steps enough times so that the total size of the test sets was at least 10000; that is, we repeated them $\lceil 30000/m \rceil$ times. The average test-set error is reported. SmoothBoost [27] has two parameters, $\gamma$ and $\theta$. In his analysis, $\theta = \gamma/(2+\gamma)$, so we used the same setting. We tried his algorithm with $\gamma$ set to each of 0.05, 0.1, 0.2 and 0.4.

¹ Very roughly speaking, AdaBoost reweights the data to assign more weight to examples that previously chosen base classifiers have often classified incorrectly; it then outputs a weighted vote over the outputs of the base classifiers, where each voting weight is determined as a function of how well its base classifier performed. MadaBoost modifies AdaBoost to place a cap on the weight, prior to normalization. SmoothBoost [27] caps the weight more aggressively as learning progresses, but also reweights the data and weighs the base classifiers in a manner that does not depend on how well they performed. The form of the manner in which Gavinsky's booster updates weights is significantly different from AdaBoost, and reminiscent of [13, 15].

Table 1: Average test set error rate

    m     n     Ada     Mada   Gavinsky  SB(0.05)  SB(0.1)  SB(0.2)  SB(0.4)
  100   100   0.086   0.077    0.088     0.071     0.067    0.077    0.089
  200   100   0.052   0.045    0.050     0.067     0.047    0.047    0.051
  500   100   0.022   0.018    0.024     0.056     0.031    0.025    0.031
 1000   100   0.016   0.014    0.024     0.063     0.036    0.028    0.033
  100  1000   0.123   0.119    0.116     0.093     0.101    0.117    0.128
  200  1000   0.079   0.072    0.083     0.071     0.064    0.072    0.081
  500  1000   0.045   0.039    0.045     0.050     0.040    0.040    0.044
 1000  1000   0.033   0.026    0.035     0.048     0.038    0.032    0.036

Table 2: Average smoothness

    m     n     Ada    Mada   Gavinsky  SB(0.05)  SB(0.1)  SB(0.2)  SB(0.4)
  100   100   13.6    8.8     11.7       3.9       6.0      7.5      9.1
  200   100   19.8   13.1     12.5       4.1       6.9      9.4      9.9
  500   100   32.2   20.7     15.2       5.0       9.1     11.5     12.2
 1000   100   37.2   19.2     15.3       7.1      10.7     12.1     13.0
  100  1000   13.3    7.7     26.8       3.7       5.3      6.1      7.4
  200  1000   19.8   11.5     19.4       4.4       7.4      9.5     11.7
  500  1000   28.1   16.7     16.2       4.9       8.6     10.9     11.5
 1000  1000   36.7   20.1     14.7       7.2      11.0     12.1     13.3

The test set error rates are tabulated in Table 1. MadaBoost always improved on the accuracy of AdaBoost. The results are consistent with the possibility that AdaBoost learns decision lists attribute-efficiently with respect to the uniform distribution; this motivates theoretical study of whether this is true. One possible route is to prove that, for sources like this, AdaBoost is, with high probability, a smooth boosting algorithm. The average smoothnesses are given in Table 2. SmoothBoost [27] was seen to be fairly robust to the choice of $\gamma$; with a good choice it sometimes performed the best. This motivates research into adaptive boosters along the lines of SmoothBoost.

References
[1] D. Angluin. Queries and concept learning. Machine Learning, 2:319–342, 1988.
[2] J. Barzdin and R. Freivald. On the prediction of general recursive functions. Soviet Mathematics Doklady, 13:1224–1228, 1972.
[3] A. Blum. Learning Boolean functions in an infinite attribute space. In Proceedings of the Twenty-Second Annual Symposium on Theory of Computing, pages 64–72, 1990.
[4] A. Blum. On-line algorithms in machine learning. Available at http://www.cs.cmu.edu/~avrim/Papers/pubs.html, 1996.
[5] A. Blum, L. Hellerstein, and N. Littlestone. Learning in the presence of finitely or infinitely many irrelevant attributes. Journal of Computer and System Sciences, 50:32–40, 1995.
[6] A. Blum and P. Langley. Selection of relevant features and examples in machine learning. Artificial Intelligence, 97(1-2):245–271, 1997.
[7] N. Bshouty and D. Gavinsky. On boosting with optimal poly-bounded distributions. Journal of Machine Learning Research, 3:483–506, 2002.
[8] C. Domingo and O. Watanabe. MadaBoost: a modified version of AdaBoost. In Proceedings of the Thirteenth Annual Conference on Computational Learning Theory, pages 180–189, 2000.
[9] Y. Freund and R. Schapire. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of Computer and System Sciences, 55(1):119–139, 1997.
[10] D. Gavinsky. Optimally-smooth adaptive boosting and application to agnostic learning. Journal of Machine Learning Research, 4:101–117, 2003.
[11] A. Hajnal, W. Maass, P. Pudlak, M. Szegedy, and G. Turan. Threshold circuits of bounded depth. Journal of Computer and System Sciences, 46:129–154, 1993.
[12] S. Hampson and D. Volper. Linear function neurons: structure and training. Biological Cybernetics, 53:203–217, 1986.
[13] R. Impagliazzo. Hard-core distributions for somewhat hard problems. In Proceedings of the Thirty-Sixth Annual Symposium on Foundations of Computer Science, pages 538–545, 1995.
[14] J. Jackson and M. Craven. Learning sparse perceptrons. In NIPS 8, pages 654–660, 1996.
[15] A. Klivans and R. Servedio. Boosting and hard-core sets. Machine Learning, 53(3):217–238, 2003. Preliminary version in Proc. FOCS'99.
[16] A. Klivans and R. Servedio. Toward attribute efficient learning of decision lists and parities. In Proceedings of the 17th Annual Conference on Learning Theory, pages 224–238, 2004.
[17] N. Littlestone. Learning quickly when irrelevant attributes abound: a new linear-threshold algorithm. Machine Learning, 2:285–318, 1988.
[18] M. Minsky and S. Papert. Perceptrons: an introduction to computational geometry. MIT Press, Cambridge, MA, 1968.
[19] T. Mitchell. Generalization as search. Artificial Intelligence, 18:203–226, 1982.
[20] S. Muroga, I. Toda, and S. Takasu. Theory of majority switching elements. J. Franklin Institute, 271:376–418, 1961.
[21] Z. Nevo and R. El-Yaniv. On online learning of decision lists. Journal of Machine Learning Research, 3:271–301, 2002.
[22] V. V. Petrov. Limit theorems of probability theory. Oxford Science Publications, Oxford, England, 1995.
[23] G. Pisier. Remarques sur un résultat non publié de B. Maurey. Sém. d'Analyse Fonctionnelle, 1(12):1980–81, 1981.
[24] R. Rivest. Learning decision lists. Machine Learning, 2(3):229–246, 1987.
[25] R. Schapire. Theoretical views of boosting. In Proc. 10th ALT, pages 12–24, 1999.
[26] R. Servedio. On PAC learning using Winnow, Perceptron, and a Perceptron-like algorithm. In Proceedings of the Twelfth Annual Conference on Computational Learning Theory, pages 296–307, 1999.
[27] R. Servedio. Smooth boosting and learning with malicious noise. Journal of Machine Learning Research, 4:633–648, 2003. Preliminary version in Proc. COLT'01.
[28] R. Servedio. Every linear threshold function has a low-weight approximator. In Proceedings of the 21st Conference on Computational Complexity (CCC), pages 18–30, 2006.
[29] L. Valiant. Projection learning. Machine Learning, 37(2):115–130, 1999.
Clustering Under Prior Knowledge with Application to Image Segmentation

Mário A. T. Figueiredo, Instituto de Telecomunicações, Instituto Superior Técnico, Technical University of Lisbon, Portugal, [email protected]
Dong Seon Cheng, Vittorio Murino, Vision, Image Processing, and Sound Laboratory, Dipartimento di Informatica, University of Verona, Italy, [email protected], [email protected]

Abstract

This paper proposes a new approach to model-based clustering under prior knowledge. The proposed formulation can be interpreted from two different angles: as penalized logistic regression, where the class labels are only indirectly observed (via the probability density of each class); as finite mixture learning under a grouping prior. To estimate the parameters of the proposed model, we derive a (generalized) EM algorithm with a closed-form E-step, in contrast with other recent approaches to semi-supervised probabilistic clustering which require Gibbs sampling or suboptimal shortcuts. We show that our approach is ideally suited for image segmentation: it avoids the combinatorial nature of Markov random field priors, and opens the door to more sophisticated spatial priors (e.g., wavelet-based) in a simple and computationally efficient way. Finally, we extend our formulation to work in unsupervised, semi-supervised, or discriminative modes.

1 Introduction

Most approaches to semi-supervised learning (SSL) see the problem from one of two (dual) perspectives: supervised classification with additional unlabelled data (see [20] for a recent survey); clustering with prior information or constraints (e.g., [4, 10, 11, 15, 17]). The second perspective, usually termed semi-supervised clustering (SSC), is usually adopted when labels are totally absent, but there are (usually pair-wise) relations that one wishes to enforce or encourage. Most SSC techniques work by incorporating the constraints (or prior) into classical algorithms such as K-means or EM for mixtures. The semi-supervision may be hard (i.e., grouping constraints [15, 17]), or have the form of a prior under which probabilistic clustering is performed [4, 11]. The latter is clearly the most natural formulation for cases where one wishes to encourage, not enforce, certain relations; an obvious example is image segmentation, seen as clustering under a spatial prior, where neighboring sites should be encouraged, but not constrained, to belong to the same cluster/segment. However, the previous EM-type algorithms for this class of methods have a major drawback: the presence of the prior makes the E-step non-trivial, forcing the use of expensive Gibbs sampling [11] or suboptimal methods such as the iterated conditional modes algorithm [4].

In this paper, we introduce a new approach to mixture-based SSC, leading to a simple, fully deterministic, generalized EM (GEM) algorithm. The keystone is the formulation of SSC as a penalized logistic regression problem, where the labels are only indirectly observed. The linearity of the resulting complete log-likelihood, w.r.t. the missing group labels, underlies the simplicity of the resulting GEM algorithm. When applied to image segmentation, our method allows using spatial priors which are typical of image estimation problems (e.g., restoration/denoising), such as Gaussian fields or wavelet-based priors. Under these priors, the M-step of our GEM algorithm reduces to a simple image denoising procedure, for which there are several extremely efficient algorithms.
2 Formulation

We start from the standard formulation of finite mixture models: $\mathcal X = \{x_1,\dots,x_n\}$ is an observed data set, where each $x_i \in \mathbb R^d$ was generated (independently) according to one of a set of $K$ probability (density or mass) functions $\{p(\cdot|\theta^{(1)}),\dots,p(\cdot|\theta^{(K)})\}$. In image segmentation, each $x_i$ is a pixel value (gray scale, $d=1$; color, $d=3$) or a vector of local (e.g., texture) features. Associated with $\mathcal X$, there is a hidden label set $\mathcal Y = \{y_1,\dots,y_n\}$, where $y_i = [y_i^{(1)},\dots,y_i^{(K)}]^T \in \{0,1\}^K$, with $y_i^{(k)} = 1$ if and only if $x_i$ was generated by source $k$ (the so-called "1-of-K" binary encoding). Thus,
$$p(\mathcal X|\mathcal Y,\theta) = \prod_{k=1}^K \prod_{i:\,y_i^{(k)}=1} p(x_i|\theta^{(k)}) = \prod_{i=1}^n\prod_{k=1}^K \left[p(x_i|\theta^{(k)})\right]^{y_i^{(k)}}, \qquad(1)$$
where $\theta = (\theta^{(1)},\dots,\theta^{(K)})$ is the set of parameters of the generative models of the classes. In standard mixture models, all the $y_i$ are assumed to be independent and identically distributed samples following a multinomial distribution with probabilities $\{\alpha^{(1)},\dots,\alpha^{(K)}\}$, i.e., $P(\mathcal Y) = \prod_i\prod_k (\alpha^{(k)})^{y_i^{(k)}}$. This is the part of standard mixture models that has to be modified in order to insert grouping constraints [15] or a grouping prior $p(\mathcal Y)$ [4, 11]. However, this prior destroys the simplicity of the standard E-step for finite mixtures, which is critically based on the independence assumption. We follow a different route to avoid that roadblock.

Let the hidden labels $\mathcal Y = \{y_1,\dots,y_n\}$ depend on a new set of variables $\mathcal Z = \{z_1,\dots,z_n\}$, where each $z_i = [z_i^{(1)},\dots,z_i^{(K)}]^T \in \mathbb R^K$, following a multinomial logistic model [5]:
$$P(\mathcal Y|\mathcal Z) = \prod_{i=1}^n\prod_{k=1}^K \left(P[y_i^{(k)}=1\,|\,z_i]\right)^{y_i^{(k)}}, \quad\text{where}\quad P[y_i^{(k)}=1\,|\,z_i] = \frac{e^{z_i^{(k)}}}{\sum_{l=1}^K e^{z_i^{(l)}}}. \qquad(2)$$
Due to the normalization, we can set (w.l.o.g.) $z_i^{(K)} = 0$, for $i=1,\dots,n$ [5]. We're thus left with $n(K-1)$ real variables, i.e., $\mathcal Z = \{\mathbf z^{(1)},\dots,\mathbf z^{(K-1)}\}$, where $\mathbf z^{(k)} = [z_1^{(k)},\dots,z_n^{(k)}]^T$; of course, $\mathcal Z$ can be seen as an $n\times(K-1)$ matrix, where $\mathbf z^{(k)}$ is the $k$-th column and $z_i$ is the $i$-th row.

With this formulation, certain grouping preferences may be expressed by a prior $p(\mathcal Z)$. For example, preferred pair-wise relations can be easily embodied in a Gaussian prior
$$p(\mathcal Z) \propto \exp\Big\{-\frac{1}{4}\sum_{k=1}^{K-1}\sum_{i=1}^n\sum_{j=1}^n A_{i,j}\big(z_i^{(k)} - z_j^{(k)}\big)^2\Big\} = \prod_{k=1}^{K-1}\exp\Big\{-\frac{1}{2}\,(\mathbf z^{(k)})^T \Delta\, \mathbf z^{(k)}\Big\}, \qquad(3)$$
where $A$ is a matrix (with a null diagonal) encoding pair-wise preferences ($A_{i,j} > 0$ expresses preference, with strength proportional to $A_{i,j}$, for having points $i$ and $j$ in the same cluster) and $\Delta$ is the well-known graph-Laplacian matrix [20],
$$\Delta = \mathrm{diag}\Big\{\sum_{j=1}^n A_{1,j},\, \dots,\, \sum_{j=1}^n A_{n,j}\Big\} - A. \qquad(4)$$
For image segmentation, each $\mathbf z^{(k)}$ is an image with real-valued elements and a natural choice for $A$ is to have $A_{i,j} = \gamma$, if $i$ and $j$ are neighbors, and zero otherwise. Assuming periodic boundary conditions for the neighborhood system, $\Delta$ is a block-circulant matrix with circulant blocks [2]. However, as shown below, other more sophisticated priors (such as wavelet-based priors) can also be used at no additional computational cost [1].
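For a first-order neighbourhood system, the matrices in (3)-(4) are sparse and easy to build; a minimal sketch (we use free rather than periodic boundaries here, purely for brevity):

```python
import numpy as np
import scipy.sparse as sp

def grid_laplacian(h, w, gamma):
    """Graph Laplacian Delta = diag(A 1) - A of eq. (4) for an h-by-w pixel
    grid with A_ij = gamma on 4-neighbour pairs (free boundaries)."""
    idx = np.arange(h * w).reshape(h, w)
    pairs = np.vstack([np.c_[idx[:, :-1].ravel(), idx[:, 1:].ravel()],   # horizontal
                       np.c_[idx[:-1, :].ravel(), idx[1:, :].ravel()]])  # vertical
    i, j = pairs[:, 0], pairs[:, 1]
    A = sp.coo_matrix((np.full(len(i), gamma), (i, j)), shape=(h * w, h * w))
    A = (A + A.T).tocsr()                               # symmetric adjacency
    return sp.diags(np.asarray(A.sum(axis=1)).ravel()) - A

def log_prior(Z, Delta):
    """log p(Z) of eq. (3), up to an additive constant; Z is n x (K-1)."""
    return -0.5 * sum(float(zk @ (Delta @ zk)) for zk in Z.T)
```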
3 Model Estimation

3.1 Marginal Maximum A Posteriori and the GEM Algorithm

Based on the formulation presented in the previous section, SSC is performed by estimating $\mathcal Z$ and $\theta$, seeing $\mathcal Y$ as missing data. The marginal maximum a posteriori estimate is obtained by marginalizing out the hidden labels (over all the possible label configurations),
$$(\widehat{\mathcal Z},\widehat\theta) = \arg\max_{\mathcal Z,\theta}\; \sum_{\mathcal Y} p(\mathcal X,\mathcal Y,\mathcal Z\,|\,\theta) = \arg\max_{\mathcal Z,\theta}\; \Big(\sum_{\mathcal Y} p(\mathcal X|\mathcal Y,\theta)\, P(\mathcal Y|\mathcal Z)\Big)\, p(\mathcal Z), \qquad(5)$$
where we're assuming a flat prior for $\theta$.

One of the key advantages of this approach is that (5) is a continuous (not combinatorial) optimization problem. This is in contrast with Markov random field approaches to image segmentation, which lead to hard combinatorial problems, since they perform optimization directly with respect to the (discrete) label variables $\mathcal Y$. Finally, notice that once in possession of an estimate $\widehat{\mathcal Z}$, one may compute $P(\mathcal Y|\widehat{\mathcal Z})$, which gives the probability that each data point belongs to each class. By finding $\arg\max_k P[y_i^{(k)}=1|\widehat z_i]$, for every $i$, one may obtain a hard clustering/segmentation.

We handle (5) with a generalized EM (GEM) algorithm [13], i.e., by applying the following iterative procedure (until some convergence criterion is satisfied):

E-step: Compute the conditional expectation of the complete log-posterior, given the current estimates $(\widehat{\mathcal Z},\widehat\theta)$ and the observations $\mathcal X$:
$$Q(\mathcal Z,\theta\,|\,\widehat{\mathcal Z},\widehat\theta) = E_{\mathcal Y}\big[\log p(\mathcal Y,\mathcal Z,\theta|\mathcal X)\,\big|\,\widehat{\mathcal Z},\widehat\theta,\mathcal X\big]. \qquad(6)$$

M-step: Update the estimate: $(\widehat{\mathcal Z},\widehat\theta) \leftarrow (\widehat{\mathcal Z}_{new},\widehat\theta_{new})$, with new values such that
$$Q(\widehat{\mathcal Z}_{new},\widehat\theta_{new}\,|\,\widehat{\mathcal Z},\widehat\theta) \ge Q(\widehat{\mathcal Z},\widehat\theta\,|\,\widehat{\mathcal Z},\widehat\theta). \qquad(7)$$

Under mild conditions, it is well known that GEM algorithms converge to a local maximum of the marginal log-posterior [18].

3.2 E-step

The complete log-posterior is
$$\log p(\mathcal Y,\mathcal Z,\theta|\mathcal X) \doteq \log p(\mathcal X|\mathcal Y,\theta) + \log P(\mathcal Y|\mathcal Z) + \log p(\mathcal Z) \doteq \sum_{i=1}^n\sum_{k=1}^K y_i^{(k)}\log p(x_i|\theta^{(k)}) + \sum_{i=1}^n\Big[\sum_{k=1}^K y_i^{(k)} z_i^{(k)} - \log\sum_{k=1}^K e^{z_i^{(k)}}\Big] + \log p(\mathcal Z), \qquad(8)$$
where $\doteq$ stands for "equal up to an additive constant". The key observation is that this function is linear w.r.t. the hidden variables $y_i^{(k)}$. Consequently, the E-step reduces to computing their conditional expectations, which are then plugged into (8).

As in standard mixtures, each missing $y_i^{(k)}$ is binary, thus its expectation (denoted $\widehat y_i^{(k)}$) equals its posterior probability of being equal to one, easily obtained via Bayes law:
$$\widehat y_i^{(k)} \equiv E[y_i^{(k)}\,|\,\widehat{\mathcal Z},\widehat\theta,\mathcal X] = P[y_i^{(k)}=1\,|\,\widehat z_i,\widehat\theta,x_i] = \frac{p(x_i|\widehat\theta^{(k)})\; P[y_i^{(k)}=1|\widehat z_i]}{\sum_{j=1}^K p(x_i|\widehat\theta^{(j)})\; P[y_i^{(j)}=1|\widehat z_i]}. \qquad(9)$$
Notice that this is the same as the E-step for a standard finite mixture, where the probabilities $P[y_i^{(k)}=1|\widehat z_i]$ (given by (2)) play the role of the probabilities of the classes/components. Finally, the $Q$ function is obtained by plugging the expectations $\widehat y_i^{(k)}$ into (8).

3.3 M-Step

It's clear from (8) that the maximization w.r.t. $\theta$ can be performed separately w.r.t. each $\theta^{(k)}$,
$$\widehat\theta^{(k)}_{new} = \arg\max_{\theta^{(k)}}\; \sum_{i=1}^n \widehat y_i^{(k)} \log p(x_i|\theta^{(k)}). \qquad(10)$$
This is the familiar weighted maximum likelihood criterion, exactly as it appears in EM for standard mixtures. The explicit form of this update depends on the choice of $p(\cdot|\theta^{(k)})$; e.g., this step can be easily applied to any finite mixture of exponential family densities [3]. In supervised image segmentation, these parameters are known (e.g., previously estimated from training data) and thus it's not necessary to estimate them; the M-step reduces to the estimation of $\mathcal Z$. In unsupervised image segmentation, $\theta$ is unknown and (10) will have to be applied.

To update the estimate of $\mathcal Z$, we need to maximize (or at least improve, see (7))
$$L(\mathcal Z|\widehat{\mathcal Y}) \equiv \sum_{i=1}^n\Big[\sum_{k=1}^K \widehat y_i^{(k)} z_i^{(k)} - \log\sum_{k=1}^K e^{z_i^{(k)}}\Big] + \log p(\mathcal Z). \qquad(11)$$
Without the prior, this would be a simple logistic regression (LR) problem, with an identity design matrix [5]; however, instead of the usual hard labels $y_i^{(k)}\in\{0,1\}$, we have "soft" labels $\widehat y_i^{(k)}\in[0,1]$. Arguably, the two standard approaches to maximum likelihood LR are the Newton–Raphson algorithm (a.k.a.
iteratively reweighted least squares, IRLS [7]) and the minorize–maximize (MM) approach (formerly known as bound optimization) [5, 9]. We will show below that the MM approach can be easily modified to accommodate the presence of a prior.

Let's briefly review the MM approach for maximizing a twice differentiable concave function $E(\xi)$ with bounded Hessian [5, 9]. Let the Hessian $H(\xi)$ of $E(\xi)$ be bounded below by $-B$ (that is, $H(\xi) \succeq -B$, in the matrix sense, meaning that $H(\xi) + B$ is positive definite), where $B$ is a positive definite matrix. It's trivial to show that $E(\xi) - R(\xi,\widehat\xi)$ has a minimum at $\xi = \widehat\xi$, where
$$R(\xi,\widehat\xi) = -\frac{1}{2}\big(\xi - \widehat\xi - B^{-1}g(\widehat\xi)\big)^T B\,\big(\xi - \widehat\xi - B^{-1}g(\widehat\xi)\big), \qquad(12)$$
with $g(\widehat\xi)$ denoting the gradient of $E(\xi)$ at $\widehat\xi$. Thus, the iteration
$$\widehat\xi_{new} = \arg\max_\xi R(\xi,\widehat\xi) = \widehat\xi + B^{-1}g(\widehat\xi) \qquad(13)$$
is guaranteed to monotonically improve $E(\xi)$, i.e., $E(\widehat\xi_{new}) \ge E(\widehat\xi)$.

It was shown in [5] that the gradient and the Hessian of the logistic log-likelihood function, i.e., (11) without the log-prior, verify (with $I_a$ denoting an $a\times a$ identity matrix and $\mathbf 1_a$ a vector of $a$ ones)
$$g(\mathbf z) = \widehat{\mathbf y} - \pi(\mathbf z) \qquad\text{and}\qquad H(\mathbf z) \succeq -\frac{1}{2}\Big(I_{K-1} - \frac{\mathbf 1_{K-1}\mathbf 1_{K-1}^T}{K}\Big)\otimes I_n \equiv -B, \qquad(14)$$
where $\mathbf z = [z_1^{(1)},\dots,z_n^{(1)},z_1^{(2)},\dots,z_n^{(K-1)}]^T$ denotes the lexicographic vectorization of $\mathcal Z$, $\widehat{\mathbf y}$ denotes the corresponding lexicographic vectorization of $\widehat{\mathcal Y}$, and $\pi(\mathbf z) = [p_1^{(1)},\dots,p_n^{(1)},p_1^{(2)},\dots,p_n^{(K-1)}]^T$ with $p_i^{(k)} = P[y_i^{(k)}=1|z_i]$. Defining $\mathbf v = \widehat{\mathbf z} + B^{-1}(\widehat{\mathbf y} - \pi(\widehat{\mathbf z}))$, the MM update equation for solving (11) is thus
$$\widehat{\mathbf z}_{new}(\mathbf v) = \arg\min_{\mathbf z}\Big\{\frac{1}{2}(\mathbf z - \mathbf v)^T B\,(\mathbf z - \mathbf v) - \log p(\mathbf z)\Big\}, \qquad(15)$$
where $p(\mathbf z)$ is equivalent to $p(\mathcal Z)$, because $\mathbf z$ is simply the lexicographic vectorization of $\mathcal Z$.

We now summarize our GEM algorithm:

E-step: compute $\widehat{\mathbf y}$, using (9), for all $i=1,\dots,n$ and $k=1,\dots,K-1$.

(Generalized) M-step: Apply one or more iterations (15), keeping $\widehat{\mathbf y}$ fixed, that is, loop through the following two steps: $\mathbf v \leftarrow \widehat{\mathbf z} + B^{-1}(\widehat{\mathbf y} - \pi(\widehat{\mathbf z}))$ and $\widehat{\mathbf z} \leftarrow \widehat{\mathbf z}_{new}(\mathbf v)$.

3.4 Speeding Up the Algorithm

In image segmentation, the MM update equation (15) is formally equivalent to the MAP estimation of an image with $n$ pixels in $\mathbb R^{K-1}$, under prior $p(\mathbf z)$, where $\mathbf v$ plays the role of observed image, and $B$ is the inverse covariance matrix of the noise. Due to the structure of $B$, even if the prior models the several $\mathbf z^{(k)}$ as independent, i.e., if $\log p(\mathbf z) = \log p(\mathbf z^{(1)}) + \cdots + \log p(\mathbf z^{(K-1)})$, (15) can not be decoupled into the several components $\{\mathbf z^{(1)},\dots,\mathbf z^{(K-1)}\}$. We sidestep this difficulty, at the cost of using a less tight bound in (14), based on the following lemma:

Lemma 1 Let $\lambda_K = 1/2$, if $K>2$, and $\lambda_K = 1/4$, if $K=2$. Then, $B \preceq \lambda_K I_{n(K-1)}$.

Proof: Inserting $K=2$ in (14) yields $B = I/4$, which proves the case $K=2$. For $K>2$, the inequality $B \preceq I/2$ is equivalent to $\lambda_{\min}(I/2 - B) \ge 0$, which is equivalent to $\lambda_{\max}(B) \le 1/2$. Since the eigenvalues of the Kronecker product are the products of the eigenvalues of the matrices, $\lambda_{\max}(B) = \lambda_{\max}\big(I - \frac{1}{K}\mathbf 1\mathbf 1^T\big)/2$. Since $\mathbf 1\mathbf 1^T$ is a rank-1 matrix with eigenvalues $\{0,\dots,0,K-1\}$, the eigenvalues of $\big(I - \frac{1}{K}\mathbf 1\mathbf 1^T\big)$ are $\{1,\dots,1,1/K\}$, thus $\lambda_{\max}\big(I - \frac{1}{K}\mathbf 1\mathbf 1^T\big) = 1$, and $\lambda_{\max}(B) = 1/2$.

This lemma allows replacing $B$ with $\lambda_K I_{n(K-1)}$ in (15), which (assuming independent priors, as is the case of (3)) becomes decoupled, leading to
$$\widehat{\mathbf z}^{(k)}_{new}(\mathbf v^{(k)}) = \arg\min_{\mathbf z^{(k)}}\Big\{\frac{\lambda_K}{2}\big\|\mathbf z^{(k)} - \mathbf v^{(k)}\big\|_2^2 - \log p(\mathbf z^{(k)})\Big\}, \quad\text{for } k=1,\dots,K-1, \qquad(16)$$
where $\mathbf v^{(k)} = \widehat{\mathbf z}^{(k)} + (1/\lambda_K)\big(\widehat{\mathbf y}^{(k)} - \pi^{(k)}(\widehat{\mathbf z}^{(k)})\big)$. Moreover, the "noise" in each of these "denoising" problems is white and Gaussian, of variance $1/\lambda_K$.
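Putting (2), (9) and (16) together, one GEM sweep can be sketched as follows. The `denoise` callback, which solves the quadratic-plus-log-prior problem of (16) for a single image, is an assumption of this sketch; a concrete instance for the Gaussian-field prior is sketched after Section 3.6 below:

```python
import numpy as np

def class_probs(Z):
    """P[y_i^(k) = 1 | z_i] of eq. (2), with z^(K) pinned to zero; Z is n x (K-1)."""
    Zf = np.c_[Z, np.zeros(len(Z))]
    E = np.exp(Zf - Zf.max(axis=1, keepdims=True))   # numerically stabilized softmax
    return E / E.sum(axis=1, keepdims=True)          # n x K

def e_step(lik, Z):
    """Eq. (9): lik[i, k] = p(x_i | theta^(k)); returns the soft labels y_hat."""
    p = lik * class_probs(Z)
    return p / p.sum(axis=1, keepdims=True)

def m_step_z(Z, Yhat, denoise, n_inner=3):
    """Decoupled MM iterations (16): v^(k) = z^(k) + (y^(k) - pi^(k))/lambda_K,
    then z^(k) <- denoise(v^(k), lambda_K) independently for each k."""
    K = Yhat.shape[1]
    lam = 0.25 if K == 2 else 0.5                    # lambda_K from Lemma 1
    for _ in range(n_inner):
        V = Z + (Yhat[:, :K - 1] - class_probs(Z)[:, :K - 1]) / lam
        Z = np.column_stack([denoise(V[:, k], lam) for k in range(K - 1)])
    return Z
```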
3.5 Stationary Gaussian Field Priors

Consider a Gaussian prior of form (3), where $A_{i,j}$ only depends on the relative position of $i$ and $j$ and the neighborhood system defined by $A$ has periodic boundary conditions. In this case, both $A$ and $\Delta$ are block-circulant matrices, with circulant blocks [2], thus diagonalizable by a 2D discrete Fourier transform (2D-DFT). Formally, $\Delta = U^H D\, U$, where $D$ is diagonal, $U$ is the orthogonal matrix representing the 2D-DFT, and $(\cdot)^H$ denotes conjugate transpose. The log-prior is then expressed in the DFT domain, $\log p(\mathbf z^{(k)}) \doteq -\frac{1}{2}(U\mathbf z^{(k)})^H D\,(U\mathbf z^{(k)})$, and the solution of (16) is
$$\widehat{\mathbf z}^{(k)}_{new}(\mathbf v^{(k)}) = \lambda_K\, U^H\big[\lambda_K I_n + D\big]^{-1} U\,\mathbf v^{(k)}, \quad\text{for } k=1,\dots,K-1. \qquad(17)$$
Observe that (17) corresponds to filtering each image $\mathbf v^{(k)}$, in the DFT domain, with a fixed filter with frequency response $\lambda_K\big[\lambda_K I_n + D\big]^{-1}$; this inversion can be computed off-line and is trivial because $\lambda_K I_n + D$ is diagonal. Finally, it's worth stressing that the matrix-vector products by $U$ and $U^H$ are not carried out explicitly but more efficiently via the FFT algorithm, with cost $O(n\log n)$.

3.6 Wavelet-Based Priors for Segmentation

It's known that piece-wise smooth images have sparse wavelet-based representations (see [12] and the many references therein); this fact underlies the state-of-the-art denoising performance of wavelet-based methods. Piece-wise smoothness of the $\mathbf z^{(k)}$ translates into segmentations in which pixels in each class tend to form connected regions. Consider a wavelet expansion of each $\mathbf z^{(k)}$,
$$\mathbf z^{(k)} = W\boldsymbol\beta^{(k)}, \quad k=1,\dots,K-1, \qquad(18)$$
where the $\boldsymbol\beta^{(k)}$ are sets of coefficients and $W$ is the matrix representation of an inverse wavelet transform; $W$ may be orthogonal or have more columns than lines (over-complete representations) [12]. A wavelet-based prior for $\mathbf z^{(k)}$ is induced by placing a prior on the coefficients $\boldsymbol\beta^{(k)}$. A classical choice for $p(\boldsymbol\beta^{(k)})$ is a generalized Gaussian [14]. Without going into details, under this class of priors (and others), (16) becomes a non-linear wavelet-based denoising step, which has been widely studied in the image processing literature. For several choices of $p(\boldsymbol\beta^{(k)})$ and $W$, this denoising step has a very simple closed form, which essentially corresponds to computing a wavelet transform of the observations, applying a coefficient-wise non-linear shrinkage/thresholding operation, and applying the inverse transform to the processed coefficients. This is computationally very efficient, due to the existence of fast algorithms for computing direct and inverse wavelet transforms; e.g., $O(n)$ for an orthogonal wavelet transform or $O(n\log n)$ for a shift-invariant redundant transform.
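For the stationary Gaussian field prior of Section 3.5, the `denoise` callback of the earlier sketch is exactly the closed-form filter (17); a minimal sketch, assuming periodic boundaries and a first-order neighbourhood with weight $\gamma$, for which the DFT eigenvalues of $\Delta$ have the standard closed form used below:

```python
import numpy as np

def laplacian_spectrum(h, w, gamma):
    """DFT eigenvalues D of the periodic 4-neighbour graph Laplacian."""
    f1, f2 = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    return gamma * (4.0 - 2.0 * np.cos(2 * np.pi * f1 / h)
                        - 2.0 * np.cos(2 * np.pi * f2 / w))

def make_dft_denoiser(h, w, gamma):
    """Closed-form solver (17): z = lam * U^H [lam I + D]^(-1) U v, via the FFT."""
    D = laplacian_spectrum(h, w, gamma)
    def denoise(v, lam):
        V = np.fft.fft2(v.reshape(h, w))
        return np.real(np.fft.ifft2(lam * V / (lam + D))).ravel()
    return denoise
```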
4 Extensions

4.1 Semi-Supervised Segmentation

For semi-supervised image segmentation, the user defines regions in the image for which the true label is known. Our GEM algorithm is trivially modified to handle this case: if at location $i$ the label is known to be (say) $k$, we freeze $\widehat y_i^{(k)} = 1$, and $\widehat y_i^{(j)} = 0$, for $j \ne k$. The E-step is only applied to those locations for which the label is unknown. The M-step remains unchanged.

4.2 Discriminative Features

Our formulation (as most probabilistic segmentation methods) adopts a generative perspective, where each $p(\cdot|\theta^{(k)})$ models the data generation mechanism in the corresponding class. However, discriminative methods (such as support vector machines) are seen as the current state-of-the-art in classification [7]. We will now show how a pre-trained discriminative classifier can be used in our GEM algorithm instead of the generative likelihoods.

The E-step (see (9)) obtains the posterior probability that $x_i$ was generated by the $k$-th model, by combining (via Bayes law) the corresponding likelihood $p(x_i|\widehat\theta^{(k)})$ with the local prior probability $P[y_i^{(k)}=1|\widehat z_i]$. Consider that, instead of likelihoods derived from generative models, we have a discriminative classifier, i.e., one that directly provides estimates of the posterior class probabilities $P[y_i^{(k)}=1|x_i]$. To use these values in our segmentation algorithm, we need a way to bias these estimates according to the local prior probabilities $P[y_i^{(k)}=1|\widehat z_i]$, which are responsible for encouraging spatial coherence. Let us assume that we know that the discriminative classifier was trained using $m_k$ samples from the $k$-th class. It can thus be assumed that these posterior class probabilities verify $P[y_i^{(k)}=1|x_i] \propto m_k\, p(x_i|y_i^{(k)}=1)$. It is then possible to "bias" these classifiers, with the local prior probabilities $P[y_i^{(k)}=1|\widehat z_i]$, simply by computing
$$\widetilde P[y_i^{(k)}=1\,|\,x_i] = \frac{P[y_i^{(k)}=1|x_i]\; P[y_i^{(k)}=1|\widehat z_i]}{m_k}\left(\sum_{j=1}^K \frac{P[y_i^{(j)}=1|x_i]\; P[y_i^{(j)}=1|\widehat z_i]}{m_j}\right)^{-1}.$$

5 Experiments

In this section we will show experimental results of image segmentation in supervised, unsupervised, semi-supervised, and discriminative modes. Assessing the performance of a segmentation method is not a trivial task. Moreover, the performance of segmentation algorithms depends more critically on the adopted features (which is not the focus of this paper) than on the spatial coherence prior. For these reasons, we will not present any careful comparative study, but simply a set of experimental examples testifying for the promising behavior of the proposed approach.

5.1 Supervised and Unsupervised Image Segmentation

The first experiment, reported in Fig. 1, illustrates the algorithm on a synthetic gray-scale image with four Gaussian classes of means 1, 2, 3, and 4, and standard deviation 0.6. For this image, both supervised and unsupervised segmentation lead to almost visually indistinguishable results, so we only show the supervised segmentation results. In the Gaussian prior, matrix $A$ corresponds to a first-order neighborhood, that is, $A_{i,j} = \gamma$ if and only if $j$ is one of the four nearest neighbors of $i$. For wavelet-based segmentation, we have used undecimated Haar wavelets and the Bayes-shrink denoising procedure [6].

Figure 1: From left to right: observed image, maximum likelihood segmentation, GEM result with Gaussian prior, GEM result with wavelet-based prior.

5.2 Semi-supervised Image Segmentation

We illustrate the semi-supervised mode of our approach on two real RGB images, shown in Fig. 2. Each region is modelled by a single multivariate Gaussian density in RGB space. In the example in the first row, the goal is to segment the image into skin, cloth, and background regions; in the second example, the goal is to segment the horses from the background. These examples show how the semi-supervised mode of our algorithm is able to segment the image into regions which "look like" the seed regions provided by the user.

Figure 2: From left to right (in each row): observed image with regions indicated by the user as belonging to each class, segmentation result, region boundaries.

5.3 Discriminative Texture Segmentation

Finally, we illustrate the behavior of the algorithm when used with discriminative classifiers by applying it to texture segmentation.
We build on the work in [8], where SVM classifiers are used for texture classification (see [8] for complete details about the kernels and texture features used). Fig. 3 shows two experiments; one with a two-texture 256×512 image and the other with a 5-texture 256×256 image. In the two-class case, one binary SVM was trained on 1000 random patterns from each class. For the 5-class case, 5 binary SVMs were trained in the "1-vs-all" mode, with 500 samples from each class. In the 2-class and 5-class cases, the error rates of the SVM classifier are 12.69% and 13.92%, respectively. Our GEM algorithm achieves 0.51% and 2.22%, respectively. These examples show that our method is able to take class predictions produced by a classifier lacking any spatial prior and produce segmentations with a high degree of spatial coherence.

6 Conclusions

We have introduced an approach to probabilistic semi-supervised clustering which is particularly suited for image segmentation. The formulation allows supervised, unsupervised, semi-supervised, and discriminative modes, and can be used with classical image priors (such as Gaussian fields, or wavelet-based priors). Unlike the usual Markov random field approaches, which involve combinatorial optimization, our segmentation algorithm consists of a simple generalized EM algorithm. Several experimental examples illustrated the promising behavior of our method. Ongoing work includes a thorough experimental comparison with state-of-the-art segmentation algorithms, namely, spectral methods [16] and techniques based on "graph-cuts" [19].

Acknowledgement: This work was partially supported by the (Portuguese) Fundação para a Ciência e Tecnologia (FCT), grant POSC/EEA-SRI/61924/2004.

Figure 3: From left to right (in each row): observed image, direct SVM segmentation, segmentation produced by our algorithm.

References

[1] M. Figueiredo. "Bayesian image segmentation using wavelet-based priors", Proc. IEEE Conf. Computer Vision and Pattern Recognition – CVPR'2005, San Diego, CA, 2005.
[2] N. Balram, J. Moura. "Noncausal Gauss–Markov random fields: parameter structure and estimation", IEEE Trans. Information Theory, vol. 39, pp. 1333–1355, 1993.
[3] A. Banerjee, S. Merugu, I. Dhillon, J. Ghosh. "Clustering with Bregman divergences." Proc. SIAM Intern. Conf. Data Mining – SDM'2004, Lake Buena Vista, FL, 2004.
[4] S. Basu, M. Bilenko, R. Mooney. "A probabilistic framework for semi-supervised clustering." Proc. of KDD-2004, Seattle, WA, 2004.
[5] D. Böhning. "Multinomial logistic regression", Annals Inst. Stat. Math., vol. 44, pp. 197–200, 1992.
[6] G. Chang, B. Yu, M. Vetterli. "Adaptive wavelet thresholding for image denoising and compression." IEEE Trans. Image Proc., vol. 9, pp. 1532–1546, 2000.
[7] T. Hastie, R. Tibshirani, J. Friedman. The Elements of Statistical Learning, Springer, 2001.
[8] K. I. Kim, K. Jung, S. H. Park, H. J. Kim. "Support vector machines for texture classification." IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 24, pp. 1542–1550, 2002.
[9] D. Hunter, K. Lange. "A tutorial on MM algorithms", The American Statistician, vol. 58, pp. 30–37, 2004.
[10] M. Law, A. Topchy, A. K. Jain. "Model-based clustering with probabilistic constraints." In Proc. of the SIAM Conf. on Data Mining, pp. 641–645, Newport Beach, CA, 2005.
[11] Z. Lu, T. Leen. "Probabilistic penalized clustering." In NIPS 17, MIT Press, 2005.
[12] S. Mallat. A Wavelet Tour of Signal Processing. Academic Press, San Diego, CA, 1998.
[13] G. McLachlan, T. Krishnan. The EM Algorithm and Extensions, Wiley, New York, 1997.
[14] P. Moulin, J. Liu. "Analysis of multiresolution image denoising schemes using generalized-Gaussian and complexity priors," IEEE Trans. Inform. Theory, vol. 45, pp. 909–919, 1999.
[15] N. Shental, A. Bar-Hillel, T. Hertz, D. Weinshall. "Computing Gaussian mixture models with EM using equivalence constraints." In NIPS 15, MIT Press, Cambridge, MA, 2003.
[16] J. Shi, J. Malik. "Normalized cuts and image segmentation." IEEE-TPAMI, vol. 22, pp. 888–905, 2000.
[17] K. Wagstaff, C. Cardie, S. Rogers, S. Schrödl. "Constrained K-means clustering with background knowledge." In Proc. of ICML'2001, Williamstown, MA, 2001.
[18] C. Wu. "On the convergence properties of the EM algorithm," Ann. Statistics, vol. 11, pp. 95–103, 1983.
[19] R. Zabih, V. Kolmogorov. "Spatially coherent clustering with graph cuts." Proc. IEEE-CVPR, vol. II, pp. 437–444, 2004.
[20] X. Zhu. "Semi-Supervised Learning Literature Survey", TR-1530, Comp. Sci. Dept., Univ. of Wisconsin, Madison, 2006. Available at www.cs.wisc.edu/~jerryzhu/pub/ssl_survey.pdf
Emergence of conjunctive visual features by quadratic independent component analysis

J.T. Lindgren
Department of Computer Science
University of Helsinki, Finland
[email protected]

Aapo Hyvärinen
HIIT Basic Research Unit
University of Helsinki, Finland
[email protected]

Abstract
In previous studies, quadratic modelling of natural images has resulted in cell models that react strongly to edges and bars. Here we apply quadratic Independent Component Analysis to natural image patches, and show that up to a small approximation error, the estimated components are computing conjunctions of two linear features. These conjunctive features appear to represent not only edges and bars, but also inherently two-dimensional stimuli, such as corners. In addition, we show that for many of the components, the underlying linear features have essentially V1 simple cell receptive field characteristics. Our results indicate that the development of the V2 cells preferring angles and corners may be partly explainable by the principle of unsupervised sparse coding of natural images.

1 Introduction
Sparse coding of natural images has led to models that resemble the receptive fields in the primate primary visual cortex area V1 (see e.g. [1, 2, 3]). An ongoing research effort is in trying to understand and model the computational principles in visual areas following V1, commonly thought to provide representations for more complicated stimuli. For example, it has recently been shown that in the Macaque monkey, the V2 area following V1 contains neurons responding favourably to angles and corners, but not necessarily to their constituent edges if presented alone [4, 5]. This behaviour cannot be easily attained with linear models [6].
In this paper we estimate quadratic models for natural images using Independent Component Analysis (ICA). The quadratic functions used are a natural extension of linear functions (i.e. l^T x), and give the value of a single feature or component as

s = x^T H x + l^T x,    (1)

where the matrix H specifies weights for second-order interactions between the input variables in stimulus x. This class of functions is equivalent to second-order polynomials of the input, and can compute linear combinations of squared responses of linear models (see e.g. [7]). Another well-known interpretation of components in a quadratic model is as outputs of two-layer neural networks, which is based on an eigenvalue decomposition and will be discussed below.
Estimating a quadratic model for natural images with ICA, we report here the emergence of receptive field models that respond strongly only if the stimulus contains two features that are in a correct spatial arrangement. With a heavy dimensionality reduction, the conjuncted features are mostly collinear (i.e. prefer edges or bars), but with a smaller reduction, additional components emerge that appear to prefer more complex stimuli such as angles or corners. We show that in both cases, the emerging components approximately operate by computing products between the outputs of two linear submodels that have V1 simple cell characteristics.
The rest of this paper is organized as follows. In section 2 we describe the quadratic ICA in detail. Section 3 outlines the dataset and the preprocessing we used, and section 4 describes the results. Finally, section 5 concludes with discussion and future work.

2 Quadratic ICA
Let x ∈ R^n be a vectorized grayscale input image patch.
A basic form of linear ICA assumes that each data point is generated as

x = As,    (2)

where A is a linear mixing matrix and s the vector of unknown source signals or independent components. The dimension of s is assumed to be equal to the dimension of x, possibly after the x have been reduced by PCA to a smaller dimension. ICA estimation tries to recover s and the parameter matrix W = A^{-1}. If the independent components are sparse, this is equivalent to performing sparse coding (for an account of ICA, see e.g. [8]).
It has been proposed that ICA for quadratic models can be performed by first making a quadratic basis expansion on each x and then applying standard linear ICA [9]. Let the new data vectors z ∈ R^{n(n+1)/2 + n} in quadratic space be

z = φ([x_1, x_2, ..., x_n]) = [x_1^2, x_1 x_2, ..., x_2^2, x_2 x_3, ..., x_n^2, x_1, x_2, ..., x_n],    (3)

that is, φ(x) generates all the monomials for a second-order polynomial of x, except for the constant term. Such a dimension expansion is also implicit in kernel methods, where a second-order polynomial kernel would be used instead of φ. Here we work with the more traditional input transformation for simplicity.
From now on, assume that ICA has been performed on the transformed data z. Then the columns w_i of W^T make up the quadratic components (cell models, polynomial filters) of the model. As the coefficients in w_i are weights for a second-order polynomial, it is straightforward to decompose the response s_i of each quadratic component to x as

s_i = w_i^T z = x^T H_i x + l_i^T x,    (4)

where H_i is a symmetric square matrix corresponding to the weights given to all the cross-terms and l_i weights the first-order monomials. It is well known that the H_i can be represented in another form by eigenvalue decomposition, leading to the expression

s_i = Σ_{j=1}^{n} λ_j (v_j^T x)^2 + l_i^T x,    (5)

where λ_j are the decreasingly sorted eigenvalues of H_i and v_j the corresponding eigenvectors. In some cases the representation of eq. 5 can help to understand the model, since the individual eigenvectors can be interpreted as linear receptive fields. A quadratic function in this form is illustrated on the left in figure 1. However, for our model estimated with quadratic ICA, many of the eigenvectors v_j did not resemble V1 simple cell receptive fields. Here we propose another decomposition which leads to a simple network computation based on linear receptive fields similar to those of V1 simple cells.
Assume that the two eigenvalues of H_i which are largest in absolute value have opposite signs; we will refer to these as the dominant eigenvalues and denote them by λ_1 and λ_n. This assumption will turn out to hold empirically for our estimated models. Now, including just the two corresponding dominant eigenvectors and ignoring the linear term, we denote

v_+ = √|λ_1| v_1 + √|λ_n| v_n,    v_- = √|λ_1| v_1 - √|λ_n| v_n,    (6)

Figure 1: Quadratic components as networks. Using the eigenvalue decomposition, quadratic forms can be interpreted as networks. Left, the computation of a single component w, where the v_i are the eigenvectors and the λ_i the eigenvalues of the matrix H. Right, its product approximation, which is possible if the variance is concentrated on just two eigenvectors with eigenvalues of opposite signs. This turns out to be the case for natural images.
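To make the construction above concrete, here is a minimal sketch (our own code, not the authors') of the quadratic feature map of eq. 3 and of unpacking a learned component vector w into the pair (H, l) of eq. 4; the eigendecomposition of eq. 5 then comes directly from numpy.linalg.eigh. The names and the toy dimensions are ours.

```python
import numpy as np

def quadratic_expand(x):
    """Map x in R^n to z in R^{n(n+1)/2 + n}: monomials x_i x_j (i <= j), then x (eq. 3)."""
    n = len(x)
    iu = np.triu_indices(n)                       # pairs (i, j) with i <= j
    return np.concatenate([np.outer(x, x)[iu], x])

def unpack_component(w, n):
    """Split a learned component vector w into (H, l) of eq. 4.

    (H + H.T)/2 leaves the diagonal intact and halves the off-diagonal
    weights, since each cross-term x_i x_j (i < j) appears once in z but
    twice in x^T H x.
    """
    iu = np.triu_indices(n)
    n_quad = len(iu[0])                           # n(n+1)/2 quadratic weights
    H = np.zeros((n, n))
    H[iu] = w[:n_quad]
    H = (H + H.T) / 2.0
    l = w[n_quad:]
    return H, l

def component_response(w, x):
    """s = x^T H x + l^T x (eqs. 1 and 4), via the unpacked form."""
    H, l = unpack_component(w, len(x))
    return x @ H @ x + l @ x

# sanity check: the response through the expansion matches the (H, l) form
rng = np.random.default_rng(0)
n = 9 * 9                                         # one 9x9 patch, as in the paper
x = rng.standard_normal(n)
w = rng.standard_normal(n * (n + 1) // 2 + n)     # stand-in for one ICA component
assert np.isclose(w @ quadratic_expand(x), component_response(w, x))

# the eigendecomposition of eq. 5
H, l = unpack_component(w, n)
eigenvalues, eigenvectors = np.linalg.eigh(H)
```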
and by using simple arithmetic we obtain a product approximation for eq. 5 as

ŝ_i = λ_1 (v_1^T x)^2 + λ_n (v_n^T x)^2 = (v_+^T x)(v_-^T x).    (7)

This approximation is shown as a network on the right in figure 1, and will be justified later by its relatively small empirical error for our models. Providing that the approximation is good, the intuition is that the component is essentially computing the product of the responses of two linear filters, analogous to a logical AND operation, or a conjunction. We will empirically show that the vectors v_+ and v_- have resemblance to V1 simple cell receptive fields for our model even if the respective two dominant eigenvectors have more complicated shapes.

3 Materials and methods
In our experiments we used the natural image dataset provided by van Hateren and van der Schaaf [2]. This dataset contains over 4000 grayscale images representing natural scenes, each image having a resolution of 1024 × 1536. The intensity distribution over this image set has a very long right tail related to variation in the overall image contrast and intensity [2, 10]. In addition, it is known that the high frequencies in natural images contain sampling artifacts due to rectangular sampling, and that the spectral characteristics are not uniform across frequencies, causing difficulties for gradient-based estimation methods [1]. To alleviate these problems for ICA, we adopt a two-phase preprocessing for the raw images, following [1, 2]. This processing can be considered a very simple model of the physiological pathway containing the retina and the lateral geniculate nucleus (LGN).
First, we address the problem of heavy contrast variation and the long-tailed intensity distribution by taking a natural logarithm of the input images, effectively compressing their dynamic range. This preprocessing is similar to what happens in the first stages of natural visual systems, and has been previously suggested for the current dataset [2]. Next, to correct for the spectral imbalances in the data, we use the whitening filter proposed by Olshausen and Field [1]. This whitening filter cuts the highest frequencies, and balances the frequency distribution otherwise by dampening the dominant low frequencies. We use the filter with the same parameters as in [1]. The whitening filter has bandpass characteristics and hence resembles the center-surround behaviour of LGN cells. In practice, the filtering approximately decorrelates the data.
After preprocessing each image as a whole, we sampled 300,000 small image patches from the images, each patch having a resolution of 9 × 9. Then we subtracted the local DC-component (mean intensity) from each patch. These patches then formed the data we used to estimate the quadratic ICA model. The model fitting was done by transforming the data to the quadratic space using eq. 3, followed by linear ICA. For ICA, we used the FastICA algorithm [11] with tanh nonlinearity and symmetric estimation of the components.

Figure 2: The quadratic ICA components when the model size is very small (81 components). Each quadruple displays the two dominant eigenvectors v_1 and v_n (top row), and the corresponding vectors v_+ and v_- (bottom row). Light and dark areas correspond to positive and negative weights, respectively. The components have been sorted by collinearity of the conjuncted features.
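For illustration, the preprocessing stage can be sketched as follows. This is our reconstruction, not the original code, and we take the whitening filter to be the Olshausen-Field form W(f) = f exp(-(f/f0)^4) with f0 = 200 cycles/image; the exact parameterization should be treated as an assumption on our part.

```python
import numpy as np

def whitening_filter(shape, f0=200.0):
    """Whitening filter W(f) = f * exp(-(f/f0)^4) in the Fourier domain.

    f0 is a high-frequency cutoff in cycles/image; 200 is the value we
    associate with [1], taken here as an assumption.
    """
    fy = np.fft.fftfreq(shape[0]) * shape[0]
    fx = np.fft.fftfreq(shape[1]) * shape[1]
    f = np.hypot(*np.meshgrid(fy, fx, indexing="ij"))
    return f * np.exp(-(f / f0) ** 4)

def preprocess(image):
    """Log-compress the dynamic range, then whiten in the Fourier domain."""
    log_img = np.log(image + 1.0)                 # +1 avoids log(0)
    F = np.fft.fft2(log_img)
    return np.real(np.fft.ifft2(F * whitening_filter(log_img.shape)))

def sample_patches(image, n_patches, size=9, rng=None):
    """Draw random size x size patches and remove the local DC component."""
    rng = rng or np.random.default_rng(0)
    H, W = image.shape
    patches = np.empty((n_patches, size * size))
    for k in range(n_patches):
        r = rng.integers(0, H - size)
        c = rng.integers(0, W - size)
        p = image[r:r + size, c:c + size].ravel()
        patches[k] = p - p.mean()                 # subtract mean intensity
    return patches
```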
We used PCA to drop the dimension by selecting the 400 most dominant principal axes, covering approximately 50% of the summed eigenvalues. This resulted in estimation of 400 independent components (or second-order polynomial filters). We also performed additional experiments with 81 and 1024 dominant principal axes, corresponding to 18% and 80% coverage. Due to space constraints, we are unable to discuss the 1024 component model, other than to briefly mention that it conformed to the main results presented in this paper. To ensure replicable research, the source codes performing the experiments described in this paper have been made publicly available1 . 4 Results In general, interpreting quadratic models can be difficult, and several strategies have been proposed in the literature (see e.g. [12]). However, in the current work the estimated components turned out to be fairly simple (up to a small approximation error, as shown later), and as discussed in section 2, it will be illustrative to display the estimated components in terms of their two dominant eigenvectors v1 and vn of H, and the respective vectors v+ and v? (see eq. 6). Since either pair of the two vectors can be used to compute the approximate component response to any stimuli using eq. 7, the analysis of the components can be based on the vectors v+ and v? if preferred. Figure 2 shows the quadratic ICA components when a small model was estimated with only 81 components. If we ignore the linear term as in eq. 7, the dominant eigenvectors shown at the top row of each quadruple are equal to the two unit-norm stimuli that the component reacts most highly to (e.g. [12]). Note that the reaction to the eigenvector vn (top right) will be highly negative. On the 1 http://www.cs.helsinki.fi/u/jtlindgr/stuff/ Figure 3: Quadratic ICA components picked from 10 bootstrap iterations with 400 components estimated on each run. All 4000 components were ordered by collinearity of the conjuncted features, and a small sample of each tail is shown. The presentation is the same as in Figure 2. Top, some components that prefer conjunctions of two collinear features. Bottom, components that conjunct two highly orthogonal features. The latter components become more apparent if the model size is large. Clear Gabor-like V1 characteristics can be seen in both cases in the vectors v+ and v? , even if the corresponding eigenvectors are more complex. other hand, both vectors v+ and v? must respond to a stimuli with a non-zero value if the component is to respond strongly. In the case of this small model size, many of the conjuncted features v+ and v? are collinear, and respond strongly to edge- or bar-like stimuli. The feature conjunctions that are not collinear remain more unstructured and appear to react highly to blob-like stimuli. However, both component types are quite different from ordinary linear detectors for edges, bars and blobs, since their conjunctive nature makes them much more selective. In the following, we will limit the discussion to larger models consisting of 400 components (unless mentioned otherwise). With the higher dimensionality allowed, the diversity of the emerging components increased. Figure 3 shows quadratic ICA components picked from 10 experiments repeated with different subsets of the input patches and different random seeds. Now, in addition to collinear conjunctions (on the top in the image), we also get components that conjunct more orthogonal stimuli (on the bottom). 
The latter components appear to respond favourably to intuitive visual concepts such as angles and corners. In this case, the benefit of the decomposition into the vectors v_+ and v_- becomes more apparent, as many of the receptive field models retain their resemblance to Gabor-like filters (as in e.g. [1, 2, 8]) even if the corresponding eigenvectors become more complicated.
Next we validate the above characterization by showing that the approximation of eq. 7 holds up to a generally small error. First, it turns out that the eigenvalue distributions decay fast for the quadratic forms H_i of the estimated components. This is illustrated on the left in figure 4, which shows the mean sorted eigenvalues for the 400 components (for a model of 81 components, the figure was similar). Since all the eigenvectors have equal norms, the eigenvalues imply the magnitude of the contribution of the respective eigenvector to the component output value. Due to the fast decay of the eigenvalues, the two dominant eigenvectors are largely responsible for the component output, providing that the linear term l is insignificant (for some discussion on the linear term in quadratic models, see [12]).

Figure 4: The conjunctive nature of the components is due to the eigenvalues of the quadratic forms H_i typically emerging as heavily dominated by just two eigenvectors with opposite-sign eigenvalues. This conjunctiveness is further confirmed by the relatively small approximation error caused by ignoring the non-dominant eigenvectors and the linear term. Left, sorted eigenvalues of H_i averaged over all 400 components for both quadratic ICA and quadratic PCA. It can be seen that the ICA-based eigenvalue distributions tend to decay faster. Right, the relative mean square error of the product approximation for the 400 quadratic ICA components. The components have been sorted by the error of the approximation when Gabor functions have been used to model v_+ and v_-.

Here the quadratic part tended to dominate the component responses, which may be because the (co)variances were much larger for the quadratic dimensions of the space than for the linear ones. The above reasoning is supported by analysis of the prediction error of the product approximation. We examined this by sampling 100,000 new image patches (not used in the training), and computing the mean square error of the approximation divided by the variance of the component response, i.e. err(ŝ) = E[(s − ŝ)^2]/Var(s). This error is shown on the right in figure 4 for all the 400 components. On average, this relative error was 12% of the respective component variance, ranging from 2% to 57%. Hence, the product approximation appears to capture the behaviour of the components rather well. The plot also shows the effect of approximating the vectors v_+ and v_- with Gabor functions, which are commonly used to model V1 receptive fields. Using Gabor functions, the approximation error increased, ranging from 8% to 93%, with a mean of 34%.
To better understand the obtained error rates, we also fitted linear models to approximate the estimated quadratic cells using least-squares regression. This revealed the quadratic components to have highly nonlinear behaviour. For all components, the error of the linear approximator was over 90%, coming close to the baseline 100% error attained if the empirical mean is used as a (constant) predictor.
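The validation just described is straightforward to reproduce given the unpacked (H, l) pairs (e.g. from unpack_component in the earlier sketch); the following is our own code for eqs. 6-7 and for the relative error err(ŝ) = E[(s − ŝ)^2]/Var(s).

```python
import numpy as np

def product_approximation(H):
    """Build v_plus, v_minus from the two dominant eigenvectors of H (eq. 6).

    Assumes the two eigenvalues largest in magnitude have opposite signs,
    as the paper reports holds empirically for its models.
    """
    lam, V = np.linalg.eigh(H)
    i_pos, i_neg = np.argmax(lam), np.argmin(lam)
    v1, vn = V[:, i_pos], V[:, i_neg]
    l1, ln = lam[i_pos], lam[i_neg]
    v_plus = np.sqrt(abs(l1)) * v1 + np.sqrt(abs(ln)) * vn
    v_minus = np.sqrt(abs(l1)) * v1 - np.sqrt(abs(ln)) * vn
    return v_plus, v_minus

def relative_error(H, l, X):
    """err(s_hat) = E[(s - s_hat)^2] / Var(s) over test patches X (rows)."""
    s = np.einsum("ki,ij,kj->k", X, H, X) + X @ l # exact responses
    vp, vm = product_approximation(H)
    s_hat = (X @ vp) * (X @ vm)                   # eq. 7: drops l and minor eigenvectors
    return np.mean((s - s_hat) ** 2) / np.var(s)
```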
Since the product approximation only covers the two dominant eigenvectors, it is possible that the rest of the eigenvectors might code for interesting phenomena through further excitatory and inhibitory effects. However, the quick decay of the eigenvalues in our estimated model should make any such effects rather minor. Following the ideas and methods of [12], we explored the possibility that the non-dominant eigenvectors coded for invariances of the component. The only strong invariance we found was insensitivity to (possibly local) input sign changes, which is at least partly a structural property of the model, originating from taking squares in eq. 5. In particular, we observed no shift-invariance, consistent with some recent findings in the V2 area of the Macaque monkey [5]. We leave more in-depth exploration of the role of the non-dominant eigenvectors as future work.
Finally, we performed some experiments to examine to what extent the method of quadratic ICA on the one hand, and the natural image input data on the other, are responsible for the reported results. For example, it could be argued that the quadratic ICA components might be very similar to the quadratic PCA components. Figure 5 illustrates that this is not trivially so by showing 16 PCA components with large eigenvalues. It can be seen that the PCA components quickly lose resemblance to Gabor-like filters as the eigenvalues decrease. Also, the conjunctive nature of the estimated features is not as clear for the PCA-based components. This is shown on the left in figure 4, demonstrating that on average, the eigenvalues of the quadratic forms decay slower for the PCA components.

Figure 5: Left, the two top rows show the vectors v_+ and v_- for the first 8 quadratic PCA components. The two bottom rows display the PCA components 41–48. It can be seen that the PCA components quickly lose any resemblance to Gabor-like receptive fields of V1 simple cells. Right, some typical quadratic ICA components in terms of the vectors v_+ and v_- when the model was estimated on white noise. The circular shapes are likely artifacts due to the whitening filter.

If the whole set of PCA components is studied (not shown), it can be seen that the components appear to change from low-pass filters to high-pass filters as the eigenvalues decrease. Comparing figure 5 to figure 3, both outputs seem characteristic of the method applied, the differences resembling those observed when linear ICA and linear PCA are used to code natural image data.
To verify that the emerging component structures are not artifacts of the modelling methodology, we generated a dataset of artificial images of white noise, having the luminance distribution of the original dataset, but with no spatial or spectral structure. By repeating the model estimation (including preprocessing) on this new dataset, the resulting components did not respond favourably to the same stimuli as before, and they were no longer clearly conjunctive: the eigenvalue distributions decayed fast, but tended to have only one dominant eigenvector. Based on these vectors, the components could be roughly categorized into two classes. The first class responded to spatial forms of center-surround filters, possibly caused by the use of the whitening filter. The second class preferred apparently random configurations of inhibitory and excitatory effects. Some of the components estimated on random data are displayed on the right in figure 5 in terms of the vectors v_+ and v_-.
5 Discussion
In this paper, we specified a quadratic model for natural images and estimated its parameters with independent component analysis. We reported the emergence of cell models exhibiting strongly nonlinear behaviour. In particular, we demonstrated that the estimated cells were essentially computing products between outputs of two linear filters that had V1 simple cell characteristics. Many of these feature conjunctions preferred two collinear features, and yet others corresponded to combinations of more orthogonal stimuli, reacting strongly to angles and corners. Our results indicate that sparse coding of natural images might partially explain the development of angle- or corner-preferring cells in V2.
There has been some previous work describing quadratic models of natural image data (e.g. [13, 7, 9]). Of these, the ICA-based approaches [13, 9] resemble ours the most. Bartsch and Obermayer [13] report curvature-detecting cells, but the patch size used and the number of components estimated were very small, making the results inconclusive. Hashimoto [7] sought to replicate complex cell properties with an algorithm based on Kullback-Leibler divergences, and does not report conjunctive features or cells with preferences for angles or corners. Instead, most of the estimated quadratic forms on static image data had only one dominant eigenvector. Our work extends the previous research by reporting the emergence of conjunctive components that combine responses of V1-like linear filters.
The differences between our work and the previous research can be due to various reasons. The number of estimated components (i.e., the number of principal components retained) was seen to affect the feature diversity, and with only 81 components, the conjuncted features were mostly collinear, producing highly selective edge or bar detectors. Even larger differences to previous work are likely due to different input preprocessing: it is known that unprocessed image data can cause difficulties for statistical estimation of linear models [1, 10], and that both the preprocessing and the size of the used image patches can affect the estimated features [10]. In quadratic modelling, taking products between the dimensions of the input data can cause additional problems for any methods relying on non-robust estimation (such as covariance-based PCA), since the quadratic transform has the strongest boosting effect on outliers and the tails of the marginal distributions.
It is worthwhile to note that despite differences to previous work [13, 7, 9], invariances resembling complex cell behaviour did not emerge with our method either, although the class of quadratic models contains the classic energy-detector models of complex cells (fitted in e.g. [3]). It could be that static images and optimization of sparsity alone may not work towards the emergence of invariances, or equivalently, behaviour resembling a logical OR operation, unless the model is further constrained (for example as in [3]). Optimizing model likelihood can also be preferable to optimizing output sparseness, but for quadratic ICA it is not clear how to construct a proper probabilistic model. Finally, an important open question regarding the current work is to what extent the obtained conjunctive features reflect real structures present in the image data.
At the time of writing, we have not been able to either prove or disprove the possibility that the pairings are an algorithmic artifact in the following sense: it could be that after the effects of quadratic-space PCA have been accounted for, the quadratic ICA components are only combinations of two rather randomly selected sparse components (v_+^T x and v_-^T x) which are as independent as possible. We are currently investigating this issue.

Acknowledgments
The authors wish to thank Jarmo Hurri and the anonymous reviewers for helpful comments. This work was supported in part by the IST Programme of the European Community, under the PASCAL Network of Excellence, IST-2002-506778. This publication only reflects the authors' views.

References
[1] B. A. Olshausen and D. J. Field. Sparse coding with an overcomplete basis set: A strategy employed by V1? Vision Research, 37(23):3311–3325, 1997.
[2] J. H. van Hateren and A. van der Schaaf. Independent component filters of natural images compared with simple cells in primary visual cortex. Proc. R. Soc. Lond. B, 265:359–366, 1998.
[3] A. Hyvärinen and P. O. Hoyer. Emergence of phase and shift invariant features by decomposition of natural images into independent feature subspaces. Neural Computation, 12(7):1705–1720, 2000.
[4] J. Hegdé and D. C. van Essen. Selectivity for complex shapes in primate visual area V2. The Journal of Neuroscience, 20(5):RC61–66, 2000.
[5] M. Ito and H. Komatsu. Representation of angles embedded within contour stimuli in area V2 of macaque monkeys. The Journal of Neuroscience, 24(13):3313–3324, 2004.
[6] G. Krieger and C. Zetzsche. Nonlinear image operators for the evaluation of local intrinsic dimensionality. IEEE Transactions on Image Processing, 5(6):1026–1042, 1996.
[7] W. Hashimoto. Quadratic forms in natural images. Network: Computation in Neural Systems, 14(4):765–788, 2003.
[8] A. Hyvärinen, J. Karhunen, and E. Oja. Independent Component Analysis. Wiley, 2001.
[9] F. Theis and W. Nakamura. Quadratic independent component analysis. IEICE Trans. Fundamentals, E87-A(9):2355–2363, 2004.
[10] B. Willmore, P. A. Watters, and D. J. Tolhurst. A comparison of natural-image-based models of simple-cell coding. Perception, 29:1017–1040, 2000.
[11] A. Hyvärinen. Fast and robust fixed-point algorithms for independent component analysis. IEEE Transactions on Neural Networks, 10(3):626–634, 1999.
[12] P. Berkes and L. Wiskott. On the analysis and interpretation of inhomogeneous quadratic forms as receptive fields. Neural Computation, accepted, 2006.
[13] H. Bartsch and K. Obermayer. Second-order statistics of natural images. Neurocomputing, 52-54:467–472, 2003.
2,215
301
Constructing Hidden Units using Examples and Queries

Eric B. Baum
Kevin J. Lang
NEC Research Institute
4 Independence Way
Princeton, NJ 08540

ABSTRACT
While the network loading problem for 2-layer threshold nets is NP-hard when learning from examples alone (as with backpropagation), (Baum, 91) has now proved that a learner can employ queries to evade the hidden unit credit assignment problem and PAC-load nets with up to four hidden units in polynomial time. Empirical tests show that the method can also learn far more complicated functions such as randomly generated networks with 200 hidden units. The algorithm easily approximates Wieland's 2-spirals function using a single layer of 50 hidden units, and requires only 30 minutes of CPU time to learn 200-bit parity to 99.7% accuracy.

1 Introduction
Recent theoretical results (Baum & Haussler, 89) promise good generalization from multi-layer feedforward nets that are consistent with sufficiently large training sets. Unfortunately, the problem of finding such a net has been proved intractable due to the hidden unit credit assignment problem, even for nets containing only 2 hidden units (Blum & Rivest, 88). While back-propagation works well enough on simple problems, its luck runs out on tasks requiring more than a handful of hidden units. Consider, for example, Alexis Wieland's "2-spirals" mapping from R^2 to {0, 1}. There are many sets of weights that would cause a 2-50-1 network to be consistent with the training set of figure 3a, but backpropagation seems unable to find any of them starting from random initial weights. Instead, the procedure drives the net into a suboptimal configuration like the one pictured in figure 2b.

Figure 1: The geometry of query learning.

In 1984, Valiant proposed a query learning model in which the learner can ask an oracle for the output values associated with arbitrary points in the input space. In the next section we shall see how this additional source of information can be exploited to locate and pin down a network's hidden units one at a time, thus avoiding the combinatorial explosion of possible hidden unit configurations which can arise when one attempts to learn from examples alone.

2 How to find a hidden unit using queries
For now, assume that our task is to build a 2-layer network of binary threshold units which computes the same function as an existing "target" network. Our first step will be to draw a positive example x_+ and a negative example x_- from our training set. Because the target net maps these points to different output values, its hidden layer representations for the points must also be different, so the hyperplane through input space corresponding to one of the net's hidden units must intersect the line segment bounded by the two points (see figure 1). We can reduce our uncertainty about the location of this intersection point by a factor of 2 by asking the oracle for the target net's output at m, the line segment's midpoint. If, for example, m is mapped to the same output as x_+, then we know that the hidden plane must intersect the line segment between x_- and m, and we can then further reduce our uncertainty by querying the midpoint of this segment. By performing b queries of this sort, we can determine to within b bits of accuracy the location of a point p_0 that lies on the hidden plane.
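As an illustration, the bisection procedure can be written in a few lines; this is a sketch with our own names, where oracle(x) stands for a membership query to the target net.

```python
import numpy as np

def bisect_to_plane(oracle, x_pos, x_neg, bits=30):
    """Binary-search the segment [x_pos, x_neg] for a point on a hidden plane.

    oracle(x) returns the target net's boolean output at x. After `bits`
    queries the crossing point is localized to within 2**-bits of the
    original segment length.
    """
    for _ in range(bits):
        m = (x_pos + x_neg) / 2.0
        if oracle(m):                             # midpoint on the positive side
            x_pos = m
        else:
            x_neg = m
    return (x_pos + x_neg) / 2.0                  # approximately on the plane

# toy target: a single hidden threshold unit w.x > b
w, b = np.array([1.0, -2.0, 0.5]), 0.3
oracle = lambda x: float(w @ x) > b
p0 = bisect_to_plane(oracle, np.array([5.0, 0.0, 0.0]), np.array([-5.0, 0.0, 0.0]))
print(abs(w @ p0 - b))                            # close to 0
```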
Assuming that our input space has n dimensions, after finding n - 1 more points on this hyperplane we can solve n equations in n unknowns to find the weights of the corresponding hidden unit.¹

¹ The additional points p_i are obtained by perturbing p_0 with various small vectors π_i and then diving back to the plane via a search that is slightly more complicated than the bisection method by which we found p_0. (Baum, 91) describes this search procedure in detail, as well as a technique for verifying that all the points p_i lie on the same hidden plane.

Figure 2: A backprop net before and after being trained on the 2-spirals task. In these plots over input space, the net's hidden units are shown by lines while its output is indicated by grey-level shading.

3 Can we find all of a network's hidden units?
Here is the crucial question: now that we have a procedure for finding one hidden unit whose hyperplane passes between a given pair of positive and negative examples,² can we discover all of the net's hidden units by invoking this procedure on a sequence of such example pairs? If the answer is yes, then we have got a viable learning method, because the net's output weights can be efficiently computed via the linear programming problem that arises from forward-propagating the training set through the net's first layer of weights. (Baum, 91) proves that for target nets with four or fewer hidden units we can always find enough of them to compute the required function. This result is a direct counterpoint to the theorem of (Blum & Rivest, 88): by using queries, we can PAC-learn in polynomial time small threshold nets that would be NP-hard to learn from examples alone. However, it is possible for an adversary to construct a larger target net and an input distribution such that we may not find enough hidden units to compute the target function even by searching between every pair of examples in our training set. The problem is that more than one hidden plane can pass between a given pair of points, so we could repeatedly encounter some of the hidden units while never seeing others.

² This "positive" and "negative" terminology suggests that the target net possesses a single output unit, but the method is not actually restricted to this case.
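For concreteness, the weight-recovery step from the start of this section ("solve n equations in n unknowns") reduces to one linear solve; a minimal sketch in our own code, under the assumption that the recovered plane does not pass through the origin so that its free scale can be fixed:

```python
import numpy as np

def plane_from_points(P):
    """Recover (w, theta) for a hyperplane w.x = theta from n points on it.

    P is an n x n matrix whose rows are the points found by bisection.
    We fix the scale by solving w.p_i = 1 for all i, which assumes the
    plane does not pass through the origin.
    """
    w = np.linalg.solve(P, np.ones(len(P)))
    scale = np.linalg.norm(w)
    return w / scale, 1.0 / scale                 # unit normal and threshold
```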
Figure 3: 2-spirals oracle, and net built by query learning.

Fortunately, the experiments described in the next section suggest that one can find most of a net's hidden units in the average case. In fact, we may not even need to find all of a network's hidden units in order to achieve good generalization. Suppose that one of a network's hidden units is hard to find due to the rarity of nearby training points. As long as our test set is drawn from the same distribution as the training set, examples that would be misclassified due to the absence of this plane will also be rare. Our experiment on learning 200-bit parity illustrates this point: only 1/4 of the possible hidden units were needed to achieve 99.7% generalization.

4 Learning random target nets
Although query learning might fail to discover hidden units in the worst case, the following empirical study suggests that the method has good behavior in the average case. In each of these learning experiments the target function was computed by a 2-layer threshold net whose k hidden units were each chosen by passing a hyperplane through a set of n points selected from the uniform distribution on the unit n-sphere. The output weights of each target net corresponded to a random hyperplane through the origin of the unit k-sphere. Our training examples were drawn from the uniform distribution on the corners of the unit n-cube and then classified according to the target net.
To establish a performance baseline, we attempted to learn several of these functions using backpropagation. For (n = 20, k = 20) we succeeded in training a net to 97% accuracy in less than a day, but when we increased the size of the problem to (n = 100, k = 50) or (n = 200, k = 30), 150 hours of CPU time dumped our backprop nets into local minima that accounted for only 90% of the training data.
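A sketch of how such a random target net can be generated; this is our own code, and the use of normalized Gaussians for uniform sphere points and {0, 1} cube corners for the inputs is our reading of the setup.

```python
import numpy as np

def random_target_net(n, k, rng):
    """Random 2-layer threshold net as described in the experiments above."""
    W_hidden = np.empty((k, n))
    b_hidden = np.empty(k)
    for j in range(k):
        P = rng.standard_normal((n, n))
        P /= np.linalg.norm(P, axis=1, keepdims=True)  # n points on the unit sphere
        W_hidden[j] = np.linalg.solve(P, np.ones(n))   # plane w.x = 1 through them
        b_hidden[j] = 1.0
    w_out = rng.standard_normal(k)
    w_out /= np.linalg.norm(w_out)                     # random plane through origin
    def net(x):
        h = (W_hidden @ x > b_hidden).astype(float)
        return float(w_out @ h > 0.0)
    return net

rng = np.random.default_rng(0)
target = random_target_net(n=20, k=20, rng=rng)
x = rng.integers(0, 2, size=20).astype(float)          # a corner of the unit cube
print(target(x))
```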
Now, the hidden units constructed by our algorithm are merely tangents to the true decision boundaries of the target fuction, and we do not know ahead of time how many such units will be required to construct a decent approximation to the function. While one could keep adding hidden units to a net until the hidden layer representation of the training set becomes linearly separable, the fact that there are Constructing Hidden Units Using Examples and Queries learning algorithm additional heuristics none querIes reject redundant units two-stage construction conjugate gradient backprop hidden units min max 90 160 80 65 49 59 60 train errors 0 0 0 avg=9 test errors min max 70 136 47 72 15 45 80 125 Table 1: 2-spirals performance summary. infinitely many of tangents to a given curve can result in the creation of an oversized net which generalizes poorly. This problem can be addressed heuristically by rejecting new hidden units that are too similar to existing ones. For example, the top two rows of the above table summarize the results of 10 learning trials on the two-spirals problem with and without such a heuristic. 3 By imposing a floor on the difference between two hidden units,4 we reduced the size of our nets and the rate of generalization errors by 40%. The following two-stage heuristic training method resulted in even better networks. During the first stage of learning we attempted to create a minimally necessary set of hidden units by searching only between training examples that were not yet divided by an existing hidden unit. During the second stage of learning we tried to increase the separability of our hidden codes by repeatedly computing an approximate set of output weights and then searching for hidden units between misclassified examples and nearby counterexamples. This heuristic was motivated by the observation that examples tend to be misclassified when a nearby piece of the target function's decision boundary has not been discovered. Ten trials of this method resulted in networks containing an average of just 54 hidden units, and the nets committed an average of only 29 mistakes on the test set. An example of a network generated by this method is shown in figure 3b. For comparison, we made 10 attempts to train a 60-hidden-unit backprop net on the 2-spirals problem starting from uniform random weights and using conjugate gradient code provided by Steve Nowlan. While these nets had more than enough hidden units to compute the required function, not one ofthem succeeded in learning the complete training set. s 3To employ query learning, we defined the oracle function indicated by shading in figure 3a. The 194 training points are shown by dots in the figure. Our 576-element test set consisted of 3 points between each pair of adjacent same-class training points. ? Specifically, we required a minimum euclidean distance of 0.3 between the weights of two hidden units (after first normalizing the weight vectors so that the length of the non-threshold part of each vector was 1. 5lnterestingly, a 2-50-1 backprop net whose initial weights were drawn from a handcrafted distribution (hidden units with uniform random positions together with the appropriate output weights) came much closer to success than 2-50-1 nets with uniform random initial weights (compare figures 4 and 2). We can sometimes address tough problems with backprop when our prior knowledge gives us a head start. 909 910 Baum and Lang Figure 4: Backprop works better when started near a solution. 
These results illustrate the main point of this paper: the currently prevalent training methodology (local optimization of random initial weights) is too weak to solve the NP-hard problem of hidden unit deployment. We believe that methods such as query learning which avoid the credit assignment problem are essential to the future of connectionism. References E. Baum & D. Haussler. (1989) What size net gives valid generalization? Neural Computation 1(1): 151-160. E. Baum. (1991) Neural Net Algorithms that Learn in Polynomial Time from Examples and Queries. IEEE Transactions on Neural Networks 2(1), January, 1991. A. Blum & R. L. Rivest. (1988) Training a 3-node neural network is NP-complete. In D. S. Touretzky (ed.), Advances in Neural Information Processing Systems 1, 494-501. San Mateo, CA: Morgan Kaufmann. K. Lang & M. Witbrock. (1988) Learning to Tell Two Spirals Apart. Proceedings of the 1988 Connectionist Models Summer School, Morgan Kaufmann. D. Rumelhart, G. Hinton, & R. Williams. (1986) Learning internal representations by error propagation. In D. Rumelhart & J. McClelland (eds.) Parallel Distributed Processing, MIT Press. L. G. Valiant. (1984) A theory of the learnable. Comm. ACM 27(11): 1134-1142.
2,216
3,010
Robotic Grasping of Novel Objects

Ashutosh Saxena, Justin Driemeyer, Justin Kearns, Andrew Y. Ng
Computer Science Department
Stanford University, Stanford, CA 94305
{asaxena,jdriemeyer,jkearns,ang}@cs.stanford.edu

Abstract
We consider the problem of grasping novel objects, specifically ones that are being seen for the first time through vision. We present a learning algorithm that neither requires, nor tries to build, a 3-d model of the object. Instead it predicts, directly as a function of the images, a point at which to grasp the object. Our algorithm is trained via supervised learning, using synthetic images for the training set. We demonstrate on a robotic manipulation platform that this approach successfully grasps a wide variety of objects, such as wine glasses, duct tape, markers, a translucent box, jugs, knife-cutters, cellphones, keys, screwdrivers, staplers, toothbrushes, a thick coil of wire, a strangely shaped power horn, and others, none of which were seen in the training set.

1 Introduction
In this paper, we address the problem of grasping novel objects that a robot is perceiving for the first time through vision. Modern-day robots can be carefully hand-programmed or "scripted" to carry out many complex manipulation tasks, ranging from using tools to assemble complex machinery, to balancing a spinning top on the edge of a sword [15]. However, autonomously grasping a previously unknown object still remains a challenging problem. If the object was previously known, or if we are able to obtain a full 3-d model of it, then various approaches can be applied, for example ones based on friction cones [5], form- and force-closure [1], pre-stored primitives [7], or other methods. However, in practical scenarios it is often very difficult to obtain a full and accurate 3-d reconstruction of an object seen for the first time through vision. This is particularly true if we have only a single camera; for stereo systems, 3-d reconstruction is difficult for objects without texture, and even when stereopsis works well, it would typically reconstruct only the visible portions of the object. Finally, even if more specialized sensors such as laser scanners (or active stereo) are used to estimate the object's shape, we would still have only a 3-d reconstruction of the front face of the object.
In contrast to these approaches, we propose a learning algorithm that neither requires, nor tries to build, a 3-d model of the object. Instead it predicts, directly as a function of the images, a point at which to grasp the object. Informally, the algorithm takes two or more pictures of the object, and then tries to identify a point within each 2-d image that corresponds to a good point at which to grasp the object. (For example, if trying to grasp a coffee mug, it might try to identify the mid-point of the handle.) Given these 2-d points in each image, we use triangulation to obtain a 3-d position at which to actually attempt the grasp. Thus, rather than trying to triangulate every single point within each image in order to estimate depths (as in dense stereo), we only attempt to triangulate one (or at most a small number of) points corresponding to the 3-d point where we will grasp the object. This allows us to grasp an object without ever needing to obtain its full 3-d shape, and applies even to textureless, translucent or reflective objects on which standard stereo 3-d reconstruction fares poorly.
To the best of our knowledge, our work represents the first algorithm capable of grasping novel objects (ones for which a 3-d model is not available), including ones from novel object classes, that are being perceived for the first time using vision.

Figure 1: Examples of objects on which the grasping algorithm was tested.

In prior work, a few others have also applied learning to robotic grasping [1]. For example, Jebara et al. [8] used a supervised learning algorithm to learn grasps, for settings where a full 3-d model of the object is known. Hsiao and Lozano-Perez [4] also apply learning to grasping, but again assuming a fully known 3-d model of the object. Piater's algorithm [9] learned to position single fingers given a top-down view of an object, but considered only very simple objects (specifically, square, triangle and round "blocks"). Platt et al. [10] learned to sequence together manipulation gaits, but again assumed a specific, known object. There is also extensive literature on recognition of known object classes (such as cups, mugs, etc.) [14], but this seems unlikely to apply directly to grasping objects from novel object classes.
To pick up an object, we need to identify the grasping point, that is, a position for the robot's end-effector. This paper focuses on the task of grasp identification, and thus we will consider only objects that can be picked up without performing complex manipulation.¹ We will attempt to grasp a number of common office and household objects such as toothbrushes, pens, books, mugs, martini glasses, jugs, keys, duct tape, and markers. (See Fig. 1.)
The remainder of this paper is structured as follows. In Section 2, we describe our learning approach, as well as our probabilistic model for inferring the grasping point. In Section 3, we describe the motion planning/trajectory planning (on our 5 degree of freedom arm) for moving the manipulator to the grasping point. In Section 4, we report the results of extensive experiments performed to evaluate our algorithm, and Section 5 concludes.

2 Learning the Grasping Point
Because even very different objects can have similar sub-parts, there are certain visual features that indicate good grasps, and that remain consistent across many different objects. For example, jugs, cups, and coffee mugs all have handles; and pens, white-board markers, toothbrushes, screwdrivers, etc. are all long objects that can be grasped roughly at their mid-point. We propose a learning approach that uses visual features to predict good grasping points across a large range of objects.
Given two (or more) images of an object taken from different camera positions, we will predict the 3-d position of a grasping point. An image is a projection of the three-dimensional world into an image plane, and does not have depth information. In our approach, we will predict the 2-d location of the grasp in each image; more formally, we will try to identify the projection of a good grasping point onto the image plane. If each of these points can be perfectly identified in each image, we can then easily "triangulate" from these images to obtain the 3-d grasping point. (See Fig. 4a.) In practice it is difficult to identify the projection of a grasping point into the image plane (and, if there are multiple grasping points, then the correspondence problem, i.e., deciding which grasping point in one image corresponds to which point in another image, must also be solved).
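The triangulation step can be illustrated with a standard two-view construction (our sketch, not the paper's implementation): intersect, in the least-squares sense, the two camera rays through the predicted 2-d grasp points.

```python
import numpy as np

def triangulate(c1, d1, c2, d2):
    """Midpoint of the shortest segment between two camera rays.

    c1, c2: camera centers; d1, d2: ray directions through the predicted
    2-d grasp points in each image (from the camera intrinsics, assumed
    already applied).
    """
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    # solve for t1, t2 minimizing ||(c1 + t1 d1) - (c2 + t2 d2)||
    A = np.stack([d1, -d2], axis=1)               # 3 x 2 system
    t, *_ = np.linalg.lstsq(A, c2 - c1, rcond=None)
    p1 = c1 + t[0] * d1
    p2 = c2 + t[1] * d2
    return (p1 + p2) / 2.0                        # estimated 3-d grasp point
```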
On our robotic platform, this problem is further exacerbated by uncertainty in the position of the camera when the images were taken. To address all of these issues, we develop a probabilistic model over possible grasping points, and apply it to infer a good position at which to grasp an object.2

1 For example, picking up a heavy book lying flat on a table might require a sequence of complex manipulations, such as to first slide the book slightly past the edge of the table so that the manipulator can place its fingers around the book.
2 An earlier version of this work without the probabilistic model and using simpler learning/inference was described in [12].

Figure 2: Examples of different edge and texture filters used to calculate the features.

Figure 3: Synthetic images of the objects used for training. The classes of objects used for training were martini glasses, mugs, teacups, pencils, whiteboard erasers, and books.

2.1 Features

In our approach, we begin by dividing the image into small rectangular patches, and for each patch predict if it is a projection of a grasping point onto the image plane. For this prediction problem, we chose features that represent three types of local cues: edges, textures, and color. [11, 13] We compute features representing edges by convolving the intensity channel3 with 6 oriented edge filters (Fig. 2). Texture information is mostly contained within the image intensity channel, so we apply 9 Laws masks to this channel to compute the texture energy. For the color channels, low frequency information is most useful to identify grasps; our color features are computed by applying a local averaging filter (the first Laws mask) to the 2 color channels. We then compute the sum-squared energy of each of these filter outputs. This gives us an initial feature vector of dimension 17.

To predict if a patch contains a grasping point, local image features centered on the patch are insufficient, and one has to use more global properties of the object. We attempt to capture this information by using image features extracted at multiple spatial scales (3 in our experiments) for the patch. Objects exhibit different behaviors across different scales, and using multi-scale features allows us to capture these variations. In detail, we compute the 17 features described above from that patch as well as the 24 neighboring patches (in a 5x5 window centered around the patch of interest). This gives us a feature vector x of dimension $1 \times 17 \times 3 + 24 \times 17 = 459$.
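The following sketch illustrates how such a 459-dimensional patch descriptor could be assembled. It is a schematic under stated assumptions, not the authors' implementation: the filter kernels are placeholders, and the helpers that crop patches at each scale are assumed to exist upstream.

```python
import numpy as np
from scipy.ndimage import convolve

def filter_energies(patch_y, patch_cb, patch_cr, kernels_edge, kernels_laws):
    """17 sum-squared filter energies for one patch: 6 oriented edge
    filters and 9 Laws masks on intensity, plus the first Laws mask
    (a local average) on the 2 color channels."""
    feats = []
    for k in kernels_edge:                       # 6 edge features
        feats.append(np.sum(convolve(patch_y, k) ** 2))
    for k in kernels_laws:                       # 9 texture features
        feats.append(np.sum(convolve(patch_y, k) ** 2))
    avg = kernels_laws[0]                        # first Laws mask
    feats.append(np.sum(convolve(patch_cb, avg) ** 2))
    feats.append(np.sum(convolve(patch_cr, avg) ** 2))
    return np.array(feats)                       # shape (17,)

def patch_descriptor(patches_3scales, neighbor_patches, kernels_edge, kernels_laws):
    """Center patch at 3 spatial scales (3 x 17 = 51 dims) plus the 24
    neighbors in the 5x5 window (24 x 17 = 408 dims): 459 in total."""
    parts = [filter_energies(*p, kernels_edge, kernels_laws) for p in patches_3scales]
    parts += [filter_energies(*p, kernels_edge, kernels_laws) for p in neighbor_patches]
    return np.concatenate(parts)                 # shape (459,)
```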
2.2 Synthetic Data for Training

We apply supervised learning to learn to identify patches that contain grasping points. To do so, we require a labeled training set, i.e., a set of images of objects labeled with the 2-d location of the grasping point in each image. Collecting real-world data of this sort is cumbersome, and manual labeling is prone to errors. Thus, we instead chose to generate, and learn from, synthetic data that is automatically labeled with the correct grasps. In detail, we generate synthetic images along with correct grasps (Fig. 3) using a computer graphics ray tracer,4 as this produces more realistic images than other simpler rendering methods.5

The advantages of using synthetic images are multi-fold. First, once a synthetic model of an object has been created, a large number of training examples can be automatically generated by rendering the object under different (randomly chosen) lighting conditions, camera positions and orientations, etc. In addition, to increase the diversity of the training data generated, we randomized different properties of the objects such as color, scale, and text (e.g., on the face of a book). The time-consuming part of synthetic data generation is the creation of the mesh models of the objects. However, there are many objects for which models are available on the internet, and can be used with only minor modifications. We generated 2500 examples from synthetic data, comprising objects from six object classes (see Figure 3). Using synthetic data also allows us to generate perfect labels for the training set with the exact location of a good grasp for each object. In contrast, collecting and manually labeling a comparably sized set of real images would have been extremely time-consuming.

3 We use YCbCr color space, where Y is the intensity channel, and Cb and Cr are color channels.
4 Ray tracing [3] is a standard image rendering method from computer graphics. It handles many real-world optical phenomena such as multiple specular reflections, textures, soft shadows, smooth curves, and caustics. We used PovRay, an open source ray tracer.
5 There is a relation between the quality of the synthetically generated images and the accuracy of the algorithm. The better the quality of the synthetically generated images and graphical realism, the better the accuracy of the algorithm. Therefore, we use a ray tracer instead of faster, but cruder, OpenGL style graphics. Michels, Saxena and Ng [6] used synthetic OpenGL images to learn distances in natural scenes. However, because OpenGL style graphics have less realism, the learning performance sometimes decreased with added complexity in the scenes.

2.3 Probabilistic Model

On our manipulation platform, we have a camera mounted on the robotic arm. (See Fig. 6.) When asked to grasp an object, we command the arm to move the camera to two or more positions, so as to acquire images of the object from these positions. However, there are inaccuracies in the physical positioning of the arm, and hence some slight uncertainty in the position of the camera when the images are acquired. We now describe how we model these position errors.

Formally, let $C$ be the image that would have been taken if the robot had moved exactly to the commanded position and orientation. However, due to robot positioning error, instead an image $\hat{C}$ is taken from a slightly different location. Let $(u, v)$ be a 2-d position in image $C$, and let $(\hat{u}, \hat{v})$ be the corresponding position in $\hat{C}$. Thus $C(u, v) = \hat{C}(\hat{u}, \hat{v})$, where $C(u, v)$ is the pixel value at $(u, v)$ in image $C$. The errors in camera position/pose should usually be small,6 and we model the difference between $(u, v)$ and $(\hat{u}, \hat{v})$ using an additive Gaussian model: $\hat{u} = u + \epsilon_u$, $\hat{v} = v + \epsilon_v$, where $\epsilon_u, \epsilon_v \sim N(0, \sigma^2)$.

For each location $(u, v)$ in an image $C$, we define the class label to be $z(u, v) = 1\{(u, v) \text{ is the projection of a grasping point into the image plane}\}$. (Here, $1\{\cdot\}$ is the indicator function; $1\{\text{True}\} = 1$, $1\{\text{False}\} = 0$.) For a corresponding location $(\hat{u}, \hat{v})$ in image $\hat{C}$, we similarly define $\hat{z}(\hat{u}, \hat{v})$ to indicate whether position $(\hat{u}, \hat{v})$ represents a grasping point in the image $\hat{C}$. Since $(u, v)$ and $(\hat{u}, \hat{v})$ are corresponding pixels in $C$ and $\hat{C}$, we assume $\hat{z}(\hat{u}, \hat{v}) = z(u, v)$. Thus:

$P(z(u, v) = 1 \mid C) = P(\hat{z}(\hat{u}, \hat{v}) = 1 \mid \hat{C})$   (1)

$= \int_{\epsilon_u} \int_{\epsilon_v} P(\epsilon_u, \epsilon_v)\, P(\hat{z}(u + \epsilon_u, v + \epsilon_v) = 1 \mid \hat{C})\, d\epsilon_u\, d\epsilon_v$   (2)

Here, $P(\epsilon_u, \epsilon_v)$ is the (Gaussian) density over $\epsilon_u$ and $\epsilon_v$.
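In discrete image coordinates, Eq. 2 is a convolution of the per-pixel grasp-probability map with the Gaussian density of $(\epsilon_u, \epsilon_v)$. A one-line sketch (assuming the map is given on a pixel grid and $\sigma$ is expressed in pixels):

```python
from scipy.ndimage import gaussian_filter

def marginalize_position_error(prob_map, sigma_px):
    """Discrete version of Eq. 2: blur the per-pixel map
    P(z_hat = 1 | C_hat) with the Gaussian position-error density."""
    return gaussian_filter(prob_map, sigma=sigma_px, mode="nearest")
```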
Further, we use logistic regression to model the probability of a 2-d position $(u + \epsilon_u, v + \epsilon_v)$ in $\hat{C}$ being a good grasping point:

$P(\hat{z}(u + \epsilon_u, v + \epsilon_v) = 1 \mid \hat{C}) = P(\hat{z}(u + \epsilon_u, v + \epsilon_v) = 1 \mid x; w) = 1/(1 + e^{-w^T x})$   (3)

where $x \in \mathbb{R}^{459}$ are the features for the rectangular patch centered at $(u + \epsilon_u, v + \epsilon_v)$ in image $\hat{C}$ (described in Section 2.1). The parameter of this model, $w \in \mathbb{R}^{459}$, is learned using standard maximum likelihood for logistic regression: $w = \arg\max_{w'} \prod_i P(z_i \mid x_i; w')$, where $(x_i, z_i)$ are the synthetic training examples (image patches and labels), as described in Section 2.2. Fig. 5 shows the result of applying the learned logistic regression model to some real (non-synthetic) images.

Given two or more images of a new object from different camera positions, we want to infer the 3-d position of the grasping point. (See Fig. 4.) Because logistic regression may have predicted multiple grasping points per image, there is usually ambiguity in the correspondence problem (i.e., which grasping point in one image corresponds to which grasping point in another). To address this while also taking into account the uncertainty in camera position, we propose a probabilistic model over possible grasping points in 3-d space. In detail, we discretize the 3-d work-space of the robotic arm into a regular 3-d grid $G \subset \mathbb{R}^3$, and associate with each grid element $j$ a random variable $y_j = 1\{\text{grid cell } j \text{ is a grasping point}\}$. From each camera location $i = 1, \ldots, N$, one image is taken. In image $C_i$, let the ray passing through $(u, v)$ be denoted $R_i(u, v)$. Let $G_i(u, v) \subset G$ be the set of grid-cells through which the ray $R_i(u, v)$ passes, and let $r_1, \ldots, r_K \in G_i(u, v)$ be the indices of the grid-cells lying on the ray $R_i(u, v)$.6

6 The robot position/orientation error is typically small (position is usually accurate to within 1 mm), but it is still important to model this error. From our experiments (see Section 4), if we set $\sigma^2 = 0$, the triangulation is highly inaccurate, with average error in predicting the grasping point being 15.40 cm, as compared to 1.85 cm when an appropriate $\sigma^2$ is chosen.

Figure 4: (a) Diagram illustrating rays from two images C1 and C2 intersecting at a grasping point (shown in dark blue). (b) Actual plot in 3-d showing multiple rays from 4 images intersecting at the grasping point. All grid-cells with at least one ray passing nearby are colored using a light blue-green-dark blue colormap, where dark blue represents those grid-cells which have many rays passing near them. (Best viewed in color.)

We know that if any of the grid-cells $r_j$ along the ray represents a grasping point, then its projection is a grasp point. More formally, $z_i(u, v) = 1$ if and only if $y_{r_1} = 1$ or $y_{r_2} = 1$ or $\ldots$ or $y_{r_K} = 1$. For simplicity, we use a (arguably unrealistic) naive Bayes-like assumption of independence, and model the relation between $P(z_i(u, v) = 1 \mid C_i)$ and $P(y_{r_1} = 1 \text{ or } \ldots \text{ or } y_{r_K} = 1 \mid C_i)$ as

$P(z_i(u, v) = 0 \mid C_i) = P(y_{r_1} = 0, \ldots, y_{r_K} = 0 \mid C_i) = \prod_{j=1}^{K} P(y_{r_j} = 0 \mid C_i)$   (4)

Assuming that any grid-cell along a ray is equally likely to be a grasping point, this therefore gives

$P(y_{r_j} = 1 \mid C_i) = 1 - \left(1 - P(z_i(u, v) = 1 \mid C_i)\right)^{1/K}$   (5)

Next, using another naive Bayes-like independence assumption, we estimate the probability of a particular grid-cell $y_j \in G$ being a grasping point as:

$P(y_j = 1 \mid C_1, \ldots, C_N) = \dfrac{P(y_j = 1)\, P(C_1, \ldots, C_N \mid y_j = 1)}{P(C_1, \ldots, C_N)} = \dfrac{P(y_j = 1)}{P(C_1, \ldots, C_N)} \prod_{i=1}^{N} P(C_i \mid y_j = 1)$   (6)

$= \dfrac{P(y_j = 1)}{P(C_1, \ldots, C_N)} \prod_{i=1}^{N} \dfrac{P(y_j = 1 \mid C_i)\, P(C_i)}{P(y_j = 1)} \;\propto\; \prod_{i=1}^{N} P(y_j = 1 \mid C_i)$   (7)

where $P(y_j = 1)$ is the prior probability of a grid-cell being a grasping point (set to a constant value in our experiments). Using Equations 2, 3, 5, and 7, we can now compute (up to a constant of proportionality that does not depend on the grid-cell) the probability of any grid-cell $y_j$ being a valid grasping point, given the images.
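A sketch of how Eqs. 5 and 7 can be combined in log space, restricted (as in the efficient search of Section 2.4 below) to grid cells that at least one ray actually passes through. The helper cells_on_ray is hypothetical; in practice it would come from a voxel traversal of the calibrated camera ray.

```python
import math
from collections import defaultdict

def grid_log_scores(ray_predictions, cells_on_ray):
    """Accumulate, per grid cell j, sum_i log P(y_j = 1 | C_i).

    ray_predictions: iterable of (image_id, (u, v), p) with
        p = P(z_i(u, v) = 1 | C_i) from Eqs. 2-3.
    cells_on_ray: hypothetical helper mapping (image_id, (u, v)) to
        the K grid-cell indices the ray R_i(u, v) passes through.
    Returns grid cell -> log score (Eq. 7 up to its constant, which
    is also the summand of Eq. 9).
    """
    scores = defaultdict(float)
    for image_id, uv, p in ray_predictions:
        if p < 0.1:          # footnote 7: ignore low-probability rays
            continue
        cells = cells_on_ray(image_id, uv)
        K = len(cells)
        p_cell = 1.0 - (1.0 - p) ** (1.0 / K)     # Eq. 5
        for j in cells:
            scores[j] += math.log(p_cell)          # factors of Eq. 7
    return scores

# MAP grasping cell (Eqs. 8-9): max(scores, key=scores.get)
```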
2.4 MAP Inference

We infer the best grasping point by choosing the 3-d position (grid-cell) that is most likely to be a valid grasping point. More formally, using Eq. 5 and 7, we will choose:

$\arg\max_j \log P(y_j = 1 \mid C_1, \ldots, C_N) = \arg\max_j \log \prod_{i=1}^{N} P(y_j = 1 \mid C_i)$   (8)

$= \arg\max_j \sum_{i=1}^{N} \log\left(1 - \left(1 - P(z_i(u, v) = 1 \mid C_i)\right)^{1/K}\right)$   (9)

where $P(z_i(u, v) = 1 \mid C_i)$ is given by Eq. 2 and 3. A straightforward implementation that explicitly computes the sum above for every single grid-cell would give good grasping performance, but be extremely inefficient (over 110 seconds). For real-time manipulation, we therefore used a more efficient search algorithm in which we explicitly consider only grid-cells $y_j$ that at least one ray $R_i(u, v)$ intersects. Further, the counting operation in Eq. 9 is implemented using an efficient counting algorithm that accumulates the sums over all grid-cells by iterating over all the images $N$ and rays $R_i(u, v)$.7 This results in an algorithm that identifies a grasping position in 1.2 sec.

7 Since there are only a few places in an image where $P(z_i(u, v) = 1 \mid C_i) > 0$, the counting algorithm is computationally much less expensive than enumerating over all grid-cells. In practice, we found that restricting attention to areas where $P(z_i(u, v) = 1 \mid C_i) > 0.1$ allows us to further reduce the number of rays to be considered, with no noticeable degradation in performance.

Figure 5: Grasping point classification. The red points in each image show the most likely locations, predicted to be candidate grasping points by our logistic regression model. (Best viewed in color.)

Figure 6: The robotic arm picking up various objects: box, screwdriver, duct-tape, wine glass, a solder tool holder, powerhorn, cellphone, and martini glass and cereal bowl from dishwasher.

3 Control

Having identified a grasping point, we have to move the end-effector of the robotic arm to it, and pick up the object. In detail, our algorithm plans a trajectory in joint angle space [5] to take the end-effector to an approach position,8 and then moves the end-effector in a straight line forward towards the grasping point. Our robotic arm uses two classes of grasps: downward grasps and outward grasps. These arise as a direct consequence of the shape of the workspace of our 5 dof robotic arm (Fig. 6). A "downward" grasp is used for objects that are close to the base of the arm, which the arm will grasp by reaching in a downward direction. An "outward" grasp is for objects further away from the base, for which the arm is unable to reach in a downward direction. The class is determined based on the position of the object and grasping point.

4 Experiments

4.1 Hardware Setup

Our experiments used a mobile robotic platform called STAIR (STanford AI Robot) on which are mounted a robotic arm, as well as other equipment such as our web-camera, microphones, etc. STAIR was built as part of a project whose long-term goal is to create a robot that can navigate home and office environments, pick up and interact with objects and tools, and intelligently converse with and help people in these environments.
Our algorithms for grasping novel objects represent a first step towards achieving some of these goals. The robotic arm we used is the Harmonic Arm made by Neuronics. This is a 4 kg, 5-dof arm equipped with a parallel plate gripper, and has a positioning accuracy of ±1 mm. Our vision system used a low-quality webcam mounted near the end-effector.

8 The approach position is set to be a fixed distance away from the predicted grasp point.

Table 1: Average absolute error in locating the grasp point for different objects, as well as grasp success rate for picking up the different objects using our robotic arm. (Although training was done on synthetic images, testing was done on the real robotic arm and real objects.)

Objects similar to ones trained on:
  Tested on             Mean grasp error (cm)   Grasp rate
  Mugs                  2.4                     75%
  Pens                  0.9                     100%
  Wine glass            1.2                     100%
  Books                 2.9                     75%
  Eraser/Cellphone      1.6                     100%
  Overall               1.80                    90%

Novel objects:
  Tested on             Mean error (cm)         Grasp rate
  Duct tape             1.8                     100%
  Keys                  1.0                     100%
  Markers/Screwdriver   1.1                     100%
  Toothbrush/Cutter     1.1                     100%
  Jug                   1.7                     75%
  Translucent box       3.1                     75%
  Powerhorn             3.6                     50%
  Coiled wire           1.4                     100%
  Overall               1.85                    87.5%

4.2 Results and Discussion

We first evaluated the predictive accuracy of the algorithm on synthetic images (not contained in the training set). (See Fig. 5.) The average accuracy for classifying whether a 2-d image patch is a projection of a grasping point was 94.2% (evaluated on a balanced test set), although the accuracy in predicting 3-d grasping points was higher because the probabilistic model for inferring a 3-d grasping point automatically aggregates data from multiple images, and therefore "fixes" some of the errors from individual classifiers.

We then tested the algorithm on the physical robotic arm. Here, the task was to use input from a web-camera, mounted on the robot, to pick up an object placed in front of the robot. Recall that the parameters of the vision algorithm were trained from synthetic images of a small set of six object classes, namely books, martini glasses, white-board erasers, coffee mugs, tea cups and pencils. We performed experiments on coffee mugs, wine glasses (partially filled with water), pencils, books, and erasers, but all of different dimensions and appearance than the ones in the training set, as well as a large set of objects from novel object classes, such as rolls of duct tape, markers, a translucent box, jugs, knife-cutters, cellphones, pens, keys, screwdrivers, staplers, toothbrushes, a thick coil of wire, a strangely shaped power horn, etc. (See Fig. 1.) We note that many of these objects are translucent, textureless, and/or reflective, making 3-d reconstruction difficult for standard stereo systems. (Indeed, a carefully-calibrated Point Grey stereo system, the Bumblebee BB-COL20, with higher quality cameras than our web-camera, fails to accurately reconstruct the visible portions of 9 out of 12 objects.)

In extensive experiments, the algorithm for predicting grasps in images appeared to generalize very well. Despite being tested on images of real (rather than synthetic) objects, including many very different from ones in the training set, it was usually able to identify correct grasp points. We note that test set error (in terms of average absolute error in the predicted position of the grasp point) on the real images was only somewhat higher than the error on synthetic images; this shows that the algorithm trained on synthetic images transfers well to real images.
(Over all 5 object types used in the synthetic data, average absolute error was 0.81 cm in the synthetic images; and over all the 13 real test objects, average error was 1.83 cm.) For comparison, neonate humans can grasp simple objects with an average accuracy of 1.5 cm. [2]

Table 1 shows the errors in the predicted grasping points on the test set. The table presents results separately for objects which were similar to those we trained on (e.g., coffee mugs) and those which were very dissimilar to the training objects (e.g., duct tape). In addition to reporting errors in grasp positions, we also report the grasp success rate, i.e., the fraction of times the robotic arm was able to physically pick up the object (out of 4 trials). On average, the robot picked up the novel objects 87.5% of the time. For simple objects such as cellphones, wine glasses, keys, toothbrushes, etc., the algorithm performed perfectly (100% grasp success rate). However, grasping objects such as mugs or jugs (by the handle) allows only a narrow trajectory of approach, where one "finger" is inserted into the handle, so that even a small error in the grasping point identification causes the arm to hit and move the object, resulting in a failed grasp attempt. Although it may be possible to improve the algorithm's accuracy, we believe that these problems can best be solved by using a more advanced robotic arm that is capable of haptic (touch) feedback.

In many instances, the algorithm was able to pick up completely novel objects (a strangely shaped power-horn, duct-tape, a solder tool holder, etc.; see Fig. 1 and 6). Perceiving a transparent wine glass is a difficult problem for standard vision (e.g., stereopsis) algorithms because of reflections, etc. However, as shown in Table 1, our algorithm successfully picked it up 100% of the time. The same rate of success holds even if the glass is 2/3 filled with water. Videos showing the robot grasping the objects are available at http://ai.stanford.edu/~asaxena/learninggrasp/

We also applied our learning algorithm to the task of unloading items from a dishwasher.9 Fig. 5 demonstrates that the algorithm correctly identifies the grasp on multiple objects even in the presence of clutter and occlusion. Fig. 6 shows our robot unloading some items from a dishwasher.

5 Conclusions

We proposed an algorithm to enable a robot to grasp an object that it has never seen before. Our learning algorithm neither tries to build, nor requires, a 3-d model of the object. Instead it predicts, directly as a function of the images, a point at which to grasp the object. In our experiments, the algorithm generalizes very well to novel objects and environments, and our robot successfully grasped a wide variety of objects, such as wine glasses, duct tape, markers, a translucent box, jugs, knife-cutters, cellphones, keys, screwdrivers, staplers, toothbrushes, a thick coil of wire, a strangely shaped power horn, and others, none of which were seen in the training set.

Acknowledgment

We give warm thanks to Anya Petrovskaya, Morgan Quigley, and Jimmy Zhang for help with the robotic arm control driver software. This work was supported by the DARPA transfer learning program under contract number FA8750-05-2-0249.

References

[1] A. Bicchi and V. Kumar. Robotic grasping and contact: a review. In ICRA, 2000.
[2] T. G. R. Bower, J. M. Broughton, and M. K. Moore. Demonstration of intention in the reaching behaviour of neonate humans. Nature, 228:679-681, 1970.
[3] A. S. Glassner. An Introduction to Ray Tracing. Morgan Kaufmann Publishers, Inc., San Francisco, 1989.
[4] K. Hsiao and T. Lozano-Perez. Imitation learning of whole-body grasps. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2006.
[5] M. T. Mason and J. K. Salisbury. Manipulator grasping and pushing operations. In Robot Hands and the Mechanics of Manipulation. The MIT Press, Cambridge, MA, 1985.
[6] J. Michels, A. Saxena, and A. Y. Ng. High speed obstacle avoidance using monocular vision and reinforcement learning. In ICML, 2005.
[7] Miller et al. Automatic grasp planning using shape primitives. In ICRA, 2003.
[8] R. Pelossof et al. An SVM learning approach to robotic grasping. In ICRA, 2004.
[9] J. H. Piater. Learning visual features to predict hand orientations. In ICML Workshop on Machine Learning of Spatial Knowledge, 2002.
[10] R. Platt, A. H. Fagg, and R. Grupen. Reusing schematic grasping policies. In IEEE-RAS International Conference on Humanoid Robots, Tsukuba, Japan, 2005.
[11] A. Saxena, S. H. Chung, and A. Y. Ng. Learning depth from single monocular images. In NIPS 18, 2005.
[12] A. Saxena, J. Driemeyer, J. Kearns, C. Osondu, and A. Y. Ng. Learning to grasp novel objects using vision. In 10th International Symposium on Experimental Robotics (ISER), 2006.
[13] A. Saxena, J. Schulte, and A. Y. Ng. Depth estimation using monocular and stereo cues. In 20th International Joint Conference on Artificial Intelligence (IJCAI), 2007.
[14] H. Schneiderman and T. Kanade. Probabilistic modeling of local appearance and spatial relationships for object recognition. In CVPR, 1998.
[15] T. Shin-ichi and M. Satoshi. Living and working with robots. Nipponia, 2000.

9 To improve performance, we also used depth-based features. More formally, we applied our texture based features to the depth image obtained from a stereo camera, and appended them to the feature vector used in classification. We also appended some hand-labeled real examples of dishwasher images to the training set to prevent the algorithm from identifying grasping points on background clutter, such as dishwasher prongs.
Neurophysiological Evidence of Cooperative Mechanisms for Stereo Computation

Jason M. Samonds (Center for the Neural Basis of Cognition, CNBC), Brian R. Potetz (CNBC and Computer Science Department), Tai Sing Lee (CNBC and Computer Science Department)
Carnegie Mellon University, Pittsburgh, PA 15213
[email protected], [email protected], [email protected]

Abstract

Although there has been substantial progress in understanding the neurophysiological mechanisms of stereopsis, how neurons interact in a network during stereo computation remains unclear. Computational models of stereopsis suggest local competition and long-range cooperation are important for resolving ambiguity during stereo matching. To test these predictions, we simultaneously recorded from multiple neurons in V1 of awake, behaving macaques while presenting surfaces of different depths rendered in dynamic random dot stereograms. We found that the interaction between pairs of neurons was a function of similarity in receptive fields, as well as of the input stimulus. Neurons coding the same depth experienced common inhibition early in their responses for stimuli presented at their non-preferred disparities. They experienced mutual facilitation later in their responses for stimulation at their preferred disparity. These findings are consistent with a local competition mechanism that first removes gross mismatches, and a global cooperative mechanism that further refines depth estimates.

1 Introduction

The human visual system is able to extract three-dimensional (3D) structures in random noise stereograms even when such images evoke no perceptible patterns when viewed monocularly [1]. Bela Julesz proposed that this is accomplished by a stereopsis mechanism that detects correlated shifts in 2D noise patterns between the two eyes. He also suggested that this mechanism likely involves cooperative neural processing early in the visual system. Marr and Poggio formalized the computational constraints for solving stereo matching (Fig. 1a) and devised an algorithm that can discover the underlying 3D structures in a variety of random dot stereogram patterns [2]. Their algorithm was based on two rules: (1) each element or feature is unique (i.e., can be assigned only one disparity) and (2) surfaces of objects are cohesive (i.e., depth changes gradually across space). To describe their algorithm in neurophysiological terms, we can consider neurons in primary visual cortex as simple element or feature detectors. The first rule is implemented by introducing competitive interactions (mutual inhibition) among neurons of different disparity tuning at each location (Fig. 1b, blue solid horizontal or vertical lines), allowing only one disparity to be detected at each location. The second rule is implemented by introducing cooperative interactions (mutual facilitation) among neurons tuned to the same depth (image disparity) across different spatial locations (Fig. 1b, along the red dashed diagonal lines). In other words, a disparity estimate at one location is more likely to be correct if neighboring locations have similar disparity estimates. A dynamic system under such constraints can relax to a stable global disparity map. Here, we present neurophysiological evidence of interactions between disparity-tuned neurons in the primary visual cortex that is consistent with this general approach.
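The two Marr-Poggio rules translate directly into a simple iterative update on a match array indexed by position and disparity. The sketch below is a schematic of that classic cooperative update (here in 1D for brevity), not the authors' model; the excitation neighborhood, inhibition weight eps, and threshold theta are illustrative constants.

```python
import numpy as np

def marr_poggio(init_match, n_iters=10, eps=2.0, theta=3.0):
    """Schematic cooperative stereo update on a binary
    (positions x disparities) match array.
    Excitation: spatial neighbors at the same disparity (cohesivity).
    Inhibition: other disparities at the same position (uniqueness)."""
    C = init_match.astype(float)
    for _ in range(n_iters):
        excite = np.zeros_like(C)
        for dx in (-2, -1, 1, 2):           # wrap-around neighbors, for brevity
            excite += np.roll(C, dx, axis=0)
        inhibit = C.sum(axis=1, keepdims=True) - C
        C = ((excite - eps * inhibit + init_match) > theta).astype(float)
    return C
```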
We sampled from a variety of spatially distributed disparity-tuned neurons (see electrodes in Fig. 1b) while displaying DRDS stimuli defined at various disparities (see stimulus in Fig. 1b). We then measured the dynamics of interactions by assessing the temporal evolution of correlation in neural responses.

Figure 1: (a) Left and right images of random dot stereogram (right image has been shifted to the right). (b) 1D graphical depiction of competition (blue solid lines) and cooperation (red dashed lines) among disparity-tuned neurons with respect to space as defined by Marr and Poggio's stereo algorithm [2].

2 Methods

2.1 Recording and stimulation

Recordings were made in V1 of two awake, behaving macaques. We simultaneously recorded from 4-8 electrodes providing data from up to 10 neurons in a single recording session (some electrodes recorded from as many as 3 neurons). We collected data from 112 neurons that provided 224 pairs for cross-correlation analysis. For stimuli, we used 12 Hz dynamic random dot stereograms (DRDS; 25% density black and white pixels on a mean luminance background) presented in a 3.5-degree aperture. Liquid crystal shutter goggles were used to present random dot patterns to each eye separately. Eleven horizontal disparities between the two eyes, spanning ±0.9 degrees, were tested. Seventy-four neurons (66%) had significant disparity tuning and 99 pairs (44%) were comprised of neurons that both had significant disparity tuning (1-way ANOVA, p<0.05).

Figure 2: (a) Example recording session from five electrodes in V1; panel axes span posterior-anterior and medial-lateral directions (5 mm scale bar; waveform scale 100 µV, 0.2 ms). (b) Receptive field (white box; arrow represents direction preference) and random dot stereogram locations for the same recording session (small red square is the fixation spot; 1° scale bar).

2.2 Data analysis

Interaction between neurons was described as "effective connectivity" defined by cross-correlation methods [3]. First, the probability of all joint spikes (x and y) between the two neurons was calculated for all times from stimulus onset (t1 and t2) including all possible lag times (t1 - t2) between the two neurons (the 2D joint peristimulus time histogram, JPSTH). Next, the cross-product of each neuron's PSTH (joint probabilities expected from chance) was subtracted from the JPSTH; this difference is referred to as the cross-covariance histogram. Finally, the cross-covariance histogram was normalized by the geometric mean of the auto-covariance histograms:

$C_{x,y}(t_1, t_2) = \dfrac{\langle x(t_1)\,y(t_2)\rangle - \langle x(t_1)\rangle\langle y(t_2)\rangle}{\sqrt{\left(\langle x(t_1)x(t_1)\rangle - \langle x(t_1)\rangle\langle x(t_1)\rangle\right)\left(\langle y(t_2)y(t_2)\rangle - \langle y(t_2)\rangle\langle y(t_2)\rangle\right)}}$   (1)

This normalized cross-covariance histogram is a 2D matrix of Pearson's correlation coefficients between the two neurons, where the axes represent time from stimulus onset (Figure 3).
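A minimal sketch of Eq. 1 from binned spike trains, assuming (trials x time bins) spike-count arrays for the two neurons; this is a schematic of the computation, not the authors' analysis code. The lag-marginal helper anticipates the CCH described next.

```python
import numpy as np

def normalized_jpsth(x, y):
    """Eq. 1: normalized cross-covariance (JPSTH) matrix.

    x, y: (n_trials, n_bins) spike-count arrays for two neurons.
    Returns an (n_bins, n_bins) matrix of correlation coefficients
    indexed by (t1, t2) from stimulus onset.
    """
    n = x.shape[0]
    joint = x.T @ y / n                        # <x(t1) y(t2)>
    px, py = x.mean(axis=0), y.mean(axis=0)    # the two PSTHs
    cov = joint - np.outer(px, py)             # cross-covariance
    vx = (x * x).mean(axis=0) - px * px        # auto-covariances
    vy = (y * y).mean(axis=0) - py * py
    return cov / np.sqrt(np.outer(vx, vy) + 1e-12)

def cch(J):
    """Average the JPSTH along diagonals: correlation vs. lag time."""
    n = J.shape[0]
    return np.array([np.trace(J, offset=k) / (n - abs(k))
                     for k in range(-n + 1, n)])
```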
We derived three measurements from this matrix to describe the "effective connectivity" between neuron pairs. Using bootstrapped samples of stimulus trials, we estimated 95% confidence intervals for these three measurements [4]. First, we integrated along the principal diagonal to produce correlation versus lag time (i.e., the traditional cross-correlation histogram, CCH). We used CCHs to find significant correlation at or near 0 ms lag times (suggesting synaptic connectivity between the neurons). Second, we integrated under the half-height full bandwidth of significant correlation peaks to quantify effective connectivity. Figure 4 shows the population average of normalized CCHs (n = 27) and 95% confidence intervals. Finally, we repeated this integration along the principal diagonal to obtain the temporal evolution of effective connectivity (computed with a running 100 ms window).

Figure 3: Example normalized cross-covariance histogram.

In computing effective connectivity with Equation 1, we assume trial-to-trial stationarity. If this is not true (e.g., due to differences in attentional effort across trials), correlation peaks can emerge that are not due to effective connectivity [5]. We applied a correction to Equation 1 [5,6] based on the average firing rate for each trial. However, no significant difference in correlation peaks was observed. In addition, changes in DRDS properties other than disparity did not cause significant changes to correlation peak properties. Finally, alternative cross-correlation methods (CCG) [7], using responses to the same exact random dot pattern to predict the correlation expected from chance, again led to no significant difference in correlation peak properties. These observations justify our assumption that the effective connectivity computed in our case does not arise from trial-to-trial non-stationarity.

Figure 4: (a) Population average CCH for 27 neuron pairs with a significant correlation peak (peak correlation approximately 0.17 ± 0.02). (b) Same as (a), but zoomed into ±50 ms lag times with statistics of peak properties (mean ± s.e.m.): half-height half bandwidth and peak lag time.

3 Interaction depends on tuning properties

The primary indicator of whether or not a neuron pair had a significant correlation peak at or near a 0 ms lag time, for this class of stimuli, was similarity in disparity tuning between the two neurons. Neuron pairs with significant correlation peaks (n = 27; 27%) tended to have more similar disparity peaks, bandwidths, and frequencies (determined from fitted Gabor functions) than neuron pairs that did not have significant correlation peaks. We quantified similarity in tuning using the similarity index (SI), which is Pearson's product-moment correlation [8]:

$SI = \dfrac{\sum_{i=1}^{n} (x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\left(\sum_{i=1}^{n} (x_i - \bar{x})^2\right)\left(\sum_{i=1}^{n} (y_i - \bar{y})^2\right)}}$   (2)

where $i$ indexes each point on the disparity tuning curve, $x_i$ and $y_i$ are the firing rates at each point for each neuron, and $\bar{x}$ and $\bar{y}$ are the mean firing rates across the tuning curve. Figures 5a and 5b clearly show that both the probability of correlation and the strength of correlation increase with greater SI (n = 27 pairs). This relationship is limited to long-range interactions among neurons because our electrodes were all at least 1 mm apart. This suggests the interactions are likely mediated by the well-known long-range intracortical connections in V1 that link neurons of similar orientation across space [9]. Our results suggest that these connections might also be shared to link similar disparity neurons together. Because connectivity also depended on orientation (Figure 5c), V1 connectivity among neurons appears to depend on similarity across multiple cue dimensions.
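For completeness, Eq. 2 is just the ordinary Pearson correlation between the two disparity tuning curves; a one-line sketch (the example tuning curves are illustrative only):

```python
import numpy as np

def similarity_index(tuning_x, tuning_y):
    """Eq. 2: Pearson correlation between two disparity tuning
    curves measured at the same set of tested disparities."""
    return np.corrcoef(tuning_x, tuning_y)[0, 1]

# e.g., two neurons tested at the same 11 disparities:
d = np.linspace(-0.9, 0.9, 11)
si = similarity_index(np.exp(-(d - 0.2) ** 2), np.exp(-(d - 0.3) ** 2))
```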
Figure 5: (a) Likelihood of significant correlation peak with respect to similar disparity tuning (percentage of pairs, n = 27). (b) Strength of correlation increases with similarity (r = 0.49, p = 0.01). (c) Correlation is also more likely if orientation preference is similar (r = -0.40, p = 0.04, versus difference in orientation preference).

From the 12 pairs of neurons recorded on a single electrode, correlation was observed among neuron pairs with very similar disparity tuning as well as among neurons with nearly opposite disparity tuning (see also [8]). This suggests that antagonistic disparity-tuned neurons tend to spatially coexist, and their interactions are likely competitive.

4 Interaction is stimulus-dependent

The interaction between pairs of neurons was not simply a function of the similarity between their receptive field properties but was also a function of the input stimuli (or stimulus disparity in our case). The effective connectivity was significantly modulated (1-way ANOVA, p<0.05) by the stimulus disparity for 25 out of the 27 pairs. We are not suggesting that synaptic connections physically change, but rather that the effectiveness of those connections can change depending on the spiking activity and therefore the stimulus input. For neuron pairs with similar disparity tuning, the strongest correlation was observed at their shared preferred disparity, i.e., the peak of the disparity tuning curves based on firing rate (as shown in Figure 6). This suggests facilitation is strongest when a fronto-parallel plane activated these neurons simultaneously at their preferred depth. As the stimulus plane moved away from this depth, the effective connectivity between the neurons became weaker. This was observed in 10 pairs (e.g., Figure 6c). For the other 17 pairs (e.g., Figure 6d), the correlation or effective connectivity was again strongest at the neuron pair's shared preferred disparity. However, these pairs in addition exhibited secondary correlation peaks for disparity stimuli that produced the lowest firing rates (even below the baseline for DRDSs).
Figure 6: Top row: disparity tuning curves based on firing rates (mean ± s.e.m.). Bottom row: disparity tuning curves based on correlation for the corresponding pairs of neurons in the top row. Error bars are 95% confidence intervals and dashed lines represent 95% confidence of the mean correlation.

Cross-correlation peaks are interpreted as a result of effective circuits that may represent any combination of a variety of synaptic connections, which may have a bias in direction (one neuron drives the other) or may not have a bias in direction (zero lag time; both neurons receive a common drive) [10]. As correlation peaks become broader, as in our case (mean = 42 ms), this interpretation becomes more ambiguous (more possible circuits). The broader positive correlation peaks can even be caused by common inhibitory circuitry. One way to potentially disambiguate our interpretations is to consider firing rate behavior. The positive correlation measured at the preferred disparity suggests that the interaction was likely facilitatory in nature, based on the increased firing of the neurons. The positive correlation measured at the disparity where both neurons' firing rates were depressed, i.e., at the valley of the firing-rate-based disparity tuning curves, suggests that the correlation likely arose from common inhibition (presumably from neurons that preferred that disparity).

5 Temporal dynamics of interaction

We can compare the temporal dynamics of the correlation with the temporal dynamics of the firing rates of the neurons to gain more insight into the possible underlying circuitry. We computed the correlation every 1 ms over a 100 ms running window, and found that the correlation peak at the preferred disparity (based on firing rates) occurred at a later time (250-350 ms post-stimulus onset) than the correlation peaks at the non-preferred disparity (100-200 ms). Figure 7 illustrates the temporal dynamics of correlation for the example neuron pair shown in Figure 6b and 6d. The distinct intervals in which correlation emerged at the preferred and the non-preferred disparities were consistently observed for all 27 pairs of neurons. Even for the example shown in Figure 6c, there were peaks in correlation in the early part of the response at the most non-preferred disparities. The timing of these two phases of correlation was also rather consistent over the population of pairs.

Figure 7: Temporal dynamics of correlation for the example neuron pair shown in Figure 6, right. From left to right: correlation versus time for preferred (red) and non-preferred (blue) disparities; contour map of correlation versus time and disparity; disparity tuning based on correlation for the early (blue) and late (red) portions of the response (95% confidence intervals). Correlation was calculated every 1 ms over 100 ms windows.

By examining the interplay between firing rate and correlation, we were able to gain even greater insight about the interactions among neuron pairs. To summarize this interplay across our population, we compared the temporal evolution of the correlation at three distinct disparities with the temporal evolution of the firing rates at the same disparities (also smoothed with 100 ms time windows). The first disparity, the preferred disparity A, is where we measured the strongest correlation and was at or near the highest firing rate measured in individual neurons (see Figure 8, left). The second important disparity, the most non-preferred disparity C, was where we measured secondary correlation peaks and coincided with the lowest firing rates observed in individual neurons. Lastly, we looked at a disparity B that was in between disparities A and C. Figure 8 shows that neurons responded better to their preferred disparity over other disparities very early, resulting in immediate moderate firing rate-based disparity tuning. Then shortly after (100 ms), a correlation peak emerges at the least preferred disparity C. This coincides with suppression of firing rate for all disparities (Figure 8, blue dashed line). However, the suppression in firing rate is much stronger for C, where the firing rate diverges downward from the firing rates for A and B, sharpening the disparity tuning (Figure 8, blue arrow; see also [11]). The strong correlation coupled with the decrease in firing suggests strong common inhibition.
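One way to sketch the running-window measure is to average the normalized JPSTH (from the normalized_jpsth sketch above) near the principal diagonal within a sliding 100 ms window. This approximates the paper's measure in spirit rather than reproducing it exactly; max_lag stands in for the measured half-height bandwidth of the correlation peak.

```python
import numpy as np

def connectivity_time_course(J, win=100, max_lag=10):
    """Running effective-connectivity estimate: mean of the
    normalized JPSTH J over bins near the diagonal (|lag| <= max_lag)
    within a sliding window of `win` 1 ms bins."""
    n = J.shape[0]
    out = np.zeros(n - win + 1)
    for t in range(len(out)):
        vals = [J[i, j] for i in range(t, t + win)
                for j in range(max(0, i - max_lag), min(n, i + max_lag + 1))]
        out[t] = np.mean(vals)
    return out
```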
Figure 8: Population average of normalized correlation versus time (top) for three disparities, A, B, and C, shown on the left (n = 27 pairs). Population average of normalized PSTHs for the same three disparities (bottom; n = 32 cells). Both correlation and firing rates were calculated every 1 ms over 100 ms windows.

Once the correlation peak at C subsided (200 ms), the correlation increased for A (red dashed line). When the correlation for A peaked, the correlation decreased for B and C, leading to very sharp correlation-based disparity tuning (see also Figure 7). This correlation-based tuning can facilitate depth estimates by changing how effectively these signals are integrated downstream as a function of disparity [12]. Our interpretation is that the initial firing rate bias leads to antagonistic disparity-tuned neurons generating common inhibition that suppresses firing at non-preferred disparities, removing potential mismatches. The immediacy suggests that mutual inhibition was local, which is consistent with our observation that many opposing disparity-tuned neurons spatially coexisted. The slower correlation peak at the preferred disparity A is indicative of mutual facilitation that occurred when the depth estimates of spatially distinct neurons matched. This facilitation leads to a more precise estimate of depth.

6 Discussion and conclusions

The findings from this study provide support for Julesz's proposal that cooperative and competitive mechanisms in primary visual cortex are utilized for estimating global depth in random dot stereograms [1], which was later described formally by Marr and Poggio [2]. More recent cooperative stereo computation models allow excitatory interaction between neurons of different disparities separated by long distance. This is used to accommodate the computation of slanted surfaces [13,14]. In this experiment, we only tested fronto-parallel planes; thus, we cannot answer whether or not effective connections and facilitation exist between neurons with larger disparity differences over long distances. This will require further experiments using planes with disparity gradients. The observation that initial correlation peaks occurred at disparities that evoked the lowest firing rates in neurons suggests that correlation peaks emerged from common inhibition for non-preferred disparities. The observation that later correlation occurred at disparities that evoked the highest firing rates suggests that neurons were mutually exciting each other at their preferred disparity. Our neurophysiological data reveal interesting dynamics between network-based (effective connectivity) and firing rate-based encoding of depth estimates. The observation that inhibition precedes facilitation suggests that competition is local (recalling that neurons at the same electrode tend to have opposite disparity tuning) and cooperation is more global (mediated through long-range connectivity). Local competition between neurons encoding different depths is consistent with the uniqueness principle of Marr and Poggio's algorithm [2]. In addition, cooperation among neurons encoding the same depth across space was predicted by the second rule of their algorithm: matter is cohesive. These two interactions are robust at removing potential ambiguity during stereo matching and depth inference.
Previous neurophysiological data had suggested that intracortical connectivity in primary visual cortex underlies competitive [15] and cooperative [16] mechanisms for improving estimates of orientation. Our data suggest similar circuitry might also play a role in stereo matching [17]. However, this study is distinct in that it provides detailed empirical support for computational algorithms for solving stereo matching. It thus highlights the importance of computational algorithms in generating hypotheses to guide future neurophysiological studies.

Acknowledgments

We thank George Gerstein and Jeff Keating for JPSTH software. Supported by NIMH IBSC MH64445 and NSF CISE IIS-0413211 grants.

References

[1] Julesz, B. (1971) Foundations of Cyclopean Perception. Chicago: University of Chicago Press.
[2] Marr, D. & Poggio, T. (1976) Cooperative computation of stereo disparity. Science 194(4262):283-287.
[3] Aertsen, A.M., Gerstein, G.L., Habib, M.K. & Palm, G. (1989) Dynamics of neuronal firing correlation: modulation of "effective connectivity". Journal of Neurophysiology 61(5):900-917.
[4] Efron, B. & Tibshirani, R. (1993) An Introduction to the Bootstrap. New York: Chapman & Hall.
[5] Brody, C.D. (1999) Correlations without synchrony. Neural Computation 11(7):1537-1551.
[6] Gerstein, G.L. & Kirkland, K.L. (2001) Neural assemblies: technical issues, analysis, and modeling. Neural Networks 14(6-7):589-598.
[7] Kohn, A. & Smith, M.A. (2005) Stimulus dependence of neuronal correlation in primary visual cortex of the macaque. Journal of Neuroscience 25(14):3661-3673.
[8] Menz, M. & Freeman, R.D. (2004) Functional connectivity of disparity-tuned neurons in the visual cortex. Journal of Neurophysiology 91(4):1794-1807.
[9] Gilbert, C.D. & Wiesel, T.N. (1989) Columnar specificity of intrinsic horizontal and corticocortical connections in cat visual cortex. Journal of Neuroscience 9(7):2432-2442.
[10] Moore, G.P., Segundo, J.P., Perkel, D.H. & Levitan, H. (1970) Statistical signs of synaptic interaction in neurons. Biophysics Journal 10(9):876-900.
[11] Menz, M. & Freeman, R.D. (2003) Stereoscopic depth processing in the visual cortex: a coarse-to-fine mechanism. Nature Neuroscience 6(1):59-65.
[12] Bruno, R.M. & Sakmann, B. (2006) Cortex is driven by weak but synchronously active thalamocortical synapses. Science 312(5780):1622-1627.
[13] Prazdny, K. (1985) Detection of binocular disparities. Biological Cybernetics 52(2):93-99.
[14] Pollard, S.B., Mayhew, J.E. & Frisby, J.P. (1985) PMF: a stereo correspondence algorithm using a disparity gradient limit. Perception 14(4):449-470.
[15] Ringach, D.L., Hawken, M.J. & Shapley, R. (1997) Dynamics of orientation tuning in macaque primary visual cortex. Nature 387(6630):281-284.
[16] Samonds, J.M., Allison, J.D., Brown, H.A. & Bonds, A.B. (2004) Cooperative synchronized assemblies enhance orientation discrimination. Proceedings of the National Academy of Sciences USA 101(17):6722-6727.
[17] Ben-Shahar, O., Huggins, P.S., Izo, T. & Zucker, S.W. (2003) Cortical connections and early visual function: intra- and inter-columnar processing. Journal of Physiology (Paris) 97(2-3):191-208.
Handling Advertisements of Unknown Quality in Search Advertising

Sandeep Pandey, Carnegie Mellon University, spandey@cs.cmu.edu
Christopher Olston, Yahoo! Research, olston@yahoo-inc.com

Abstract
We consider how a search engine should select advertisements to display with search results, in order to maximize its revenue. Under the standard "pay-per-click" arrangement, revenue depends on how well the displayed advertisements appeal to users. The main difficulty stems from new advertisements whose degree of appeal has yet to be determined. Often the only reliable way of determining appeal is exploration via display to users, which detracts from exploitation of other advertisements known to have high appeal. Budget constraints and finite advertisement lifetimes make it necessary to explore as well as exploit. In this paper we study the tradeoff between exploration and exploitation, modeling advertisement placement as a multi-armed bandit problem. We extend traditional bandit formulations to account for budget constraints that occur in search engine advertising markets, and derive theoretical bounds on the performance of a family of algorithms. We measure empirical performance via extensive experiments over real-world data.

1 Introduction
Search engines are invaluable tools for society. Their operation is supported in large part through advertising revenue. Under the standard "pay-per-click" arrangement, search engines earn revenue by displaying appealing advertisements that attract user clicks. Users benefit as well from this arrangement, especially when searching for commercial goods or services. Successful advertisement placement relies on knowing the appeal or "clickability" of advertisements. The main difficulty is that the appeal of new advertisements that have not yet been "vetted" by users can be difficult to estimate. In this paper we study the problem of placing advertisements to maximize a search engine's revenue, in the presence of uncertainty about appeal.

1.1 The Advertisement Problem
Consider the following advertisement problem [8], illustrated in Figure 1. There are m advertisers A_1, A_2, ..., A_m who wish to advertise on a search engine. The search engine runs a large auction where each advertiser submits its bids to the search engine for the query phrases in which it is interested. Advertiser A_i submits advertisement a_{i,j} to target query phrase Q_j, and promises to pay b_{i,j} amount of money for each click on this advertisement, where b_{i,j} is A_i's bid for advertisement a_{i,j}. Advertiser A_i can also specify a daily budget (d_i), which is the total amount of money it is willing to pay for the clicks on its advertisements in a day. Given a user search query on phrase Q_j, the search engine selects a constant number C ≥ 1 of advertisements from the candidate set of advertisements {a_{*,j}} targeted to Q_j. The objective in selecting advertisements is to maximize the search engine's total daily revenue. The arrival sequence of user queries is not known in advance. For now we assume that each day a new set of advertisements is given to the search engine and the set remains fixed throughout the day; we drop both of these assumptions later in Section 4.

[Figure 1: Advertiser and query model. Advertisers A_1..A_5 with daily budgets d_1..d_5 submit ads a_{i,j} targeting query phrases Q_1..Q_5.]
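To fix notation before moving on, a minimal sketch of the problem instance just described may help; the class and field names below are our own illustration, not anything defined in the paper (true_ctr is the click-through rate introduced formally in the next section).

import math
from dataclasses import dataclass

@dataclass
class Ad:
    advertiser: int      # index i of advertiser A_i
    phrase: int          # index j of query phrase Q_j
    bid: float           # b_{i,j}, paid per click
    true_ctr: float      # c_{i,j}, generally unknown to the engine

@dataclass
class Advertiser:
    budget: float = math.inf   # daily budget d_i
    spent: float = 0.0         # money spent so far today

    def remaining(self) -> float:
        return self.budget - self.spent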
[Figure 2: Problem variants. Columns: CTR = 1 for all ads / general CTR, CTR known / general CTR, CTR unknown (this paper). Rows: no budget constraints (Cells I, III, V) / budget constraints (Cells II, IV, VI). Known policies: GREEDY with ratio 1 in Cells I and III; MSVV with ratio 1 − 1/e and GREEDY with ratio 1/2 in Cells II and IV; Cells V and VI are addressed in this paper.]

High revenue is achieved by displaying advertisements that have high bids as well as high likelihood of being clicked on by users. Formally, the click-through rate (CTR) c_{i,j} of advertisement a_{i,j} is the probability that a user clicks on advertisement a_{i,j} given that the advertisement was displayed to the user for query phrase Q_j. In the absence of budget constraints, revenue is maximized by displaying advertisements with the highest c_{i,j} · b_{i,j} value. The work of [8] showed how to maximize revenue in the presence of budget constraints, but under the assumption that all CTRs are known in advance. In this paper we tackle the more difficult but realistic problem of maximizing advertisement revenue when CTRs are not necessarily known at the outset, and must be learned on the fly.

We show the space of problem variants (along with the best known advertisement policies) in Figure 2. GREEDY refers to selection of advertisements according to expected revenue (i.e., c_{i,j} · b_{i,j}). In Cells I and III GREEDY performs as well as the optimal policy, where the optimal policy also knows the arrival sequence of queries in advance. We write "ratio = 1" in Figure 2 to indicate that GREEDY has a competitive ratio of 1. For Cells II and IV the greedy policy is not optimal, but is nevertheless 1/2-competitive. An alternative policy for Cell II was given in [8], which we refer to as MSVV; it achieves a competitive ratio of 1 − 1/e. In this paper we give the first policies for Cells V and VI, where we must choose which advertisements to display while simultaneously estimating the click-through rates of advertisements.

1.2 Exploration/Exploitation Tradeoff
The main issue we face while addressing Cells V and VI is to balance the exploration/exploitation tradeoff. To maximize short-term revenue, the search engine should exploit its current, imperfect CTR estimates by displaying advertisements whose estimated CTRs are large. On the other hand, to maximize long-term revenue, the search engine needs to explore, i.e., identify which advertisements have the largest CTRs. This kind of exploration entails displaying advertisements whose current CTR estimates are of low confidence, which inevitably leads to displaying some low-CTR ads in the short term. This kind of tradeoff between exploration and exploitation shows up often in practice, e.g., in clinical trials, and has been extensively studied in the context of the multi-armed bandit problem [4]. In this paper we draw upon and extend the existing bandit literature to solve the advertisement problem in the case of unknown CTR. In particular, first in Section 3 we show that the unbudgeted variant of the problem (Cell V in Figure 2) is an instance of the multi-armed bandit problem. Then, in Section 4 we introduce a new kind of bandit problem that we term the budgeted multi-armed multi-bandit problem (BMMP), and show that the budgeted unknown-CTR advertisement problem (Cell VI) is an instance of BMMP. We propose policies for BMMP and give performance bounds. We evaluate our policies empirically over real-world data in Section 5. In the extended technical version of the paper [9] we show how to extend our policies to address various practical considerations, e.g., exploiting any prior information available about the CTRs of ads, and permitting advertisers to submit and revoke advertisements at any time, not just at day boundaries.
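For the known-CTR cells, the selection rule just described is nearly a one-liner; the following sketch (ours, reusing the hypothetical Ad/Advertiser classes above) ranks candidates by c_{i,j} · b_{i,j} and keeps the top C whose advertisers still have budget remaining.

def select_ads_known_ctr(candidates, advertisers, C):
    """Expected-revenue ordering when CTRs are known (Cells I-IV baseline)."""
    eligible = [ad for ad in candidates
                if advertisers[ad.advertiser].remaining() > 0]
    eligible.sort(key=lambda ad: ad.true_ctr * ad.bid, reverse=True)
    return eligible[:C]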
2 Related Work
We have already discussed the work of [8], which addresses the advertisement problem under the assumption that CTRs are known. There has not been much published work on estimating CTRs. Reference [7] discusses how contextual information such as user demographics or ad topic can be used to estimate CTRs, and makes connections to the recommender and bandit problems, but stops short of presenting technical solutions. Some methods for estimating CTRs are proposed in [5] with the focus of thwarting click fraud. Reference [1] studies how to maximize user clicks on banner ads. The key problem addressed in [1] is to satisfy the contracts made with the advertisers in terms of the minimum guaranteed number of impressions (as opposed to the budget constraints in our problem). Reference [10] looks at the advertisement problem from an advertiser's point of view, and gives an algorithm for identifying the most profitable set of keywords for the advertiser.

3 Unbudgeted Unknown-CTR Advertisement Problem
In this section we address Cell V of Figure 2, where click-through rates are initially unknown and budget constraints are absent (i.e., d_i = ∞ for all advertisers A_i). Our unbudgeted problem is an instance of the multi-armed bandit problem [4], which is the following: we have K arms where each arm has an associated reward and payoff probability. The payoff probability is not known to us, while the reward may or may not be known (both versions of the bandit problem exist). With each invocation we activate exactly C ≤ K arms. (The conventional multi-armed bandit problem is defined for C = 1; we generalize it to any C ≥ 1 in this paper.) Each activated arm then yields the associated reward with its payoff probability and nothing with the remaining probability. The objective is to determine a policy for activating the arms so as to maximize the total reward over some number of invocations.

To solve the unbudgeted unknown-CTR advertisement problem, we create a multi-armed bandit problem instance for each query phrase Q, where the ads targeted for the query phrase are the arms, bid values are the rewards, and CTRs are the payoff probabilities of the bandit instance. Since there are no budget constraints, we can treat each query phrase independently and solve each bandit instance in isolation. (We assume CTRs to be independent of one another.) The number of invocations for a bandit instance is not known in advance, because the number of queries of phrase Q in a given day is not known in advance.

A variety of policies have been proposed for the bandit problem, e.g., [2, 3, 6], any of which can be applied to our unbudgeted advertisement problem. The policies proposed in [3] are particularly attractive because they have a known performance bound for any number of invocations not known in advance (in our context the number of queries is not known a priori). In the case of C = 1, the policies of [3] make O(ln n) mistakes, in expectation, over n invocations (which is also the asymptotic lower bound on the number of mistakes [6]). A mistake occurs when a suboptimal arm is chosen by a policy (the optimal arm is the one with the highest expected reward). We consider a specific policy from [3] called UCB and apply it to our problem (other policies from [3] can also be used). UCB was proposed under a slightly different reward model; we adapt it to our context to produce the following policy that we call MIX (for mixing exploration with exploitation). We prove a performance bound of O(ln n) mistakes for MIX for any C ≥ 1 in [9].

Policy MIX: Each time a query for phrase Q_j arrives:
1. Display the C ads targeted for Q_j that have the highest priority.
The priority P_{i,j} of ad a_{i,j} is a function of its current CTR estimate (ĉ_{i,j}), its bid value (b_{i,j}), the number of times it has been displayed so far (n_{i,j}), and the number of times phrase Q_j has been queried so far in the day (n_j). Formally, priority P_{i,j} is defined as:

$$P_{i,j} = \begin{cases} \left( \hat{c}_{i,j} + \sqrt{\frac{2 \ln n_j}{n_{i,j}}} \right) \cdot b_{i,j} & \text{if } n_{i,j} > 0 \\ \infty & \text{otherwise} \end{cases}$$

2. Monitor the clicks made by users and update the CTR estimates ĉ_{i,j} accordingly. ĉ_{i,j} is the average click-through rate observed so far, i.e., the number of times ad a_{i,j} has been clicked on divided by the total number of times it has been displayed.

Policy MIX manages the exploration/exploitation tradeoff in the following way. The priority function has two factors: an exploration factor (√(2 ln n_j / n_{i,j})) that diminishes with time, and an exploitation factor (ĉ_{i,j}). Since ĉ_{i,j} can be estimated only when n_{i,j} ≥ 1, the priority value is set to ∞ for an ad which has never been displayed before. Importantly, the MIX policy is practical to implement because it can be evaluated efficiently using a single pass over the ads targeted for a query phrase. Furthermore, it incurs minimal storage overhead because it keeps only three numbers (ĉ_{i,j}, n_{i,j}, and b_{i,j}) with each ad and one number (n_j) with each query phrase.
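A direct transcription of the MIX priority into code may be useful; this sketch is ours (the paper gives only the formula), and it assumes n_j ≥ 1 whenever an ad with n_{i,j} > 0 is scored.

import math

def mix_priority(ctr_est, bid, n_ij, n_j):
    """Priority of one ad under MIX: (c_hat + sqrt(2 ln n_j / n_ij)) * bid."""
    if n_ij == 0:
        return math.inf                      # force at least one display
    return (ctr_est + math.sqrt(2.0 * math.log(n_j) / n_ij)) * bid

def mix_select(ads, stats, n_j, C):
    """stats maps ad -> (ctr_est, n_ij); returns the C highest-priority ads."""
    ranked = sorted(ads,
                    key=lambda ad: mix_priority(stats[ad][0], ad.bid,
                                                stats[ad][1], n_j),
                    reverse=True)
    return ranked[:C]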
4 Budgeted Unknown-CTR Advertisement Problem
We now turn to the more challenging case in which advertisers can specify daily budgets (Cell VI of Figure 2). Recall from Section 3 that in the absence of budget constraints, we were able to treat the bandit instance created for a query phrase independently of the other bandit instances. However, budget constraints create dependencies between the query phrases targeted by an advertiser. To model this situation, we introduce a new kind of bandit problem that we call the Budgeted Multi-armed Multi-bandit Problem (BMMP), in which multiple bandit instances are run in parallel under overarching budget constraints. We derive generic policies for BMMP and give performance bounds.

4.1 Budgeted Multi-armed Multi-bandit Problem
BMMP consists of a finite set of multi-armed bandit instances, B = {B_1, B_2, ..., B_|B|}. Each bandit instance B_i has a finite number of arms and associated rewards and payoff probabilities as described in Section 3. In BMMP each arm also has an associated type. Each type T_i ∈ T has a budget d_i ∈ [0, ∞] which specifies the maximum amount of reward that can be generated by activating all the arms of that type. Once the specified budget is reached for a type, the corresponding arms can still be activated but no further reward is earned. With each invocation of the bandit system, one bandit instance from B is invoked; the policy has no control over which bandit instance is invoked. Then the policy activates C arms of the invoked bandit instance, and the activated arms generate some (possibly zero) total reward.

It is easy to see that the budgeted unknown-CTR advertisement problem is an instance of BMMP. Each query phrase acts as a bandit instance and the ads targeted for it act as bandit arms, as described in Section 3. Each advertiser defines a unique type of arms and gives a budget constraint for that type; all ads submitted by an advertiser belong to the type defined by it. When a query is submitted by a user, the corresponding bandit instance is invoked.

We now show how to derive a policy for BMMP given as input a policy POL for the regular multi-armed bandit problem, such as one of the policies from [3]. The derived policy, denoted BPOL (Budget-aware POL), is as follows:

• Run |B| instances of POL in parallel, denoted POL_1, POL_2, ..., POL_|B|.
• Whenever bandit instance B_i is invoked:
  1. Discard any arm(s) of B_i whose type's budget is newly depleted, i.e., has become depleted since the last invocation of B_i.
  2. If one or more arms of B_i was discarded during step 1, restart POL_i.
  3. Let POL_i decide which of the remaining arms of B_i to activate.

Observe that in the second step of BPOL, when POL is restarted, POL loses any state it has built up, including any knowledge gained about the payoff probabilities of bandit arms. Surprisingly, despite this seemingly imprudent behavior, we can still derive a good performance bound for BPOL, provided that POL has certain properties, as we discuss in the next section. In practice, since most bandit policies can take prior information about the payoff probabilities as input, when restarting POL we can supply the previous payoff probability estimates as the prior (as done in our experiments).
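The BPOL wrapper is mechanical enough to sketch directly. The sketch below is ours; POL is abstracted as any object exposing a select method, which is an assumed interface rather than anything the paper specifies, and budgets is assumed to be decremented elsewhere as clicks generate reward.

class BPOL:
    """Budget-aware wrapper around per-phrase bandit policies (sketch)."""
    def __init__(self, make_pol, budgets):
        self.make_pol = make_pol        # factory: phrase -> fresh POL instance
        self.pols = {}                  # phrase -> POL instance
        self.budgets = budgets          # type (advertiser) -> remaining budget
        self.depleted_seen = {}         # phrase -> types already seen depleted

    def on_query(self, phrase, arms, C):
        pol = self.pols.setdefault(phrase, self.make_pol(phrase))
        seen = self.depleted_seen.setdefault(phrase, set())
        newly = {a.advertiser for a in arms
                 if self.budgets[a.advertiser] <= 0
                 and a.advertiser not in seen}
        if newly:                       # steps 1-2: discard arms, restart POL
            seen |= newly               # (in practice, seed the fresh POL with
            pol = self.pols[phrase] = self.make_pol(phrase)  # old estimates)
        live = [a for a in arms if a.advertiser not in seen]
        return pol.select(live, C)      # step 3: POL picks among live arms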
4.2 Performance Bound for BMMP Policies
Let S denote the sequence of bandit instances that are invoked, i.e., S = {S(1), S(2), ..., S(N)}, where S(n) denotes the index of the bandit instance invoked at the n-th invocation. We compare the performance of BPOL with that of the optimal policy, denoted OPT, where OPT has advance knowledge of S and the exact payoff probabilities of all bandit instances. We claim that bpol(N) ≥ opt(N)/2 − O(f(N)) for any N, where bpol(N) and opt(N) denote the total expected reward obtained after N invocations by BPOL and OPT, respectively, and f(n) denotes the expected number of mistakes made by POL after n invocations of the regular multi-armed bandit problem (for UCB, f(n) is O(ln n) [3]).

Our complete proof is rather involved. Here we give a high-level outline (the complete proof is given in [9]). For simplicity we focus on the C = 1 case; C ≥ 1 is a simple extension thereof. Since bandit arms generate rewards stochastically, it is not clear how we should compare BPOL and OPT. For example, even if BPOL and OPT behave in exactly the same way (activate the same arm on each bandit invocation), we cannot guarantee that both will have the same total reward in the end. To enable meaningful comparison, we define a payoff instance, denoted I, such that I(i, n) denotes the reward generated by arm i of bandit instance S(n) for invocation n in payoff instance I. The outcome of running BPOL or OPT on a given payoff instance is deterministic because the rewards are fixed in the payoff instance. Hence, we can compare BPOL and OPT on a per-payoff-instance basis. Since each payoff instance arises with a certain probability, denoted P(I), by taking the expectation over all possible payoff instances of execution we can compare the expected performance of BPOL and OPT.

Let us consider invocation n in payoff instance I. Let B(I, n) and O(I, n) denote the arms of bandit instance S(n) activated under BPOL and OPT, respectively. Based on the different possibilities that can arise, we classify invocation n into one of three categories:

• Category 1: The arm activated by OPT, O(I, n), is of smaller or equal expected reward in comparison to the arm activated by BPOL, B(I, n). The expected reward of an arm is the product of its payoff probability and reward.
• Category 2: Arm O(I, n) is of greater expected reward than B(I, n), but O(I, n) is not available for BPOL to activate at invocation n due to budget restrictions.
• Category 3: Arm O(I, n) is of greater expected reward than B(I, n), and both arms O(I, n) and B(I, n) are available for BPOL to activate, but BPOL prefers to activate B(I, n) over O(I, n).

Let us denote the invocations of category k (1, 2, or 3) by N_k(I) for payoff instance I. Let bpol_k(N) and opt_k(N) denote the expected reward obtained during the invocations of category k by BPOL and OPT, respectively. In [9] we show that

$$\mathrm{bpol}_k(N) = \sum_{I \in \mathcal{I}} P(I) \cdot \left( \sum_{n \in N_k(I)} I(B(I,n), n) \right)$$

Similarly,

$$\mathrm{opt}_k(N) = \sum_{I \in \mathcal{I}} P(I) \cdot \left( \sum_{n \in N_k(I)} I(O(I,n), n) \right)$$

Then for each k we bound opt_k(N) in terms of bpol(N). In [9] we provide a proof of each of the following bounds:

Lemma 1. opt_1(N) ≤ bpol_1(N).

Lemma 2. opt_2(N) ≤ bpol(N) + (|T| · r_max), where |T| denotes the number of arm types and r_max denotes the maximum reward.

Lemma 3. opt_3(N) = O(f(N)).

From the above bounds we obtain our overall claim:

Theorem 1. bpol(N) ≥ opt(N)/2 − O(f(N)), where bpol(N) and opt(N) denote the total expected reward obtained under BPOL and OPT, respectively.

Proof:

$$\mathrm{opt}(N) = \mathrm{opt}_1(N) + \mathrm{opt}_2(N) + \mathrm{opt}_3(N) \le \mathrm{bpol}_1(N) + \mathrm{bpol}(N) + |T| \cdot r_{\max} + O(f(N)) \le 2 \cdot \mathrm{bpol}(N) + O(f(N)).$$

Hence, bpol(N) ≥ opt(N)/2 − O(f(N)).

If we supply MIX (Section 3) as input to our generic BPOL framework, we obtain BMIX, a policy for the budgeted unknown-CTR advertisement problem. Due to the way MIX structures and maintains its internal state, it is not necessary to restart a MIX instance when an advertiser's budget is depleted in BMIX, as specified in the generic BPOL framework (the exact steps of BMIX are given in [9]).

So far, for modeling purposes, we have assumed the search engine receives an entirely new batch of advertisements each day. In reality, ads may persist over multiple days. With BMIX, we can carry forward an ad's CTR estimate (ĉ_{i,j}) and display count (n_{i,j}) from day to day until the ad is revoked, to avoid having to re-learn CTRs from scratch each day. Of course the daily budgets reset daily, regardless of how long each ad persists. In fact, with a little care we can permit ads to be submitted and revoked at arbitrary times (not just at day boundaries). We describe this extension, as well as how we can incorporate and leverage prior beliefs about CTRs, in [9].

5 Experiments
From our general result of Section 4, we have a theoretical performance guarantee for BMIX. In this section we study BMIX empirically. In particular, we compare it with the greedy policy proposed for the known-CTR advertisement problem (Cells I-IV in Figure 2). GREEDY displays the C ads targeted for a query phrase that have the highest ĉ_{i,j} · b_{i,j} values among the ads whose advertisers have enough remaining budget; to induce a minimal amount of exploration, for an ad which has never been displayed before, GREEDY treats ĉ_{i,j} as ∞ (our policies do this as well). GREEDY is geared exclusively toward exploitation. Hence, by comparing GREEDY with our policies, we can gauge the importance of exploration. We also propose and evaluate the following variants of BMIX that we expect to perform well in practice (code sketches of both heuristics appear after this list):

1. Varying the Exploration Factor. Internally, BMIX runs instances of MIX to select which ads to display. As mentioned in Section 4, the priority function of MIX consists of an exploration factor (√(2 ln n_j / n_{i,j})) and an exploitation factor (ĉ_{i,j}).
In [3] it was shown empirically that the following heuristic exploration factor performs well, despite the absence of a known performance guarantee:

$$\sqrt{\frac{\ln n_j}{n_{i,j}} \cdot \min\left\{\frac{1}{4},\; V_{i,j}(n_{i,j}, n_j)\right\}} \quad \text{where} \quad V_{i,j}(n_{i,j}, n_j) = \hat{c}_{i,j} \cdot (1 - \hat{c}_{i,j}) + \sqrt{\frac{2 \ln n_j}{n_{i,j}}}$$

Substituting this expression in place of √(2 ln n_j / n_{i,j}) in the priority function of BMIX gives us a new (heuristic) policy we call BMIX-E.

2. Budget Throttling. It is shown in [8] that in the presence of budget constraints, it is beneficial to display the ads of an advertiser less often as the advertiser's remaining budget decreases. In particular, they propose to multiply bids from advertiser A_i by the following discount factor:

$$\psi(d_i') = 1 - e^{-d_i'/d_i}$$

where d_i' is the current remaining budget of advertiser A_i for the day and d_i is its total daily budget. Following this idea we can replace b_{i,j} by ψ(d_i') · b_{i,j} in the priority function of BMIX, yielding a variant we call BMIX-T. Policy BMIX-ET refers to the use of heuristics 1 and 2 together.
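Both heuristics amount to one-line changes to the MIX priority. The following sketch is ours, building on the hypothetical mix_priority helper above; it shows the tuned exploration factor of BMIX-E and the budget discount of BMIX-T combined as in BMIX-ET.

import math

def tuned_exploration(ctr_est, n_ij, n_j):
    """BMIX-E: variance-sensitive bonus replacing sqrt(2 ln n_j / n_ij)."""
    v = ctr_est * (1.0 - ctr_est) + math.sqrt(2.0 * math.log(n_j) / n_ij)
    return math.sqrt((math.log(n_j) / n_ij) * min(0.25, v))

def bmix_et_priority(ctr_est, bid, n_ij, n_j, remaining, daily_budget):
    """BMIX-ET: tuned exploration plus budget throttling of the bid."""
    if n_ij == 0:
        return math.inf
    if math.isinf(daily_budget):
        discount = 1.0                   # unbudgeted advertiser: no throttling
    else:
        discount = 1.0 - math.exp(-remaining / daily_budget)   # psi(d_i')
    return (ctr_est + tuned_exploration(ctr_est, n_ij, n_j)) * discount * bid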
5.1 Experiment Setup
We evaluate advertisement policies by conducting simulations over real-world data. Our data set consists of a sample of 85,000 query phrases selected at random from the Yahoo! query log for the date of February 12, 2006. Since we have the frequency counts of these query phrases but not their actual order, we ran the simulations multiple times with random orderings of the query instances and report the average revenue in all our experimental results. The total number of query instances is 2 million. For each query phrase we have the list of advertisers interested in it and the ads submitted by them to Yahoo!. We also have the budget constraints of the advertisers. Roughly 60% of the advertisers in our data set impose daily budget constraints.

In our simulation, when an ad is displayed, we decide whether a click occurs by flipping a coin weighted by the true CTR of the ad. Since true CTRs are not known to us (this is the problem we are trying to solve!), we took the following approach to assign CTRs to ads: from a larger set of Yahoo! ads we selected those ads that have been displayed more than a thousand times, and for which we therefore have highly accurate CTR estimates. We regarded the distribution of these CTR estimates as the true CTR distribution. Then for each ad a_{i,j} in the data set we sampled a random value from this distribution and assigned it as the CTR c_{i,j} of the ad. (Although this method may introduce some skew compared with the (unknown) true distribution, it is the best we could do short of serving live ads just for the purpose of measuring CTRs.) We are now ready to present our results. Due to lack of space we consider a simple setting here where the set of ads is fixed and no prior information about CTR is available. We study the more general setting in [9].

5.2 Exploration/Exploitation Tradeoff
We ran each of the policies for a time horizon of ten days; each policy carries over its CTR estimates from one day to the next. Budget constraints are renewed each day. For now we fix the number of displayed ads (C) to 1. Figure 3 plots the revenue generated by each policy after a given number of days (for confidentiality reasons we have changed the unit of revenue). All policies (including GREEDY) estimate CTRs based on past observations, so as time passes their estimates become more reliable and their performance improves. Note that the exploration factor of BMIX-E causes it to perform substantially better than that of BMIX.

The budget throttling heuristic (BMIX-T and BMIX-ET) did not make much difference in our experiments. All of our proposed policies perform significantly better than GREEDY, which underscores the importance of balancing exploration and exploitation. GREEDY is geared exclusively toward exploitation, so one might expect that early on it would outperform the other policies. However, that does not happen, because GREEDY immediately fixates on ads that are not very profitable (i.e., low c_{i,j} · b_{i,j}).

Next we vary the number of ads displayed for each query (C). Figure 4 plots total revenue over ten days on the y-axis, and C on the x-axis. Each policy earns more revenue when more ads are displayed (larger C). Our policies outperform GREEDY consistently across different values of C. In fact, GREEDY must display almost twice as many ads as BMIX-E to generate the same amount of revenue.

[Figure 3: Revenue generated by different advertisement policies (C = 1); total revenue vs. time horizon in days (1-10) for GREEDY, BMIX, BMIX-T, BMIX-E, BMIX-ET.]
[Figure 4: Effect of C (number of ads displayed per query); total revenue over ten days vs. C (1-10) for the same policies.]

6 Summary and Future Work
In this paper we studied how a search engine should select which ads to display in order to maximize revenue, when click-through rates are not initially known. We dealt with the underlying exploration/exploitation tradeoff using multi-armed bandit theory. In the process we contributed to bandit theory by proposing a new variant of the bandit problem that we term the budgeted multi-armed multi-bandit problem (BMMP). We proposed a policy for solving BMMP and derived a performance guarantee. Practical extensions of our advertisement policies are given in the extended version of the paper. Extensive experiments over real ad data demonstrate substantial revenue gains compared to a greedy strategy that has no provision for exploration.

Several useful extensions of this problem can be conceived. One such extension would be to exploit similarity in ad attributes while inferring CTRs, as suggested in [7], instead of estimating the CTR of each ad independently. Also, an adversarial formulation of this problem merits study, perhaps leading to a general consideration of how to manage exploration versus exploitation in game-theoretic scenarios.

References
[1] N. Abe and A. Nakamura. Learning to Optimally Schedule Internet Banner Advertisements. In ICML, 1999.
[2] R. Agrawal. Sample Mean Based Index Policies with O(log n) Regret for the Multi-Armed Bandit Problem. Advances in Applied Probability, 27:1054-1078, 1995.
[3] P. Auer, N. Cesa-Bianchi, and P. Fischer. Finite-time Analysis of the Multi-Armed Bandit Problem. Machine Learning, 47:235-256, 2002.
[4] D. A. Berry and B. Fristedt. Bandit Problems: Sequential Allocation of Experiments. Chapman and Hall, London, 1985.
[5] N. Immorlica, K. Jain, M. Mahdian, and K. Talwar. Click Fraud Resistant Methods for Learning Click-Through Rates. In WINE, 2005.
[6] T. Lai and H. Robbins. Asymptotically Efficient Adaptive Allocation Rules. Advances in Applied Mathematics, 6:4-22, 1985.
[7] O. Madani and D. DeCoste. Contextual Recommender Problems. In Proceedings of the 1st International Workshop on Utility-Based Data Mining, 2005.
[8] A. Mehta, A. Saberi, U. Vazirani, and V. Vazirani. AdWords and Generalized On-line Matching. In FOCS, 2005.
[9] S. Pandey and C. Olston. Handling Advertisements of Unknown Quality in Search Advertising, October 2006.
Technical report, available via http://www.cs.cmu.edu/~spandey/publications/ctrEstimation.pdf.
[10] P. Rusmevichientong and D. Williamson. An Adaptive Algorithm for Selecting Profitable Keywords for Search-Based Advertising Services. In EC, 2006.
2,219
3,013
Near-Uniform Sampling of Combinatorial Spaces Using XOR Constraints

Carla P. Gomes, Ashish Sabharwal, Bart Selman
Department of Computer Science, Cornell University, Ithaca NY 14853-7501, USA
{gomes,sabhar,selman}@cs.cornell.edu
* This work was supported by the Intelligent Information Systems Institute (IISI) at Cornell University (AFOSR grant F49620-01-1-0076) and DARPA (REAL grant FA8750-04-2-0216).

Abstract
We propose a new technique for sampling the solutions of combinatorial problems in a near-uniform manner. We focus on problems specified as a Boolean formula, i.e., on SAT instances. Sampling for SAT problems has been shown to have interesting connections with probabilistic reasoning, making practical sampling algorithms for SAT highly desirable. The best current approaches are based on Markov Chain Monte Carlo methods, which have some practical limitations. Our approach exploits combinatorial properties of random parity (XOR) constraints to prune away solutions near-uniformly. The final sample is identified amongst the remaining ones using a state-of-the-art SAT solver. The resulting sampling distribution is provably arbitrarily close to uniform. Our experiments show that our technique achieves a significantly better sampling quality than the best alternative.

1 Introduction
We present a new method, XORSample, for uniformly sampling from the solutions of hard combinatorial problems. Although our method is quite general, we focus on problems expressed in the Boolean Satisfiability (SAT) framework. Our work is motivated by the fact that efficient sampling for SAT can open up a range of interesting applications in probabilistic reasoning [6, 7, 8, 9, 10, 11]. There has also been a growing interest in combining logical and probabilistic constraints, as in the work of Koller, Russell, Domingos, Bacchus, Halpern, Darwiche, and many others (see, e.g., statistical relational learning and Markov logic networks [1]), and a recently proposed Markov logic system for this task uses efficient SAT sampling as its core reasoning mechanism [2].

Typical approaches for sampling from combinatorial spaces are based on Markov Chain Monte Carlo (MCMC) methods, such as the Metropolis algorithm and simulated annealing [3, 4, 5]. These methods construct a Markov chain with a predefined stationary distribution. One can draw samples from the stationary distribution by running the Markov chain for a sufficiently long time. Unfortunately, on many combinatorial problems, the time taken by the Markov chain to reach its stationary distribution scales exponentially with the problem size.

MCMC methods can also be used to find (globally optimal) solutions to combinatorial problems. For example, simulated annealing (SA) uses the Boltzmann distribution as the stationary distribution. By lowering the temperature parameter to near zero, the distribution becomes highly concentrated around the minimum-energy states, which correspond to the solutions of the combinatorial problem under consideration. SA has been successfully applied to a number of combinatorial search problems. However, many combinatorial problems, especially those with intricate constraint structure, are beyond the reach of SA and related MCMC methods. Not only does problem structure make reaching the stationary distribution prohibitively long, even reaching a single (optimal) solution is often infeasible. Alternative combinatorial search techniques have been developed that are much more effective at finding solutions. These methods generally exploit clever search space pruning
techniques, which quickly focus the search on small, but promising, parts of the overall combinatorial space. As a consequence, these techniques tend to be highly biased, and sample the set of solutions in an extremely non-uniform way. (Many are in fact deterministic and will only return one particular solution.) In this paper, we introduce a general probabilistic technique for obtaining near-uniform samples from the set of all (globally optimal) solutions of combinatorial problems. Our method can use any state-of-the-art specialized combinatorial solver as a subroutine, without requiring any modifications to the solver. The solver can even be deterministic. Most importantly, the quality of our sampling method is not affected by the possible bias of the underlying specialized solver; all we need is a solver that is good at finding some solution or proving that none exists. We provide theoretical guarantees for the sampling quality of our approach. We also demonstrate the practical feasibility of our approach by sampling near-uniformly from instances of hard combinatorial problems.

As mentioned earlier, to make our discussion more concrete, we will discuss our method in the context of SAT. In the SAT problem, we have a set of logical constraints on a set of Boolean (True/False) variables. The challenge is to find a setting of the variables such that all logical constraints are satisfied. SAT is the prototypical NP-complete problem, and quite likely the most widely studied combinatorial problem in computer science. There have been dramatic advances in recent years in the state of the art of SAT solvers [e.g. 12, 13, 14]. Current solvers are able to solve problems with millions of variables and constraints. Many practical combinatorial problems can be effectively translated into SAT. As a consequence, one of the currently most successful approaches to solving hard computational problems, arising in, e.g., hardware and software verification and planning and scheduling, is to first translate the problem into SAT, and then use a state-of-the-art SAT solver to find a solution (or show that it does not exist). As stated above, these specialized solvers derive much of their power from quickly focusing their search on a very small part of the combinatorial space. Many SAT solvers are deterministic, but even when the solvers incorporate some randomization, solutions will be sampled in a highly non-uniform manner.

The central idea behind our approach can be summarized as follows. Assume for simplicity that our original SAT instance on n Boolean variables has 2^s solutions or satisfying assignments. How can we sample uniformly at random from the set of solutions? We add special randomly generated logical constraints to our SAT problem. Each random constraint is constructed in such a way that it rules out any given truth assignment exactly with probability 1/2. Therefore, in expectation, after adding s such constraints, we will have a SAT instance with exactly one solution. (Of course, we don't know the true value of s. In practice, we use a binary-search style approach to obtain a rough estimate. As we will see, our algorithms work correctly even with over- and under-estimates for s.) We then use a SAT solver to find the remaining satisfying assignment and output this as our first sample. We can repeat this process with a new set of s randomly generated constraints and in this way obtain another random solution.
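The halving effect of a random parity constraint is easy to see in code. The sketch below is ours (the formal distribution over XOR constraints is defined in Section 2): it draws a random XOR constraint and filters a set of assignments with it; each assignment survives with probability 1/2.

import random

def random_xor(n, q=0.5):
    """Draw an XOR constraint: a subset of variables plus a parity bit."""
    xvars = [v for v in range(n) if random.random() < q]
    parity = random.randrange(2)     # plays the role of the optional constant 1
    return xvars, parity

def satisfies_xor(assignment, xor):
    """assignment: tuple of 0/1 values; constraint holds iff the sum is odd."""
    xvars, parity = xor
    return (sum(assignment[v] for v in xvars) + parity) % 2 == 1

# In expectation a random XOR cuts any solution set in half:
solutions = [(0, 0, 1), (0, 1, 0), (1, 0, 0), (1, 1, 1)]
survivors = [a for a in solutions if satisfies_xor(a, random_xor(3))]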
Note that to output each sample, we can use whatever off-the-shelf SAT solver is available, because all it needs to do is find the single remaining assignment. (The practical feasibility of our approach exploits the fact that current SAT solvers are very effective in finding such truth assignments in many real-world domains.) The randomization in the added constraints will guarantee that the assignment is selected uniformly at random.

How do we implement this approach? For our added constraints, we use randomly generated parity or "exclusive-or" (XOR) constraints. In recent work, we introduced XOR constraints for the problem of counting the number of solutions using MBound [15]. Although the building blocks of MBound and XORSample are the same, this work relies much more heavily on the properties of XOR constraints, namely, pairwise and even 3-wise independence. As we will discuss below, an XOR constraint eliminates any given truth assignment with probability 1/2, and therefore, in expectation, cuts the set of satisfying assignments in half. For this expected behavior to happen often, the elimination of each assignment should ideally be fully independent of the elimination of other assignments. Unfortunately, as far as is known, there are no compact (polynomial-size) logical constraints that can achieve such complete independence. However, XOR constraints guarantee at least pairwise independence, i.e., if we know that an XOR constraint C eliminates assignment σ_1, this provides no information as to whether C will remove any other assignment σ_2. Remarkably, as we will see, such pairwise independence already leads to near-uniform sampling.

Our sampling approach is inspired by earlier work in computational complexity theory by Valiant and Vazirani [16], who considered the question of whether having one or more assignments affects
In comparison with the best current alternative method on such instances, our sampling quality is substantially better. 2 Preliminaries For the rest of this paper, fix the set of propositional variables in all formulas to be V , |V | = n. A variable assignment ? : V ? {0, 1} is a function that assigns a value in {0, 1} to each variable in V . We may think of the value 0 as FALSE and the value 1 as TRUE. We will often abuse notation and write ? (i) for valuations of entities i 6? V when the intended meaning is either already defined or is clear from the context. In particular, ? (1) = 1 and ? (0) = 0. When ? (i) = 1, we say that ? satisfies i. For x ? V , ?x denotes the corresponding negated variable; ? (?x) = 1 ? ? (x). Let F be a formula over variables V . ? (F) denotes the valuation of F under ? . If ? satisfies F, i.e., ? (F) = 1, then ? is a model, solution, or satisfying assignment for F. Our goal in this paper is to sample uniformly from the set of all solutions of a given formula F. An XOR constraint D over variables V is the logical ?xor? or parity of a subset of V ? {1}; ? satisfies D if it satisfies an odd number of elements in D. The value 1 allows us to express even parity. For instance, D = {a, b, c, 1} represents the xor constraint a ? b ? c ? 1, which is TRUE when an even number of a, b, c are TRUE. Note that it suffices to use only positive variables. E.g., ?a ? b ? ?c and ?a ? b are equivalent to D = {a, b, c} and D = {a, b, 1}, respectively. Our focus will be on formulas which are a logical conjunction of a formula in Conjunctive Normal Form (CNF) and some XOR constraints. In all our experiments, XOR constraints are translated into CNF using additional variables so that the full formula can be fed directly to standard (CNF-based) SAT solvers. We will need basic concepts from linear algebra. Let F2 denote the field of two elements, 0 and 1, and Fn2 the vector space of dimension n over F. An assignment ? can be thought of as an element of Fn2 . Similarly, an XOR constraint D can be seen as a linear constraint a1 x1 + a2 x2 + . . .+ an xn + b = 1, where ai , b ? {0, 1}, + denotes addition modulo 2 for F2 , ai = 1 iff D has variable i, and b = 1 iff D has the parity constant 1. In this setting, we can talk about linear transformations of F n2 as well as linear independence of ? , ? 0 ? Fn2 (see standard texts for details). We will use two properties: every linear transformation maps the all-zeros vector to itself, and there exists a linear transformation that maps any k linearly independent vectors to any other k linearly independent vectors. Consider the set X of all XOR constraints over V . Since an XOR constraint is a subset of V ? {1}, |X| = 2n+1 . Our method requires choosing XOR constraints from X at random. Let X(n, q) denote the probability distribution over X defined as follows: select each v ? V independently at random with probability q and include the constant 1 independently with probability 1/2 . This produces XORs of average length nq. In particular, note that every two complementary XOR constraints involving the same subset of V (e.g., c ? d and c ? d ? 1) are chosen with the same probability irrespective of q. Such complementary XOR constraints have the simple but useful property that any assignment ? satisfies exactly one of them. Finally, when the distribution X(n,1/2 ) is used, every XOR constraint in X is chosen with probability 2?(n+1) . 
We will be interested in random variables which are sums of indicator random variables: Y = Σ_σ Y_σ. Linearity of expectation says that E[Y] = Σ_σ E[Y_σ]. When the various Y_σ are pairwise independent, i.e., knowing Y_{σ_2} tells us nothing about Y_{σ_1}, even the variance behaves linearly: Var[Y] = Σ_σ Var[Y_σ]. We will also need conditional probabilities. Here, for a random event X, linearity of conditional expectation says that E[Y | X] = Σ_σ E[Y_σ | X]. Let X = Y_{σ_0}. When the various Y_σ are 3-wise independent, i.e., knowing Y_{σ_2} and Y_{σ_3} tells us nothing about Y_{σ_1}, even the conditional variance behaves linearly: Var[Y | Y_{σ_0}] = Σ_σ Var[Y_σ | Y_{σ_0}]. This will be key to the analysis of our second algorithm.

3 Sampling using XOR constraints
In this section, we describe and analyze two randomized algorithms, XORSample and XORSample', for sampling solutions of a given Boolean formula F near-uniformly using streamlining with random XOR constraints. Both algorithms are parameterized by two quantities: a positive integer s and a real number q ∈ (0, 1), where s is the number of XORs added to F and X(n, q) is the distribution from which they are drawn. These parameters determine the degree of uniformity achieved by the algorithms, which we formalize as Theorems 1 and 2. The first algorithm, XORSample, uses a SAT solver as a subroutine on the randomly streamlined formula. It repeatedly performs the streamlining process until the resulting formula has a unique solution. When s is chosen appropriately, it takes XORSample a small number of iterations (on average) to successfully produce a sample. The second algorithm, XORSample', is non-iterative. Here s is chosen to be relatively small, so that a moderate number of solutions survive. XORSample' then uses stronger subroutines, namely a SAT model counter and a model selector, to output one of the surviving solutions uniformly at random.

3.1 XOR-based sampling using SAT solvers: XORSample
Let F be a formula over n variables, and let q and s be the parameters of XORSample. The algorithm works by adding to F, in each iteration, s random XOR constraints Q_s drawn independently from the distribution X(n, q). This generates a streamlined formula F_s^q whose solutions (called the surviving solutions) are a subset of the solutions of F. If there is a unique surviving solution σ, XORSample outputs σ and stops. Otherwise, it discards Q_s and F_s^q, and iterates the process (rejection sampling). The check for uniqueness of σ is done by adding the negation of σ as a constraint to F_s^q and testing whether the resulting formula is still satisfiable. See Algorithm 1 for a full description.

Params: q ∈ (0, 1), a positive integer s
Input:  a CNF formula F
Output: a solution of F
begin
  iterationSuccessful ← FALSE
  while iterationSuccessful = FALSE do
    Q_s ← {s random constraints independently drawn from X(n, q)}
    F_s^q ← F ∧ Q_s                     // add s random XOR constraints to F
    result ← SATSolve(F_s^q)            // solve using a SAT solver
    if result = TRUE then
      σ ← solution returned by SATSolve(F_s^q)
      F' ← F_s^q ∧ {¬σ}                 // remove σ from the solution set
      result' ← SATSolve(F')
      if result' = FALSE then
        iterationSuccessful ← TRUE
        return σ                        // output σ; it is the unique solution of F_s^q
end
Algorithm 1: XORSample, sampling solutions with XORs using a SAT solver
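A compact executable rendering of Algorithm 1 follows; it is our sketch, not the authors' code. The sat_solve argument is an assumed interface: any function taking a clause list and returning a satisfying assignment (as a list of signed integers) or None; libraries such as PySAT can supply one. xor_to_cnf is the hypothetical helper sketched in Section 2.

import random

def xor_sample(clauses, n, s, sat_solve, q=0.5, max_tries=1000):
    """Algorithm 1 sketch: one near-uniform solution of the CNF, or None."""
    for _ in range(max_tries):
        aug, next_var = list(clauses), n + 1
        for _ in range(s):                         # add s random XOR constraints
            xvars = [v for v in range(1, n + 1) if random.random() < q]
            parity = random.randrange(2)           # constant 1 included w.p. 1/2
            extra, next_var = xor_to_cnf(xvars, parity, next_var)
            aug += extra
        model = sat_solve(aug)
        if model is None:
            continue                               # no survivor; redraw the XORs
        sigma = [lit for lit in model if abs(lit) <= n]
        blocked = aug + [[-lit for lit in sigma]]  # forbid sigma
        if sat_solve(blocked) is None:             # sigma is the unique survivor
            return sigma
    return None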
We now analyze how uniform the samples produced by XORSample are. For the rest of this section, fix q = 1/2. Let F be satisfiable and have exactly 2^{s*} solutions, s* ∈ [0, n]. Ideally, we would like each solution σ of F to be sampled with probability 2^{-s*}. Let p_{one,s}(σ) be the probability that XORSample outputs σ in one iteration. This is typically much lower than 2^{-s*}, which is accounted for by rejection sampling. Nonetheless, we will show that when s is larger than s*, the variation in p_{one,s}(σ) over different σ is small. Let p_s(σ) be the overall probability that XORSample outputs σ. This, we will show, is very close to 2^{-s*}, where "closeness" is formalized as being within a factor of c(α) which approaches 1 very fast. The proof closely follows the argument used by Valiant and Vazirani [16] in their complexity-theory work on unique satisfiability. However, we give a different, non-combinatorial argument for the pairwise independence property of XORs needed in the proof, relying on linear algebra. This approach is insightful and will come in handy in Section 3.2. We describe the main idea below, deferring details to the full version of the paper.

Lemma 1. Let α > 0, c(α) = 1 − 2^{-α}, and s = s* + α. Then c(α) · 2^{-s} < p_{one,s}(σ) ≤ 2^{-s}.

Proof sketch. We first prove the upper bound on p_{one,s}(σ). Recall that for any two complementary XORs (e.g., c ⊕ d and c ⊕ d ⊕ 1), σ satisfies exactly one XOR. Hence, the probability that σ satisfies an XOR chosen randomly from the distribution X(n, q) is 1/2. By independence of the s XORs in Q_s in XORSample, σ survives with probability exactly 2^{-s}, giving the desired upper bound on p_{one,s}(σ).

For the lower bound, we resort to pairwise independence. Let σ ≠ σ' be two solutions of F. Let D be an XOR chosen randomly from X(n, 1/2). We use linear algebra arguments to show that the probability that σ(D) = 1 (i.e., σ satisfies D) is independent of the probability that σ'(D) = 1. Recall the interpretation of variable assignments and XOR constraints in the vector space F_2^n (cf. Section 2). First suppose that σ and σ' are linearly dependent. In F_2^n, this can happen only if exactly one of σ and σ' is the all-zeros vector. Suppose σ = (0, 0, ..., 0) and σ' is non-zero. Perform a linear transformation on F_2^n so that σ' = (1, 0, ..., 0). Let D be the constraint a_1 x_1 + a_2 x_2 + ... + a_n x_n + b = 1. Then σ'(D) = a_1 + b and σ(D) = b. Since a_1 is chosen uniformly from {0, 1} when D is drawn from X(n, 1/2), knowing a_1 + b gives us no information about b, proving independence. A similar argument works when σ is non-zero and σ' = (0, 0, ..., 0), and also when σ and σ' are linearly independent to begin with. We skip the details. This proves that σ(D) and σ'(D) are independent when D is drawn from X(n, 1/2). In particular, Pr[σ'(D) = 1 | σ(D) = 1] = 1/2. This reasoning easily extends to the s XORs in Q_s, and we have that Pr[σ'(Q_s) = 1 | σ(Q_s) = 1] = 2^{-s}. Now,

$$p_{one,s}(\sigma) = \Pr\left[\sigma(Q_s) = 1 \text{ and for all other solutions } \sigma' \text{ of } F,\; \sigma'(Q_s) = 0\right] = \Pr[\sigma(Q_s) = 1] \cdot \left(1 - \Pr\left[\text{some solution } \sigma' \neq \sigma \text{ has } \sigma'(Q_s) = 1 \mid \sigma(Q_s) = 1\right]\right).$$

Evaluating this using the union bound and pairwise independence shows p_{one,s}(σ) > c(α) · 2^{-s}.
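The elided union-bound step is short enough to spell out; this derivation is our reconstruction and uses only the facts already established (2^{s*} − 1 competing solutions, each surviving with conditional probability 2^{-s}):

\begin{align*}
p_{one,s}(\sigma)
  &\ge 2^{-s}\Big(1 - \sum_{\sigma' \neq \sigma} \Pr[\sigma'(Q_s)=1 \mid \sigma(Q_s)=1]\Big)
   = 2^{-s}\big(1 - (2^{s^*}-1)\,2^{-s}\big) \\
  &> 2^{-s}\big(1 - 2^{s^*-s}\big)
   = 2^{-s}\big(1 - 2^{-\alpha}\big) = c(\alpha)\,2^{-s}.
\end{align*}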
Theorem 1. Let F be a formula with 2^{s*} solutions. Let α > 0, c(α) = 1 - 2^{-α}, and s = s* + α. For any solution σ of F, the probability p_s(σ) with which XORSample with parameters q = 1/2 and s outputs σ satisfies

c(α) 2^{-s*} < p_s(σ) < (1/c(α)) 2^{-s*}   and   min_σ {p_s(σ)} > c(α) max_σ {p_s(σ)}.

Further, the number of iterations needed to produce one sample has a geometric distribution with expectation between 2^α and 2^α/c(α).

Proof. Let p̄ denote the probability that XORSample finds some unique solution in any single iteration. p_{one,s}(σ), as before, is the probability that σ is the unique surviving solution. p_s(σ), the overall probability of sampling σ, is given by the infinite geometric series

p_s(σ) = p_{one,s}(σ) + (1 - p̄) p_{one,s}(σ) + (1 - p̄)² p_{one,s}(σ) + ...

which sums to p_{one,s}(σ)/p̄. In particular, p_s(σ) is proportional to p_{one,s}(σ). Lemma 1 says that for any two solutions σ_1 and σ_2 of F, p_{one,s}(σ_1) and p_{one,s}(σ_2) are strictly within a factor of c(α) of each other. By the above discussion, p_s(σ_1) and p_s(σ_2) must also be strictly within a factor of c(α) of each other, already proving the min vs. max part of the result. Further, Σ_σ p_s(σ) = 1 because of rejection sampling. For the first part of the result, suppose for the sake of contradiction that p_s(σ_0) ≤ c(α) 2^{-s*} for some σ_0, violating the claimed lower bound. By the above argument, p_s(σ) is within a factor of c(α) of p_s(σ_0) for every σ, and would therefore be at most 2^{-s*}. This would make Σ_σ p_s(σ) strictly less than one, a contradiction. A similar argument proves the upper bound on p_s(σ). Finally, the number of iterations needed to find a unique solution (thereby successfully producing a sample) is a geometric random variable with success parameter p̄ = Σ_σ p_{one,s}(σ), and has expected value 1/p̄. Using the bounds on p_{one,s}(σ) from Lemma 1 and the fact that the unique survivals of each of the 2^{s*} solutions σ are disjoint events, we have p̄ ≤ 2^{s*} 2^{-s} = 2^{-α} and p̄ > 2^{s*} c(α) 2^{-s} = c(α) 2^{-α}. This proves the claimed bounds on the expected number of iterations, 1/p̄.

3.2 XOR-based sampling using model counters and selectors: XORSample'

We now discuss our second parameterized algorithm, XORSample', which also works by adding to F s random XORs Q_s chosen independently from X(n, q). However, now the resulting streamlined formula F_s^q is fed to an exact model counting subroutine to compute the number of surviving solutions, mc. If mc > 0, XORSample' succeeds and outputs the i-th surviving solution using a model selector on F_s^q, where i is chosen uniformly from {1, 2, ..., mc}. Note that XORSample', in contrast to XORSample, is non-iterative. Also, the model counting and selecting subroutines it uses are more complex than SAT solvers; these work well in practice only because F_s^q is highly streamlined.

Params: q ∈ (0, 1), a positive integer s
Input: A CNF formula F
Output: A solution of F, or Failure
begin
    Q_s ← {s random constraints independently drawn from X(n, q)}
    F_s^q ← F ∧ Q_s                      // add s random XOR constraints to F
    mc ← SATModelCount(F_s^q)            // compute the exact model count of F_s^q
    if mc ≠ 0 then
        i ← a random number chosen uniformly from {1, 2, ..., mc}
        σ ← SATFindSolution(F_s^q, i)    // compute the i-th solution
        return σ                          // sampled successfully!
    else return Failure
end
Algorithm 2: XORSample', sampling with XORs using a model counter and selector
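A matching sketch of Algorithm 2, again with brute-force enumeration standing in for the model counter SATModelCount and the selector SATFindSolution (our own simplification, usable only for small n; it reuses random_xor and satisfies from the XORSample sketch above):

import itertools
import random

def xor_sample_prime(cnf, n, s, q=0.5):
    xors = [random_xor(n, q) for _ in range(s)]
    # Enumerate the surviving solutions; len(survivors) plays the role of mc.
    survivors = [a for a in itertools.product((0, 1), repeat=n)
                 if satisfies(a, cnf, xors)]
    if not survivors:                # mc == 0
        return None                  # Failure
    return random.choice(survivors)  # the i-th survivor, i uniform in 1..mc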
The sample-quality analysis of XORSample' requires somewhat more complex ideas than that of XORSample. Let F have 2^{s*} solutions as before. We again fix q = 1/2 and prove that if the parameter s is sufficiently smaller than s*, the sample quality is provably good. The proof relies on the fact that XORs chosen randomly from X(n, 1/2) act 3-wise independently on different solutions, i.e., knowing the value of an XOR constraint on two variable assignments does not tell us anything about its value on a third assignment. We state this as the following lemma, which can be proved by extending the linear algebra arguments we used in the proof of Lemma 1 (see the full version for details).

Lemma 2 (3-wise independence). Let σ_1, σ_2, and σ_3 be three distinct assignments to n Boolean variables. Let D be an XOR constraint chosen at random from X(n, 1/2). Then for i ∈ {0, 1}, Pr[σ_1(D) = i | σ_2(D), σ_3(D)] = Pr[σ_1(D) = i].

Recall the discussion of expectation, variance, pairwise independence, and 3-wise independence in Section 2. In particular, when a number of random variables are 3-wise independent, the conditional variance of their sum (conditioned on one of these variables) equals the sum of their individual conditional variances. We use this to compute bounds on the sampling probability of XORSample'. The idea is to show that the number of surviving solutions, given that any fixed solution σ survives, is independent of σ in expectation and is highly likely to be very close to the expected value. As a result, the probability with which σ is output, which is inversely proportional to the number of solutions surviving along with σ, will be very close to the uniform probability. Here "closeness" is one-sided and is measured as being within a factor of c'(α) which approaches 1 very quickly.

Theorem 2. Let F be a formula with 2^{s*} solutions. Let α > 0 and s = s* - α. For any solution σ of F, the probability p'_s(σ) with which XORSample' with parameters q = 1/2 and s outputs σ satisfies

p'_s(σ) > c'(α) 2^{-s*},   where   c'(α) = (1 - 2^{-α/3}) / ((1 + 2^{-α})(1 + 2^{-α/3})).

Further, XORSample' succeeds with probability larger than c'(α).

Proof sketch. See the full version for a detailed proof. We begin by setting up a framework for analyzing the number of surviving solutions after s XORs Q_s drawn from X(n, 1/2) are added to F. Let Y_{σ'} be the indicator random variable which is 1 iff σ'(Q_s) = 1, i.e., σ' survives Q_s. E[Y_{σ'}] = 2^{-s} and Var[Y_{σ'}] ≤ E[Y_{σ'}] = 2^{-s}. Further, a straightforward generalization of Lemma 2 from a single XOR constraint D to s independent XORs Q_s implies that the random variables Y_{σ'} are 3-wise independent. The variable mc (see Algorithm 2), which is the number of surviving solutions, equals Σ_{σ'} Y_{σ'}. Consider the distribution of mc conditioned on the fact that σ survives. Using pairwise independence, the corresponding conditional expectation can be shown to satisfy

μ = E[mc | σ(Q_s) = 1] = 1 + (2^{s*} - 1) 2^{-s}.

More interestingly, using 3-wise independence, the corresponding conditional variance can also be bounded: Var[mc | σ(Q_s) = 1] < E[mc | σ(Q_s) = 1]. Since s = s* - α, 2^α < μ < 1 + 2^α. We show that mc conditioned on σ(Q_s) = 1 indeed lies very close to μ. Let β ≥ 0 be a parameter whose value we will fix later. By Chebyshev's inequality,

Pr[|mc - μ| ≥ 2^{-β} μ | σ(Q_s) = 1] ≤ (2^{2β} Var[mc | σ(Q_s) = 1]) / (E[mc | σ(Q_s) = 1])² < 2^{2β} / E[mc | σ(Q_s) = 1] < 2^{2β} / 2^α.

Therefore, conditioned on σ(Q_s) = 1, with probability more than 1 - 2^{2β-α}, mc lies between (1 - 2^{-β})μ and (1 + 2^{-β})μ. Recall that p'_s(σ) is the probability that XORSample' outputs σ:

p'_s(σ) = Pr[σ(Q_s) = 1] · Σ_{i≥1} Pr[mc = i | σ(Q_s) = 1] · (1/i)
        ≥ 2^{-s} · Pr[mc ≤ (1 + 2^{-β})μ | σ(Q_s) = 1] · 1/((1 + 2^{-β})μ)
        ≥ 2^{-s} · (1 - 2^{2β-α}) · 1/((1 + 2^{-β})μ).
Simplifying this expression and optimizing it by setting β = α/3 gives the desired bound on p'_s(σ). Lastly, the success probability of XORSample' is Σ_σ p'_s(σ) > c'(α).

Remark 1. Theorems 1 and 2 show that both XORSample and XORSample' can be used to sample arbitrarily close to the uniform distribution when q = 1/2. For example, as the number of XORs used in XORSample is increased, α increases, c(α) approaches 1 exponentially fast (so the deviation from the truly uniform sampling probability p* approaches 0), and we get progressively smaller error bands around p*. However, for any fixed α, these algorithms, somewhat counter-intuitively, do not always sample truly uniformly (see the full version). As a result, we expect to see a fluctuation around p*, which, as we proved above, will be exponentially small in α.
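In the validation that follows, sampler quality is summarized by the Kullback-Leibler divergence of the empirical sample counts from the uniform distribution over solutions. A short sketch of that computation (our own; the paper does not state the log base, so base 2 is an assumption here):

import math

def kl_from_uniform(counts):
    # counts[i] = number of times solution i was sampled; give unseen
    # solutions a pseudo-count of 1 to keep the divergence finite.
    total = sum(counts)
    n = len(counts)
    return sum((c / total) * math.log2(c * n / total) for c in counts if c > 0)

print(kl_from_uniform([4167] * 48))                  # perfectly uniform: 0.0
print(kl_from_uniform([2900] * 32 + [6700] * 16))    # banded counts: clearly > 0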
4 Empirical validation

To validate our XOR-sampling technique, we consider two kinds of formulas: a random 3-SAT instance generated near the SAT phase transition [18] and a structured instance derived from a logistics planning domain (data and code available from the authors). We used a complete model counter, Relsat [12], to find all solutions of our problem instances. Our random instance with 75 variables has a total of 48 satisfying assignments, and our logistics formula with 352 variables has 512 satisfying assignments. (We used formulas with a relatively small number of assignments in order to evaluate the quality of our sampling. Note that we need to draw many samples for each assignment.) We used XORSample with MiniSat [14] as the underlying SAT solver to generate samples from the set of solutions of each formula. Each sample took a fraction of a second to generate on a 4 GHz processor. For comparison, we also ran the best alternative method for sampling from SAT problems, SampleSAT [19, 2], allowing it roughly the same cumulative runtime as XORSample.

Figure 1 depicts our results. In the left panel, we consider the random SAT instance, generating 200,000 samples total. In pure uniform sampling, in expectation we have 200,000/48 ≈ 4,167 samples for each solution. This level is indicated with the solid horizontal line. We see that the samples produced by XORSample all lie in a narrow band centered around this line. Contrast this with the results for SampleSAT: SampleSAT does sample quite uniformly from solutions that lie near each other in Hamming distance, but different solution clusters are sampled with different frequencies. This SAT instance has two solution clusters: the first 32 solutions are sampled around 2,900 times each, i.e., not frequently enough, whereas the remaining 16 solutions are sampled too frequently, around 6,700 times each. (Although SampleSAT greatly improves on other sampling strategies for SAT, the split into disjoint sampling bands appears inherent in the approach.) The Kullback-Leibler (KL) divergence between the XORSample data and the uniform distribution is 0.002. For SampleSAT the KL divergence from uniform is 0.085. It is clear that the XORSample approach leads to much more uniform sampling.

The right panel in Figure 1 gives the results for our structured logistics planning instance. (To improve the readability of the figure, we plot the sample frequency only for every fifth assignment.) In this case, the difference between XORSample and SampleSAT is even more dramatic. SampleSAT in fact only found 256 of the 512 solutions in a total of 100,000 samples. We also see that one of these solutions is sampled nearly 60,000 times, whereas many other solutions are sampled less than five times. The KL divergence from uniform is 4.16. (Technically the KL divergence is infinite, but we assigned a count of one to the non-sampled solutions.) The expected number of samples for each assignment is 100,000/512 ≈ 195. The figure also shows that the sample counts from XORSample all lie around this value; their KL divergence from uniform is 0.013.

[Figure 1: two panels plotting absolute frequency (log scale) against solution number, with curves for XORSample, SampleSAT, and the uniform level.] Figure 1: Results of XORSample and SampleSAT on a random 3-SAT instance, the left panel, and a logistics planning problem, the right panel. (See color figures in PDF.)

These experiments show that XORSample is a promising practical technique (with theoretical guarantees) for obtaining near-uniform samples from intricate combinatorial spaces.

References
[1] M. Richardson and P. Domingos. Markov logic networks. Machine Learning, 62(1-2):107–136, 2006.
[2] H. Poon and P. Domingos. Sound and efficient inference with probabilistic and deterministic dependencies. In 21st AAAI, pages 458–463, Boston, MA, July 2006.
[3] N. Madras. Lectures on Monte Carlo methods. In Field Institute Monographs, vol. 16. Amer. Math. Soc., 2002.
[4] N. Metropolis, A. Rosenbluth, M. Rosenbluth, A. Teller, and E. Teller. Equations of state calculations by fast computing machines. J. Chem. Phy., 21:1087–1092, 1953.
[5] S. Kirkpatrick, D. Gelatt Jr., and M. Vecchi. Optimization by simulated annealing. Science, 220(4598):671–680, 1983.
[6] D. Roth. On the hardness of approximate reasoning. J. AI, 82(1-2):273–302, 1996.
[7] M. L. Littman, S. M. Majercik, and T. Pitassi. Stochastic Boolean satisfiability. J. Auto. Reas., 27(3):251–296, 2001.
[8] J. D. Park. MAP complexity results and approximation methods. In 18th UAI, pages 388–396, Edmonton, Canada, August 2002.
[9] A. Darwiche. The quest for efficient probabilistic inference, July 2005. Invited Talk, IJCAI-05.
[10] T. Sang, P. Beame, and H. A. Kautz. Performing Bayesian inference by weighted model counting. In 20th AAAI, pages 475–482, Pittsburgh, PA, July 2005.
[11] F. Bacchus, S. Dalmao, and T. Pitassi. Algorithms and complexity results for #SAT and Bayesian inference. In 44th FOCS, pages 340–351, Cambridge, MA, October 2003.
[12] R. J. Bayardo Jr. and R. C. Schrag. Using CSP look-back techniques to solve real-world SAT instances. In 14th AAAI, pages 203–208, Providence, RI, July 1997.
[13] L. Zhang, C. F. Madigan, M. H. Moskewicz, and S. Malik. Efficient conflict driven learning in a Boolean satisfiability solver. In ICCAD, pages 279–285, San Jose, CA, November 2001.
[14] N. Eén and N. Sörensson. MiniSat: A SAT solver with conflict-clause minimization. In 8th SAT, St. Andrews, U.K., June 2005. Poster.
[15] C. P. Gomes, A. Sabharwal, and B. Selman. Model counting: A new strategy for obtaining good bounds. In 21st AAAI, pages 54–61, Boston, MA, July 2006.
[16] L. G. Valiant and V. V. Vazirani. NP is as easy as detecting unique solutions. Theoretical Comput. Sci., 47(3):85–93, 1986.
[17] J. M. Crawford, M. J. Kearns, and R. E. Schapire. The minimal disagreement parity problem as a hard satisfiability problem. Technical report, AT&T Bell Labs., 1994.
[18] D. Achlioptas, A. Naor, and Y. Peres. Rigorous location of phase transitions in hard optimization problems.
Nature, 435:759–764, 2005.
[19] W. Wei, J. Erenrich, and B. Selman. Towards efficient sampling: Exploiting random walk strategies. In 19th AAAI, pages 670–676, San Jose, CA, July 2004.
Generalized Regularized Least-Squares Learning with Predefined Features in a Hilbert Space

Wenye Li, Kin-Hong Lee, Kwong-Sak Leung
Department of Computer Science and Engineering
The Chinese University of Hong Kong
Shatin, Hong Kong, China
{wyli, khlee, ksleung}@cse.cuhk.edu.hk

Abstract

Kernel-based regularized learning seeks a model in a hypothesis space by minimizing the empirical error and the model's complexity. Based on the representer theorem, the solution consists of a linear combination of translates of a kernel. This paper investigates a generalized form of the representer theorem for kernel-based learning. After mapping predefined features and translates of a kernel simultaneously onto a hypothesis space by a specific way of constructing kernels, we propose a new algorithm that utilizes a generalized regularizer which leaves part of the space unregularized. Using a squared-loss function to measure the empirical error, a simple convex solution is obtained which combines predefined features with translates of the kernel. Empirical evaluations have confirmed the effectiveness of the algorithm for supervised learning tasks.

1 Introduction

Supervised learning, or learning from examples, refers to the task of training a system with a set of examples given as input-output pairs. After training, the system is used to predict the output value for any valid input object. Examples of such tasks include regression, which produces continuous values, and classification, which predicts a class label for an input object. Vapnik's seminal work [1] shows that the key to effectively solving this problem is controlling the solution's complexity, which leads to the techniques known as regularized kernel methods [1][2][3] and regularization networks [4]. The work championed by Poggio and other researchers [5][6] implicitly treats learning as an approximation problem and gives a general scheme with ideas going back to modern regularization theory [7][8][9]. In both frameworks, a solution is sought by simultaneously minimizing the empirical error and the complexity. More precisely, given a training set D = {(x_i; y_i)}_{i=1}^m, an estimator f : X → Y, where X is a closed subset of R^d and Y ⊆ R, is given by

min_{f ∈ H_K} (1/m) Σ_{i=1}^m V(y_i, f(x_i)) + λ ‖f‖²_K    (1)

where V is a convex loss function, ‖f‖_K is the norm of f in a reproducing kernel Hilbert space (RKHS) H_K induced by a positive definite function (a kernel) K_x(x') = K(x, x'), and λ is a regularization parameter that makes a trade-off between the empirical error and the complexity. The term λ‖f‖²_K is also called a regularizer. According to the representer theorem [10][11][12], the minimizer of (1) admits a simple solution as a linear combination of translates of the kernel K by the training data,

f* = Σ_{i=1}^m c_i K_{x_i},   c_i ∈ R, 1 ≤ i ≤ m,

for a variety of loss functions. Different loss functions lead to different learning algorithms. For example, when used for classification, a squared loss (y - f(x))² brings about the regularized least-squares classification (RLSC) algorithm [13][14][15], while a hinge loss (1 - yf(x))_+ ≡ max(1 - yf(x), 0) corresponds to the classical support vector machines (SVM). Using this model, data are implicitly projected onto the hypothesis space H_K via a transformation Φ_K : x ↦ K_x, and a linear functional is sought by finding its representer in H_K, which generally has infinite dimensions.
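For the squared loss, the representer coefficients have the well-known closed form (K + λmI)c = y. A minimal numpy sketch (our own notation; the Gaussian kernel and the value of λ are arbitrary illustrative choices, not prescriptions from the paper):

import numpy as np

def gaussian_kernel(A, B, sigma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / sigma ** 2)

def rls_fit(X, y, lam=1e-2, sigma=1.0):
    m = len(X)
    K = gaussian_kernel(X, X, sigma)
    c = np.linalg.solve(K + lam * m * np.eye(m), y)   # (K + lam*m*I) c = y
    return lambda Xnew: gaussian_kernel(Xnew, X, sigma) @ c

X = np.random.uniform(-5, 5, (50, 1))
y = np.abs(5 - X[:, 0]) + 0.1 * np.random.randn(50)
f = rls_fit(X, y)
print(np.mean((f(X) - y) ** 2))    # small training error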
It is generally believed that learning problems associated with infinite dimensions are ill-posed and need regularization. However, finite dimensional problems are often associated with well-posedness and do not need regularization. Motivated by this, we unify these two views in this paper. Using an existing trick for designing kernels, an RKHS is constructed which contains a subspace spanned by some predefined features, and this subspace is left unregularized during the learning process. Empirical results have shown that embedding these features often stabilizes the algorithm's performance across different choices of kernels and prevents the results from deteriorating for inappropriate kernels.

The paper is organized as follows. First, a generalized regularized learning model and its associated representer theorem are studied. Then, we introduce an existing trick with which we construct a hypothesis space that has a subspace of the predefined features. Next, a generic learning algorithm is proposed based on the model and evaluated, in particular, on classification problems. Empirical results confirm the benefits brought by the algorithm.

A note on notation. Throughout the paper, vectors and matrices are represented in bold notation and scalars in normal script, e.g., x_1, ..., x_m ∈ R^d, K ∈ R^{m×m}, and y_1, ..., y_m ∈ R. I and O are used to denote an identity matrix and a zero matrix of appropriate sizes, respectively. For clarity, the size of a matrix is sometimes added as a subscript, such as O_{m×ℓ}.

2 Generalized regularized least-squares learning model

Suppose the space H_K decomposes into the direct sum H_K = H_0 ⊕ H_1, where H_0 is spanned by ℓ (≤ m) linearly independent features: H_0 = span(φ_1, ..., φ_ℓ). We propose the generalized regularized least-squares (G-RLS) learning model as

min_{f ∈ H_K} L(f) = (1/m) Σ_{i=1}^m (y_i - f(x_i))² + λ ‖f - Pf‖²_K,    (2)

where Pf is the orthogonal projection of f onto H_0. Suppose f* is the minimizer of (2). For any f ∈ H_K, let f = f* + εg where ε ∈ R and g ∈ H_K. Taking the derivative w.r.t. ε and noticing that ∂L/∂ε |_{ε=0} = 0 gives

-(2/m) Σ_{i=1}^m (y_i - f*(x_i)) g(x_i) + 2λ ⟨f* - Pf*, g⟩_K = 0,    (3)

where ⟨·,·⟩_K denotes the inner product in H_K. This equation holds for any g ∈ H_K. In particular, setting g = K_x gives

f* - Pf* = Σ_{i=1}^m (y_i - f*(x_i)) K_{x_i} / (mλ).    (4)

Pf* is the orthogonal projection of f* onto H_0 and hence

Pf* = Σ_{p=1}^ℓ β_p φ_p,   β_p ∈ R, 1 ≤ p ≤ ℓ.    (5)

So (4) simplifies to

f* = Σ_{p=1}^ℓ β_p φ_p + Σ_{i=1}^m c_i K_{x_i},    (6)

where

c_i = (y_i - f*(x_i)) / (mλ),   1 ≤ i ≤ m.    (7)

The coefficients β_1, ..., β_ℓ, c_1, ..., c_m are uniquely specified by m + ℓ linear equations. The first m equations are obtained by substituting (6) into (7). The remaining ℓ equations are derived from the orthogonality constraint between Pf* and f* - Pf*, which can be written as

⟨φ_p, Σ_{i=1}^m c_i K_{x_i}⟩_K = 0,   1 ≤ p ≤ ℓ,    (8)

or equivalently, by the reproducing property of the kernel,

Σ_{i=1}^m c_i φ_p(x_i) = 0,   1 ≤ p ≤ ℓ.    (9)

The solution (6) derived from (2) satisfies the reproduction property: suppose (x_i; y_i)_{i=1}^m comes purely from a model which is perfectly linearly related to φ_1, ..., φ_ℓ; it is then desirable to get back a solution that is independent of the other features. This property holds as an evident result of (2): the parameters c_1, ..., c_m in the resulting estimator (6) are all zero, which makes the regularizer in (2) equal to zero.
3 Kernel construction

By decomposing a hypothesis space H_K and studying a generalized regularizer, we have proposed the G-RLS model and derived a solution which consists of predefined features as well as translates of a kernel function. In this section, starting with predefined features φ_1, ..., φ_ℓ and a kernel κ, we construct a hypothesis space which contains the features and translates of the kernel by using an existing trick.

3.1 A kernel construction trick

Consider the following reproducing kernel

K(x, x') = H(x, x') + Σ_{p=1}^ℓ φ'_p(x) φ'_p(x'),    (10)

where

H(x, x') = κ(x, x') - Σ_{p=1}^ℓ φ'_p(x) κ(x_p, x') - Σ_{q=1}^ℓ φ'_q(x') κ(x, x_q) + Σ_{p=1}^ℓ Σ_{q=1}^ℓ φ'_p(x) φ'_q(x') κ(x_p, x_q),    (11)

κ is any strictly positive definite function, and φ'_1, ..., φ'_ℓ defines a linear transformation of φ_1, ..., φ_ℓ w.r.t. x_1, ..., x_ℓ,

(φ'_1(x), ..., φ'_ℓ(x))ᵀ = [φ_1(x_1) ... φ_1(x_ℓ); ... ; φ_ℓ(x_1) ... φ_ℓ(x_ℓ)]^{-1} (φ_1(x), ..., φ_ℓ(x))ᵀ,    (12)

which satisfies

φ'_q(x_p) = 1 if 1 ≤ p = q ≤ ℓ,   and 0 if 1 ≤ p ≠ q ≤ ℓ.    (13)

This trick is studied in [16] to provide an alternative basis for radial basis functions and was first used in a fast RBF interpolation algorithm [17]. A sketch of properties which are peripheral to our concerns in this paper is given below:

K_{x_p} = φ'_p,   1 ≤ p ≤ ℓ    (14)
⟨φ'_p, φ'_q⟩_K = 1 if 1 ≤ p = q ≤ ℓ,   and 0 if 1 ≤ p ≠ q ≤ ℓ    (15)
H_{x_p} = H(x_p, ·) = 0,   1 ≤ p ≤ ℓ    (16)
⟨H_{x_i}, φ'_p⟩_K = 0,   ℓ+1 ≤ i ≤ m,   1 ≤ p ≤ ℓ    (17)
⟨H_{x_i}, H_{x_j}⟩_K = H(x_i, x_j),   ℓ+1 ≤ i, j ≤ m    (18)

Another property is that the matrix H = (H(x_i, x_j))_{i,j=ℓ+1}^m is strictly positive definite, which will be used in the computations below.

By constructing a kernel K using this trick, the predefined features φ_1, ..., φ_ℓ are explicitly mapped onto H_K, which has a subspace H_0 = span(φ'_1, ..., φ'_ℓ) = span(φ_1, ..., φ_ℓ). By property (15), we can see that φ'_1, ..., φ'_ℓ also forms an orthonormal basis of H_0.

3.2 Computation

After projecting the features φ_1, ..., φ_ℓ onto an RKHS H_K, let us study the regularized minimization problem in (2). As shown in (6), the minimizer has the form of a linear combination of predefined features and translates of a kernel. By the properties of K in (14)-(17), the minimizer can be rewritten as

f* = Σ_{p=1}^ℓ β_p φ_p + Σ_{i=1}^m c_i K_{x_i}
   = Σ_{p=1}^ℓ β'_p φ'_p + Σ_{i=1}^ℓ c_i φ'_i + Σ_{i=ℓ+1}^m c_i (H_{x_i} + Σ_{p=1}^ℓ φ'_p(x_i) φ'_p)
   = Σ_{p=1}^ℓ (β'_p + c_p + Σ_{i=ℓ+1}^m c_i φ'_p(x_i)) φ'_p + Σ_{i=ℓ+1}^m c_i H_{x_i}
   = Σ_{p=1}^ℓ β̃_p φ'_p + Σ_{i=ℓ+1}^m c̃_i H_{x_i},    (19)

where β̃_1, ..., β̃_ℓ, c̃_{ℓ+1}, ..., c̃_m are m parameters to be determined. Furthermore, from the orthogonality between φ'_p and H_{x_i} in (17), we have

f* - Pf* = Σ_{i=ℓ+1}^m c̃_i H_{x_i}.    (20)

To determine the values of β̃ = (β̃_1, ..., β̃_ℓ)ᵀ and c̃ = (c̃_{ℓ+1}, ..., c̃_m)ᵀ, we need

‖f* - Pf*‖²_K = Σ_{i,j=ℓ+1}^m c̃_i c̃_j H(x_i, x_j) = (β̃; c̃)ᵀ H̄ (β̃; c̃),    (21)

where H̄ = [O_{ℓ×ℓ}, O_{ℓ×(m-ℓ)}; O_{(m-ℓ)×ℓ}, H]. Substituting (21) into (2), we have

L = (1/m) (y - K̄ (β̃; c̃))ᵀ (y - K̄ (β̃; c̃)) + λ (β̃; c̃)ᵀ H̄ (β̃; c̃),    (22)

where K̄ = [I_{ℓ×ℓ}, O_{ℓ×(m-ℓ)}; Eᵀ, H] and E = (φ'_p(x_i))_{p=1,...,ℓ; i=ℓ+1,...,m}. Taking the derivative w.r.t. (β̃; c̃) and setting it to zero gives

(K̄² + λm H̄) (β̃; c̃) = K̄ y.    (23)

Since

K̄^{-1} = [I_{ℓ×ℓ}, O_{ℓ×(m-ℓ)}; -H^{-1}Eᵀ, H^{-1}]   and   K̄^{-1} H̄ = Ī,   where Ī = [O_{ℓ×ℓ}, O_{ℓ×(m-ℓ)}; O_{(m-ℓ)×ℓ}, I_{(m-ℓ)×(m-ℓ)}],    (24)

multiplying (23) by K̄^{-1} yields (K̄ + λm Ī)(β̃; c̃) = y, i.e.,

[I_{ℓ×ℓ}, O_{ℓ×(m-ℓ)}; Eᵀ, H + λmI] (β̃; c̃) = (y₁; y₂),    (25)

where y₁ = (y_1, ..., y_ℓ)ᵀ and y₂ = (y_{ℓ+1}, ..., y_m)ᵀ. Equation (25) uniquely specifies β̃ by

β̃ = y₁,    (26)

and c̃ by

(H + λmI) c̃ = y₂ - Eᵀ β̃.    (27)
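The whole computation of Section 3.2 fits in a few lines of numpy. The sketch below is our own; it assumes the first ℓ training points serve as the feature-defining points x_1, ..., x_ℓ and that the matrix in (12) is invertible. It builds φ' from (12), H from (11), and solves (26)-(27):

import numpy as np

def grls_fit(X, y, feats, kappa, lam=1e-2):
    # feats: list of feature functions phi_p; kappa(A, B) returns the Gram
    # matrix of a strictly positive definite kernel (e.g., a Gaussian RBF).
    m, ell = len(X), len(feats)
    Phi = lambda A: np.array([[p(x) for x in A] for p in feats])  # ell x |A|
    Minv = np.linalg.inv(Phi(X[:ell]))       # inverse of [phi_p(x_q)], eq. (12)
    PhiP = lambda A: Minv @ Phi(A)           # rows are phi'_p evaluated on A
    def H(A, B):                             # eq. (11)
        return (kappa(A, B) - PhiP(A).T @ kappa(X[:ell], B)
                - kappa(A, X[:ell]) @ PhiP(B)
                + PhiP(A).T @ kappa(X[:ell], X[:ell]) @ PhiP(B))
    E = PhiP(X[ell:])                        # E[p, i] = phi'_p(x_i), i > ell
    Hm = H(X[ell:], X[ell:])
    beta = y[:ell]                           # eq. (26): beta = y_1
    c = np.linalg.solve(Hm + lam * m * np.eye(m - ell),
                        y[ell:] - E.T @ beta)             # eq. (27)
    return lambda Xn: PhiP(Xn).T @ beta + H(Xn, X[ell:]) @ c   # eq. (28)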
H + λmI is a strictly positive definite matrix, so the system can be solved efficiently either by conjugate gradient or by Cholesky factorization. The worst-case complexity is O((m - ℓ)³) ≈ O(m³). It is also possible to investigate iterative methods for solving linear systems coupled with recent advances in fast matrix-vector multiplication (e.g., the fast multipole method), in which case the complexity reduces to nearly O(m log m), offering the potential to solve large-scale problems.

4 A generic learning algorithm

Based on the discussion above, a generic learning algorithm (the G-RLS algorithm) is summarized below. A usage example follows the regression discussion at the end of this section.

1. Start with data (x_i; y_i)_{i=1}^m.
2. For ℓ (≤ m) predefined linearly independent features φ_1, ..., φ_ℓ of the data, define φ'_1, ..., φ'_ℓ according to equation (12).
3. Choose a symmetric, strictly positive definite function κ_x(x') = κ(x, x') which is continuous on X × X. Define H according to equation (11).
4. The estimator f : X → Y is given by

f(x) = Σ_{p=1}^ℓ β̃_p φ'_p(x) + Σ_{i=ℓ+1}^m c̃_i H_{x_i}(x),    (28)

where β̃_1, ..., β̃_ℓ, c̃_{ℓ+1}, ..., c̃_m are obtained by solving equations (26) and (27).

The algorithm can be applied to a number of applications including regression and binary classification. As a simple example for regression, noisy points were randomly generated from the function y = |5 - x|, and we fitted the data with a curve. Polynomial features up to the second degree (φ_1 = 1, φ_2 = x, φ_3 = x²) were used for the G-RLS algorithm along with a Gaussian RBF kernel κ_x(·) = e^{-‖x-·‖²/σ²}. For comparison we selected ridge regression with the Gaussian RBF kernel, which can be regarded as an implementation of the standard regularized least-squares model for regression tasks. For both algorithms, three trials were made in which the parameter σ² was set to a large value, to a small value, and by cross validation, respectively. For each σ², the parameter λ was set by cross validation. Compared with ridge regression in Figure 1(b), the polynomial features in G-RLS have the effect of stabilizing the results, as shown in Figure 1(a). Varying σ², different fitting results were obtained by ridge regression; for the G-RLS algorithm, the difference was not evident.

In the case of generalized regularized least-squares classification (G-RLSC), each y_i of the training set takes a value in {-1, 1}. The predicted label of any x depends on the sign of (28): y = 1 if f(x) > 0, and y = -1 otherwise. G-RLSC uses the "classical" squared loss as a classification loss criterion. The effectiveness of this criterion has been reported in empirical results [13][14][15].

[Figure 1: two panels, (a) G-RLS Regression and (b) Ridge Regression, each showing the data and the fitted curves for σ² set by cross validation, σ² = 1000, and σ² = 0.001.] Figure 1: A regression example. The existence of polynomial features in G-RLS helped to improve the stability of the algorithm.
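Using the grls_fit sketch above, the regression example reads as follows (our own toy data; with ℓ = 3 the first three points define φ' and are interpolated exactly, as (26) dictates):

import numpy as np

X = np.random.uniform(-5, 5, 200)
y = np.abs(5 - X) + 0.1 * np.random.randn(200)
feats = [lambda x: 1.0, lambda x: x, lambda x: x ** 2]        # phi_1..phi_3
kappa = lambda A, B: np.exp(-(A[:, None] - B[None, :]) ** 2)  # sigma^2 = 1
f = grls_fit(X, y, feats, kappa, lam=1e-3)
print(np.mean((f(X) - y) ** 2))   # small training error expected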
5 Experiments

To evaluate the performance of the G-RLS algorithm, empirical results are reported on text categorization tasks using three datasets from the CMU text mining group.¹ The 7-sectors dataset has 4,573 web pages belonging to seven economic sectors, with each sector containing from 300 to 1,099 pages. The 4-universities dataset consists of 8,282 web pages collected mainly from four universities; the pages belong to seven classes and each class has 137 to 3,764 pages. The 20-newsgroups dataset collects UseNet postings into twenty newsgroups, each group having about 1,000 messages. We experimented with its four major subsets: the first subset has 5 groups (comp.*), the second 4 groups (rec.*), the third 4 groups (sci.*), and the last 4 groups (talk.*).

For each dataset, we removed all but the 2,000 words with the highest mutual information with the class variable using the rainbow package [18]. Each document was represented as a bag of words with linear normalization into [-1, 1]. Probabilistic latent semantic analysis [19] (pLSA) was used to extract ten latent features φ_1, ..., φ_10 from the data. Experiments were carried out with different numbers (100-3,200) of data points for training and the rest for testing. Each experiment consisted of ten runs and the average accuracy is reported. In each run, the data were separated by the xval-prep utility accompanying the C4.5 package.²

Figure 2 compares the performance of G-RLSC, RLSC and SVM. G-RLSC reports improved results on most of the datasets, the exception being 4-universities. Moreover, a closer look reveals that although SVM excels on that dataset as the number of training data increases, G-RLSC still shows better performance than standard RLSC. A possible reason is that the hinge loss used by SVM is more appropriate than the squared loss used by RLSC and G-RLSC on this dataset, while the embedding of pLSA features still improves the accuracy.

[Figure 2: six panels of classification accuracy versus number of training samples on the 7-sectors, 4-universities, comp, rec, sci, and talk datasets, comparing G-RLSC, RLSC-BoW, RLSC-pLSA, SVM-BoW, and SVM-pLSA.] Figure 2: Classification accuracies on CMU text datasets with different numbers of training samples. Ten pLSA features along with a linear kernel κ were used for G-RLSC. Both bag-of-words (BoW) and pLSA representations of documents were tried for RLSC and SVM with a linear kernel. The parameter λ was selected via cross validation. For multi-class classification, G-RLSC and RLSC used a one-versus-all strategy; SVM used a one-versus-one strategy.

¹ http://www.cs.cmu.edu/~TextLearning/datasets.html
² http://www.rulequest.com/Personal/c4.5r8.tar.gz

6 Conclusion

In this paper, we first proposed a generic G-RLS learning model. Unlike the standard kernel-based methods, which only consider the translates of a kernel for model learning, the new model takes predefined features into special consideration. A generalized regularizer is studied which leaves part of the hypothesis space unregularized. Similar ideas were explored in spline smoothing [9], in which low-degree polynomials are not regularized. Another example is semi-parametric SVM [2], which considers the addition of some features to the kernel expansion for SVM. However, to our knowledge, few learning algorithms and applications have been studied along this line from a unified RKHS regularization point of view, or investigated in empirical evaluations.

The second part of our work presented a practical computation method based on the model. An RKHS that contains the combined solutions is explicitly constructed based on a special trick for designing kernels. (The idea of a conditionally positive definite function [20] is lurking in the
With the construction of the RKHS, the computation is further optimized and the theoretical analysis of such algorithms is also potentially facilitated. We evaluated G-RLS learning algorithm in text categorization. The empirical results from real-world applications have confirmed the effectiveness of the algorithm. Acknowledgments The authors thank Dr. Haixuan Yang for useful discussions. This research was partially supported by RGC Earmarked Grant #4173/04E and #4132/05E of Hong Kong SAR and RGC Research Grant Direct Allocation of the Chinese University of Hong Kong. References [1] V.N. Vapnik. Statistical Learning Theory. John Wiley and Sons, 1998. [2] B. Sch?olkopf and A.J. Smola. Learning with Kernels. The MIT Press, 2002. [3] J.S. Taylor and N. Cristianini. Kernel Methods for Pattern Analysis. Cambridge University Press, 2004. [4] T. Evgeniou, M. Pontil, and T. Poggio. Regularization networks and support vector machines. Adv. Comput. Math., 13:1?50, 2000. [5] T. Poggio and F. Girosi. Regularization algorithms for learning that are equivalent to multilayer networks. Science, 247:978?982, 1990. [6] T. Poggio and S. Smale. The mathematics of learning: Dealing with data. Not. Am. Math. Soc, 50:537? 544, 2003. [7] A.N. Tikhonov and V.Y. Arsenin. Solutions of Ill-Posed Problems. Winston and Sons, 1977. [8] V.A. Morozov. Methods for Solving Incorrectly Posed Problems. Springer-Verlag, 1984. [9] G. Wahba. Spline Models for Observational Data. SIAM, 1990. [10] G. Kimeldorf and G. Wahba. Some results on Tchebycheffian spline functions. J. Math. Anal. Appl., 33:82?95, 1971. [11] F. Girosi, M.J. Jones, and T. Poggio. Regularization theory and neural networks architectures. Neural Comput., 7:219?269, 1995. [12] B. Sch?olkopf, R. Herbrich, and A.J. Smola. A generalized representer theorem. In COLT?2001 and EuroCOLT?2001, 2001. [13] R.M. Rifkin. Everything Old is New Again: A Fresh Look at Historical Approaches in Machine Learning. PhD thesis, Massachusetts Institute of Technology, 2002. [14] G. Fung and O.L. Mangasarian. Proximal support vector machine classifiers. In KDD?01, 2001. [15] J.A.K. Suykens and J. Vandewalle. Least squares support vector machine classifiers. Neural Process. Lett., 9:293?300, 1999. [16] W. Light and H. Wayne. Spaces of distributions, interpolation by translates of a basis function and error estimates. J. Numer. Math., 81:415?450, 1999. [17] R.K. Beatson, W.A. Light, and S. Billings. Fast solution of the radial basis function interpolation equations: Domain decomposition methods. SIAM J. Sci. Comput., 22:1717?1740, 2000. [18] A.K. McCallum. Bow: A toolkit for statistical language modeling, text retrieval, classification and clustering. http://www.cs.cmu.edu/?mccallum/bow, 1996. [19] T. Hofmann. Probabilistic latent semantic analysis. In UAI?99, 1999. [20] C.A. Micchelli. Interpolation of scattered data: Distances, matrices, and conditionally positive definite functions. Constr. Approx., 2:11?22, 1986.
Stratification Learning: Detecting Mixed Density and Dimensionality in High Dimensional Point Clouds

Gloria Haro, Gregory Randall, and Guillermo Sapiro
IMA and Electrical and Computer Engineering
University of Minnesota, Minneapolis, MN 55455
[email protected], [email protected], [email protected]

Abstract

The study of point cloud data sampled from a stratification, a collection of manifolds with possibly different dimensions, is pursued in this paper. We present a technique for simultaneously soft clustering and estimating the mixed dimensionality and density of such structures. The framework is based on a maximum likelihood estimation of a Poisson mixture model. The presentation of the approach is completed with artificial and real examples demonstrating the importance of extending manifold learning to stratification learning.

1 Introduction

Data in high dimensions is becoming ubiquitous, from image analysis and finance to computational biology and neuroscience. This data is often given or represented as samples embedded in a high dimensional Euclidean space, point cloud data, though it is assumed to belong to lower dimensional manifolds. Thus, in recent years, there have been significant efforts in the development of methods to analyze these point clouds and their underlying manifolds. These include numerous techniques for the estimation of the intrinsic dimension of the data and also its projection onto lower dimensional representations. These disciplines are often called manifold learning and dimensionality reduction; a few examples include [2, 3, 4, 9, 10, 11, 12, 16].

The vast majority of the manifold learning and dimensionality reduction techniques developed in the literature assume, either explicitly or implicitly, that the given point cloud consists of samples of a single manifold. It is easy to see, however, that a significant part of the interesting data has mixed dimensionality and complexity. The work presented here deals with this more general case, where different dimensionalities/complexities are present in the point cloud data. That is, we have samples not of a manifold but of a stratification. The main aim is to cluster the data according to the complexity (dimensionality) of the underlying, possibly multiple, manifolds. Such clustering can be used both to better understand the varying dimensionality and complexity of the data, e.g., states in neural recordings or different human activities in video analysis, and as a pre-processing step for the above mentioned manifold learning and dimensionality reduction techniques.

This clustering-by-dimensionality task has been explored in a handful of recent works. Barbará and Chen, [1], proposed a hard clustering technique based on the fractal dimension (box counting). Starting from an initial clustering, they incrementally add points into the cluster for which the change in the fractal dimension after adding the point is the lowest. They also find the number of clusters and the intrinsic dimension of the underlying manifolds. Gionis et al., [7], use local growth curves to estimate the local correlation dimension and density for each point. The new two-dimensional representation of the data is clustered using standard techniques. Souvenir and Pless, [14], use an Expectation Maximization (EM) type of technique, combined with weighted geodesic multidimensional scaling. The weights measure how well each point fits the underlying manifold defined by the current set of points in the cluster.
After clustering, each cluster's dimensionality is estimated following [10]. Huang et al., [8], cluster linear subspaces with an algebraic geometric method based on polynomial differentiation and a generalized PCA. They search for the best combination of linear subspaces that explains the data, and find the number of linear subspaces and their intrinsic dimensions. The work of Mordohai and Medioni, [11], estimates the local dimension using tensor voting. These recent works have clearly shown the necessity to go beyond manifold learning, into "stratification learning." In our work, we do not assume linear subspaces, and we simultaneously estimate the soft clustering and the intrinsic dimension and density of the clusters. This collection of attributes is not shared by any of the pioneering works just described.

Our approach is an extension of the Levina and Bickel local dimension estimator [10]. They proposed to compute the intrinsic dimension at each point using a maximum likelihood (ML) estimator based on a Poisson distribution. The local estimators are then averaged, under the assumption of a single uniform manifold. We propose to compute an ML on the whole point cloud data at the same time (and not one for each point independently), and use a Poisson mixture model, which permits different classes, each one with its own dimension and sampling density. This technique automatically gives a soft clustering according to dimensionality and density, with an estimation of both quantities for each class. Our approach assumes that the number of classes is given, but we are discovering the actual number of underlying manifolds: if we search for a larger than needed number of classes, we obtain some classes with the same dimensionality and density, or some classes with very few representatives, as shown in the examples presented later.

The remainder of this paper is organized as follows. In Section 2 we review the method proposed by Levina and Bickel, [10], which gives a local estimation of the intrinsic dimension and has inspired our work. In Section 3 we present our core contribution of simultaneous soft clustering and dimensionality and density estimation. We present experiments with synthetic and real data in Section 4, and finally, some conclusions are drawn in Section 5.

2 Local intrinsic dimension estimation

Levina and Bickel (LB), [10], proposed a geometric and probabilistic method which estimates the local dimension (and density) of a point cloud.¹ This is the approach we extend here. It is based on the idea that if we sample an m-dimensional manifold with T points, the proportion of points that fall into a ball around a point x_t is approximately

k/T ≈ f(x_t) V(m) R_k(x_t)^m,

where the given point cloud, embedded in high dimension D, is X = {x_t ∈ R^D; t = 1, ..., T}, k is the number of points inside the ball, f(x_t) is the local sampling density at point x_t, V(m) is the volume of the unit sphere in R^m, and R_k(x_t) is the Euclidean distance from x_t to its k-th nearest neighbor (kNN). They then consider the inhomogeneous process N(R, x_t), which counts the number of points falling into a small D-dimensional sphere B(R, x_t) of radius R centered at x_t. This is a binomial process, and some assumptions are needed to proceed. First, if T → ∞, k → ∞, and k/T → 0, then the binomial process can be approximated by a Poisson process. Second, the density f(x_t) is assumed constant inside the sphere, a valid assumption for small R. With these assumptions, the rate λ of the counting process N(R, x_t) can be written as

λ(R, x_t) = f(x_t) V(m) m R^{m-1}.
The log-likelihood of the process N(R, x_t) is then given by

L(m(x_t), θ(x_t)) = ∫₀ᴿ log λ(r, x_t) dN(r, x_t) - ∫₀ᴿ λ(r, x_t) dr,

where θ(x_t) := log f(x_t) is the density parameter and the first integral is a Riemann-Stieltjes integral [13]. The maximum likelihood estimators satisfy ∂L/∂θ = 0 and ∂L/∂m = 0, leading to a computation of the local dimension at point x_t, m(x_t), that depends on all the neighbors within a distance R from x_t [10]. In practice, it is more convenient to use a fixed number k of nearest neighbors. The local dimension at point x_t is then

m(x_t) = [ (1/(k-2)) Σ_{j=1}^{k-1} log( R_k(x_t) / R_j(x_t) ) ]^{-1}.

This estimator is asymptotically unbiased (see [10] for more details). If the data points belong to the same manifold, we can average over all m(x_t) in order to obtain a more robust estimator. However, if there are two or more manifolds with different dimensions, the average does not make sense, unless we first cluster according to dimensionality and then estimate the dimensionality for each cluster. We briefly toy with this idea now, as a warm-up to our simultaneous soft clustering and estimation technique described in Section 3.

¹ M. Hein pointed out to us at NIPS that this dimension estimator is equivalent to the one proposed in [15].
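The LB estimator is a one-liner on top of a kNN search. A small numpy sketch (our own illustration) computing m(x_t) for every point:

import numpy as np

def lb_local_dimension(X, k=10):
    # m(x_t) = [ (1/(k-2)) * sum_{j<k} log(R_k(x_t)/R_j(x_t)) ]^{-1}
    D = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
    R = np.sort(D, axis=1)[:, 1:k + 1]   # R_1..R_k (column 0 is the point itself)
    return (k - 2) / np.log(R[:, -1:] / R[:, :-1]).sum(axis=1)

# Points on a 2-D plane embedded in R^5 give estimates close to 2:
X = np.random.randn(500, 2) @ np.random.randn(2, 5)
print(lb_local_dimension(X).mean())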
2.1 A two step clustering approach

As a first simple approach to detecting and clustering mixed dimensionality (and/or densities), we can combine a local dimensionality estimator such as the one just described with a clustering technique. For the second step we use the Information Bottleneck (IB) [17], an elegant framework that can eventually combine several local dimension estimators with other possible features such as density [6]. The IB is a technique that clusters (compresses) a variable according to another, related variable. Let X be the set of variables to be clustered and S the relevance variable that gives some information about X. An example is the information that different words provide about documents of different topics. We call $\hat{X}$ the clustered version of X. The optimal $\hat{X}$ is the one that minimizes the functional $\mathcal{L}(p(\hat{x}_t|x_t)) = I(\hat{X}; X) - \beta I(\hat{X}; S)$, where $I(\cdot;\cdot)$ denotes mutual information and $p(\cdot)$ the probability density function. There is a trade-off, controlled by β, between compressing the representation and preserving the meaningful information. In our context, we want to cluster the data according to the intrinsic dimensionality (and/or density). The relevance variable S is therefore the set of (quantized) estimated local intrinsic dimensions. For the joint distribution $p(x_t, s_i)$, $s_i \in S$, we use the histogram of local dimensions inside a ball of radius $R_0$ around $x_t$,² computed by the LB technique. Examples of this technique are presented in the experimental section. Instead of a two-step algorithm, with local dimensionality and/or density estimation followed by clustering, we now propose a maximum likelihood technique that combines these steps.

² The value of $R_0$ determines the amount of regularity in the classification.

3 Poisson mixture model

The core approach that we propose for studying stratifications (mixed manifolds) is based on extending the LB technique [10]. Instead of modelling each point and its local ball of radius R as a separate Poisson process and computing the ML for each ball independently, we consider all the possible balls at the same time in a single ML function. As the probability density for the whole point cloud we consider a mixture of Poisson distributions with different parameters (dimension and density). We thereby allow the presence of different intrinsic dimensions and densities in the dataset; these are computed automatically while being used for soft clustering.

Let J denote the number of Poisson distributions in the mixture, each with a (possibly) different dimension m and density parameter θ. We consider the parameter set $\theta = \{\theta^j = (\pi^j, \theta^j, m^j);\ j = 1, \dots, J\}$, where $\pi^j$ is the mixture coefficient for class j (the proportion of distribution j in the dataset), $\theta^j$ is its density parameter ($f^j = e^{\theta^j}$), and $m^j$ is its dimension. We denote by $p(\cdot)$ the probability density function and by $P(\cdot)$ the probability. As in the LB approach, the observable event is $y_t = N(R, x_t)$, the number of points inside the ball $B(R, x_t)$ of radius R centered at point $x_t$. The total number of observations is $T'$ and $Y = \{y_t;\ t = 1, \dots, T'\}$ is the observation sequence. If we consider every possible ball in the dataset, then $T'$ coincides with the total number of points T in the point cloud. From now on we consider this case, $T' \equiv T$. The density function of the Poisson mixture model is given by

$$p(y_t|\theta) = \sum_{j=1}^{J} \pi^j\, p(y_t|\theta^j, m^j) = \sum_{j=1}^{J} \pi^j \exp\!\left( \int_0^R \log \lambda^j(r)\, dN(r, x_t) \right) \exp\!\left( -\int_0^R \lambda^j(r)\, dr \right),$$

where $\lambda^j(r) = e^{\theta^j} V(m^j)\, m^j r^{m^j - 1}$.

Problems involving a mixture of experts are usually solved with the Expectation-Maximization (EM) algorithm [5]. In our context there are two kinds of unknown parameters: the membership function of an expert (class), $\pi^j$, and the parameters of each expert, $m^j$ and $\theta^j$. The membership information is originally unknown, which makes the parameter estimation for each class difficult. The EM algorithm computes its expected value (E-step), and this value is then used in the parameter estimation procedure (M-step); the two steps are iterated.

If Y contains T statistically independent variables, the incomplete-data log-likelihood is

$$L(Y|\theta) = \log p(Y|\theta) = \log \prod_{t=1}^{T} p(y_t|\theta) = Q(\theta) + R(\theta),$$
$$Q(\theta) := \sum_{Z} P(Z|Y, \theta) \log p(Z, Y|\theta), \qquad R(\theta) := -\sum_{Z} P(Z|Y, \theta) \log P(Z|Y, \theta),$$

where $Z = \{z_t \in C;\ t = 1, \dots, T\}$ is the missing data (hidden-state information), and the set of class labels is $C = \{C^1, C^2, \dots, C^J\}$. Here, $z_t = C^j$ means that the j-th mixture component generates $y_t$. We call Q the expectation of $\log p(Z, Y|\theta)$ with respect to Z. The EM algorithm is based on maximizing Q: improving (maximizing) Q at each iteration also improves the likelihood L. The probability density appearing in Q can be written as $p(Z, Y|\theta) = \prod_{t=1}^{T} p(z_t, y_t|\theta)$, and the complete-data log-likelihood becomes

$$\log p(Z, Y|\theta) = \sum_{t=1}^{T} \sum_{j=1}^{J} \delta_t^j \log\left[ p(y_t|z_t = C^j, \theta^j)\, \pi^j \right], \qquad (1)$$

where the indicator variables $\delta_t^j$ encode the status of the hidden variables:

$$\delta_t^j \equiv \delta(z_t, C^j) = \begin{cases} 1 & \text{if } y_t \text{ is generated by mixture } C^j, \\ 0 & \text{else.} \end{cases}$$

Taking the expectation of (1) with respect to Z, $E_Z(\cdot)$, and setting θ to a fixed known value $\theta_n$ (its value at step n of the algorithm) everywhere except inside the log, we obtain a function Q of θ. We denote it by $Q(\theta|\theta_n)$, and it has the form

$$Q(\theta|\theta_n) = \sum_{t=1}^{T} \sum_{j=1}^{J} h_n^j(y_t) \log\left[ p(y_t|\delta_t^j = 1, \theta^j)\, \pi^j \right],$$
where

$$h_n^j(y_t) = E_Z[\delta_t^j|y_t, \theta_n] = P(\delta_t^j = 1|y_t, \theta_n) = \frac{p(y_t|\delta_t^j = 1, \theta_n^j)\, \pi_n^j}{\sum_{l=1}^{J} p(y_t|\delta_t^l = 1, \theta_n^l)\, \pi_n^l} \qquad (2)$$

is the probability that observation t belongs to mixture j. Finally, the probability density in (2) is

$$p(y_t|\delta_t^j = 1, \theta_n^j) = \exp\!\left( \int_0^R \log \lambda_n^j(r)\, dN(r, x_t) \right) \exp\!\left( -\int_0^R \lambda_n^j(r)\, dr \right), \qquad (3)$$

where $\lambda_n^j(r) = e^{\theta_n^j} V(m_n^j)\, m_n^j r^{m_n^j - 1}$. As mentioned above, the EM algorithm consists of two main steps. In the E-step, the function $Q(\theta|\theta_n)$ is computed; for that, we determine the best guess of the membership function, i.e., the probabilities $h_n^j(y_t)$. Once these probabilities are known, $Q(\theta|\theta_n)$ can be considered as a function of the only unknown, θ, and it is maximized in order to compute $\theta_{n+1}$, i.e., the maximum likelihood parameters at step n+1; this is called the M-step. EM suffers from local maxima; hitting a local maximum can be prevented by running the algorithm several times with different initializations. A different random subset of points from the point cloud may also be used in each run. We have experimented with both approaches, and the results are always similar if all the probabilities are initialized equally. Algorithm PMM below summarizes the main components of the proposed approach. The estimators $\pi_{n+1}^j$, $m_{n+1}^j$, and $\theta_{n+1}^j$ are obtained by computing $\theta_{n+1} = \arg\max_{\theta^j} Q(\theta|\theta_n) + \lambda\big( \sum_{l=1}^{J} \pi^l - 1 \big)$ in the M-step, where λ is the Lagrange multiplier introducing the constraint $\sum_{l=1}^{J} \pi^l = 1$. This gives equations (4)-(5) below, where $V(m_n^j) = 2\pi^{m_n^j/2} / \big( m_n^j\, \Gamma(m_n^j/2) \big)$ and $\Gamma(m_n^j/2) = \int_0^\infty t^{m_n^j/2 - 1} e^{-t}\, dt$. In order to compute $m_{n+1}^j$ we use the same approach as in [10], by means of a k nearest neighbor graph.

Algorithm PMM: Poisson Mixture Model
Require: The point cloud data, J (number of desired classes), and k (scale of observation).
Ensure: Soft clustering according to dimensionality and density.
1: Initialize $\theta_0 = \{\pi_0^j, m_0^j, \theta_0^j\}$ to any set of values such that $\sum_{j=1}^{J} \pi_0^j = 1$.
2: EM iterations on n. For all $j = 1, \dots, J$, compute:
   E-step: Compute $h_n^j(y_t)$ by (2).
   M-step: Compute
$$\pi_{n+1}^j = \frac{1}{T} \sum_{t=1}^{T} h_n^j(y_t); \qquad m_{n+1}^j = \left[ \frac{\sum_{t=1}^{T} h_n^j(y_t) \sum_{i=1}^{k-1} \log \frac{R_k(y_t)}{R_i(y_t)}}{\sum_{t=1}^{T} h_n^j(y_t)\,(k-1)} \right]^{-1} \qquad (4)$$
$$\theta_{n+1}^j = \log\!\left( \sum_{t=1}^{T} h_n^j(y_t)\,(k-1) \right) - \log\!\left( V(m_n^j) \sum_{t=1}^{T} h_n^j(y_t)\, R_k(y_t)^{m_n^j} \right) \qquad (5)$$
Iterate until convergence of $\theta_n$, that is, until $\|\theta_{n+1} - \theta_n\|_2 < \epsilon$ for a certain small value ε.
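To make Algorithm PMM concrete, the following is a compact sketch of the EM loop (our own illustration with hypothetical helper names, not the authors' Matlab code). It assumes NumPy/SciPy and that `R[t, i]` holds the precomputed, sorted distance from $y_t$ to its (i+1)-th nearest neighbor:

```python
import numpy as np
from scipy.special import gammaln

def unit_ball_volume(m):
    # V(m) = 2*pi^(m/2) / (m * Gamma(m/2)) = pi^(m/2) / Gamma(m/2 + 1)
    return np.exp((m / 2.0) * np.log(np.pi) - gammaln(m / 2.0 + 1.0))

def pmm_em(R, J, n_iter=100, tol=1e-6):
    """EM for the Poisson mixture model (Algorithm PMM).

    R : (T, k) array of sorted kNN distances, R[t, i] = R_{i+1}(y_t).
    Returns mixture weights pi (J,), dimensions m (J,), density
    parameters theta (J,), and soft memberships h (T, J)."""
    T, k = R.shape
    log_ratio_sum = np.log(R[:, -1:] / R[:, :-1]).sum(axis=1)  # sum_i log(Rk/Ri)
    sum_log_Ri = np.log(R[:, :-1]).sum(axis=1)                 # sum_i log Ri
    Rk = R[:, -1]

    pi = np.full(J, 1.0 / J)                    # pi_0^j = 1/J, as in Section 4
    m = np.arange(1, J + 1, dtype=float)        # m_0^j = j
    theta = np.zeros(J)                         # theta_0^j = 0

    for _ in range(n_iter):
        # E-step: log p(y_t | class j) + log pi_j, from eq. (3)
        V = unit_ball_volume(m)
        ll = (theta + np.log(V * m))[None, :] * (k - 1) \
             + (m[None, :] - 1.0) * sum_log_Ri[:, None] \
             - (np.exp(theta) * V)[None, :] * Rk[:, None] ** m[None, :] \
             + np.log(pi)[None, :]
        ll -= ll.max(axis=1, keepdims=True)     # stabilize the softmax
        h = np.exp(ll)
        h /= h.sum(axis=1, keepdims=True)       # eq. (2)

        # M-step: eqs. (4)-(5)
        pi_new = h.mean(axis=0)
        m_new = (h.sum(axis=0) * (k - 1)) / (h * log_ratio_sum[:, None]).sum(axis=0)
        V = unit_ball_volume(m_new)             # minor variant: V at the updated m
        theta_new = np.log(h.sum(axis=0) * (k - 1)) \
                    - np.log(V * (h * Rk[:, None] ** m_new[None, :]).sum(axis=0))

        delta = max(np.abs(pi_new - pi).max(), np.abs(m_new - m).max(),
                    np.abs(theta_new - theta).max())
        pi, m, theta = pi_new, m_new, theta_new
        if delta < tol:
            break
    return pi, m, theta, h
```

The per-point log-likelihood expands the counting-process integrals in (3) over the k nearest neighbors: the process has jumps at $R_1, \dots, R_{k-1}$ and is integrated up to $R = R_k$.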
4 Experimental results

We now present a number of experimental results for the technique proposed in Section 3. We often compare it with the two-step algorithm described in Section 2, which we denote by LD+IB. In all the experiments we use the initialization $\pi_0^j = 1/J$, $\theta_0^j = 0$, and $m_0^j = j$, for all $j = 1, \dots, J$. Distances are normalized so that the maximum distance is 1. The embedding dimension in all the experiments on synthetic data is 3, although the results were found to be consistent when we increased the embedding dimension.

The first experiment consists of a mixture of a Swiss roll manifold (700 points) and a line (700 points) embedded in a three-dimensional space. The algorithm (with J = 2 and k = 10) is able to separate both manifolds. The estimated parameters are collected in Table 1. For each sub-table, we display the estimated dimension m, density parameter θ, and mixture coefficient π for each class, together with the percentage of points of each manifold that are classified in each class (after thresholding the soft assignment). Figure 1(a) displays both manifolds; each point is colored according to the probability of belonging to each of the two possible classes. Tables 1(a) and 1(c) contain the results for PMM and LD+IB with J = 2. Table 1(b) shows the results for the PMM algorithm with k = 10 and J = 3. Note how the parameters of the first two classes are quite similar to the ones obtained with J = 2, and the third class is marginal (very small π). Figure 1(b) shows the PMM classification when J = 3. Note that all the points of the line belong to the class of dimension 1. The points of the Swiss roll are mainly concentrated in the other class, of dimension 2; a small number of Swiss roll points belong to a third class with roughly the same dimension as the second class. These points are located at the point cloud boundaries, where the underlying assumptions are not always valid. If we estimate the dimension of the mixture using the LB technique with k = 10, we obtain 1.70 with a standard deviation of 5.31. If we use the method proposed by Costa and Hero [4], the estimated dimension is 2. In both cases, the estimated intrinsic dimension is the largest one present in the mixture, ignoring that the data actually lives on two manifolds of different intrinsic dimension.

The same table and figure, second rows, show results for noisy data: we add Gaussian noise with σ = 0.6 to the point coordinates. The results obtained with k = 10 are displayed in Tables 1(d), 1(e), and 1(f), and in Figures 1(d), 1(e), and 1(f). Note how the classification still separates the two manifolds, although the line is much more affected by the noise and no longer looks like a one-dimensional manifold; this is reflected in the estimated dimension, which is now larger. This phenomenon is related to the scale of observation and to the level of noise. If the noise level is large (e.g., compared to the mean distance to the k nearest neighbors for a small k), the estimated intrinsic dimension will intuitively be closer to the embedding dimension (this behavior was verified experimentally). We can again compare with the LB estimator alone: estimated dimension 2.71, standard deviation 1.12. Using Costa and Hero [4], the estimated dimension varies between 2 and 3 (depending on the number of bootstrap loops). Neither technique considers the possibility of mixed dimensionality.

The experiment in Figure 2 illustrates how the soft clustering is done according to both dimensionality and density.

Table 1: Clustering results for the Swiss roll (SR) and a line (k = 10); (a)-(c) without noise, (d)-(f) with noise.

(a) PMM, J = 2 (no noise): m = (1.00, 2.01), θ = (5.70, 2.48), π = (0.5000, 0.5000);
    % points per class: Line (100, 0), SR (0, 100).
(b) PMM, J = 3 (no noise): m = (1.00, 2.01, 2.16), θ = (5.70, 2.55, 1.52), π = (0.5000, 0.4792, 0.0208);
    Line (100, 0, 0), SR (0, 96.57, 3.43).
(c) LD+IB, J = 2 (no noise): m = (1.67, 2.00); Line (100, 0), SR (3.45, 96.55).
(d) PMM, J = 2 (noise): m = (3.02, 2.38), θ = (7.69, 2.73), π = (0.4951, 0.5049);
    Line (98.14, 1.86), SR (0.86, 99.14).
(e) PMM, J = 3 (noise): m = (3.01, 2.40, 2.26), θ = (7.70, 2.88, 1.72), π = (0.4910, 0.4766, 0.0325);
    Line (97.71, 2.29, 0), SR (0.71, 93.00, 6.29).
(f) LD+IB, J = 2 (noise): m = (3.09, 2.30); Line (79.71, 20.29), SR (24.71, 75.29).
Figure 1 (panels: (a) PMM, J = 2; (b) PMM, J = 3; (c) LD+IB, J = 2; second row with noise: (d) PMM, J = 2; (e) PMM, J = 3; (f) LD+IB, J = 2): Clustering of a line and a Swiss roll (k = 10). First row without noise, second row with Gaussian noise (σ = 0.6). Points are colored according to the probability of belonging to each class.

The data consists of 2500 points on the Swiss roll, 100 on a line with high density, and 50 on another, less dense line. We set J = 4, and the algorithm returns an "empty class," thus discovering that three classes, with correct dimensionality and density, are enough for a good representation. The only errors are at the borders, as expected. The estimated parameters (Figure 2) are

    m = (1.94, 1.04, 0.98, 1.93), θ = (7.12, 3.82, 2.66, 2.57), π = (0.9330, 0.0498, 0.0167, 0.0004);
    % points per class: Line (0, 15.69, 84.31, 0), Line (dense) (0, 99.00, 1.00, 0), Swiss roll (98.92, 1.08, 0, 0).

Figure 2: Clustering with mixed dimensions and density (k = 20, J = 4).

In order to test the algorithm on real data, we first work with the MNIST database of handwritten digits,³ which has a test set of 10,000 examples. Each digit is an image of 28 × 28 pixels, and we treat the data as 784-dimensional vectors. We study the mixture of digits one and two and apply PMM and LD+IB with J = 2 and k = 10. The results are shown in Figure 3. Note how the digits are well separated.⁴ The LB estimator alone gives dimension 9.13 for the digit one, 13.02 for the digit two, and 11.26 for the mixture of both digits. Costa and Hero's method [4] gives 8, 11, and 9, respectively. Both methods assume a single intrinsic dimension and return an average of the dimensions of the underlying manifolds.

³ http://yann.lecun.com/exdb/mnist/

(a) PMM: m = (8.50, 12.82), θ = (11.20, 6.80), π = (0.4901, 0.5099); % points per class: Ones (93.48, 6.52), Twos (0, 100). (b) LD+IB: m = (9.17, 13.74); Ones (94.71, 5.29), Twos (9.08, 90.02). (c) Some image examples.
Figure 3: Results for digits 1 and 2 (k = 10, J = 2).

Next, we experiment with 9-dimensional vectors formed from image patches of 3 × 3 pixels. If we impose J = 3 and use PMM, we obtain the results in Figure 4. Notice how roughly one class corresponds to patches in homogeneous zones (approximately constant gray value), a second class to textured zones, and a third class to patches containing edges. The estimated dimensions in each region agree with the dimensions estimated, after separation, using Isomap or Costa and Hero's technique in each region. This experiment is just a proof of concept; in the future we will study how to adapt this clustering approach to image segmentation.

Figure 4: Clustering of image patches of 3 × 3 pixels with PMM; colors indicate the different classes (complexity) (J = 3, k = 30). Left: original and segmented images of a house. Right: original and segmented images of a portion of biological tissue. Adding spatial regularization is the subject of current research.

Finally, as an additional proof of the validity of our approach and its potential applications, we use the PMM framework to separate activities in video (Figure 5; see also [14]). Each original frame is 480 × 640, sub-sampled to 48 × 64 pixels, with 1673 frames. Four classes are present: standing, walking, jumping, and arms waving. The whole run took 361 seconds in Matlab; the classification time (PMM) is negligible compared to the kNN component.

Samples in each cluster:
              C1    C2    C3    C4
  Standing    416   0     95    0
  Walking     0     429   69    25
  Waving      0     5     423   4
  Jumping     0     18    0     189

Figure 5: Classifying human activities in video (k = 10, J = 4).
Four sample frames are shown, followed by the classification results (confusion matrix). Visual analysis of the wrongly classified frames shows that these are indeed very similar to the misclassified class members. Adding features, e.g., optical flow, will improve the results.

⁴ Since the clustering is done according to dimensionality and density, digits which share these characteristics will not be separated into different classes.

5 Conclusions

In this paper we discussed the concept of "stratification learning," where the point cloud data is not assumed to belong to a single manifold, as is commonly assumed in manifold learning and dimensionality reduction. We extended the work in [10] in the sense that the maximum likelihood is computed once for the whole dataset, and the probability density function is a mixture of Poisson laws, each one modeling a different intrinsic dimension and density. The soft clustering and the estimation are computed simultaneously. This framework has been contrasted with a more standard two-step approach, a combination of the local estimator introduced in [10] with Information Bottleneck clustering [17]. Both methods need to compute a kNN graph, which is precisely the computationally most expensive part. The mixture of Poisson estimators is faster than the two-step approach: it uses an EM algorithm, linear in the number of classes and observations, which converges in a few iterations. The mixture of Poisson model clusters not only according to dimensionality, but to density as well. The introduction of additional observations and estimates can also help to separate points that belong to different manifolds despite having the same dimensionality and density. We would also like to study the use of ellipsoids instead of balls in the counting process, in order to better follow the geometry of the intrinsic manifolds. Another aspect to study is the use of metrics better adapted to the nature of the data, instead of the Euclidean distance. At the theoretical level, the bias of the PMM model needs to be studied. Results in these directions will be reported elsewhere.

Acknowledgments: This work has been supported by ONR, DARPA, NSF, NGA, and the McKnight Foundation. We thank Prof. Persi Diaconis and Prof. René Vidal for important feedback and comments. We also thank Pablo Arias and Jérémie Jakubowicz for their help. GR was on sabbatical from the Universidad de la República, Uruguay, while performing this work.

References
[1] D. Barbara and P. Chen. Using the fractal dimension to cluster datasets. In Proceedings of the Sixth ACM SIGKDD, pages 260-264, 2000.
[2] M. Belkin and P. Niyogi. Laplacian eigenmaps and spectral techniques for embedding and clustering. In Advances in NIPS 14, 2002.
[3] M. Brand. Charting a manifold. In Advances in NIPS 16, 2002.
[4] J. A. Costa and A. O. Hero. Geodesic entropic graphs for dimension and entropy estimation in manifold learning. IEEE Trans. on Signal Processing, 52(8):2210-2221, 2004.
[5] A. Dempster, N. Laird, and D. Rubin. Maximum likelihood from incomplete data. Journal of the Royal Statistical Society Ser. B, 39:1-38, 1977.
[6] N. Friedman, O. Mosenzon, N. Slonim, and N. Tishby. Multivariate information bottleneck. In Seventeenth Conference UAI, pages 152-161, 2001.
[7] A. Gionis, A. Hinneburg, S. Papadimitriu, and P. Tsaparas. Dimension induced clustering. In Proceedings of the Eleventh ACM SIGKDD, pages 51-60, 2005.
[8] K. Huang, Y. Ma, and R. Vidal. Minimum effective dimension for mixtures of subspaces: A robust GPCA algorithm and its applications. In Proceedings of CVPR, pages 631-638, 2004.
[9] B. Kegl. Intrinsic dimension estimation using packing numbers. In Advances in NIPS 14, 2002.
[10] E. Levina and P. J. Bickel. Maximum likelihood estimation of intrinsic dimension. In Advances in NIPS 17, 2005.
[11] P. Mordohai and G. Medioni. Unsupervised dimensionality estimation and manifold learning in high-dimensional spaces by tensor voting. In IJCAI, page 798, 2005.
[12] S. T. Roweis and L. K. Saul. Nonlinear dimensionality reduction by locally linear embedding. Science, 290(5500):2323-2326, 2000.
[13] D. L. Snyder. Random Point Processes. Wiley, New York, 1975.
[14] R. Souvenir and R. Pless. Manifold clustering. In ICCV, pages 648-653, 2005.
[15] F. Takens. On the numerical determination of the dimension of an attractor. Lecture Notes in Mathematics: Dynamical Systems and Bifurcations, 1125:99-106, 1985.
[16] J. B. Tenenbaum, V. de Silva, and J. C. Langford. A global geometric framework for nonlinear dimensionality reduction. Science, 290(5500):2319-2323, 2000.
[17] N. Tishby, F. Pereira, and W. Bialek. The information bottleneck method. In Proceedings of the 37th Annual Allerton Conference on Communication, Control and Computing, pages 368-377, 1999.
Dynamic Foreground/Background Extraction from Images and Videos using Random Patches

Le Lu*
Integrated Data Systems Department, Siemens Corporate Research, Princeton, NJ 08540
[email protected]

Gregory Hager
Department of Computer Science, Johns Hopkins University, Baltimore, MD 21218
[email protected]

* The work was done while the first author was a graduate student at Johns Hopkins University.

Abstract

In this paper, we propose a novel exemplar-based approach to extract dynamic foreground regions from a changing background within a collection of images or a video sequence. By using image segmentation as a pre-processing step, we convert this traditional pixel-wise labeling problem into a lower-dimensional supervised, binary labeling procedure on image segments. Our approach consists of three steps. First, a set of random image patches is spatially and adaptively sampled within each segment. Second, these sets of extracted samples are formed into two "bags of patches" that model the foreground and background appearance, respectively. We perform a novel bidirectional consistency check between new patches from incoming frames and the current "bags of patches" to reject outliers, control model rigidity, and make the model adaptive to new observations. Within each bag, image patches are further partitioned and resampled to create an evolving appearance model. Finally, the foreground/background decision over segments in an image is formulated using an aggregation function defined on the similarity measurements of sampled patches relative to the foreground and background models. The essence of the algorithm is conceptually simple and can be implemented within a few hundred lines of Matlab code. We evaluate and validate the proposed approach on extensive real examples of object-level image mapping and tracking within a variety of challenging environments. We also show that it is straightforward to apply our problem formulation to non-rigid object tracking in difficult surveillance videos.

1 Introduction

In this paper, we study the problem of object-level figure/ground segmentation in images and video sequences. The core problem can be defined as follows: given an image X with known figure/ground labels L, infer the figure/ground labels L' of a new image X' closely related to X. For example, we may want to extract a walking person in an image using the figure/ground mask of the same person in another image of the same sequence. Our approach is based on training a classifier from the appearance of a pixel and its surrounding context (i.e., an image patch centered at the pixel) to recognize other similar pixels across images. To apply this process to a video sequence, we also evolve the appearance model over time.

A key element of our approach is the use of a prior segmentation to reduce the complexity of the segmentation process. As argued in [22], image segments are a more natural primitive for image modeling than pixels. More specifically, an image segmentation provides a natural dimensional reduction from the spatial resolution of the image to a much smaller set of spatially compact and relatively homogeneous regions. We can then focus on representing the appearance characteristics of these regions. Borrowing a term from [22], we can think of each region as a "superpixel" which represents a complex connected spatial region of the image using a rich set of derived image features.
We can then consider how to classify each superpixel (i.e., image segment) as foreground or background, and then project this classification back into the original image to create the pixel-level foreground/background segmentation we are interested in.

The original superpixel representation in [22, 19, 18] is a feature vector created from the image segment's color histogram [19], filter bank responses [22], oriented energy [18], and contourness [18]. These features are effective for image segmentation [18], or for finding perceptually important boundaries from segmentations via supervised training [22]. However, as shown in [17], these representations do not work well for matching different classes of image regions across images. Instead, we propose a set of spatially randomly sampled image patches as a non-parametric, statistical superpixel representation. This non-parametric "bag of patches" model¹ can be easily and robustly evolved with the spatial-temporal appearance information from video, while maintaining the model size (the number of image patches per bag) using adaptive sampling. Foreground/background classification is then posed as the problem of matching sets of random patches from the image with these models. Our major contributions are demonstrating the effectiveness and computational simplicity of a nonparametric random-patch representation for semantically labelling superpixels, and a novel bidirectional consistency check and resampling strategy for robust foreground/background appearance adaptation over time.

¹ Highly distinctive local features [16] are not adequate substitutes for image patches: their spatial sparseness limits their representativity within each individual image segment, especially for non-rigid, non-structural, and flexible foreground/background appearance.

Figure 1: (a) An example indoor image; (b) the segmentation result using [6], coded in random colors; (c) the boundary pixels between segments shown in red, with the image segments associated with the foreground (a walking person here) shown in blue; (d) the associated foreground/background mask. Notice that the color in (a) is not very saturated, a common condition in our indoor experiments, which use no specific lighting control.

We organize the paper as follows. We first describe several image-patch-based representations and the associated matching methods. In Section 3, the algorithm used in our approach is presented in detail. In Section 4, we demonstrate the validity of the proposed approach with experiments on real examples of object-level figure/ground image mapping and non-rigid object tracking under dynamic conditions, using videos of different resolutions. Finally, we summarize the contributions of the paper and discuss possible extensions and improvements.

2 Image Patch Representation and Matching

Building stable appearance representations of image patches is fundamental to our approach. Many derived features can be used to represent the appearance of an image patch. In this paper, we evaluate our algorithm based on: 1) an image patch's raw RGB intensity vector; 2) its mean color vector; 3) color + texture descriptors (filter bank responses or Haralick features [17]); and 4) PCA, LDA, and NDA (Nonparametric Discriminant Analysis) features [7, 3] computed on the raw RGB vectors. For completeness, we give a brief description of each of these techniques.

Texture descriptors: To compute texture descriptions, we first apply the Leung-Malik (LM) filter bank [13], which consists of 48 isotropic and anisotropic filters with 6 directions, 3 scales, and 2 phases. Each image patch is thus represented by a 48-component feature vector. The Haralick texture descriptor [10] was used for image classification in [17].
Haralick features are derived from the Gray Level Co-occurrence Matrix, a tabulation of how often different combinations of pixel brightness values (grey levels) occur in an image region. We selected 5 of the 14 texture descriptors in [10]: dissimilarity, Angular Second Moment (ASM), mean, standard deviation (STD), and correlation. For details, refer to [10, 17].

Dimension reduction representations: The Principal Component Analysis (PCA) algorithm is used to reduce the dimensionality of the raw color intensity vectors of image patches. PCA makes no prior assumption about the labels of the data. However, recall that we construct the "bag of patches" appearance model from sets of labelled image patches. This supervised information can be used to project the bags of patches into a subspace where they are best separated, using the Linear Discriminant Analysis (LDA) or Nonparametric Discriminant Analysis (NDA) algorithms [7, 3], which assume Gaussian or non-Gaussian class-specific distributions, respectively.

Patch matching: After image patches are represented using one of the above methods, we must match them against the foreground/background models. Two methods are investigated in this paper: nearest neighbor matching using the Euclidean distance, and KDE (Kernel Density Estimation) [12] in PCA/NDA subspaces. For nearest-neighbor matching we find, for each patch p, its nearest neighbors $p_n^F$, $p_n^B$ in the foreground and background bags, and then compute $d_p^F = \|p - p_n^F\|$ and $d_p^B = \|p - p_n^B\|$. Alternatively, an image patch's matching scores $m_p^F$ and $m_p^B$ are evaluated as probability density values of the KDE functions $KDE(p, \Omega^F)$ and $KDE(p, \Omega^B)$, where $\Omega^{F|B}$ are the bag-of-patches models. The segmentation-level classification is then performed as in Section 3.2.
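For illustration, the nearest-neighbor matching step can be sketched as follows (a minimal example of our own, with hypothetical function names; patches are assumed flattened into row vectors, and brute-force search is used):

```python
import numpy as np

def nn_distances(patches, fg_bag, bg_bag):
    """For each query patch (row of `patches`), return its nearest-neighbor
    distances d_p^F, d_p^B to the foreground and background bags of patches.
    All arrays are (N x d), with patches flattened to feature vectors."""
    def min_dist(queries, bag):
        # squared Euclidean distances via |a-b|^2 = |a|^2 - 2 a.b + |b|^2
        d2 = (queries ** 2).sum(1)[:, None] - 2 * queries @ bag.T \
             + (bag ** 2).sum(1)[None, :]
        return np.sqrt(np.maximum(d2.min(axis=1), 0.0))
    return min_dist(patches, fg_bag), min_dist(patches, bg_bag)
```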
3 Algorithms

We briefly summarize our labeling algorithm as follows. We assume that each image of interest has been segmented into spatial regions. A set of random image patches is spatially and adaptively sampled within each segment. These sets of extracted samples are formed into two "bags of patches" that model the foreground and background appearance, respectively. The foreground/background decision for any segment in a new image is computed using one of two aggregation functions on the appearance similarities between its interior image patches and the foreground and background models. Finally, for videos, new patches from new frames are integrated into each bag through a robust bidirectional consistency check, and all image patches are then partitioned and resampled to create an evolving appearance model. As described below, this process prunes classification inaccuracies in the nonparametric image patch representations and adapts them to current changes in foreground/background appearance. We describe each of these steps for video tracking of foreground/background segments in more detail below; image matching is treated as a special case by simply omitting steps 3 and 4 in Figure 2.

Non-parametric Patch Appearance Modelling-Matching Algorithm
Inputs: pre-segmented images $X_t$, $t = 1, 2, \dots, T$; label $L_1$.
Outputs: labels $L_t$, $t = 2, \dots, T$; two "bags of patches" appearance models $\Omega_T^{F|B}$ for foreground/background.
1. Sample segmentation-adaptive random image patches $\{P_1\}$ from image $X_1$.
2. Construct two new bags of patches $\Omega_1^{F|B}$ for foreground/background using the patches $\{P_1\}$ and label $L_1$; set t = 1.
3. Set t = t + 1; sample segmentation-adaptive random image patches $\{P_t\}$ from image $X_t$; match $\{P_t\}$ against $\Omega_{t-1}^{F|B}$ and classify the segments of $X_t$ to generate label $L_t$ by aggregation.
4. Classify and reject ambiguous patch samples, probable outliers, and redundant appearance samples among the newly extracted image patches $\{P_t\}$ relative to $\Omega_{t-1}^{F|B}$; then integrate the filtered $\{P_t\}$ into $\Omega_{t-1}^{F|B}$ and evaluate the probability of survival $p_s$ for each patch inside $\Omega_{t-1}^{F|B}$ against the original unprocessed $\{P_t\}$ (bidirectional consistency check).
5. Perform the random partition and resampling process inside $\Omega_{t-1}^{F|B}$, according to the normalized product of the probability of survival $p_s$ and the partition-wise sampling rate $\rho'$, to generate $\Omega_t^{F|B}$.
6. If t = T, output $L_t$, $t = 2, \dots, T$, and $\Omega_T^{F|B}$; exit. Otherwise go to (3).

Figure 2: Non-parametric Patch Appearance Modelling-Matching Algorithm.

Figure 3: Left: segment-adaptive random patch sampling from an image with known figure/ground labels. Green dots are samples for the background; dark brown dots are samples for the foreground. Right: segment-adaptive random patch sampling from a new image for figure/ground classification, shown as blue dots.

3.1 Sample Random Image Patches

We first employ an image segmentation algorithm² [6] to pre-segment all the images or video frames in our experiments. A typical segmentation result is shown in Figure 1. We use $X_t$, $t = 1, 2, \dots, T$, to denote a sequence of video frames. Given an image segment, we formulate its representation as a distribution over the appearance variation of all possible image patches extracted inside the segment. To keep this representation to a manageable size, we approximate this distribution by sampling a random subset of patches. We denote an image segment by $S_i$, with $S_i^F$ for a foreground segment and $S_i^B$ for a background segment, where i is the index of the (foreground/background) image segment within an image. Accordingly, $P_i$, $P_i^F$, and $P_i^B$ denote sets of random image patches sampled from $S_i$, $S_i^F$, and $S_i^B$, respectively. The cardinality $N_i$ of an image segment $S_i$ generated by [6] typically ranges from 50 to thousands of pixels. However, small and large superpixels are expected to have roughly the same amount of uniformity. Therefore the sampling rate $\rho_i$ of $S_i$, defined as $\rho_i = \mathrm{size}(P_i)/N_i$, should decrease with increasing $N_i$. For simplicity, we keep $\rho_i$ constant for all superpixels, unless $N_i$ is above a predefined threshold τ (typically 2500-3000), above which $\mathrm{size}(P_i)$ is held fixed. This sampling adaptivity is illustrated in Figure 3: large image segments have much more sparsely sampled patches than small image segments. In our experiments, this adaptive spatial sampling strategy is sufficient to represent image segments of different sizes.

² Because we are not focused on image segmentation algorithms, we choose Felzenszwalb's segmentation code, which generates good results and is publicly available at http://people.cs.uchicago.edu/?pff/segment/.
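The segment-adaptive sampling rule amounts to a constant rate capped by a segment-size threshold; a minimal sketch (our own helper, with assumed names and defaults matching the paper's typical settings):

```python
import numpy as np

def sample_patch_centers(segment_pixels, rho=0.06, tau=2500, rng=None):
    """Pick random patch-center locations inside one segment.

    segment_pixels : (N_i, 2) array of (row, col) coordinates of the segment.
    The number of samples grows linearly with the segment size N_i at rate
    rho, but is capped once N_i exceeds the threshold tau."""
    rng = rng or np.random.default_rng()
    n_i = len(segment_pixels)
    n_samples = max(int(rho * min(n_i, tau)), 1)
    idx = rng.choice(n_i, size=n_samples, replace=False)
    return segment_pixels[idx]
```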
3.2 Label Segments by Aggregating Over Random Patches

For an image segment $S_i$ from a new frame to be classified, we again first sample a set of random patches $P_i$ as its representative set of appearance samples. For each patch $p \in P_i$, we calculate its distances $d_p^F, d_p^B$ or matching scores $m_p^F, m_p^B$ with respect to the foreground and background appearance models, as described in Section 2. The decision of assigning $S_i$ to foreground or background is an aggregation over all $\{d_p^F, d_p^B\}$ or $\{m_p^F, m_p^B\}$, $p \in P_i$. Since $P_i$ is considered a set of i.i.d. samples of the appearance distribution of $S_i$, we use the average of $\{d_p^F, d_p^B\}$ or $\{m_p^F, m_p^B\}$ (i.e., first-order statistics) as its distances $D_{P_i}^F, D_{P_i}^B$ or fitness values $M_{P_i}^F, M_{P_i}^B$ with respect to the foreground/background models. In terms of the distances $\{d_p^F, d_p^B\}$, $D_{P_i}^F = \mathrm{mean}_{p \in P_i}(d_p^F)$ and $D_{P_i}^B = \mathrm{mean}_{p \in P_i}(d_p^B)$, and the segment's foreground/background fitness is set to the inverse of these distances: $M_{P_i}^F = 1/D_{P_i}^F$ and $M_{P_i}^B = 1/D_{P_i}^B$. In terms of the KDE matching scores $\{m_p^F, m_p^B\}$, $M_{P_i}^F = \mathrm{mean}_{p \in P_i}(m_p^F)$ and $M_{P_i}^B = \mathrm{mean}_{p \in P_i}(m_p^B)$. Finally, $S_i$ is classified as foreground if $M_{P_i}^F > M_{P_i}^B$, and vice versa. The median can also be employed as a robust aggregation operator, with no noticeable difference in performance. Another choice is to classify each $p \in P_i$ individually from $m_p^F$ and $m_p^B$ and take the majority foreground/background vote; the performance is similar to that of the mean and median.
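A minimal sketch of this aggregation rule, continuing the hypothetical `nn_distances` helper from Section 2:

```python
def classify_segment(patches, fg_bag, bg_bag):
    """Label one segment from its sampled patches: foreground iff the mean
    nearest-neighbor distance to the foreground bag is smaller, which is
    equivalent to the inverse-distance fitness M^F = 1/D^F exceeding
    M^B = 1/D^B."""
    d_f, d_b = nn_distances(patches, fg_bag, bg_bag)
    return 'foreground' if d_f.mean() < d_b.mean() else 'background'
```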
3.3 Construct a Robust Online Nonparametric Foreground/Background Appearance Model with Temporal Adaptation

From sets of random image patches extracted from superpixels with known figure/ground labels, two foreground/background "bags of patches" are composed. The bags are a non-parametric form of the foreground/background appearance distributions. When we intend to "track" the figure/ground model sequentially through a sequence, these models need to be updated by integrating new image patches extracted from new video frames. However, the size (number of patches) of each bag would grow unacceptably large if we did not also remove redundant information over time. More importantly, imperfect segmentation results from [6] can cause inaccurate segment-level figure/ground labels. For robust patch-level appearance modeling of $\Omega_t$, we propose a novel bidirectional consistency check and resampling strategy to handle these sources of noise and labelling uncertainty.

More precisely, we classify newly extracted image patches $\{P_t\}$ as $\{P_t^F\}$ or $\{P_t^B\}$ according to $\Omega_{t-1}^{F|B}$, and reject ambiguous patch samples whose distances $d_p^F, d_p^B$ to the respective $\Omega_{t-1}^{F|B}$ show no good contrast (simply, those for which the ratio between $d_p^F$ and $d_p^B$ falls in the range 0.8 to 1/0.8). We further sort the distance list of the newly classified foreground patches $\{P_t^F\}$ relative to $\Omega_{t-1}^F$, filtering out patches at the top of the list, whose distances are too large and which are probably outliers, and patches at the bottom of the list, whose distances are too small and which probably carry appearance redundant with $\Omega_{t-1}^F$.³ We perform the same process on $\{P_t^B\}$ with respect to $\Omega_{t-1}^B$. The filtered $\{P_t\}$ are then integrated into $\Omega_{t-1}^{F|B}$ to form $\Omega_{t-1}^{F'|B'}$, and we evaluate the probability of survival $p_s$ for each patch inside $\Omega_{t-1}^{F'|B'}$ against the original unprocessed $\{P_t\}$ with their labels.⁴

Next, we cluster all image patches of $\Omega_{t-1}^{F'|B'}$ into k partitions [8] and randomly resample image patches within each partition. This is roughly equivalent to finding the modes of an arbitrary distribution and sampling from each mode. Ideally, the resampling rate $\rho'$ should decrease with increasing partition size, analogous to the segment-wise sampling rate ρ. For simplicity, we define $\rho'$ as a constant for all partitions, unless a threshold $\tau'$ is set as the minimal required size⁵ of a partition after resampling. If we performed resampling directly over patches without partitioning, some modes of the appearance distribution might be mistakenly removed; this strategy instead represents all partitions with a sufficient number of image patches, regardless of their different sizes. In all, we resample the image patches of $\Omega_{t-1}^{F|B}$, according to the normalized product of the probability of survival $p_s$ and the partition-wise sampling rate $\rho'$, to generate $\Omega_t^{F|B}$. Since the expected bag model size is approximately fixed, the number of image patches from a given frame $X_t$ remaining in the bag decays exponentially in time.

The problem of partitioning the image patches in a bag can be formulated as the NP-hard k-center problem, defined as follows: given a data set of n points and a predefined cluster number k, find a partition of the points into k subgroups $P_1, P_2, \dots, P_k$ and data centers $c_1, c_2, \dots, c_k$ that minimize the maximum radius of the clusters, $\max_i \max_{p \in P_i} \|p - c_i\|$, where i indexes the clusters. Gonzalez [8] proposed an efficient greedy algorithm, farthest-point clustering, which gives an approximation factor of 2 of the optimum. The algorithm operates as follows: pick a random point $p_1$ as the first cluster center and add it to the center set C; for iterations $i = 2, \dots, k$, find the point $p_i$ with the largest distance to the current center set C, $d_i(p_i, C) = \min_{c \in C} \|p_i - c\|$, and add $p_i$ to C; finally, assign each data point to its nearest center and recompute the means of the clusters in C. Compared with the popular k-means algorithm, this algorithm is computationally efficient and theoretically bounded.⁶

³ Simply, we reject patches whose distances $d_p^F$ are larger than $\mathrm{mean}(d^F) + \gamma \cdot \mathrm{std}(d^F)$ or smaller than $\mathrm{mean}(d^F) - \gamma \cdot \mathrm{std}(d^F)$, computed over the newly classified foreground patches, where γ controls the range of accepted patch samples of $\Omega_{t-1}^{F|B}$ and is called the model rigidity.
⁴ For example, we compute the distance of each patch in $\Omega_{t-1}^{F'}$ to $\{P_t^F\}$ and convert these distances into survival probabilities using an exponential function of the negative covariance-normalized distances. Patches with smaller distances have higher survival chances during resampling, and vice versa. We perform the same process on $\Omega_{t-1}^{B'}$ with $\{P_t^B\}$.
⁵ All image patches from partitions that are already smaller than $\tau'$ are kept during resampling.
⁶ The random initialization of all k centers and the local iterative refinement of k-means, which is time-consuming in high-dimensional spaces and may converge to an undesirable local minimum, are both avoided.
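A minimal sketch of farthest-point clustering (our own illustration of Gonzalez's greedy algorithm [8], not code from the paper):

```python
import numpy as np

def farthest_point_clustering(points, k, rng=None):
    """Gonzalez's greedy 2-approximation for the k-center problem.

    points : (n, d) array. Returns (labels, cluster_means)."""
    rng = rng or np.random.default_rng()
    n = len(points)
    centers = [int(rng.integers(n))]                 # random first center
    d = np.linalg.norm(points - points[centers[0]], axis=1)
    for _ in range(1, k):
        centers.append(int(d.argmax()))              # farthest point from set C
        d = np.minimum(d, np.linalg.norm(points - points[centers[-1]], axis=1))
    # assign each point to its nearest center, then recompute the means
    center_pts = points[centers]
    labels = np.linalg.norm(points[:, None, :] - center_pts[None, :, :],
                            axis=2).argmin(axis=1)
    means = np.stack([points[labels == j].mean(axis=0) for j in range(k)])
    return labels, means
```

Each iteration only maintains the running minimum distance to the current center set, so the whole procedure is O(nk) distance evaluations.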
They are listed as follows: the nearest neighbor distance matching on the image patch?s mean color vector (MCV); raw color intensity vector of regular patch scanning (RCV) or segment-adaptive patch sampling over image (SCV); color + filter bank response (CFB); color + Haralick texture descriptor (CHA); PCA feature vector (PCA); NDA feature vector (NDA) and kernel density evaluation on PCA features (KDE). In general, 8000 ? 12000 random patches are sampled per image. There is no apparent difference on classification accuracy for the patch size ranging from 9 to 15 pixels and the sample rate from 0.02 to 0.10. The PCA/NDA feature vector has 20 dimensions, and KDE is evaluated on the first 3 ? 6 PCA features. Because the foreground figure has fewer of pixels than background, we conservatively measure the classification accuracy from the foreground?s detection precision and recall on pixels. Precision is the ratio of the number of correctly detected foreground pixels to the total number of detected foreground pixels; recall is is the ratio of the number of correctly detected foreground pixels to the total number of foreground pixels in the image. The patch size is 11 by 11 pixels, and the segmentwise patch sampling rate ? is fixed as 0.06, unless stated otherwise. Using 40 pairs of (720 ? 480) images with the labelled figure/ground segmentation, we compare their average classification accuracies in Tables 1. MCV 0.46 0.28 RCV 0.81 0.89 SCV 0.97 0.95 CFB 0.92 0.85 CHA 0.89 0.81 PCA 0.93 0.85 NDA 0.96 0.87 KDE 0.69 0.98 Table 1: Evaluation on classification accuracy (ratio). The first row is precision; the second row is recall. For figure/ground extraction accuracy, SCV has the best classification ratio using the raw color intensity vector without any dimension reduction. MCV has the worst accuracy, which shows that pixelcolor leads to poor separability between figure and ground in our data set. Four feature based representations, CFB, CHA, PCA, NDA with reduced dimensions, have similar performance, whereas NDA is slightly better than the others. KDE tends to be more biased towards the foreground class because background usually has a wider, flatter density distribution. The superiority of SCV over RCV proves that our segment-wise random patch sampling strategy is more effective at classifying image segments than regularly scanning the image, even with more samples. As shown in Figure 4 (b), some small or irregularly-shaped image segments do not have enough patch samples to produce stable classifications. (a) MCV (b) RCV (c) SCV (d) CFB (e) CHA (f) PCA (g) NDA (h) KDE Figure 4: An example of evaluation on object-level figure/ground image mapping. The labeled figure image segments are coded in blue. 4.2 Figure/Ground Segmentation Tracking with a Moving Camera From Figure 4 (h), we see KDE tends to produce some false positives for the foreground. However the problem can be effectively tackled by multiplying the appearance KDE by the spatial prior which is also formulated as a KDE function of image patch coordinates. By considering videos with complex appearance-changing figure/ground, imperfect segmentation results [6] are not completely avoidable which can cause superpixel based figure/ground labelling errors. However our robust bidirectional consistency check and resampling strategy, as shown below, enables to successfully track the dynamic figure/ground segmentations in challenging scenarios with outlier rejection, model rigidity control and temporal adaptation (as described in section 3.3). 
Karsten.avi shows a person walking in an uncontrolled indoor environment while tracked with a handheld camera. After we manually label frame 1, the foreground/background appearance model starts to develop, classifies new frames, and is updated online. Eight example tracking frames are shown in Figure 5. Notice the significant non-rigid deformations and large scale changes of the walking person, while the original background is completely substituted after the subject turns. In frame 258, we manually eliminate some false positives of the figure. The reason for this failure is that some image regions that were behind the subject begin to appear as the person walks from the left to the center of the image (starting from frame 220). Compared to the online foreground/background appearance models at that time, these newly appearing image regions have appearance quite different from both the foreground and the background, so the foreground's spatial prior dominates the classification. We leave this issue for future work.

Figure 5 (frames 12, 91, 155, 180, 221, 257, 308, 329): Eight example frames (720 by 480 pixels) from the video sequence Karsten.avi of 330 frames. The video is captured with a handheld Panasonic PV-GS120 in standard NTSC format. Notice the significant non-rigid deformations and large scale changes of the walking person, while the original background is completely substituted after the subject turns. Red pixels lie on segment boundaries; the tracked image segments associated with the foreground walking person are coded in blue.

4.3 Non-rigid Object Tracking from Surveillance Videos

We can also apply the nonparametric treatment of dynamic random patches in Figure 2 to tracking non-rigid objects of interest in surveillance videos. The difficulty is that surveillance cameras normally capture small non-rigid figures, such as a walking person or a running car, in low contrast and at low resolution. To adapt our method to this problem, we make the following modifications. Because the task becomes localizing the figure object automatically over time, we can simply model the figure/ground regions using rectangles, so no pre-segmentation [6] is needed. Random figure/ground patches are then extracted from the image regions within these two rectangles. Using the two sets of random image patches, we train an online classifier for the figure/ground classes at each time step, generate a figure-appearance confidence map for the next frame, and, similarly to [1], apply mean shift [4] to find the next object location by mode seeking. In our solution, the temporal evolution of the dynamic image patch appearance models is carried out by the bidirectional consistency check and resampling described in Section 3.3. Whereas [1] uses boosting for both temporal appearance model updating and classification, our online binary classification training can employ any off-the-shelf classifier, such as k-nearest neighbors (kNN) or a support vector machine (SVM). Our results compare favorably with state-of-the-art algorithms [1, 9], even under more challenging scenarios.

5 Conclusion and Discussion

Although quite simple both conceptually and computationally, our algorithm for dynamic foreground/background extraction in images and videos using non-parametric appearance models produces very promising and reliable results in a wide variety of circumstances.
For tracking figure/ground segments, to the best of our knowledge, this is the first attempt to solve the difficult "video matting" problem [15, 25] by robust and automatic learning. For surveillance video tracking, our results are very competitive with the state of the art [1, 9] under even more challenging conditions. Our approach does not depend on an image segmentation algorithm that fully respects the boundaries of the foreground object, and our novel bidirectional consistency check and resampling process has been demonstrated to be robust and adaptive. We leave the exploration of supervised dimension reduction and density modeling techniques on image patch sets, the optimal random patch sampling strategy, and self-tuned optimal patch size selection as future work.

In this paper, we extract foreground/background by classifying individual image segments. Figure/ground segmentation accuracy might be further improved by also modeling the spatial pairwise relationships among segments; this can be addressed using a generative or discriminative random field (MRF/DRF) model or a boosting method on logistic classifiers [11]. We focus on learning binary dynamic appearance models by assuming that figure and ground are reasonably separable in distribution. Other cues, such as object shape regularization and motion dynamics for tracking, can be combined to improve performance.

References
[1] S. Avidan. Ensemble Tracking. CVPR, 2005.
[2] Y. Boykov and M. Jolly. Interactive Graph Cuts for Optimal Boundary and Region Segmentation of Objects in N-D Images. ICCV, 2001.
[3] M. Bressan and J. Vitrià. Nonparametric discriminant analysis and nearest neighbor classification. Pattern Recognition Letters, 2003.
[4] D. Comaniciu and P. Meer. Mean shift: A robust approach toward feature space analysis. IEEE Trans. PAMI, 2002.
[5] A. Efros and T. Leung. Texture Synthesis by Non-parametric Sampling. ICCV, 1999.
[6] P. Felzenszwalb and D. Huttenlocher. Efficient Graph-Based Image Segmentation. IJCV, 2004.
[7] K. Fukunaga and J. Mantock. Nonparametric discriminant analysis. IEEE Trans. on PAMI, Nov. 1983.
[8] T. Gonzalez. Clustering to minimize the maximum intercluster distance. Theoretical Computer Science, 38:293-306, 1985.
[9] B. Han and L. Davis. On-Line Density-Based Appearance Modeling for Object Tracking. ICCV, 2005.
[10] R. Haralick, K. Shanmugam, and I. Dinstein. Texture features for image classification. IEEE Trans. on SMC, 1973.
[11] D. Hoiem, A. Efros, and M. Hebert. Automatic Photo Pop-up. SIGGRAPH, 2005.
[12] A. Ihler. Kernel Density Estimation Matlab Toolbox. http://ssg.mit.edu/~ihler/code/kde.shtml.
[13] T. Leung and J. Malik. Representing and Recognizing the Visual Appearance of Materials using Three-Dimensional Textons. IJCV, 2001.
[14] Y. Li, J. Sun, C.-K. Tang, and H.-Y. Shum. Lazy Snapping. SIGGRAPH, 2004.
[15] Y. Li, J. Sun, and H.-Y. Shum. Video Object Cut and Paste. SIGGRAPH, 2005.
[16] D. Lowe. Distinctive image features from scale-invariant keypoints. IJCV, 2004.
[17] L. Lu, K. Toyama, and G. Hager. A Two Level Approach for Scene Recognition. CVPR, 2005.
[18] J. Malik, S. Belongie, T. Leung, and J. Shi. Contour and Texture Analysis for Image Segmentation. IJCV, 2001.
[19] D. Martin, C. Fowlkes, and J. Malik. Learning to Detect Natural Image Boundaries Using Local Brightness, Color, and Texture Cues. IEEE Trans. on PAMI, 26(5):530-549, May 2004.
[20] A. Mittal and N. Paragios. Motion-based Background Subtraction using Adaptive Kernel Density Estimation. CVPR, 2004.
[21] E. Nowak, F. Jurie, and B. Triggs. Sampling Strategies for Bag-of-Features Image Classification. ECCV, 2006.
[22] X. Ren and J. Malik. Learning a classification model for segmentation. ICCV, 2003.
[23] C. Rother, V. Kolmogorov, and A. Blake. Interactive Foreground Extraction using Iterated Graph Cuts. SIGGRAPH, 2004.
[24] Y. Sheikh and M. Shah. Bayesian Object Detection in Dynamic Scenes. CVPR, 2005.
[25] J. Wang, P. Bhat, A. Colburn, M. Agrawala, and M. Cohen. Interactive Video Cutout. SIGGRAPH, 2005.
Bayesian Detection of Infrequent Differences in Sets of Time Series with Shared Structure

Jennifer Listgarten†, Radford M. Neal†, Sam T. Roweis†, Rachel Puckrin‡ and Sean Cutler‡
†Department of Computer Science, ‡Department of Botany, University of Toronto, Toronto, Ontario, M5S 3G4
{jenn,radford,roweis}@cs.toronto.edu, rachel [email protected], [email protected]

Abstract

We present a hierarchical Bayesian model for sets of related, but different, classes of time series data. Our model performs alignment simultaneously across all classes, while detecting and characterizing class-specific differences. During inference the model produces, for each class, a distribution over a canonical representation of the class. These class-specific canonical representations are automatically aligned to one another, preserving common sub-structures and highlighting differences. We apply our model to compare and contrast solenoid valve current data, and also liquid-chromatography-ultraviolet-diode-array data from a study of the plant Arabidopsis thaliana.

1 Aligning Time Series From Different Classes

Many practical problems over a wide range of domains require synthesizing information from several noisy examples of one or more categories in order to build a model which captures common structure and also learns the patterns of variability between categories. In time series analysis, these modeling goals manifest themselves in the tasks of alignment and difference detection. These tasks have diverse applicability, spanning speech & music processing, equipment & industrial plant diagnosis/monitoring, and analysis of biological time series such as microarray & liquid/gas chromatography-based laboratory data (including mass spectrometry and ultraviolet diode arrays). Although alignment and difference detection have been extensively studied as separate problems in the signal processing and statistical pattern recognition communities, to our knowledge no existing model performs both tasks in a unified way.

Single-class alignment algorithms attempt to align a set of time series all together, assuming that variability across different time series is attributable purely to noise. In many real-world situations, however, we have time series from multiple classes (categories), and our prior belief is that there is both substantial shared structure between the class distributions and, simultaneously, systematic (although often rare) differences between them. While in some circumstances (if differences are small and infrequent) single-class alignment can be applied to multi-class data, it is much more desirable to have a model which performs true multi-class alignment in a principled way, allowing for more refined and accurate modeling of the data.

In this paper, we introduce a novel hierarchical Bayesian model which simultaneously solves the multi-class alignment and difference detection tasks in a unified manner, as illustrated in Figure 1. The single-class alignment shown in this figure coerces the feature in region A for class 1 to be inappropriately collapsed in time, and the overall width of the main broad peak in class 2 to be inappropriately narrowed. In contrast, our multi-class model handles these features correctly.
Furthermore, because our algorithm does inference for a fully probabilistic model, we are able to obtain quantitative measures of the posterior uncertainty in our results, which, unlike the point estimates produced by most current approaches, allow us to assess our relative confidence in differences learned by the model. Our basic setup for multi-class alignment assumes the class labels are known for each time series, as is the case for most difference detection problems. However, as we discuss at the end of the paper, our model can be extended to the completely unsupervised case.

[Figure 1: six panels of valve-current traces ("normal" and "abnormal" classes); axis ticks omitted.]
Figure 1: Nine time series from the NASA valve solenoid current data set [4]. Four belong to a "normal" class, and five to an "abnormal" class. On all figures, the horizontal axis is time (or latent time for figures of latent traces and of observed time series aligned to latent traces), and the vertical axis is current amplitude. Top left: the raw, unaligned data. Middle left: the average of the unaligned data within each class (thick line), with thin lines showing one standard deviation on either side. Bottom left: the average of the aligned data (over MCMC samples) within each class, using the single-class alignment version of the model (no child traces), again with one-standard-deviation lines shown in the thinner style. Right: mean and one standard deviation over MCMC samples using the HB-CPM. Top right: parent trace. Middle right: class-specific energy impulses, with the topmost showing the class impulses for the less smooth class. Bottom right: child traces superimposed. Note that if one generates more HB-CPM MCMC samples, the parent cycles between the two classes, since the model has no preference for which class is seen as a modification of the other; the child classes remain stable, however.

2 A Hierarchical Bayesian Continuous Profile Model

Building on our previous Continuous Profile Model (CPM) [7], we propose a Hierarchical Bayesian Continuous Profile Model (HB-CPM) to address the problems of multi-class alignment and difference detection, together, for sets of sibling time series data, that is, replicate time series from several distinct but related classes. The HB-CPM is a generative model that allows simultaneous alignment of time series and also provides aligned canonical representations of each class, along with measures of uncertainty on these representations. Inference in the model can be used, for example, to detect and quantify similarities and differences in class composition. The HB-CPM extends the basic CPM in two significant ways: i) it addresses the multi-class rather than the single-class alignment problem, and ii) it uses a fully Bayesian framework rather than a maximum likelihood approach, allowing us to estimate uncertainty in both the alignments and the canonical representations.

Our model, depicted in Figure 2, assumes that each observed time series is generated as a noisy transformation of a single, class-specific latent trace. Each latent trace is an underlying, noiseless representation of the set of replicated, observable time series belonging to a single class. An observed time series is generated from this latent trace exactly as in the original CPM, by moving through a sequence of hidden states in a Markovian manner and emitting an observable value at each step, as with an HMM. Each hidden state corresponds to a "latent time"
in the latent trace. Thus different choices of hidden state sequences result in different nonlinear transformations of the underlying trace. The HB-CPM uses a separate latent trace for each class, which we call child traces. Crucially, each of these child traces is generated from a single parent trace (also unobserved), which captures the common structure among all of the classes. The joint prior distribution for the child traces in the HB-CPM model can be realized by first sampling a parent trace, and then, for each class, sampling a sparse "difference vector" which dictates how and where each child trace should differ from the common parent.

[Figure 2: graphical model showing the parent trace z, class impulses r^1 and r^2, child traces z^1 and z^2, and observed time series x^1, ..., x^6 for classes 1 and 2.]
Figure 2: Core elements of the HB-CPM, illustrated with two-class data (hidden and observed) drawn from the model's prior.

2.1 The Prior on Latent Traces

Let the vector x^k = (x_1^k, x_2^k, ..., x_N^k) represent the k-th observed scalar time series, and let w^k ∈ 1..C be the class label of this time series. Also, let z = (z_1, z_2, ..., z_M) be the parent trace, and z^c = (z_1^c, z_2^c, ..., z_M^c) be the child trace for the c-th class. During inference, posterior samples of z^c form a canonical representation of the observed time series in class c, and z contains their common sub-structure. Ideally, the length of the latent traces, M, would be very large relative to N, so that any experimental data could be mapped precisely to the correct underlying trace point. Aside from the computational impracticalities this would pose, great care to avoid overfitting would have to be taken. Thus in practice we have used M = (2 + ε)N (double the resolution, plus some slack on each end) in our experiments, and found this to be sufficient with ε < 0.2. Because the resolution of the latent traces is higher than that of the observed time series, experimental time can be made to effectively speed up or slow down by advancing along the latent trace in larger or smaller jumps.

As mentioned previously, the child traces in the HB-CPM inherit most of their structure from a common parent trace. The differences between child and parent are encoded in a difference vector for each class, d^c = (d_1^c, d_2^c, ..., d_M^c); normally, most elements of d^c are close to zero. Child traces are obtained by adding this difference vector to the parent trace: z^c = z + d^c. We model both the parent trace and the class-specific difference vectors with what we call an energy impulse chain, which is an undirected Markov chain in which neighbouring nodes are encouraged to be similar (i.e., smooth), and where this smoothness is perturbed by a set of marginally independent energy impulse nodes, with one energy impulse node attached to each node in the chain. For the difference vector of the c-th class, the corresponding energy impulses are denoted r^c = (r_1^c, r_2^c, ..., r_M^c), and for the parent trace the energy impulses are denoted r = (r_1, r_2, ..., r_M). Conditioned on the energy impulses, the probability of a difference vector is

p(d^c \mid r^c, \lambda^c, \nu^c) = \frac{1}{Z_{r^c}} \exp\left( -\frac{1}{2} \left[ \sum_{i=1}^{M-1} \frac{(d_i^c - d_{i+1}^c)^2}{\lambda^c} + \sum_{i=1}^{M} \frac{(d_i^c - r_i^c)^2}{\nu^c} \right] \right).   (1)

Here, Z_{r^c} is the normalizing constant for this probability density, λ^c controls the smoothness of the chain, and ν^c controls the influence of the energy impulses. Together, λ^c and ν^c also control the overall tightness of the distribution for d^c. Presently, we set all λ^c = λ_0 and, similarly, all ν^c = ν_0; that is, these do not differ between classes.
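As an illustration of the prior in Eq. (1), the sketch below builds the tridiagonal precision matrix implied by the smoothness and impulse terms and draws a difference vector conditioned on a given set of energy impulses. This is a minimal sketch, not the authors' code; the name `sample_difference_vector` and the use of dense NumPy linear algebra (rather than banded solvers) are our own choices for clarity.

```python
import numpy as np

def sample_difference_vector(r, lam, nu, rng):
    """Draw d ~ p(d | r, lam, nu) from Eq. (1).

    The exponent is -0.5 * (d' Q d - 2 d' r / nu + const), so d is
    Gaussian with tridiagonal precision Q and mean m solving Q m = r / nu.
    """
    M = len(r)
    Q = np.zeros((M, M))
    # Smoothness terms give 2/lam on the interior diagonal (1/lam at the
    # two ends) and -1/lam off-diagonal; the impulse terms add 1/nu.
    Q[np.arange(M), np.arange(M)] = 2.0 / lam + 1.0 / nu
    Q[0, 0] = Q[-1, -1] = 1.0 / lam + 1.0 / nu
    Q[np.arange(M - 1), np.arange(1, M)] = -1.0 / lam
    Q[np.arange(1, M), np.arange(M - 1)] = -1.0 / lam

    mean = np.linalg.solve(Q, r / nu)
    L = np.linalg.cholesky(Q)                 # Q = L L^T
    z = rng.standard_normal(M)
    return mean + np.linalg.solve(L.T, z)     # cov of L^{-T} z is Q^{-1}

rng = np.random.default_rng(0)
M = 200
r = np.where(rng.random(M) < 0.05,            # occasional "class-difference" impulses
             rng.normal(0.0, 3.0, M),
             rng.normal(0.0, 0.05, M))
d = sample_difference_vector(r, lam=0.1, nu=1.0, rng=rng)
```

Most entries of the resulting d stay near zero, with smooth excursions where large impulses occur, matching the intended behaviour of the difference vectors.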
Similarly, the conditional probability of the parent trace is

p(z \mid r, \lambda, \nu) = \frac{1}{Z_r} \exp\left( -\frac{1}{2} \left[ \sum_{i=1}^{M-1} \frac{(z_i - z_{i+1})^2}{\lambda} + \sum_{i=1}^{M} \frac{(z_i - r_i)^2}{\nu} \right] \right).   (2)

These probability densities are each multivariate Gaussian with tridiagonal precision matrices (corresponding to the Markov nature of the interactions).

Each component of the energy impulses for the parent, r_j, is drawn independently from a single univariate Gaussian, N(r_j | μ_par, s_par), whose mean and variance are in turn drawn from a Gaussian and an inverse-gamma, respectively. The class-specific difference-vector impulses, however, are drawn from a mixture of two zero-mean Gaussians: one "no difference" (inlier) Gaussian, and one "class-difference" (outlier) Gaussian. The means are zero so as to encourage difference vectors to be near zero (and thus child traces to be similar to the parent trace). Letting β_j^c denote the binary latent mixture component indicator variable for each r_j^c,

p(\beta_j^c) = \mathrm{Multinomial}(\beta_j^c \mid m_{in}^c, m_{out}^c) = (m_{in}^c)^{\beta_j^c} (m_{out}^c)^{1 - \beta_j^c},   (3)

p(r_j^c \mid \beta_j^c) = \begin{cases} N(r_j^c \mid 0, s_{in}^2), & \text{if } \beta_j^c = 1, \\ N(r_j^c \mid 0, s_{out}^2), & \text{if } \beta_j^c = 0. \end{cases}   (4)

Each Gaussian mixture variance has an inverse-gamma prior. For the "no difference" variance, s_{in}^2, the prior is set to have a very low mean (and not be overly dispersed), so that "no difference" regions truly have little difference from the parent class, while for the "class-difference" variance, s_{out}^2, the prior is set to have a larger mean, so as to model our belief that substantial class-specific differences do occasionally exist. The priors for λ^c, ν^c, λ, ν are each log-normal (inverse-gamma priors would not be conjugate in this model, so we use log-normals, which are easier to specify). Additionally, the mixing proportions, m_{in}^c and m_{out}^c, have a Dirichlet prior, which typically encodes our belief that the proportion of "class differences" is likely to be small.

2.2 The HMM Portion of the Model

Each observed x^k is modeled as being generated by an HMM conditioned on the appropriate child trace, z^{w^k}. The probability of an observed time series conditioned on a path of hidden time states, τ^k, and the child trace is given by

p(x^k \mid z^{w^k}, \tau^k) = \prod_{i=1}^{N} N(x_i^k \mid z^{w^k}_{\tau_i^k} u^k, \xi^k),

where ξ^k is the emission variance for time series k, and the scale factor u^k allows for constant, global, multiplicative rescaling. The HMM transition probabilities T^k(τ_{i-1}^k → τ_i^k) are multinomial within a limited range, with p^k(τ_i^k = a | τ_{i-1}^k = b) = ζ^k_{(a-b)} for (a - b) ∈ [1, J_τ], and p^k(τ_i^k = a | τ_{i-1}^k = b) = 0 for (a - b) < 1 or (a - b) > J_τ, where J_τ is the maximum allowable number of consecutive time states that can be advanced in a single transition. (Of course, Σ_{i=1}^{J_τ} ζ_i^k = 1.) This multinomial distribution, in turn, has a Dirichlet prior. The HMM emission variances, ξ^k, have an inverse-gamma prior. Additionally, the prior over the first hidden time state is a uniform distribution over a constant number of states, 1..Q, where Q defines how large a shift can exist between any two observed time series. The prior over each global scaling parameter, u^k, is a log-normal with fixed variance and a mean of zero, which encourages the scaling factors to remain near unity.
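To make the HMM portion concrete, here is a small sketch of a Viterbi-style decoding under the jump-constrained transition model above; the paper uses such a decoding only to initialize the sampler (see Section 3). This is our own illustrative code under assumed names such as `viterbi_align`; the emission and transition parameters are taken as given, and a stochastic traceback would replace the argmax steps.

```python
import numpy as np

def viterbi_align(x, z_child, u, xi, zeta, Q0):
    """Most probable hidden time-state path tau for one series x.

    x       : observed series, length N
    z_child : child trace, length M
    u, xi   : global scale and emission variance for this series
    zeta    : transition probabilities zeta[j-1] for jumps j = 1..J_tau
    Q0      : first state is uniform over states 0..Q0-1
    """
    N, M, J = len(x), len(z_child), len(zeta)
    # Log emission: N(x_i | z_tau * u, xi) for every (i, tau) pair.
    log_em = (-0.5 * (x[:, None] - z_child[None, :] * u) ** 2 / xi
              - 0.5 * np.log(2 * np.pi * xi))
    log_zeta = np.log(zeta)

    score = np.full((N, M), -np.inf)
    back = np.zeros((N, M), dtype=int)
    score[0, :Q0] = log_em[0, :Q0] - np.log(Q0)
    for i in range(1, N):
        for t in range(M):
            lo = max(0, t - J)              # allowed predecessors: t-J .. t-1
            if lo == t:
                continue
            b = np.arange(lo, t)
            cand = score[i - 1, lo:t] + log_zeta[t - b - 1]
            back[i, t] = lo + int(np.argmax(cand))
            score[i, t] = cand.max() + log_em[i, t]

    path = np.zeros(N, dtype=int)
    path[-1] = int(np.argmax(score[-1]))
    for i in range(N - 2, -1, -1):
        path[i] = back[i + 1, path[i + 1]]
    return path
```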
3 Posterior Inference of Alignments and Parameters by MCMC

Given a set of observed time series (and their associated class labels), the main computational operation to be performed in the HB-CPM is inference of the latent traces, alignment state paths and other model parameters. Exact inference is analytically intractable, but we are able to use Markov chain Monte Carlo (MCMC) methods to create an iterative algorithm which, if run for sufficiently long, produces samples from the correct posterior distribution. This posterior provides simultaneous alignments of all observed time series in all classes, and also, crucially, aligned canonical representations of each class, along with error bars on these representations, allowing for a principled approach to difference detection in time series data from different classes.

We may also wish to obtain a posterior estimate of some of our parameters conditioned on the data, and marginalized over the other parameters. In particular, we might be interested in obtaining the posterior over the hidden time state vectors for each time series, τ^k, which together provide a simultaneous, multi-class alignment of our data. We may, in addition or alternatively, be interested in the posterior of the child traces, z^c, which together characterize how the classes agree and disagree. The former may be more of interest for visualizing aligned observed time series, or for expanding aligned scalar time series out to a related vector time series, while the latter is more of interest when looking to characterize differences in multi-class, scalar time series data.

We group our parameters into blocks, and sample these blocks conditioned on the values of the other parameters (as in Gibbs sampling); however, when certain conditional distributions are not amenable to direct sampling, we use slice sampling [8]. The scalar conditional distributions for each of μ_par, s_par, m_{in}^c, m_{out}^c, β_j^c, ζ_i^k are known distributions, amenable to direct sampling. The conditional distributions for the scalars λ^c, ν^c, λ, ν and u^k are not tractable, and for each of these we use slice sampling (doubling out and shrinking). The conditional distribution for each of r and r^c is multivariate Gaussian, and we sample directly from each using a Cholesky decomposition of the covariance matrix:

p(r \mid z, \lambda, \nu) = \frac{1}{Z} \, p(z \mid r, \lambda, \nu) \, p(r) = N(r \mid c, C),   (5)

p(r^c \mid d^c, \lambda^c, \nu^c) = \frac{1}{Z} \, p(d^c \mid r^c, \lambda^c, \nu^c) \, p(r^c) = N(r^c \mid b, B),   (6)

where, using I to denote the identity matrix and 1 the all-ones vector,

C = \left( \frac{S}{\nu^2} + \frac{I}{s_{par}} \right)^{-1}, \quad c = C \left( \frac{z}{\nu} + \frac{\mu_{par}}{s_{par}} \mathbf{1} \right),   (7)

B = \left( \frac{S^{\lambda}}{(\nu^c)^2} + (v^c)^{-1} \right)^{-1}, \quad b = B \, \frac{d^c}{\nu^c}.   (8)

The diagonal matrix v^c consists of the mixture component variances (s_{in}^2 or s_{out}^2). S^{-1} [or (S^λ)^{-1}] is the tridiagonal precision matrix of the multivariate normal distribution p(z | r, λ, ν) [or p(d^c | r^c, λ^c, ν^c)], and has entries S^{-1}_{j,j} = 2/λ + 1/ν for j = 2..(M-1), S^{-1}_{j,j} = 1/λ + 1/ν for j = 1, M, and S^{-1}_{j,j+1} = S^{-1}_{j+1,j} = -1/λ [or analogously for (S^λ)^{-1}, with λ^c and ν^c]. The computation of C and B can be made more efficient by using the Sherman-Morrison-Woodbury matrix inversion lemma. For example,

B = (\nu^c)^2 (S^{\lambda})^{-1} - (\nu^c)^2 (S^{\lambda})^{-1} \left( v^c + (\nu^c)^2 (S^{\lambda})^{-1} \right)^{-1} (\nu^c)^2 (S^{\lambda})^{-1},

so that only the analytically available (S^λ)^{-1} is needed, and we no longer need to invert it to obtain S^λ.

The conditional distributions of each of z, z^c are also multivariate Gaussians. However, because of the underlying Markov dependencies, their precision matrices are tridiagonal, and hence we can use belief propagation, in the style of Kalman filtering, followed by a stochastic traceback, to sample from them efficiently. Thus each can be sampled in time proportional to M, rather than M^3 as required for a general multivariate Gaussian.
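The Woodbury rearrangement above is easy to sanity-check numerically. The sketch below, our own illustration rather than the authors' code, builds a random symmetric tridiagonal precision matrix standing in for (S^λ)^{-1}, a diagonal v^c, and compares the direct inverse of B^{-1} = S^λ/(ν^c)^2 + (v^c)^{-1} against the Woodbury form:

```python
import numpy as np

rng = np.random.default_rng(1)
M, nu = 50, 0.7

# A random symmetric tridiagonal precision matrix (diagonally dominant,
# hence positive definite), standing in for (S^lambda)^{-1}.
off = -rng.random(M - 1)
S_inv = np.diag(rng.random(M) + 2.0) + np.diag(off, 1) + np.diag(off, -1)
S = np.linalg.inv(S_inv)
v = np.diag(rng.random(M) + 0.1)       # diagonal of mixture variances

B_direct = np.linalg.inv(S / nu**2 + np.linalg.inv(v))

A_inv = nu**2 * S_inv                  # inverse of S/nu^2, with no inversion of S
B_woodbury = A_inv - A_inv @ np.linalg.inv(v + A_inv) @ A_inv

assert np.allclose(B_direct, B_woodbury)
```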
Lastly, to sample from the conditional distribution of the hidden time state vector for each time series, τ^k, we run belief propagation (analogous to the HMM forward-backward algorithm) followed by a stochastic traceback.

In our experiments, the parent trace was initialized by averaging one smoothed example from each class. The child traces were initialized to the initial parent trace. The HMM states were initialized by a Viterbi decoding with respect to the initial values of the other parameters. The scaling factors were initialized to unity, and the child energy impulses to zero. MCMC was run for 5000 iterations, with convergence generally realized in less than 1000 iterations.

4 Experiments and Results

We demonstrate the use of the HB-CPM on two data sets. The first is part of the NASA shuttle valve data [4], which measures valve solenoid current against time for some "normal" runs and some "abnormal" runs. Measurements were taken at a rate of 1 ms per sample, with 1000 samples per time series. We subsampled the data by a factor of 7 in time, since it was extremely dense. The results of performing posterior inference in our model on this two-class data set are shown in Figure 1. They nicely match our intuition of what makes a good solution.

In our experiments, we also compared our model to a simple "single-class" version of the HB-CPM in which we simply remove the child trace level of the model, letting all observed data in both classes depend directly on one single parent trace. The single-class alignment, while doing a reasonable job, does so by coercing the two classes to look more similar than they should. This is evident in one particular region labeled on the graph and discussed in the legend. Essentially, a single-class alignment causes us to lose class-specific fine detail: the precise information we seek to retain for difference detection.

The second data set is from a botany study which uses reverse-phase HPLC (high-performance liquid chromatography) as a high-throughput screening method to identify genes involved in xenobiotic uptake and metabolism in the model plant Arabidopsis thaliana. Liquid-chromatography (LC) techniques are currently being developed and refined with the aim of providing a robust platform with which to detect differences in biological organisms, be they plants, animals or humans. Detected differences can reveal new fundamental biological insight, or can be applied in more clinical settings. LC-mass spectrometry technology has recently undergone explosive growth in tackling the problem of biomarker discovery: for example, detecting biological markers that can predict treatment outcome or severity of disease, thereby providing the potential for improved health care and better understanding of the mechanisms of drug and disease. In botany, LC-UV data is used to help understand the uptake and metabolism of compounds in plants by looking for differences across experimental conditions, and it is this type of data that we use here. LC separates mixtures of analytes on the basis of some chemical property (hydrophobicity, for the reverse-phase LC used to generate our data). Components of the analyte in our data set were detected as they came off the LC column with a Diode Array Detector (DAD), yielding UV-visible spectra collected at 540 time points (we used the 280 nm band, which is informative for these experiments). We performed background subtraction [2] and then subsampled this data by a factor of four.
This is a three-class data set: the first class is untreated plant extract, and the other two classes consist of this same plant treated with compounds that were identified as possessing robust uptake in vivo, and which hence, when metabolized, provide a differential LC-UV signal of interest. Figure 3 gives an overview of the LC-UV results, while Figure 4 zooms in on a particular area of interest to highlight how subtle differences can be detected by the HB-CPM but not by a single-class alignment scheme. As with the NASA data set, a single-class alignment coerces features that are in fact different across classes to look the same, thereby preventing us from detecting them. Recall that this data set consists of a "no treatment" plant extract and two "treatments" of this same plant. Though our model was not informed of these special relationships, it nevertheless elegantly captures this structure by assigning almost no energy impulses to the "no treatment" class, meaning that this class is essentially the parent trace, and allowing the "treatment" classes to diverge from it, thereby nicely matching the reality of the situation.

All averaging over MCMC runs shown is over 4000 samples, after a 1000-sample burn-in period, which took around 3 hours for the NASA data and 5 hours for the LC data set, on machines with dual 3 GHz Pentium 4 processors.

5 Related Work

While much work has been done on time series alignment, and on comparison/clustering of time series, none of this work, to our knowledge, directly addresses the problem presented in this paper: simultaneously aligning and comparing sets of related time series in order to characterize how they differ from one another. The classical algorithm for aligning time series is Dynamic Time Warping (DTW) [10]. DTW works on pairs of time series, aligning one time series to a specified reference time series, in a non-probabilistic way, with no explicit allowance for differences between related time series. More recently, Gaffney et al. [5] jointly clustered and aligned time series data from different classes. However, their model does not attempt to put time series from different classes into correspondence with one another; only time series within a class are aligned to one another. Ziv Bar-Joseph et al. [1] use a similar approach to cluster and align microarray time series data. Ramsay et al. [9] have introduced a curve clustering model in which a time warping function, h(t), for each time series is learned by way of learning its relative curvature, parameterized with order-one B-spline coefficients. This model accounts for
systematic changes in the range and domain of time series in a way that aligns curves with the same fundamental shape. However, their method does not allow class-specific differences between shapes to be taken into account.

[Figure 3: panels of LC-UV time series for the three classes; axis ticks omitted.]
Figure 3: Seven time series from each of three classes of LC-UV data. On all figures, the horizontal axis is time (or latent time for figures of latent traces and of observed time series aligned to latent traces), and the vertical axis is the log of UV absorbance. Top left: the raw, unaligned data. Middle left: the average of the unaligned data within each class (thick line), with thin lines showing one standard deviation on either side. Bottom left: the average of the aligned data within each class, using the single-class alignment version of the model (no child traces), again with one-standard-deviation lines shown in the thinner style. Right: mean and one standard deviation over MCMC samples using the HB-CPM model. Top right: parent trace. Middle right: class-specific energy impulses, with the top-most showing the class impulses for the "no treatment" class. Bottom right: child traces superimposed. See Figure 4 for a zoom-in around the arrow.

The anomaly detection (AD) literature deals with related, yet distinct, problems. For example, Chan et al. [3] build a model of one class of time series data (they use the same NASA valve data as in this paper), and then match test data, possibly belonging to another class (e.g., "abnormal" shuttle valve data), to this model to obtain an anomaly score. Emphasis in the AD community is on detecting abnormal events relative to a normal baseline, in an on-line manner, rather than on comparing and contrasting two or more classes from a dataset containing examples of all classes. The problem of "elastic curve matching" is addressed in [6], where a target time series that best matches a query series is found by mapping the problem of finding the best matching subsequence to the problem of finding the cheapest path in a DAG (directed acyclic graph).

6 Discussion and Conclusion

We have introduced a hierarchical Bayesian model to perform detection of rare differences between sets of related time series, a problem which arises across a wide range of domains. By training our model, we obtain the posterior distribution over a set of class-specific canonical representations, which are aligned in a way that preserves their common sub-structures yet retains and highlights important differences. This model can be extended in several interesting and useful ways. One small modification could be useful for the LC-UV data set presented in this paper, in which one of the classes was "no treatment" while the other two were each a different "treatment". We might model the "no treatment" class as the parent trace, and each of the treatments as a child trace, so that the direct comparison of interest would be made more explicit. Another direction would be to apply the HB-CPM in a completely unsupervised setting, where we learn not only the canonical class representations but also obtain the posterior over the class labels by introducing a latent class indicator variable. Lastly, one could use a model with cyclical latent traces to model cyclic data such as electrocardiogram (ECG) and climate data. In such a model, an observed trace being generated by the model would be allowed to cycle back to the start of the latent trace, and the smoothness constraints on the trace would be extended to apply to the beginning and end of the traces, coercing these to be similar.

[Figure 4: zoomed traces (three left panels) and child energy impulses (right panel); axis ticks omitted.]
Figure 4: Left: a zoom-in of the data displayed in Figure 3, from the region of time 100-150 (labeled in that figure in latent time, not observed time). Top left: mean and standard deviation of the unaligned data. Middle left: mean and standard deviation of the single-class alignment. Bottom left: mean and standard deviation of the child traces from the HB-CPM. A case in point of a difference that could be detected with the HB-CPM but not in the raw or single-class-aligned data is the difference occurring at time point 127. Right: the mean and standard deviation of the child energy impulses, with dashed lines showing correspondences with the child traces in the bottom left panel.
Such a model would allow one to do anomaly detection in cyclic data, as well as segmentation.

Acknowledgments: Thanks to David Ross and Roland Memisevic for useful discussions, and to Ben Marlin for his Matlab slice sampling code.

References
[1] Z. Bar-Joseph, G. Gerber, D. K. Gifford, T. Jaakkola, and I. Simon. A new approach to analyzing gene expression time series data. In RECOMB, pages 39-48, 2002.
[2] H. Boelens, R. Dijkstra, P. Eilers, F. Fitzpatrick, and J. Westerhuis. New background correction method for liquid chromatography with diode array detection, infrared spectroscopic detection and Raman spectroscopic detection. Journal of Chromatography A, 1057:21-30, 2004.
[3] P. K. Chan and M. V. Mahoney. Modeling multiple time series for anomaly detection. In ICDM, 2005.
[4] B. Ferrell and S. Santuro. NASA shuttle valve data. http://www.cs.fit.edu/~pkc/nasa/data/, 2005.
[5] S. J. Gaffney and P. Smyth. Joint probabilistic curve clustering and alignment. In Advances in Neural Information Processing Systems 17, 2005.
[6] L. Latecki, V. Megalooikonomou, Q. Wang, R. Lakaemper, C. Ratanamahatana, and E. Keogh. Elastic partial matching of time series, 2005.
[7] J. Listgarten, R. M. Neal, S. T. Roweis, and A. Emili. Multiple alignment of continuous time series. In Advances in Neural Information Processing Systems 17, 2005.
[8] R. M. Neal. Slice sampling. Annals of Statistics, 31:705-767, 2003.
[9] J. Ramsay and X. Li. Curve registration. Journal of the Royal Statistical Society (B), 60, 1998.
[10] H. Sakoe and S. Chiba. Dynamic programming algorithm for spoken word recognition. Readings in Speech Recognition, pages 159-165, 1990.
Recursive ICA

Honghao Shan, Lingyun Zhang, Garrison W. Cottrell
Department of Computer Science and Engineering, University of California, San Diego
La Jolla, CA 92093-0404
{hshan,lingyun,gary}@cs.ucsd.edu

Abstract

Independent Component Analysis (ICA) is a popular method for extracting independent features from visual data. However, as a fundamentally linear technique, there is always nonlinear residual redundancy that is not captured by ICA. Hence there have been many attempts to create a hierarchical version of ICA, but so far none of the approaches have a natural way to apply them more than once. Here we show that there is a relatively simple technique that transforms the absolute values of the outputs of a previous application of ICA into a normal distribution, to which ICA may be applied again. This results in a recursive ICA algorithm that may be applied any number of times in order to extract higher-order structure from previous layers.

1 Introduction

Linear implementations of Barlow's efficient encoding hypothesis^1, such as ICA [1] and sparse coding [2], have been used to explain the very first layers of auditory and visual information processing in the cerebral cortex [1, 2, 3]. Nevertheless, many interesting structures are nonlinear functions of the stimulus inputs, which are unlikely to be captured by a linear model. For example, for natural images, it has been observed that there is still significant statistical dependency between the variances of the filter outputs [4]. Several extensions of the linear ICA algorithm [5, 6, 7, 8] have been proposed to reduce such residual nonlinear redundancy, with an explicit or implicit aim of explaining higher perceptual layers, such as complex cells in V1. However, none of these extensions are obviously recursive, so it is unclear how to generalize them to multi-layer models in order to account for even higher perceptual layers.

In this paper, we propose a hierarchical redundancy reduction model in which the problem of modeling the residual nonlinear dependency is transformed into another LEE problem, as illustrated in Figure 1. There are at least two reasons why we want to do this. First, this transforms a new and hard problem into an easier and previously solved problem. Second, different parts of the brain share similar anatomical structures, and it is likely that they are also working under similar computational principles. For example, fMRI studies have shown that removal of one sensory modality leads to neural reorganization of the remaining modalities [9], suggesting that the same principles must be at work across modalities. Since the LEE model has been so successful in explaining the very first layer of perceptual information processing in the cerebral cortex, it seems reasonable to hypothesize that higher layers might also be explained by a LEE model.

The problem at hand is then how to transform the problem of modeling the residual nonlinear dependency into a LEE problem. To achieve this goal, we first need to make clear what input constraints are imposed by the LEE model. This is done in Section 2. After that, we will derive the transformation function that "prepares" the output of ICA for its recursive application, and then test this model on natural images.

^1 We refer to such algorithms as linear efficient encoding (LEE) algorithms throughout this paper.

Figure 1: The RICA (Recursive ICA) model. After the first layer of linear efficient encoding, sensory inputs X are now represented by S.
The signs of S are discarded. Then coordinate-wise nonlinear activation functions g_i are applied to each dimension of S, so that the input to the next layer, X' = g(|S|), satisfies the input constraints imposed by the LEE model. The statistical structure among the dimensions of X' is then extracted by the next layer of linear efficient encoding.

2 Bayesian Explanation of Linear Efficient Encoding

It has long been hypothesized that the functional role of perception is to capture the statistical structure of the sensory stimuli so that appropriate action decisions can be made to maximize the chance of survival (see [10] for a brief review). Barlow provided the insight that the statistical structure is measured by the redundancy of the stimuli, and that completely independent stimuli cannot be distinguished from random noise [11]. He also hypothesized that one way for the neural system to capture the statistical structure is to remove the redundancy in the sensory outputs. This so-called redundancy reduction principle forms the foundation of ICA algorithms. Algorithms following the sparse coding principle are also able to find interesting structures when applied to natural image patches [2]. Later it was realized that although ICA and sparse coding algorithms started out from different principles and goals, their implementations can be summarized in the same Bayesian framework [12]. In this framework, the observed data X is assumed to be generated by some underlying signal sources S:

X = AS + ε,

where A is a linear mixing matrix and ε is additive Gaussian noise. Also, it is assumed that the features S_j are independent from each other, and that the marginal distribution of each S_j is sparse. For the sparse coding algorithm described in [2], although it started from the goal of finding sparse features, the algorithm's implementation implicitly assumes the independence of the S_j's. For the infomax ICA algorithm [1], although it aimed at finding independent features, the algorithm's implementation assumes a sparse marginal prior (p(S_j) ∝ sech(S_j)). The energy-based ICA algorithm using a Student-t prior [13] can also be placed in this framework for complete representations. The moral here, though, is that in practice the available samples are always insufficient to allow efficient inference without making some assumptions about the data distribution. A sparseness and independence assumption about the data distribution is appropriate because (1) independence allows the system to capture the statistical structure of the stimuli, as described above, and (2) a sparse distribution of the sensory outputs is energy-economical. This is important for the survival of a biological system, considering that the human brain constitutes 2% of the body weight but accounts for 20% of its resting metabolism [14].

The linear efficient encoding model captures the important characteristics of sensory coding: capturing the statistical structure (independence) of sensory stimuli with minimum cost (sparseness). This generative model describes our assumption about the data. How well the algorithms perform depends on how well this assumption matches the real data. Hence, it is very important to check what kind of data the model generates. If the input data strongly deviate from what can be generated by the model (in other words, if the observed data strongly deviate from our assumption), the results could be errant no matter how much effort we put into the model parameter estimation.
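As a quick illustration of what the generative assumption X = AS + ε produces, the following minimal sketch (our own, not from the paper) draws sparse, independent sources from a Laplacian, mixes them linearly, and adds Gaussian noise; inspecting such synthetic data against real inputs is one way to perform the check suggested above.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sources, n_obs, n_samples = 64, 64, 10000

A = rng.standard_normal((n_obs, n_sources))        # linear mixing matrix
S = rng.laplace(0.0, 1.0, (n_sources, n_samples))  # sparse, independent sources
eps = 0.05 * rng.standard_normal((n_obs, n_samples))
X = A @ S + eps                                    # data under the LEE generative model

# Mixing many sparse sources drives each marginal X_i toward a symmetric,
# near-Gaussian shape (excess kurtosis near 0), as argued in Section 2.
exkurt = ((X - X.mean(1, keepdims=True)) ** 4).mean(1) / X.var(1) ** 2 - 3
print(exkurt.mean())
```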
As to the LEE model, there is a clear constraint on the marginal distribution of the X_i. Here we limit our study to those ICA algorithms that produce basis functions resembling the simple cells' receptive fields when applied to natural image patches. Such algorithms [1, 13, 15] typically adopt a symmetric^2 and sparse marginal prior for the S_j's that can be well approximated by a generalized Gaussian distribution. In fact, if we apply linear filters resembling the receptive fields of simple cells to natural images, the distribution of the filter responses can be well approximated by a generalized Gaussian distribution. Here we show that such a prior implies that the X_i's should also be symmetric.

A random variable X is symmetric if and only if its characteristic function is real-valued. In the above Bayesian framework, we assume that the S_j's are independent and that the marginal distribution of each S_j is symmetric about zero. The characteristic function is then given by:

E[e^{\sqrt{-1}\, t X_i}] = E[e^{\sqrt{-1}\, t \sum_j A_{i,j} S_j}]   (since X_i = \sum_j A_{i,j} S_j)   (1)
= E[\prod_j e^{\sqrt{-1}\, t A_{i,j} S_j}]   (2)
= \prod_j E[e^{\sqrt{-1}\, t A_{i,j} S_j}]   (since the S_j's are independent from each other).   (3)

Since each A_{i,j} S_j is symmetric, it is easy to see that X_i must also be symmetric.

A surprising fact about our perceptual system is that there does exist a process that regularizes the marginal distribution of the sensory inputs. In the visual system, for example, the data is whitened in the retina and the LGN before transmission to V1. The functional role of this process is generally described as removing pairwise redundancy, as natural images (as well as natural sounds) obey the 1/f power law in the frequency domain [16]. However, as shown in Figure 2, it also regulates the marginal distribution of the input to follow a generalized-Gaussian-like distribution^3. This phenomenon has long been observed. We believe that besides the functional role of removing second-order redundancy, whitening might also serve the role of formatting the sensory input for the cortex. For example, it has been observed [1] that without pre-whitening the images, the basis functions learned by ICA do not cover a broad range of spatial frequencies.

Figure 2: The distribution of pixel values of whitened images follows a generalized Gaussian distribution (see Section 2). The shape parameter of the distribution is about 1.094, which means that the marginal distribution of the inputs to the LEE model is already very sparse.

^2 p(X) is symmetric if X and -X have the same distribution.
^3 For all the image patches we tried, the distribution of pixel values on whitened image patches can be well fitted by a generalized Gaussian distribution. This is true even for small image patches. The only exception we have discovered occurs when the original image contains only binomial noise.

In this work, we will make the assumption that the marginal distribution of the inputs to the LEE model is a generalized Gaussian distribution, as this enables the LEE model to work more efficiently. Also, as just discussed, at least for sound and image processing, there is an effective way to achieve this neurally.

3 Reducing Residual Redundancy

For the filter outputs S of a layer of LEE, we will first discard information that provides no interesting structure (i.e., redundancy), and then find an activation function such that the marginal distribution obeys the input requirements of the next layer.

3.1 Discarding the Signs

It has been argued that the signs of the filter outputs do not carry any redundancy [5].
The models proposed in [6, 7, 8] also implicitly or explicitly discard the signs. We have observed the usefulness of this process in a study of natural image statistics. We applied the FastICA algorithm [15] to 20x20 natural image patches and studied the joint distribution of the filter outputs. As shown in the left plot of Figure 3, p(s_i | s_j) = p(s_i | -s_j); i.e., the conditional probability of s_i given s_j depends only on the absolute value of s_j. In other words, the signs of S do not provide any dependency among the dimensions. By removing the sign and applying our transformation (described in the next section), the nonlinear dependency between the s_i's is exposed (see Figure 3, right).

Figure 3: Left: s_1 and s_2 are ICA filter responses on natural image patches. The red dashed lines plot the linear regression between them. Right: after the coordinate-wise nonlinear transformation, the two features are no longer uncorrelated.

3.2 Nonlinear Activation Function

The only problem left is to find the coordinate-wise activation function g_i for each dimension of S such that X'_i = g_i(|S_i|) follows a generalized Gaussian distribution, as required by the next layer of LEE. In this work, we make the transformed features have a normal distribution. By doing so, we force the LEE model of the higher layer to set more of the A'_{j,i} to nonzero values (so that the Central Limit Theorem takes effect to make X'_i a Gaussian distribution), which leads to more global structures at the higher layer. We used two methods to find this activation function in our experiments.

Parametric Activation Function. Assume s approximately follows a generalized Gaussian distribution (GGD). The probability density function of a GGD is given by:

f(s; \alpha, \beta) = \frac{\beta}{2\alpha\Gamma(1/\beta)} \exp\left\{ -\left|\frac{s}{\alpha}\right|^{\beta} \right\},   (4)

where α > 0 is a scale parameter, β > 0 is a shape parameter, and Γ denotes the gamma function. These two parameters can be estimated efficiently by an iterative algorithm developed by [17]. s is then transformed into a normally distributed N(0, 1) random variable by the function g:

u = g(|s|) = F^{-1}\left( \frac{\gamma(|s|^{\beta}/\alpha^{\beta},\, 1/\beta)}{\Gamma(1/\beta)} \right),   (5)

where F denotes the cumulative distribution function (cdf) of the standard normal distribution and γ denotes the incomplete gamma function. This transformation can be seen as three consecutive steps:

- Discard the sign: u ← |s|; now u has pdf g(u; \alpha, \beta) = \frac{\beta}{\alpha\Gamma(1/\beta)} \exp\{-(u/\alpha)^{\beta}\}, 0 ≤ u < ∞, and cdf \frac{\gamma(u^{\beta}/\alpha^{\beta},\, 1/\beta)}{\Gamma(1/\beta)}, 0 ≤ u < ∞.
- Transform to a uniform distribution U[0, 1] by applying its own cdf: u ← \frac{\gamma(u^{\beta}/\alpha^{\beta},\, 1/\beta)}{\Gamma(1/\beta)}.
- Transform to a Gaussian distribution by applying the inverse cdf of N(0, 1): u ← F^{-1}(u).

Nonparametric Activation Function. When the number of samples N is sufficiently large, a non-parametric activation function works more efficiently. In this approach, all the samples |S_i| are sorted in ascending order. For each sample s, cdf(|s|) is approximated by the ratio of its ranking in the list to N. Then u = F^{-1}(\widehat{cdf}(|s|)) will approximately follow the standard normal distribution. Note that since u_i depends only on the rank order of |s_i|, the results would be the same if the signs were discarded by taking s_i² instead.
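Both activation functions are straightforward to implement. The sketch below is our own minimal version, not the authors' code: the parametric path estimates (α, β) by moment matching (a simpler stand-in for the iterative estimator of [17]) and applies Eq. (5) via the regularized incomplete gamma function, while the nonparametric path uses the rank-based approximation of the cdf.

```python
import numpy as np
from scipy.special import gamma, gammainc, ndtri  # ndtri = inverse normal cdf
from scipy.optimize import brentq

def fit_ggd_moments(s):
    """Estimate GGD (alpha, beta) by matching E[s^2] and (E|s|)^2.

    For a GGD, E[s^2] / (E|s|)^2 = Gamma(1/b) Gamma(3/b) / Gamma(2/b)^2,
    which is monotone in b and can be inverted by root finding.
    The bracket below assumes sparse (leptokurtic) data.
    """
    ratio = np.mean(s ** 2) / np.mean(np.abs(s)) ** 2
    f = lambda b: gamma(1 / b) * gamma(3 / b) / gamma(2 / b) ** 2 - ratio
    beta = brentq(f, 0.1, 10.0)
    alpha = np.sqrt(np.mean(s ** 2) * gamma(1 / beta) / gamma(3 / beta))
    return alpha, beta

def parametric_activation(s):
    """Eq. (5): GGD cdf of |s|, then the inverse standard-normal cdf."""
    alpha, beta = fit_ggd_moments(s)
    u = gammainc(1 / beta, (np.abs(s) / alpha) ** beta)  # regularized lower inc. gamma
    return ndtri(np.clip(u, 1e-12, 1 - 1e-12))

def nonparametric_activation(s):
    """Rank-based cdf estimate of |s|, then the inverse standard-normal cdf."""
    a = np.abs(s)
    ranks = np.argsort(np.argsort(a))                    # 0 .. N-1
    return ndtri((ranks + 0.5) / len(a))
```

The half-rank offset in the nonparametric version is one common convention for keeping the estimated cdf strictly inside (0, 1); the paper simply uses the ranking ratio.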
We ran the FastICA algorithm [15] and obtained 397 basis functions. As reported for other models, the basis functions are Gabor-like filters (Figure 4). The nonparametric method was used to transform the marginal distribution of the outputs' absolute values to a standard normal distribution. Then the FastICA algorithm was applied again to retrieve 100 basis functions⁵. We adopted the visualization method employed by [12] to investigate what kind of structures the second-layer units are fond of. The basis functions are fitted to Gabor filter functions using a gradient descent algorithm [12]. The connection weights from a layer-2 unit to layer-1 units are shown in Figure 5, arranged by either the center or the frequency/orientation of the fitted Gabor filters. The layer-2 units are qualitatively similar to those found in [18]. Some units welcome strong activation of layer-1 units within a certain orientation range but have no preference for locations, while others have a location preference but welcome activation of layer-1 units of all frequencies and orientations, and some develop a picky appetite for both.

Again, the nonparametric method was used to transform the marginal distribution of the absolute values of the outputs from the second layer to a standard normal distribution, and FastICA was applied to retrieve 20 basis functions. We had no initial guess of what kind of statistical structure these third-layer units might capture. The activation maps of a couple of these units, however, seemed to suggest that they might be tuned to respond to complicated textures. In particular, one unit seems more activated by seemingly blank background, while another seems to like textures of leaves (Figure 6). We think that a larger database than merely 10 images, and larger image patches, would probably be helpful for producing cleaner high-level units. The same procedure can be repeated for multiple layers. However, until we develop better methods for analyzing the representations developed by these deeply embedded units, we will leave this for future work.

⁴ http://redwood.berkeley.edu/bruno/sparsenet/
⁵ This reduction in the number of units follows the example of [18]. In general, there appears to be less information in later layers (as assessed by eigenvalue analysis), most likely due to the discarding of the sign.

Figure 4: A subset of the 397 ICA image basis functions. Each basis function is 20x20 pixels. They are 2D Gabor-like filters.

Figure 5: Sample units from the second layer. The upper panel arranges the connection weights from layer-2 units to layer-1 units by the centers of the fitted Gabor filters. Every point corresponds to one basis function of the first layer, located at the center of the fitted Gabor filter. Warm colors represent strong positive connections; cold colors represent negative connections. For example, the leftmost unit prefers strong activation of layer-1 units located on the right and weak activation of layer-1 units on the left. The lower panel arranges the connection weights by the frequencies and the orientations of the fitted Gabor filters. Now every point corresponds to the Gabor filter's frequency and orientation (in polar coordinates). The third leftmost unit welcomes strong activation of Gabor filters whose orientations are around 34° but prefers no/little activation from those whose orientations are around 14°.
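Before turning to the discussion, the layer-stacking procedure used in these experiments can be summarized compactly. The sketch below is a minimal rendition using scikit-learn's FastICA in place of the implementation of [15], with the nonparametric activation from Section 3.2; the component counts mirror the 397/100/20 used above:

```python
import numpy as np
from sklearn.decomposition import FastICA
from scipy.stats import norm, rankdata

def gaussianize(S):
    """Apply the rank-based activation of Section 3.2 to each row of S."""
    ranks = np.apply_along_axis(rankdata, 1, np.abs(S))
    return norm.ppf((ranks - 0.5) / S.shape[1])

def recursive_ica(patches, n_components=(397, 100, 20)):
    """patches: (n_samples, n_pixels) pre-whitened image patches.
    Returns (ica_model, filter_outputs) per layer."""
    X, layers = patches, []
    for k in n_components:
        ica = FastICA(n_components=k, max_iter=500)
        S = ica.fit_transform(X).T          # (k, n_samples) filter outputs
        layers.append((ica, S))
        X = gaussianize(S).T                # formatted input for the next layer
    return layers
```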
5 Discussion

The key idea of our model is to transform the high-order residual redundancy into linear dependency that can be easily exploited again by the LEE model. By using activation functions that depend on the marginal distribution of the outputs, a normal Gaussian interface is provided at every layer. This procedure can then repeat itself, and a hierarchical model with the same structure at every level can thus be constructed. As the redundancy is reduced progressively along the layers, statistical structures are also captured to progressively higher orders.

Our simulation of a three-layer Recursive ICA shows the effectiveness of our model. The first layer, not surprisingly, produces the Gabor-like basis functions that linear ICA always does. The second layer, however, produces basis functions that qualitatively resemble those produced by a previous hierarchical generative model [7]. This is remarkable given that our model is essentially a filtering model with no assumptions about underlying independent variables, merely targeting redundancy reduction. The advantage of our model is the theoretical simplicity of generalizing to a third layer or more. For the Karklin and Lewicki model, the assumption that the ultimate independent causal variables are two layers away from the images has to be reworked for a three-layer system. It is not clear how the variables at every layer should affect the next when an extra layer is added. Osindero et al. [8] employed an energy-based model. The energy function used at the first layer made it essentially a linear ICA algorithm, thus it also produces Gabor-like filters. The first-layer outputs are squared to discard the signs and then fed to the next layer. The inputs for the second layer are thus all positive and bear a very different marginal distribution from those for the first layer. The energy function is changed accordingly, and the second layer is essentially doing nonnegative ICA. The output of this layer, however, will all be positive, which makes discarding the signs no longer an effective way of exposing higher-order dependence. Thus, to extend to another layer, new activation functions and a new energy function must be derived.

The third layer of our model produces some interesting results in that some units seem to have preferences for complicated textures (Figure 6). However, as the statistical structure represented here must be of very high order, we are still looking for an effective visualization method. Also, as units at the second layer have larger receptive fields than those at the first layer, it is reasonable to expect the third layer to bear even larger ones. We believe that a wider range of visual structure would be picked up by the third-layer units with a larger patch size on a larger training set.

Figure 6: Activation maps on two images (upper and lower panels, respectively) for two units per layer. The leftmost two images are the raw images. From the second-left column to the rightmost column are activation maps of two units from the first layer to the third, respectively. The first-layer units respond to small local edges, the second-layer units respond to larger borders, and the third-layer units seem to respond to large areas of texture.

Acknowledgments

We thank Eric Wiewiora, Lei Zhang and members of GURU for helpful discussions. This work was supported by NIH grant MH57075 to GWC.

References

[1] Anthony J. Bell and Terrence J. Sejnowski. The "independent components" of natural scenes are edge filters.
Vision Research, 37(23):3327–3338, 1997.
[2] Bruno A. Olshausen and David J. Field. Emergence of simple-cell receptive field properties by learning a sparse code for natural images. Nature, 381:607–609, 1996.
[3] Michael S. Lewicki. Efficient coding of natural sounds. Nature Neuroscience, 5(4):356–363, 2002.
[4] Odelia Schwartz and Eero P. Simoncelli. Natural signal statistics and sensory gain control. Nature Neuroscience, 4(8):819–825, 2001.
[5] Aapo Hyvarinen and Patrik O. Hoyer. A two-layer sparse coding model learns simple and complex cell receptive fields and topography from natural images. Vision Research, 41(18):2413–2423, 2001.
[6] Martin J. Wainwright and Eero P. Simoncelli. Scale mixtures of Gaussians and the statistics of natural images. In Advances in Neural Information Processing Systems, volume 12, pages 855–861, Cambridge, MA, May 2000. MIT Press.
[7] Yan Karklin and Michael S. Lewicki. A hierarchical Bayesian model for learning non-linear statistical regularities in non-stationary natural signals. Neural Computation, 17(2):397–423, 2005.
[8] Simon Osindero, Max Welling, and Geoffrey E. Hinton. Topographic product models applied to natural scene statistics. Neural Computation, 18:381–414, 2005.
[9] Eva M. Finney, Ione Fine, and Karen R. Dobkins. Visual stimuli activate auditory cortex in the deaf. Nature Neuroscience, 4:1171–1173, 2001.
[10] Horace B. Barlow. Redundancy reduction revisited. Network: Computation in Neural Systems, 12:241–253, 2001.
[11] Horace B. Barlow. Possible principles underlying the transformation of sensory messages. In Walter A. Rosenblith, editor, Sensory Communication, pages 217–234. MIT Press, Cambridge, MA, USA, 1961.
[12] Michael S. Lewicki and Bruno A. Olshausen. A probabilistic framework for the adaptation and comparison of image codes. Journal of the Optical Society of America A, 16(7):1587–1601, 1999.
[13] Yee Whye Teh, Max Welling, Simon Osindero, and Geoffrey E. Hinton. Energy-based models for sparse overcomplete representations. Journal of Machine Learning Research, 4:1235–1260, 2003.
[14] David Attwell and Simon B. Laughlin. An energy budget for signaling in the grey matter of the brain. Journal of Cerebral Blood Flow and Metabolism, 21(10):1133–1145, 2001.
[15] Aapo Hyvarinen and Erkki Oja. A fast fixed-point algorithm for independent component analysis. Neural Computation, 9(7):1483–1492, 1997.
[16] David J. Field. What is the goal of sensory coding? Neural Computation, 6(4):559–601, 1994.
[17] Kai-Sheng Song. A globally convergent and consistent method for estimating the shape parameter of a generalized Gaussian distribution. IEEE Transactions on Information Theory, 52(2):510–527, 2006.
[18] Yan Karklin and Michael S. Lewicki. Learning higher-order structures in natural images. Network: Computation in Neural Systems, 14:483–499, 2003.
Mixture Regression for Covariate Shift

Amos J. Storkey
Institute of Adaptive and Neural Computation
School of Informatics, University of Edinburgh
a.storkey@ed.ac.uk

Masashi Sugiyama
Department of Computer Science
Tokyo Institute of Technology
sugi@cs.titech.ac.jp

Abstract

In supervised learning there is a typical presumption that the training and test points are taken from the same distribution. In practice this assumption is commonly violated. The situation where the training and test data come from different distributions is called covariate shift. Recent work has examined techniques for dealing with covariate shift in terms of minimisation of generalisation error. As yet the literature lacks a Bayesian generative perspective on this problem. This paper tackles this issue for regression models. Recent work on covariate shift can be understood in terms of mixture regression. Using this view, we obtain a general approach to regression under covariate shift, which reproduces previous work as a special case. The main advantages of this new formulation over previous models for covariate shift are that we no longer need to presume the test and training densities are known, the regression and density estimation are combined into a single procedure, and previous methods are reproduced as special cases of this procedure, shedding light on the implicit assumptions the methods are making.

1 Introduction

There is a common presumption in developing supervised methods that the distribution of training points used for learning supervised models will match the distribution of points seen in a new test scenario. The expectation that the training and test points follow the same distribution is explicitly stated in [2, p. 10], is an assumption of empirical risk minimisation [see e.g. 9, p. 25], and is implicit in the common practice of randomly splitting given data into a "training set" and a "test set", where the latter is used in assessing performance [5, p. 482-495].

This paper, then, is concerned with the following issue. A set of real-valued training data pairs of the form (x, y) is provided to train a model for a supervised learning problem. In addition, data of the form x is provided from one (or more) test environments where the model will be used. The question to be addressed is "How should we predict a value of y given a value x from within that particular test environment?" Cases where test scenarios truly match the training data are probably rare. The problem of mismatch has been grappled with in literature from a number of fields, and has become known as covariate shift [14]. Specific examples of covariate shift include situations in reinforcement learning [c.f. 13] and bio-informatics [c.f. 1]. The common issue of sample selection bias [7] is a particular case of covariate shift.

Much of the recent analysis of covariate shift has been made in the context of assessing the asymptotic bias of various estimators [15]. In general it has been noted that in the case of mismatched models (i.e. where the model from which the training data is generated is not included in the training model class), some typical estimators, such as least squares approaches, produce biased asymptotic estimators [14]. It might appear that the presumption of matched models in Bayesian analysis means covariate shift is not an issue: failure or otherwise under situations of covariate shift is solved by valid choice for the prior distribution over conditional models.
The difficulty with this dismissal of the subject is that modelling conditional distributions alone is not always valid. In fact we can categorise at least three different types of covariate shift:

1. Independent covariate shift: $P_{\rm train}(y|x) = P_{\rm test}(y|x)$, but $P_{\rm train}(x) \neq P_{\rm test}(x)$.
2. Dependent prior probability change: $P_{\rm train}(x|y) = P_{\rm test}(x|y)$, but $P_{\rm train}(y) \neq P_{\rm test}(y)$.
3. Latent prior probability change: $P_{\rm train}(x, y|r) = P_{\rm test}(x, y|r)$ for all values of some latent variable $r$, but $P_{\rm train}(r) \neq P_{\rm test}(r)$.

Let us presume that we are only interested in the quality of the conditional model $P_{\rm test}(y|x)$. Then Case 1 is the only one of the above where covariate shift will have no effect on modelling. Case 2 is the well-known situation of class prior probability change and, for example, is considered in comparing the benefits of a naive Bayes model, which allows for class prior probability change, and discriminant models, which typically do not. Case 3 involves a more general assumption, and arguably can be used to cover most situations of covariate shift, by incorporating any known structural characteristics of the problem into some latent variable $r$. Change in the distribution of $x$ points implicitly informs us about variation in the targets $y$ via the shift in the latent variable $r$, which is the causal factor for the change.

The purpose of this paper is to provide a generative framework for analysis of covariate shift. The main advantages of this new formulation over previous approaches are:

- It provides an explicit model for the changes occurring in situations of covariate shift, and hence the predictions that result from it.
- There is no need to presume the training and test distributions are known. Furthermore the test covariates are also used as part of the model estimation procedure, resulting in better predictions.
- Previous results, such as Importance Weighted Least Squares, are special cases of this method with explicit presumptions that can be relaxed to gain more general models. Hence this paper is a natural extension to the existing work.
- Utilising the test covariate distribution gives performance benefits over using the same model for training data alone.
- All the usual machinery for mixtures of experts is available, and so this approach allows model selection and many natural extensions.

Outline. In Section 2, related work is discussed, before the problem is formally specified and a general model is derived in Section 3. A specific form of mixture regression model is formulated and an Expectation Maximisation solution is given in Section 3.1. The specific relationship to Importance Weighted Least Squares is discussed in Section 3.1.2. Test examples are given in Section 4. The results and methods are discussed in Section 5.

2 Prior work

Covariate shift will be interpreted, in the context of this work, using mixture of regressor models, where the regression model is dependent on a latent class variable. Clustered regression models have been discussed widely [4, 18, 8, 16]. The benefits of the mixture of regressor approach for heterogeneous data were discussed in [17], but not formulated specifically for the problem of covariate shift. This paper establishes for the first time the relationship between the mixture of regressor model and the typical statistical results in the literature on covariate shift.
The main differences of our approach from a standard mixture of regressors formalism are that we utilise the training and test distributions as part of the model rather than using only a conditional model, and that we allow coupling of regressors across different mixture components. The main significance with regard to the literature on covariate shift is that we establish covariate shift within a general probabilistic modelling paradigm and hence extend the standard techniques to more general methods, which are also applicable when the training and test distributions are not explicitly given. The mixture of regressors form for (x, y) used in this paper is a specific form of mixture of experts [10]. Hence hierarchical extensions are also possible in the form of [11].

The problem of sample selection bias is related to covariate shift. Sample selection bias has been discussed in [19], where the authors estimate the distribution determining the bias for a classification problem. The problem of sample selection bias differs from the case in this paper as here there is no fundamental requirement of distribution overlap between the training and test sets. First, each can have zero density in regions where the other is non-zero. Second, the presumption is different: rather than there being a sample rejection process that characterises the difference between training and test sets, there is a sample production process that differs.

3 Framework for Covariate Shift

This paper follows most others in considering the restricted case of a single training and single test set. Each datum x is assumed to have been generated from one of a number of data sources using a mixture distribution corresponding to the source. The proportions of each of the sources vary across the training and test datasets. Hence, in the context of this paper, we understand covariate shift to be effected by a change in the contribution of different sources to the data.

The motivation of the framework in this paper is that there is a latent feature set upon which each dataset is dependent, and the variations between the two datasets depend upon variation of the proportions, but not the form, of those latent features. This is characterised by presuming each data source is a member of one of two different sets. Each of the two sets of sources is also associated with a regression model. The two sets of sources have the following characteristics:

- Source set 1 corresponds to sources that may occur in the test data, and potentially also in the training data, and are associated with regression model $P_1(y|x)$.
- Source set 2 corresponds to sources that occur only in the training data, and are associated with regression model $P_2(y|x)$.

By taking this approach we note that we will be able to separate out effects that we expect to be only characteristics of the training data from effects that are common across training and test sets. The full generative model for the observed data consists of the model for the training data D and the model for the test data T. The test data is used only to determine the nature of the covariate shift, and consists only of the covariates x, and not any targets y. We emphasise that we do not presume to have seen the test data we wish to predict. Rather a prior model is built for the training and test data, and this is then conditioned on the information from the training data and the known covariates for the test data, but not the unknown targets.
3.1 Mixture Regression for Covariate Shift

In this section the full model is introduced. This significantly extends the previous work on covariate shift, in that the model allows for unknown training and test distributions, and utilises a mixture model approach for representing the relationship between the two. In Section 3.1.2, we will show how the previous results on covariate shift are special cases of the general model.

We will develop this formalism for any parametric form for the regressors $P(y|x)$. In fact this restriction is mainly for ease of explanation; the method can be used with non-parametric models too, and will be tested in the case of Gaussian process models¹. The model takes the following form:

- The distributions of the training data and test data are denoted $P_D$ and $P_T$ respectively, and are unknown in general.
- Source set 1 consists of $M$ mixture distributions, where mixture $t$ is denoted $P_{1t}(x)$. Each of the components is associated² with regression model $P_1(y|x)$.
- Source set 2 consists of $M_2$ mixture distributions, where mixture $t$ is denoted $P_{2t}(x)$. Each of the components is associated with the regression model $P_2(y|x)$.
- The training and test data distributions take the following form:

$$P_D(x) = \sum_t \big[\gamma_1 \beta^D_{1t} P_{1t}(x) + \gamma_2 \beta^D_{2t} P_{2t}(x)\big] \quad \text{and} \quad P_T(x) = \sum_t \beta^T_{1t} P_{1t}(x) \qquad (1)$$

Hence $\gamma_1$ and $\gamma_2$ are parameters for the proportions of the two source sets in the training data, $\beta^D_{1t}$ are the relative proportions of each mixture from source set 1 in the training data, and $\beta^D_{2t}$ are the relative proportions of each mixture from source set 2 in the training data. Finally $\beta^T_{1t}$ are the proportions of each mixture from source set 1 in the test data. All these parameters are presumed unknown. At some points in the paper it will be presumed the mixtures are Gaussian, when the form $N(x; m, K)$ will be used to denote the Gaussian distribution function of $x$, with mean $m$ and covariance $K$.

For a parametric model, with the collection of mixture parameters denoted by $\Phi$, the collection of regression parameters denoted by $\Theta$, and the mixing proportions $\gamma$ and $\beta$, we have the full probabilistic model

$$P(\{s^\mu, t^\mu, y^\mu, x^\mu \,|\, \mu \in D\}, \{t^\nu, x^\nu \,|\, \nu \in T\} \,|\, \Phi, \Theta, \gamma, \beta) = \prod_{\mu \in D} P(s^\mu|\gamma)\, P(t^\mu|\beta, s^\mu)\, P_{s^\mu t^\mu}(x^\mu|\Phi)\, P_{s^\mu}(y^\mu|x^\mu, \Theta) \prod_{\nu \in T} P(t^\nu|\beta)\, P_{1t^\nu}(x^\nu|\Phi), \qquad (2)$$

where $s^\mu$ denotes the source set used to generate the data point $\mu$, and $t^\mu$ denotes the particular mixture from that source set used to generate the data point $\mu$. In words, this says that the model for the training dataset involves sampling the particular source set $s^\mu$, then the mixture component $t^\mu$ from that particular source set. Given these, we then sample an $x^\mu$ from the relevant mixture and a $y^\mu$ conditionally on $x^\mu$ from the relevant regressor. The same procedure is followed for the test set, except now there is only one source set to consider.

¹ The primary restriction is that we need to be able to compute standard EM responsibilities for a given regressor; hence for Gaussian processes a variational approximation is needed to do this.
² If a component $i$ is associated with a regression model $j$, this means that any datum $x$ generated from the mixture component $i$ will also have a corresponding $y$ generated from the associated regression model $P_j(y|x)$.

3.1.1 EM algorithm

A maximum likelihood solution for the parameters $(\Phi, \Theta, \gamma, \beta)$ can be obtained for this model (given the training data and test covariates) using Expectation Maximisation (EM) [3]. The derivations are standard EM calculations (see e.g. [2]), and hence are not reiterated here.
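For concreteness, a minimal EM skeleton for a heavily reduced instance of this model is sketched below: one Gaussian component per source set ($M = M_2 = 1$), scalar data, linear regressors, and a fixed regression noise variance. All names and settings are illustrative; the full update equations are given next.

```python
import numpy as np

def norm_pdf(x, m, v):
    return np.exp(-0.5 * (x - m) ** 2 / v) / np.sqrt(2 * np.pi * v)

def wls(x, y, w):
    # Weighted least squares fit of y ~ a*x + b (the reduced form of eq. 7).
    X = np.stack([x, np.ones_like(x)], axis=1)
    sw = np.sqrt(w)
    a, b = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)[0]
    return a, b

def em_covariate_shift(xd, yd, xt, n_iter=50, noise=0.25):
    """Minimal EM: one Gaussian per source set, linear regressors, 1-D data."""
    m = np.array([xt.mean(), xd.mean()]); v = np.array([xt.var(), xd.var()])
    gam = np.array([0.5, 0.5]); coef = [(0.0, yd.mean()), (0.0, yd.mean())]
    for _ in range(n_iter):
        # E-step: responsibilities on training data (cf. eq. 4)...
        lik = np.stack([
            gam[s] * norm_pdf(xd, m[s], v[s])
                   * norm_pdf(yd, coef[s][0] * xd + coef[s][1], noise)
            for s in (0, 1)])
        r = lik / lik.sum(axis=0)                       # shape (2, |D|)
        # ...and on test covariates: source set 1 (index 0) only.
        rt = np.array([np.ones_like(xt), np.zeros_like(xt)])
        # M-step: pooled means/variances, proportions, regressors (cf. eqs. 5-7).
        for s in (0, 1):
            w_all = np.concatenate([r[s], rt[s]])
            x_all = np.concatenate([xd, xt])
            m[s] = np.average(x_all, weights=w_all)
            v[s] = np.average((x_all - m[s]) ** 2, weights=w_all) + 1e-9
            coef[s] = wls(xd, yd, r[s] + 1e-12)
        gam = r.mean(axis=1)
    return m, v, gam, coef   # test predictions use regressor 1, i.e. coef[0]
```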
Denote the responsibility of mixture $i$ for data point $\mu$ by $r^\mu_i$. Then the application of EM involves maximisation of

$$\log P(\{y^\mu, x^\mu \,|\, \mu \in D\}, \{x^\nu \,|\, \nu \in T\} \,|\, \Phi, \Theta, \gamma, \beta) \qquad (3)$$

with respect to the parameters through iteration of E and M steps. The E-step update uses the current parameter values to compute the responsibility (denoted by $r$) of each mixture $1t$ and $2t$ for each data point $\mu$ in the training set and each data point $\nu$ in the test set using

$$r^\mu_{st} = \frac{\gamma_s \beta^D_{st} P_{st}(x^\mu|\Phi)\, P_s(y^\mu|x^\mu, \Theta)}{\sum_{s,t} \gamma_s \beta^D_{st} P_{st}(x^\mu|\Phi)\, P_s(y^\mu|x^\mu, \Theta)} \quad \text{and} \quad r^\nu_{1t} = \frac{\beta^T_{1t} P_{1t}(x^\nu|\Phi)}{\sum_t \beta^T_{1t} P_{1t}(x^\nu|\Phi)}. \qquad (4)$$

We set $r^\nu_{2t} = 0$ for $\nu \in T$, as none of these mixtures is represented in the test set. The parameters of the mixture model distributions are then updated with the usual M steps for the relevant mixture component, and the regression parameters are updated using maximum responsibility-weighted likelihood. When each mixture component is a Gaussian of the form $N(x; m_{st}, K_{st})$, when we have a Gaussian regression error term, and denoting the (vector of) regression functions by $f_s$ for each source set $s$, these update rules are:

$$m_{st} = \frac{\sum_{\mu \in (D,T)} r^\mu_{st}\, x^\mu}{\sum_{\mu \in (D,T)} r^\mu_{st}}, \qquad K_{st} = \frac{\sum_{\mu \in (D,T)} r^\mu_{st}\, (x^\mu - m_{st})(x^\mu - m_{st})^T}{\sum_{\mu \in (D,T)} r^\mu_{st}} \qquad (5)$$

$$\gamma_s = \frac{1}{|D|} \sum_{\mu \in D,\, t} r^\mu_{st}, \qquad \beta^D_{st} = \frac{1}{|D|}\frac{1}{\gamma_s} \sum_{\mu \in D} r^\mu_{st}, \qquad \beta^T_{1t} = \frac{1}{|T|} \sum_{\nu \in T} r^\nu_{1t} \qquad (6)$$

$$f_s = \arg\min_{f_s} \Big[\sum_{\mu, t} r^\mu_{st}\, (f_s(x^\mu) - y^\mu)^T (f_s(x^\mu) - y^\mu)\Big] \qquad (7)$$

Given the learnt model, inference is straightforward. The test data is associated with a single regression model $P_1(y|x)$, and so the predictive distribution for the test set is the learnt predictor $P_1(y|x^i)$ for each point $x^i$ in the test set.

3.1.2 Importance Weighted Least Squares

Previous results in modelling covariate shift can be obtained as special cases of the general approach taken in this paper. Suppose we make the assumptions that $P_D$ and $P_T$ are known, and that source set 1 contains just one component, which must be $P_T$ by definition. Suppose also that the two regressors have a large and identical variance $\sigma$. In this simple case, we do not need to know the actual test points (in this framework these are only used to infer the test distribution, which is assumed given here). The M-step update only involves an update to the regressor. For the E step we use the approximation $P(y^\mu|x^\mu, \Theta_1) \approx P(y^\mu|x^\mu, \Theta_2)$, which becomes asymptotically true in the case of infinite variance $\sigma$. The resulting E and M steps are

$$r^\mu = \gamma_1 \frac{P_T(x^\mu)}{P_D(x^\mu)} \quad \text{and} \quad f_1 = \arg\min_{f_1} \Big[\sum_\mu r^\mu\, (f_1(x^\mu) - y^\mu)^T (f_1(x^\mu) - y^\mu)\Big] \qquad (8)$$

where we note that $\gamma_1$ is a common constant and can be dropped from the calculations. Hence we never need to learn $\gamma_1$ or the parameters associated with mixture 2 in this procedure. Also no iterative EM procedure is needed, as the E step is independent of the M-step results. Hence this is a one-shot process. This is the Importance Weighted Least Squares estimator for covariate shift [14]. A simple extension of this model allows the large-variance assumption to be relaxed, so that the model can use the regressor information for computing responsibilities.

4 Examples

4.1 Generated Test Data

We demonstrate the mixture of regressors approach to covariate shift (MRCS) on generated test data: a one-dimensional regression problem with two sources, each corresponding to a different linear regressor. Regression performance for MRCS with Gaussian mixtures and linear regressors is compared with three other cases.
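The first of these baselines implements eq. (8) directly. Since eq. (8) reduces to a single weighted least squares fit once the density ratio is known, a minimal sketch is easy to give; the Gaussian densities below are chosen purely for illustration:

```python
import numpy as np
from scipy.stats import norm

def iwls(xd, yd, p_test, p_train):
    """Importance Weighted Least Squares, eq. (8): weight each training
    point by P_T(x)/P_D(x), then solve one weighted linear regression."""
    w = p_test(xd) / p_train(xd)
    X = np.stack([xd, np.ones_like(xd)], axis=1)
    sw = np.sqrt(w)
    coef, *_ = np.linalg.lstsq(X * sw[:, None], yd * sw, rcond=None)
    return coef  # (slope, intercept)

# Illustrative densities: training x ~ N(0,1), test x ~ N(1.5, 0.5^2).
rng = np.random.default_rng(0)
xd = rng.normal(0, 1, 300)
yd = np.sin(xd) + 0.1 * rng.normal(size=300)
coef = iwls(xd, yd,
            p_test=lambda x: norm.pdf(x, 1.5, 0.5),
            p_train=lambda x: norm.pdf(x, 0, 1))
print(coef)
```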
The first is an importance weighted least squares estimator (IWLS) given the best mixture model fit for the data, corresponding to the current standard for modelling covariate shift. The second uses a mixture of regressors model that ignores the form of the test data, but chooses the regressor corresponding to the mixtures which best match the test data distribution using a KL divergence measure (MRKL). This corresponds to recognising that covariate shift can happen, but ignoring the nature of the test distribution in the modelling process, and trying to choose the best of the two regressors. The third case is where the mixture of regressors is used simply as a standard regression model, ignoring the possibility of covariate shift (MRREG).

The generative procedure for each of the 100 test datasets involves generating random parameter values for between 1 and 3 mixtures for each of two linear regressors. Test and training datasets of 200 data points each are generated from these mixtures and regression models, using different mixing proportions in each case. The various approaches were run 8 times with different random starting parameters for all methods. 80 iterations of EM were used; a fixed number of iterations was chosen to allow reasonable comparison. Analysis was done for fixed model sizes and for model choice using a Bayesian Information Criterion (BIC). Even though the regularity conditions for BIC do not hold for mixture models, it has been shown that BIC is consistent in this case [12]. It has also been shown to be a good choice on practical grounds [6].

The results of these tests show the significant benefits of explicit recognition of covariate shift over straight regression, even compared with the use of the same mixture of regressors model without reference to the test distribution. They also show the benefits of the approach of this paper over the current state of the art for modelling covariate shift. Table 1 gives the results of these approaches for various fixed choices of the number of mixtures associated with each regressor. Independent of the use of any model order choice, the Mixture of Regressors for Covariate Shift (MRCS) performs better than the other approaches. Table 1 also gives the results when the Bayesian Information Criterion is used for selecting the number of mixtures. Again MRCS performs best, and gives better performance on the test data in more than 70 percent of the test cases.

To illustrate the difference between the methods, Figure 1 plots the results of training a MRCS model on some one-dimensional data using a regularised cubic regressor. The fit to the test data is also shown. Once again this is compared with IWLS and MRKL. It can be seen that both IWLS and MRKL fail to untangle the regressors associated with the overlapping central clusters in the training data and hence perform badly in that region of the test data.

[Figure 1: six panels, (a)-(f).]

Figure 1: Nonlinear regression using covariate shift. (a), (c), (e): training set fit, and (b), (d), (f): test data with predictions, for MRCS (top), IWLS (middle) and MRKL (bottom) respectively. In (a), (c), (e), the '.'
marker labels points for which the test regressor has greater responsibility, and the '+' marker labels points for which the training-only regressor has greater responsibility.

Table 1: Average mean square error over all 100 datasets for each choice of fixed model mixture size. The actual number of mixtures in the data varies. MRCS - mixture of regressors for covariate shift. IWLS - importance weighted least squares. MRKL - mixture of regressors, evaluated on the regressor with the best fit to the test distribution. MRREG - mixture of regressors as a standard regression model, ignoring covariate shift. The sixth row gives the average mean square error over all 100 datasets, with the number of mixtures chosen using a Bayesian information criterion for each case, and the last row gives the proportion of times MRCS performs better than the other cases for a BIC choice of model. P-values: if two of the approaches were equivalent performers, empirically better performance in 70/100 or more cases would occur with probability less than $1 \times 10^{-4}$.

              MRCS     IWLS     MRKL     MRREG
1 Mixture     0.588    0.797    3.274    0.890
2 Mixtures    0.536    0.804    2.673    0.881
3 Mixtures    0.601    0.831    3.390    0.887
4 Mixtures    0.623    0.817    2.823    0.894
5 Mixtures    0.612    0.837    2.817    0.898
BIC Choice    0.6100   0.7990   2.8638   0.8813
MRCS better   -        77/100   72/100   84/100

4.2 Auto-Mpg Test

It is useful to see that the approach does indeed make a noticeable difference on data that takes the appropriate prior form, but that says nothing about how appropriate that prior is for real problems. Here we demonstrate the method on the auto-mpg problem from the UCI dataset. This provides a natural scenario for demonstrating covariate shift. The auto-mpg data can be found at http://www.ics.uci.edu/~mlearn/MLSummary.html and involves the problem of predicting the city-cycle fuel consumption of cars. One of the attributes is a class label dictating the origin of a particular car. To demonstrate covariate shift we can consider the prediction task trained on cars from one place of origin and tested on cars from another place of origin.

Here we consider predicting the fuel consumption (attribute 1) using the four continuous attributes. We train the model using data on cars from origin 1, and test on cars from origin 2 and origin 3. We use the same test algorithms as in the previous section, but now using Gaussian process regressors for each regression function. The results of running this are in Table 2. The Gaussian process hyper-parameters were optimised separately for each case. These are results obtained using a Bayesian Information Criterion for selecting the number of mixtures between 1 and 14 for each of the cases. We obtain similar results if we compare methods with various fixed numbers of mixtures. Critically, we note that all covariate shift methods performed better than a straight Gaussian process predictor in this situation. The mixture of Gaussian processes did not perform as well as the methods which explicitly recognised the covariate shift, although interestingly it did perform better than a straight Gaussian process predictor. Again the MRCS performed better overall.

5 Discussion

This paper establishes that explicit generative modelling of covariate shift can bring improvements over conditional regression models, or over standard covariate shift methods that ignore the dependent data in the modelling process. The method is also better than using an identical mixture of regressors model for the training data alone, as it utilises the positions of the independent test points to help refine the mixture locations and the separation of regressors.
We expect significant improvements can be made with a fully Bayesian treatment of the parameters. This framework is currently being extended to the case of multiple training and test datasets using a fully Bayesian scheme, and will be the subject of future work. In this setting we have a topic model, similar to Latent Dirichlet Allocation, where each dataset is built from a number of contributing regression components, with each component expressed in different proportions in each dataset. The model and tests of this paper show that this multiple-dataset extension could well be fruitful.

Table 2: Tests of methods on the auto-mpg dataset. These are the (standardised) mean squared errors for each approach. GP denotes the use of Gaussian process regression for prediction. Origin 2 and Origin 3 denote the two different car origins used to test the model.

           GP       MRCS     IWLS     MRKL     MRREG
Origin 2   1.192    0.600    0.700    1.2243   0.7397
Origin 3   0.898    0.568    0.691    1.3862   0.706

6 Conclusion

In this paper a novel approach to the problem of covariate shift has been developed that is demonstrably better than state-of-the-art regression approaches, and better than the current standard for covariate shift. These have been tested on both generated data and on a real problem of covariate shift derived from a standard UCI dataset. Importance Weighted Least Squares is shown to be a special case. Specifically, we provide explicit modelling of the covariate shift process by assuming a shift in the proportions of a number of latent components. A mixture of regressors model is used for this purpose, but it differs from a standard mixture of regressors by allowing sharing of the regression functions between mixture components and by explicitly including a model for the test set as part of the process.

References

[1] P. Baldi, S. Brumak, and G. A. Stolovitzky. Bioinformatics: The Machine Learning Approach. MIT Press, Cambridge, 1998.
[2] C. M. Bishop. Neural Networks for Pattern Recognition. Oxford University Press, 1995.
[3] A. Dempster, N. Laird, and D. Rubin. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, 39:1–38, 1977.
[4] W.S. DeSarbo and W.L. Cron. A maximum likelihood methodology for clusterwise linear regression. Journal of Classification, 5:249–282, 1988.
[5] R. O. Duda, P. E. Hart, and D. G. Stork. Pattern Classification. Wiley Interscience, 2001.
[6] C. Fraley and A.E. Raftery. How many clusters? Which clustering method? Answers via model-based cluster analysis. Computer Journal, 41:578–588, 1998.
[7] J. J. Heckman. Sample selection bias as a specification error. Econometrica, 47:153–162, 1979.
[8] C. Hennig. Identifiability of models for clusterwise linear regressions. Journal of Classification, 17:273–296, 2000.
[9] R. Herbrich. Learning Kernel Classifiers. MIT Press, 2002.
[10] R.A. Jacobs, M.I. Jordan, S.J. Nowlan, and G.E. Hinton. Adaptive mixtures of local experts. Neural Computation, 3:79–87, 1991.
[11] M. I. Jordan and R. A. Jacobs. Hierarchical mixtures of experts and the EM algorithm. Neural Computation, 6:181–214, 1994.
[12] C. Keribin. Consistent estimation of the order of mixture models. Technical report, Université d'Evry-Val d'Essonne, Laboratoire Analyse et Probabilité, 1997.
[13] C.R. Shelton. Importance Sampling for Reinforcement Learning with Multiple Objectives. PhD thesis, Massachusetts Institute of Technology, 2001.
[14] H. Shimodaira. Improving predictive inference under covariate shift by weighting the log-likelihood function.
Journal of Statistical Planning and Inference, 90:227–244, 2000.
[15] M. Sugiyama and K.-R. Müller. Input-dependent estimation of generalisation error under covariate shift. Statistics and Decisions, 23:249–279, 2005.
[16] H.G. Sung. Gaussian Mixture Regression and Classification. PhD thesis, Rice University, 2004.
[17] J.K. Vermunt. A general non-parametric approach to unobserved heterogeneity in the analysis of event history data. In J. Hagenaars and A. McCutcheon, editors, Applied Latent Class Models. Cambridge University Press, 2002.
[18] M. Wedel and W.S. DeSarbo. A mixture likelihood approach for generalised linear models. Journal of Classification, 12:21–55, 1995.
[19] B. Zadrozny. Learning and evaluating classifiers under sample selection bias. In Proceedings of ICML, 2004.
Phase-coupling in Two-Dimensional Networks of Interacting Oscillators

Ernst Niebur, Daniel M. Kammen, Christof Koch, Daniel Ruderman¹ & Heinz G. Schuster²
Computation and Neural Systems
Caltech 216-76
Pasadena, CA 91125

ABSTRACT

Coherent oscillatory activity in large networks of biological or artificial neural units may be a useful mechanism for coding information pertaining to a single perceptual object or for detailing regularities within a data set. We consider the dynamics of a large array of simple coupled oscillators under a variety of connection schemes. Of particular interest is the rapid and robust phase-locking that results from a "sparse" scheme where each oscillator is strongly coupled to a tiny, randomly selected, subset of its neighbors.

1 INTRODUCTION

Networks of interacting oscillators provide an excellent model for numerous physical processes ranging from the behavior of magnetic materials to models of atmospheric dynamics to the activity of populations of neurons in a variety of cortical locations. Particularly prominent in the neurophysiological data are the 40-60 Hz oscillations that have long been reported in the rat and rabbit olfactory bulb and cortex on the basis of single- and multi-unit recordings as well as EEG activity (Freeman, 1978). In addition, periodicities in eye movement reaction times (Poppel and Logothetis, 1986), as well as oscillations in the auditory evoked potential in response to a single click or a series of clicks (Madler and Poppel, 1987), all support a 30-50 Hz framework for aspects of cortical activity. Two groups (Eckhorn et al., 1988; Gray and Singer, 1989; Gray et al., 1989) have recently reported highly synchronized, stimulus-specific oscillations in the 35-85 Hz range in areas 17, 18 and PMLS of anesthetized as well as awake cats. Neurons with similar orientation tuning up to 7 mm apart show phase-locked oscillations with a phase shift of less than 1 msec, which have been proposed to play a role in the coding of visual information (Crick and Koch, 1990; Niebur et al., 1991).

The complexity of networks of even relatively simple neuronal units - let alone "real" cortical cells - warrants a systematic investigation of the behavior of two-dimensional systems. To address this question we begin with a network of mathematically simple limit-cycle oscillators. While the dynamics of pairs of oscillators are well understood (Sakaguchi et al., 1988; Schuster and Wagner, 1990a,b), this is not the case for large networks with nontrivial connection schemes. Of general interest is the phase-coupling that results in networks of oscillators with different coupling schemes. We will summarize some generic features of simple nearest-neighbor coupled models, of models where each oscillator receives input from a large neighborhood, and of "sparse" connection geometries where each cell is connected to only a tiny fraction of the units in its neighborhood, but with large coupling strength. The numerical work was performed on a CM-2 Connection Machine and involved 16,384 oscillators in a 128 by 128 square grid.

¹ Permanent address: Department of Physics, University of California, Berkeley, CA 94720
² Permanent address: Institut für Theoretische Physik, Universität Kiel, 2300 Kiel 1, Germany.

2 The Model

The basic unit in our networks is an oscillator whose phase $\theta_{ij}$ is $2\pi$-periodic and which has the intrinsic frequency $\omega_{ij}$. The dynamics of an isolated oscillator are described by:

$$\frac{d\theta_{ij}}{dt} = \omega_{ij}. \qquad (1)$$
The influence of the network can be expressed as an additional interaction term:

$$\frac{d\theta_{ij}}{dt} = \omega_{ij} + I_{ij}. \qquad (2)$$

The coupling function $I_{ij}$ we used is expressed as a sum of terms, each one consisting of the product of a coupling strength and the sine of a phase difference (see below, eq. 3). The sinusoidal form of the interaction is, of course, linear for small differences. This system, and numerous variants, has received a considerable amount of attention from solid state physicists (see, e.g., Kosterlitz and Thouless 1973, and Sakaguchi et al. 1988), although primarily in the limit of $t \to \infty$. With an interest in the possible role of networks of oscillators in the parsing or segregating of incident signals in nervous systems, we will concentrate on short-time, non-equilibrium properties. We shall confine ourselves to two generic network configurations described by

$$\frac{d\theta_{ij}}{dt} = \omega_{ij} + \alpha \sum_{kl} J_{ij,kl} \sin(\theta_{kl} - \theta_{ij}), \qquad (3)$$

where $\alpha$ designates the global strength of the interaction, and the geometry of the interactions is incorporated in $J_{ij,kl}$. The networks are all defined on a square grid and they are characterized as follows:

1: Gaussian Connections. The cells are connected to every oscillator within a specified neighborhood with Gaussian weighted connections. Hence,

$$J_{ij,kl} = \frac{1}{2\pi\sigma} \exp\left(-\frac{(i-k)^2 + (j-l)^2}{2\sigma^2}\right). \qquad (4)$$

We truncate this function at $2\sigma$, i.e., $J_{ij,kl} = 0$ if $(i-k)^2 + (j-l)^2 > (2\sigma)^2$. While the connectivity in the nearest-neighbor case is 4, the connectivity is significantly higher for the Gaussian connection schemes: already $\sigma = 2$ yields 28 connections per cell, and the largest network we studied, with $\sigma = 6$, results in 372 connections per cell.

2: Sparse Gaussian Connections. In this scheme we no longer require symmetric connections, or that the connection pattern is identical from unit to unit. A given cell is connected to a fixed number, $n$, of neighboring cells, with the probability of a given connection determined by

$$P_{ij,kl} = \frac{1}{2\pi\sigma} \exp\left(-\frac{(i-k)^2 + (j-l)^2}{2\sigma^2}\right). \qquad (5)$$

$J_{ij,kl}$ is unity with probability $P_{ij,kl}$ and zero otherwise. This connection scheme is constructed by drawing, for each lattice site, $n$ coordinate pairs from a Gaussian distribution and using these as the indices of the cells that are connected with the oscillator at location $(i,j)$. Therefore, the probability of making a connection decreases with distance. If a connection is made, however, the weight is the same as for all other connections. We typically used $n = 5$, and in all cases $2 \le n < 10$.

For all networks, the sum of the weights of all connections with a given oscillator $i,j$ was conserved and chosen as $\alpha \sum_{kl} J_{ij,kl} = 10\,\bar\omega$, where $\bar\omega$ is the average frequency of all $N$ oscillators in the system, $\bar\omega = \frac{1}{N}\sum_{ij} \omega_{ij}$. By this procedure, the total impact of the interaction term is identical in all cases.

3 RESULTS

Perhaps the most basic, and most revealing, comparison of the behavior of the models introduced above is the two-point correlation function of phase-coupling, which is defined as

$$C(R, t) = \big\langle \cos\big(\theta_{ij}(t) - \theta_{kl}(t)\big) \big\rangle, \qquad (6)$$

where $R$ is the separation between a pair of cells, $R = |(i,j) - (k,l)|$. We compute and then average $C(R, t)$ over 10,000 pairs of oscillators separated by $R$ in the array. In all cases, the frequencies $\omega_{ij}$ are chosen randomly from a Gaussian distribution with mean 0.5 and variance 1. In Figure 1 we plot $C(R, t)$ for separations of $R$ = 20, 30, 40, 50, 60, and 70 oscillators.
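A minimal Euler-integration sketch of the sparse scheme (eqs. 3 and 5) is given below; the lattice size, step size, and periodic boundary handling are illustrative assumptions, not the settings of the CM-2 runs reported here:

```python
import numpy as np

rng = np.random.default_rng(0)
L, n, sigma, alpha_total = 64, 5, 6.0, 10.0 * 0.5  # alpha * sum(J) = 10 * mean freq

# Sparse scheme (eq. 5): each site picks n neighbors at Gaussian-distributed
# offsets; every realized connection carries the same weight.
offsets = np.rint(rng.normal(0, sigma, size=(L * L, n, 2))).astype(int)
src = np.indices((L, L)).reshape(2, -1).T                # (L*L, 2) site coords
tgt = (src[:, None, :] + offsets) % L                    # periodic wrap (assumption)
tgt_idx = tgt[..., 0] * L + tgt[..., 1]                  # (L*L, n) neighbor indices

omega = rng.normal(0.5, 1.0, L * L)                      # intrinsic frequencies
theta = rng.uniform(0, 2 * np.pi, L * L)                 # uniform initial phases
w = alpha_total / n                                      # per-connection weight

dt = 0.01
for _ in range(2000):                                    # Euler steps of eq. (3)
    dtheta = omega + w * np.sin(theta[tgt_idx] - theta[:, None]).sum(axis=1)
    theta = (theta + dt * dtheta) % (2 * np.pi)
```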
In Figure 1 we plot C(R, t) for separations of R = 20, 30, 40, 50, 60, and 70 oscillators. Time is measured in oscillation periods of the mean oscillator frequency, ω̄. At t = 0, phases are distributed randomly between 0 and 2π with a uniform distribution. The case of Gaussian connectivity with σ = 6, and hence 372 connections per cell, is seen in Figure 1(a), and the sparse connectivity scheme with σ = 6 and n = 5 is presented in Figure 1(b). The most striking difference is that correlation levels of over 0.9 are rapidly achieved in the sparse scheme in all cases, even for separations of 70 oscillators (plotted as asterisks, *), while there are clear separation-dependent differences in the phase-locking behavior of the Gaussian model. In fact, even after t = 10 there is no significant locking over the longer distances of R = 50, 60, or 70 units. For local connectivity schemes, like Gaussian connectivity with σ = 2 or nearest-neighbor connections, no long-range order evolves even at larger times (data not shown).

Data in Fig. 1 were computed with a uniform phase distribution at t = 0. An interesting and robust feature of the dynamics emerges when the influence of different types of initial phase distributions is examined. In Figure 2 we plot the probability distribution of phases at different early times. In Figure 2(a) the distribution of phases is plotted at t = 0 (diamonds), t = 0.2 ("plus" signs, +) and t = 0.4 (squares) for the sparse scheme with a uniform initial distribution. In Figure 2(b), the evolution of a Gaussian initial distribution of the phases, centered at θ = π, is plotted. Note the slight curve in the distribution at t = 0, indicating that the Gaussian initial seeding is rather slight (variance σ = 2π). Remarkably, however, this has a dramatic impact on the phase-locking: after two-tenths of an average cycle time ("plus" signs) there is already a pronounced peak in the distribution. At t = 0.4 (squares) the system that started with the uniform distribution begins to exhibit only a slight increase in the phase-correlation, while the system with Gaussian-distributed initial phases is strongly peaked, with virtually no probability of encountering phase values that differ significantly from the mean.
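The initial-condition experiment of Figure 2 can be reproduced with the simulator sketched above by swapping the initial phases and recording a histogram of θ mod 2π at a few early times. This fragment reuses the assumed names `rng`, `N`, `T_period`, and `step` from the previous sketch; the bin size π/10 follows the figure caption, and everything else is an illustrative choice.

```python
import numpy as np

def phase_histogram(theta, bins=20):
    """P(theta) over [0, 2*pi) with bin size pi/10, as in Figure 2."""
    counts, _ = np.histogram(theta % (2 * np.pi), bins=bins,
                             range=(0.0, 2 * np.pi), density=True)
    return counts

# (a) uniform start versus (b) weakly biased Gaussian start centered at pi.
starts = {"uniform": rng.uniform(0.0, 2 * np.pi, size=N),
          "biased": rng.normal(np.pi, 2 * np.pi, size=N)}

for label, theta_t in starts.items():
    prev = 0.0
    for t_target in (0.0, 0.2, 0.4):           # times in mean-period units
        for _ in range(int((t_target - prev) * T_period / 0.01)):
            theta_t = step(theta_t)
        prev = t_target
        print(label, t_target, round(phase_histogram(theta_t).max(), 2))
```

A growing maximum of the histogram in the "biased" run, relative to the "uniform" run, is the signature of the early peaking described in the text.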
4 DISCUSSION

The power of the sparse connection scheme to rapidly generate phase-locking throughout the network that is equivalent, or superior, to that of the massively interconnected Gaussian scheme highlights a trade-off in network dynamics: massive averaging versus strong, long-range connections. With n = 5, the sparse scheme effectively "tiles" a two-dimensional lattice and tightly phase-locks oscillators even at opposite corners of the array. Similar results are obtained even with n = 2 (data not shown). In many ways the Gaussian and sparse geometries represent opposing avenues to achieving global coherence: exhaustive local coupling, or distributed but powerful long-range coupling. The amount of wiring necessary to implement these schemes is, however, radically different.

Acknowledgement

EN is supported by the Swiss National Science Foundation through Grant No. 822025941. DMK is a recipient of a Weizmann Postdoctoral Fellowship. CK acknowledges support from the Air Force Office of Scientific Research, an NSF Presidential Young Investigator Award, and the James S. McDonnell Foundation. HGS is supported by the Volkswagen Foundation.

References

Crick, F. and Koch, C. 1990. Towards a neurobiological theory of consciousness. Seminars Neurosci., 2, 263-275.
Eckhorn, R., Bauer, R., Jordan, W., Brosch, M., Kruse, W., Munk, M. and Reitboeck, H. J. 1988. Coherent oscillations: A mechanism of feature linking in the visual cortex? Biol. Cybern., 60, 121-130.
Freeman, W. J. 1978. Spatial properties of an EEG event in the olfactory bulb and cortex. Elect. Clin. Neurophys., 44, 586-605.
Gray, C. M., Konig, P., Engel, A. K. and Singer, W. 1989. Oscillatory responses in cat visual cortex exhibit inter-columnar synchronization which reflects global stimulus properties. Nature, 338, 334-337.
Kosterlitz, J. M. and Thouless, D. J. 1973. Ordering, metastability and phase transitions in two-dimensional systems. J. Physics C, 6, 1181-1203.
Madler, C. and Poppel, E. 1987. Auditory evoked potentials indicate the loss of neuronal oscillations during general anaesthesia. Naturwissenschaften, 74, 42-43.
Niebur, E., Kammen, D. M., and Koch, C. 1991. Phase-locking in 1-D and 2-D networks of oscillating neurons. In Nonlinear dynamics and neuronal networks, Singer, W., and Schuster, H. G. (eds.). VCH Verlag: Weinheim, FRG.
Poppel, E. and Logothetis, N. 1986. Neuronal oscillations in the human brain. Naturwissenschaften, 73, 267-268.
Sakaguchi, H., Shinomoto, S. and Kuramoto, Y. 1988. Mutual entrainment in oscillator lattices with nonvariational type interaction. Prog. Theor. Phys., 79, 1069-1079.
Schuster, H. G. and Wagner, P. 1990a. A model for neuronal oscillations in the visual cortex 1: Mean-field theory and derivation of phase equations. Biological Cybernetics, 64, 77-82.
Schuster, H. G. and Wagner, P. 1990b. A model for neuronal oscillations in the visual cortex 2: Phase description of feature dependent synchronization. Biological Cybernetics, 64, 83.

[Figure 1 plots omitted in this transcription; panels (a) and (b) show C(R, t) versus time.]

Figure 1: Two-point correlation functions, C(R, t), for various separations, R, in (a) the σ = 6 Gaussian scheme with 372 connections per cell and (b) the sparse connection scheme with σ = 6 and n = 5 connections per cell. Separations of R = 20 (diamonds), R = 30 ("plus" signs, +), R = 40 (squares), R = 50 (crosses, ×), R = 60 (triangles), and R = 70 (asterisks, *) are shown. Note the rapid locking at all separations in the sparse scheme (b), while the Gaussian scheme (a) appears far more "diffusive," with progressively poorer and slower locking as R increases.

[Figure 2 plots omitted in this transcription; panels (a) and (b) show P(θ) at early times.]

Figure 2: Snapshots of the distribution of phases in the sparse scheme (n = 5, σ = 6) when the system begins from (a) a uniform and (b) a Gaussian "biased" initial distribution.
The figures show the probability P(θ) of finding a phase between θ and θ + dθ (bin size π/10). At t = 0, the distribution is flat (a) or very slightly curved (b); see text. The difference in the time evolution can clearly be seen in the state of the system after t = 0.2 ("plus" signs, +) and t = 0.4 (squares).
Multi-dynamic Bayesian Networks

Karim Filali and Jeff A. Bilmes
Departments of Computer Science & Engineering and Electrical Engineering
University of Washington, Seattle, WA 98195
{karim@cs,bilmes@ee}.washington.edu

Abstract

We present a generalization of dynamic Bayesian networks to concisely describe complex probability distributions such as in problems with multiple interacting variable-length streams of random variables. Our framework incorporates recent graphical model constructs to account for existence uncertainty, value-specific independence, aggregation relationships, and local and global constraints, while still retaining a Bayesian network interpretation and efficient inference and learning techniques. We introduce one such general technique, which is an extension of Value Elimination, a backtracking search inference algorithm. Multi-dynamic Bayesian networks are motivated by our work on Statistical Machine Translation (MT). We present results on MT word alignment in support of our claim that MDBNs are a promising framework for the rapid prototyping of new MT systems.

1 INTRODUCTION

The description of factorization properties of families of probabilities using graphs (i.e., graphical models, or GMs) has proven very useful in modeling a wide variety of statistical and machine learning domains such as expert systems, medical diagnosis, decision making, speech recognition, and natural language processing. There are many different types of graphical model, each with its own properties and benefits, including Bayesian networks, undirected Markov random fields, and factor graphs. Moreover, for different types of scientific modeling, different types of graphs are more or less appropriate. For example, static Bayesian networks are quite useful when the size of the set of random variables in the domain does not grow or shrink for all data instances and queries of interest. Hidden Markov models (HMMs), on the other hand, are such that the number of underlying random variables changes depending on the desired length (which can be a random variable), and HMMs are applicable even without knowing this length, as they can be extended indefinitely using online inference. HMMs have been generalized to dynamic Bayesian networks (DBNs) and temporal conditional random fields (CRFs), where an underlying set of variables gets repeated as needed to fill any finite but unbounded length. Probabilistic relational models (PRMs) [5] allow for a more complex template that can be expanded in multiple dimensions simultaneously.

An attribute common to all of the above cases is that the specification of rules for expanding any particular instance of a model is finite. In other words, these forms of GM allow the specification of models with an unlimited number of random variables (RVs) using a finite description. This is achieved using parameter tying, so while the number of RVs increases without bound, the number of parameters does not. In this paper, we introduce a new class of model we call multi-dynamic Bayesian networks. MDBNs are motivated by our research into the application of graphical models to the domain of statistical machine translation (MT) and they have two key attributes from the graphical modeling perspective. First, an MDBN generalizes a DBN in that there are multiple "streams" of variables that can get unrolled, but where each stream may be unrolled by a differing amount.
In the most general case, connecting these different streams together would require the specification of conditional probability tables with a varying and potentially unlimited number of parents. To avoid this problem and retain the template's finite description length, we utilize a switching parent functionality (also called value-specific independence). Second, in order to capture the notion of fertility in MT systems (defined later in the text), we employ a form of existence uncertainty [7] (that we call switching existence), whereby the existence of a given random variable might depend on the value of other random variables in the network. Being fully propositional, MDBNs lie between DBNs and PRMs in terms of expressiveness. While PRMs are capable of describing any MDBN, there are, in general, advantages to restricting ourselves to a more specific class of model. For example, in the DBN case, it is possible to provide a bound on inference costs just by looking at attributes of the DBN template only (e.g., the left or right interfaces [12, 2]). Restricting the model can also make it simpler to use in practice. MDBNs are still relatively simple, while at the same time making possible the easy expression of MT systems, and opening doors to novel forms of probabilistic inference as we show below.

In section 2, we introduce MDBNs and describe their application to machine translation, showing how it is possible to represent even complex MT systems. In section 3, we describe MDBN learning and decoding algorithms. In section 4, we present experimental results in the area of statistical machine translation, and future work is discussed in section 5.

2 MDBNs

A standard DBN [4] template consists of a directed acyclic graph G = (V, E) = (V1 ∪ V2, E1 ∪ E2 ∪ E2→) with node set V and edge set E. For t ∈ {1, 2}, the sets Vt are the nodes at slice t, Et are the intra-slice edges between nodes in Vt, and Et→ are the inter-slice edges between nodes in V1 and V2. To unroll a DBN to length T, the nodes V2, along with the edges adjacent to any node in V2, are cloned T − 1 times (where the parameters of cloned variables are constrained to be the same as in the template) and re-connected at the corresponding places.

An MDBN with K streams consists of the union of K DBN templates along with a template structure specifying rules to connect the various streams together. An MDBN template is a directed graph

G = (V, E) = ( ∪_k V^(k), ∪_k E^(k) ∪ E_ll ),

where (V^(k), E^(k)) is the k-th DBN, and the edges E_ll are rules specifying how to connect stream k to the other streams. These rules are general in that they specify the set of edges for all values of Tk. There can be arbitrary nesting of the streams such that, for example, it is possible to specify a model that can grow along several dimensions simultaneously.

An MDBN also utilizes "switching existence," meaning some subset of the variables in V bestow existence onto other variables in the network. We call these variables existence-bestowing (or eb-nodes). The idea of bestowing existence is well defined over a discrete space, and is not dissimilar to a variable-length DBN. For example, we may have a joint distribution over lengths as follows:

p(X_1, ..., X_N, N = n) = p(X_1, ..., X_n | N = n) p(N = n),

where here N is an eb-node that determines the number of other random variables in the DGM. Our notion of eb-nodes allows us to model certain characteristics found within machine translation systems, such as "fertility" [3], where a given English word is cloned a random number of times in the generative process that explains a translation from French into English. This random cloning might happen simultaneously at all points along a given MDBN stream. This means that even for a given fixed stream length Ti = ti, each stream could have a randomly varying number of random variables. Our graphical notation for eb-nodes consists of the eb-node as a square box containing the variables whose existence is determined by the eb-node.
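To make the switching-existence idea concrete, here is a small hedged sketch of sampling from a distribution of the form p(X_1, ..., X_N, N) = p(X_1, ..., X_n | N = n) p(N = n). The geometric length prior and the binary X_i are purely illustrative assumptions, not part of the MDBN formalism.

```python
import random

def sample_variable_length(p_stop=0.3, p_one=0.5):
    """Sample N from a geometric prior, then N existence-bestowed binaries.

    N plays the role of an eb-node: the X_i only exist once N is drawn.
    """
    n = 1
    while random.random() > p_stop:   # geometric prior over lengths
        n += 1
    xs = [int(random.random() < p_one) for _ in range(n)]
    return n, xs

print(sample_variable_length())       # e.g. (3, [1, 0, 1])
```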
[3], where a given English word is cloned a random number of times in the generative process that explains a translation from French into English. This random cloning might happen simultaneously at all points along a given MDBN stream. This means that even for a given fixed stream length Ti = ti , each stream could have a randomly varying number of random variables. Our graphical notation for eb-nodes consists of the eb-node as a square box containing variables whose existence is determined by the eb-node. We start by providing a simple example of an expanded MDBN for three well known MT systems, namely the IBM models 1 and 2 [3], and the ?HMM? model [15].1 We adopt the convention in [3] that our goal is to translate from a string of French words F = f of length M = m into a string of English words E = e of length L = l ? of course these can be any two languages. The basic generative (noisy channel) approach when translating from French to English is to represent the joint 1 We will refer to it as M-HMM to avoid confusion with regular HMMs. distribution P (f , e) = P (f |e)P (e). P (e) is a language model specifying the prior over the word string e. The key goal is to produce a finite-description length representation for P (f |e) where f and e are of arbitrary length. A hidden P alignment string, a, specifies how the English words align to the French word, leading to P (f |e) = a P (f , a|e). Figure 1(a) is a 2-stream MDBN expanded representation of the three models, in this case ? = 4 and m = 3. As shown, it appears that the fan-in to node fi will be ? and thus will grow without bound. However, a switching mechanism whereby P (fi |e, ai ) = P (fi |eai ) limits the number of parameters regardless of L. This means that the alignment variable ai indicates the English word eai that should be aligned to French word fi . The variable e0 is a null word that connects to French words not explained by any of e1 , . . . , e? . The graph expresses all three models ? the difference is that, in Models 1 and 2, there are no edges between aj and aj+1 . In Model 1, p(aj = ?) is uniform on the set {1, . . . , L}; in Model 2, the distribution over aj is a function only of its position j, and on the English and French lengths ? and m respectively. In the M-HMM model, the ai variables form a first order Markov chain. l e0 ? e1 e3 e2 e4 ?0 ?01 f1 a1 f2 a2 f3 u v a3 m (a) Models 1,2 and M-HMM e1 e2 e3 ?1 ?2 ?3 m? ?02 ?11 ?12 ?13 ?21 ?02 ?11 ?12 ?13 ?21 f1 f2 f3 f4 f5 f6 a1 a2 a3 a4 a5 a6 ?01 w y x m (b) Expanded M3 graph Figure 1: Expanded 2-stream MDBN description of IBM Models 1 and 2, and the M-HMM model for MT; and the expanded MDBN description of IBM Model 3 with fertility assignment ?0 = 2, ?1 = 3, ?2 = 1, ?3 = 0. From the above, we see that it would be difficult to express this model graphically using a standard DBN since L and M are unequal random variables. Indeed, there are two DBNs in operation, one consisting of the English string, and the other consisting of the French string and its alignment. Moreover, the fully connected structure of the graph in the figure can represent the appropriate family of model, but it also represents models whose parameter space grows without bound ? the switching function allows the model template to stay finite regardless of L and M . With our MDBN descriptive abilities complete, it is now possible to describe the more complex IBM models 3, and 4[3] (an MDBN for Model3 is depicted in fig. 1(b)). 
The top most random variable, ?, is a hidden switching existence variable corresponding to the length of the English string. The box abutting ? includes all the nodes whose existence depends on the value of ?. In the figure, ? = 3, thus resulting in three English words e1 , e2 , and e3 connected using a second-order Markov chain. To each English word ei corresponds a conditionally dependent fertility eb-node ?i , which indicates how many times ei is used by words in the French string. Each ?i in turn controls the existence of a set of variables under it. Given the fertilities (the figure depicts the case ?1 = 3, ?2 = 1, ?3 = 0), for each word ei , ?i French word variables are granted existence and are denoted by ?i1 , ?i2 , . . . , ?i?i , what is called the tablet [3] of ei . The values taken by the ? variables need to match the actual observed French sequence f1 , . . . , fm . This is represented as a shared constraint between all the f , ?, and ? variables which have incoming edges into the observed variable v. v?s conditional probability table is such that it is one only when the associated constraint is satisfied2 . The variable 2 This type of encoding of constraints corresponds to the standard mechanism used by Pearl [14]. A naive implementation, however, would enumerate a number of configurations exponential in the number of constrained variables, while typically only a small fraction of the configurations would have positive probability. ?i,k ? {1, . . . , m} is a switching dependency parent with respect to the constraint variable v and determines which fj participates in an equality constraint with ?i,k . The bottom variable m is a switching existence node (observed to be 6 in the figure) with corresponding French word sequence and alignment variables. The French sequence participates in the v constraint described above, while the alignment variables aj ? {1, . . . , ?}, j ? 1, . . . , m constrain the fertilities to take their unique allowable values (for the given alignment). Alignments also restrict the domain of permutation variables, ?, using the constraint variable x. Finally, the domain size of each aj has to lie in the interval [0, ?] and that is enforced by the variable u. The dashed edges connecting the alignment a variables represent an extension to implement an M3/M-HMM hybrid. P? The null submodel involving the deterministic node m? (= i=1 ?i ) and eb-node ?0 accounts for French words that are not explained by any of the English words e1 , . . . , e? . In this submodel, successive permutation variables are ordered and this constraint is implemented using the observed child w of ?0i and ?0(i+1) . Model 4 [3] is similar to Model 3 except that the former is based on a more elaborate distortion model that uses relative instead of absolute positions both within and between tablets. 3 Inference, Parameter Estimation and MPE Multi-dynamic Bayesian Networks are amenable to any type of inference that is applicable to regular Bayesian networks as long as switching existence relationships are respected and all the constraints (aggregation for example) are satisfied. Unfortunately DBN inference procedures that take advantage of the repeatable template and can preprocess it offline, are not easy to apply to MDBNs. A case in point is the Junction Tree algorithm [11]. Triangulation algorithms exist that create an offline triangulated version of the input graph and do not re-triangulate it for each different instance of the input data [12, 2]. 
3 Inference, Parameter Estimation and MPE

Multi-dynamic Bayesian networks are amenable to any type of inference that is applicable to regular Bayesian networks, as long as switching existence relationships are respected and all the constraints (aggregation, for example) are satisfied. Unfortunately, DBN inference procedures that take advantage of the repeatable template and can preprocess it offline are not easy to apply to MDBNs. A case in point is the Junction Tree algorithm [11]. Triangulation algorithms exist that create an offline triangulated version of the input graph and do not re-triangulate it for each different instance of the input data [12, 2]. In MDBNs, due to the flexibility to unroll templates in several dimensions and to specify dependencies and constraints spanning the entire unrolled graph, it is not obvious how we can exploit any repetitive patterns in a Junction Tree-style offline triangulation of the graph template. In section 4, we discuss sampling inference methods we have used. Here we discuss our extension to a backtracking search algorithm with the same performance guarantees as the JT algorithm, but with the advantage of easily handling determinism, existence uncertainty, and constraints, both learned and explicitly stated.

Value Elimination (VE) [1] is a backtracking Bayesian network inference technique that caches factors associated with portions of the search tree and uses them to avoid iterating again over the same subtrees. We follow the notation introduced in [1] and refer the reader to that paper for details about VE inference. We have extended the VE inference approach to handle explicitly encoded constraints and existence uncertainty, and to perform approximate local domain pruning (see section 4). We omit these details, as well as others in the original paper, and briefly describe the main data structure required by VE and sketch the algorithm we refer to as FirstPass (alg. 1), since it constitutes the first step of the learning procedure, our main contribution in this section.

A VE factor, F, is such that we can write the following marginal of the joint distribution:

Σ_X P(X, Y = y, Z) = F.val · f(Z),

such that (X ∪ Y) ∩ Z = ∅, F.val is a constant, and f(Z) is a function of Z only. Y is a set of variables previously instantiated in the current branch of the search tree to the value vector y. The pair (Y, y) is referred to as a dependency set (F.Dset). X is referred to as a subsumed set (F.Sset). By caching the tuple (F.Dset, F.Sset, F.val), we avoid recomputing the marginal whenever (1) F.Dset is active, meaning all nodes stored in F.Dset are assigned their cached values in the current branch of the search tree; and (2) none of the variables in F.Sset are assigned yet.

FirstPass (alg. 1) visits nodes in the graph in depth-first fashion. In line 7, we get the values of all newly single-valued (NSV) CPTs, i.e., CPTs that involve the current node, V, and in which all other variables are already assigned (these variables and their values are accumulated into Dset). We also check for factors that are active, multiply their values in, and accumulate subsumed vars in Sset (to avoid branching on them). In line 10, we add V to the Sset. In line 11, we cache a new factor F with value F.val = sum. We store V in F.head, a pointer to the last variable to be inserted into F.Sset, needed for the parameter estimation described below. F.Dset consists of all the variables, except V, that appeared in any NSV CPT or in the Dset of a factor activated at line 6. (We use a general directed domain pruning constraint; deterministic relationships then become a special case of our constraint, whereby the domain of the child variable is constrained to a single value with probability one.)
[Figure 2 diagram omitted in this transcription. It shows the VE search tree for the variable traversal order A, B, C, D, with factors numbered by order of creation, the cached factors F1-F7 (Dset, Sset, val), the tau values propagated recursively (e.g., F7.tau = 1.0 = P(Evidence)/F7.val; F5.tau = F7.tau · P(A=0); F3.tau = F5.tau · P(B=0|A=0) + F6.tau · P(B=0|A=1) = P(B=0)), and the derivations of the expected counts c(A=0) = P(A=0|E=e) and c(C=0,B=0) = P(C=0,B=0|E=e).]

Figure 2: Learning example using the Markov chain A → B → C → D → E, where E is observed. In the first pass, factors (Dset, Sset and val) are learned in a bottom-up fashion. Also, the normalization constant P(E = e) (the probability of evidence) is obtained. In the second pass, tau values are updated in a top-down fashion and used to calculate the expected counts c(F.head, pa(F.head)) corresponding to each F.head (the figure shows the derivations for (A=0) and (C=0,B=0), but all counts are updated in the same pass).

Regular Value Elimination is query-based, similar to variable elimination and recursive conditioning: to answer a query of the type P(Q|E = e), where Q is the query variable and E is a set of evidence nodes, we force Q to be at the top of the search tree, run the backtracking algorithm, and then read the answers to the queries P(Q = q|E = e), q ∈ Dom[Q], along each of the outgoing edges of Q. Parameter estimation would therefore require running a number of queries on the order of the number of parameters to estimate. We extend VE into an algorithm that allows us to obtain Expectation Maximization sufficient statistics in a single run of Value Elimination plus a second pass, which can never take longer than the first one (and in practice is much faster). This two-pass procedure is analogous to the collect-distribute evidence procedure in the Junction Tree algorithm, but here we do it via a search tree. Let θ_{X=x|pa(X)=y} be the parameter associated with variable X taking value x and parents Y = pa(X) taking value y.
Assuming a maximum likelihood learning scenario³, to estimate θ_{X=x|pa(X)=y} we need to compute

f(X = x, pa(X) = y, E = e) = Σ_{W\{X, pa(X)}} P(W, X = x, pa(X) = y, E = e),

which is a sum of the joint probabilities of all configurations that are consistent with the assignment {X = x, pa(X) = y}. If we were to turn off factor caching, we would enumerate all such variable configurations and could compute the sum. When standard VE factors are used, however, this is no longer possible whenever X or any of its parents becomes subsumed. Fig. 2 illustrates an example of a VE tree and the factors that are learned in the case of a Markov chain with an evidence node at the end. We can readily estimate the parameters associated with variables A and B, as they are not subsumed along any branch. C and D become subsumed, however, and we cannot obtain the correct counts along all the branches that would lead to C and D in the full enumeration case. To address this issue, we store a special value, F.tau, in each factor. F.tau holds the sum over all path probabilities from the first level of the search tree to the level at which the factor F was either created or activated. For example, F6.tau in fig. 2 is simply P(A = 1). Although we can compute F3.tau directly, we can also compute it recursively using F5.tau and F6.tau, as shown in the figure. This is because both F5 and F6 subsume F3: in the context {F5.Dset}, there exists a (unique) value d_sub of F5.head⁴ such that F3 becomes activable; likewise for F6. We cannot compute F1.tau directly, but we can, recursively, from F3.tau and F4.tau, by taking advantage of a similar subsumption relationship. In general, we can show that the following recursive relationship holds:

F.tau ← Σ_{F^pa} F^pa.tau · NSV_{F^pa.head = d_sub} · ( Π_{F_act} F_act.val ) / F.val,    (1)

where the sum ranges over the set of factors F^pa that subsume F, the product ranges over the set of all factors F_act (including F) that become active in the context of {F^pa.Dset, F^pa.head = d_sub}, and NSV_{F^pa.head = d_sub} is the product of all newly single-valued CPTs under the same context. For top-level factors (not subsumed by any factor), F.tau = P_evidence / F.val, which is 1.0 when there is a unique top-level factor.

³ For Bayesian networks the likelihood function decomposes, such that maximizing the expectation of the complete likelihood is equivalent to maximizing the "local likelihood" of each variable in the network.
⁴ Recall, F.head is the last variable to be added to a newly created factor, in line 11 of alg. 1.

Alg. 2 is a simple recursive computation of eq. 1 for each factor. We visit learned factors in the reverse order in which they were learned to ensure that, for any factor F′, F′.tau is incremented (line 13) by any F that might have activated F′ (line 12). For example, in fig. 2, F4 uses F1 and F2, so F4.tau needs to be updated before F1.tau and F2.tau. In line 11, we can increment the counts for any NSV CPT entries, since F.tau will account for the possible ways of reaching the configuration {F.Dset, F.head = d} in an equivalent full enumeration tree.

Algorithm 1: FirstPass(level)
  Input: graph G
  Output: a list of learned factors and P_evidence
   1  Select var V to branch on
   2  if V == NONE then return
   3  Sset = {}; Dset = {}
   4  for d ∈ Dom[V] do
   5      V ← d
   6      activate factors consistent with the current assignment, accumulating their Dsets and Ssets
   7      prod = productOfAllNSVsAndActiveFactors(Dset, Sset)
   8      if prod != 0 then
   9          FirstPass(level + 1); sum += prod
  10  Sset = Sset ∪ {V}
  11  cacheNewFactor(F.head ← V, F.val ← sum, F.Sset ← Sset, F.Dset ← Dset)

Algorithm 2: SecondPass()
  Input: the list of factors, in the reverse order learned in the first pass, and P_evidence
  Result: updated counts
   1  foreach factor F in the list do
   2      if F.Dset = {} then F.tau ← P_evidence / F.val
   3      else F.tau ← 0.0
   4  foreach factor F in the list do
   5      Assign vars in F.Dset to their values
   6      V ← F.head (the last node to have been subsumed in this factor)
   7      foreach d ∈ Dom[V] do
   8          prod = productOfAllNSVsAndActiveFactors()
   9          prod *= F.tau
  10          foreach newly single-valued CPT C do
  11              count(C.child, C.parents) += prod / P_evidence
  12          actFactors = getListOfActiveFactors()
  13          foreach F′ ∈ actFactors do F′.tau += prod / F′.val
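For the chain of fig. 2, the general algorithms specialize nicely: the cached F.val values reduce to backward messages, and the F.tau values to top-down marginals P(var_i), exactly as the derivations in the figure suggest (e.g., F3.tau = P(B=0)). The sketch below implements only this special case (with arbitrary example CPTs), not the general backtracking algorithm with factor caching.

```python
import numpy as np

rng = np.random.default_rng(1)
K = 2                                          # binary chain A->B->C->D->E
p_A = np.array([0.6, 0.4])
cpts = [rng.dirichlet(np.ones(K), size=K) for _ in range(4)]  # P(B|A)..P(E|D)
e_obs = 0                                      # evidence: E = 0

# First-pass analogue: backward messages, cached once, bottom-up.
beta = [None] * 5
beta[4] = np.eye(K)[e_obs]                     # indicator on the evidence
for i in (3, 2, 1, 0):
    beta[i] = cpts[i] @ beta[i + 1]            # F.val analogue: P(E=e | var_i)
p_evidence = p_A @ beta[0]

# Second-pass analogue: tau values are the top-down marginals P(var_i);
# expected counts c(parent, child | E=e) then follow in one sweep.
print("c(A) =", (p_A * beta[0] / p_evidence).round(3))   # P(A | E=e)
marg = p_A
for i in range(4):
    c = marg[:, None] * cpts[i] * beta[i + 1][None, :] / p_evidence
    print(f"counts for edge {i}:\n{c.round(3)}")          # rows parent, cols child
    marg = marg @ cpts[i]                                 # next tau/marginal
```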
Most Probable Explanation. We compute the MPE using a very similar two-pass algorithm. In the first pass, factors are used to store a maximum instead of a summation over the variables in the Sset. We also keep track of the value of F.head at which the maximum is achieved. In the second pass, we recursively find the optimal variable configuration by following the trail of factors that are activated when we assign each F.head variable to its maximum value, starting from the last learned factor.

4 MACHINE TRANSLATION WORD ALIGNMENT EXPERIMENTS

A major motivation for pursuing the type of representation and inference described above is to make it possible to solve computationally intensive real-world problems using large amounts of data, while retaining the full generality and expressiveness afforded by the MDBN modeling language. In the experiments below we compare running times of MDBNs to GIZA++ on IBM Models 1 through 4 and the M-HMM model. GIZA++ is a special-purpose, optimized MT word alignment C++ tool that is widely used in current state-of-the-art phrase-based MT systems [10], and at the time of this writing it is the only publicly available software that implements all of the IBM Models. We test on 107 hand-aligned French-English sentences⁵ from a corpus of the European Parliament proceedings (Europarl [9]) and train on 10,000 sentence pairs from the same corpus, each with at most 40 words. The Alignment Error Rate (AER) [13] evaluation metric quantifies how well the MPE assignment to the hidden alignment variables matches human-generated alignments.

Several pruning and smoothing techniques are used by GIZA and MDBNs. GIZA prunes low lexical (P(f|e)) probability values and uses a default small value for unseen (or pruned) probability table entries. For Models 3 and 4, for which there is no known polynomial-time algorithm to perform the full E-step or compute the MPE, GIZA generates a set of high-probability alignments using an M-HMM and hill-climbing, and collects EM counts over these alignments using M3 or M4. For MDBN models we use the following pruning strategy: at each level of the search tree we prune values which, together, account for the lowest specified percentage of the total probability mass of the product of all newly active CPTs in line 7 of alg. 1. This is a more effective pruning than simply removing low-probability values of each CPD, because it factors in the joint contribution of multiple active variables.

Table 1 shows a comparison of timing numbers obtained with GIZA++ and MDBNs. The runtime numbers shown are for the combined tasks of training and decoding; however, training time dominates, given the difference in size between the train and test sets. For Models 1 and 2, neither GIZA nor MDBNs perform any pruning. For the M-HMM, we prune 60% of the probability mass at each level and use a Dirichlet prior over the alignment variables such that long-range transitions are exponentially less likely than shorter ones.⁶ This model achieves similar times and AER to GIZA's.
Interestingly, without any pruning, the MDBN M-HMM takes 160 minutes to complete, while only marginally improving upon the pruned model. Experimenting with several pruning thresholds, we found that the AER worsens much more slowly than the runtime decreases. Models 3 and 4 have treewidth equal to the number of alignment variables (because of the global constraints tying them) and therefore require approximate inference. Using Model 3, and a drastic pruning threshold that keeps only the value with the top probability at each level, we were able to achieve an AER not much higher than GIZA's. For M4, GIZA achieves a best AER of 31.7%, while we do not improve upon Model 3, most likely because of a too-restrictive pruning. Nevertheless, a simple variation on Model 3 in the MDBN framework achieves a lower AER than our regular M3 (with pruning still the same). The M3-HMM hybrid model combines the Markov alignment dependencies from the M-HMM model with the fertility model of M3.

MCMC Inference. Sampling is widely used for inference in high-treewidth models. Although MDBNs support likelihood weighting, it is very inefficient when the probability of evidence is very small, as is the case in our MT models. Besides being slow, Markov chain Monte Carlo can be problematic when the joint distribution is not positive everywhere, in particular in the presence of determinism and hard constraints. Techniques such as blocking Gibbs sampling [8] try to address the problem. Often, however, one has to carefully choose a problem-dependent proposal distribution. We used MCMC to improve the training of the M3-HMM model. We were able to achieve an AER of 32.8% (down from 39.1%), but using 400 minutes of uniprocessor time.

⁵ Available at http://www.cs.washington.edu/homes/karim
⁶ French and English have similar word orders. On a different language pair, a different prior might be more appropriate. With a uniform prior, the MDBN M-HMM has 36.0% AER.

Model   | GIZA++ Init M1 | GIZA++ Init M-HMM | MDBN Init M1  | MDBN Init M-HMM | MCMC
M1      | 1m45s (47.7%)  | N/A               | 3m20s (48.0%) | N/A             |
M2      | 2m02s (41.3%)  | N/A               | 5m30s (41.0%) | N/A             |
M-HMM   | 4m05s (35.0%)  | N/A               | 4m15s (33.0%) | N/A             |
M3      | 2m50s (45%)    | 5m20s (38.5%)     | 12m (43.6%)   | 9m (42.5%)      |
M4      | 5m20s (34.8%)  | 7m45s (31.7%)     | 25m (43.6%)   | 23m (42.6%)     |
M3-HMM  | N/A            | N/A               | 9m30s (41.0%) | 9m15s (39.1%)   | 400m (32.8%)

Table 1: MDBN VE-based learning versus GIZA++ timings and %AER using 5 EM iterations. The Init M1 and Init M-HMM columns indicate the model that is used to initialize the model in the corresponding row. The last row is a hybrid Model3-HMM model that we implemented using MDBNs and is not expressible using GIZA.

5 CONCLUSION

The existing classes of graphical models are not ideally suited for representing SMT models, because "natural" semantics for specifying the latter combine flavors of different GM types on top of standard directed Bayesian network semantics: switching parents found in Bayesian multinets [6], aggregation relationships such as in Probabilistic Relational Models [5], and existence uncertainty [7]. We have introduced a generalization of dynamic Bayesian networks to easily and concisely build models consisting of varying-length, parallel, asynchronous, and interacting data streams. We have shown that our framework is useful for expressing various statistical machine translation models. We have also introduced new parameter estimation and decoding algorithms using exact and approximate search-based probability computation.
While our timing results are not yet as fast as a hand-optimized C++ program on the equivalent model, we have shown that even in this general-purpose framework of MDBNs, our timing numbers are competitive and usable. Our framework can, of course, do much more than the IBM and HMM models. One of our goals is to use this framework to rapidly prototype novel MT systems and develop methods to statistically induce an interlingua. We also intend to use MDBNs in other domains, such as multi-party social interaction analysis.

References

[1] F. Bacchus, S. Dalmao, and T. Pitassi. Value elimination: Bayesian inference via backtracking search. In UAI-03, pages 20-28, San Francisco, CA, 2003. Morgan Kaufmann.
[2] J. Bilmes and C. Bartels. On triangulating dynamic graphical models. In Uncertainty in Artificial Intelligence: Proceedings of the 19th Conference, pages 47-56. Morgan Kaufmann, 2003.
[3] P. F. Brown, J. Cocke, S. A. Della Pietra, V. J. Della Pietra, F. Jelinek, J. D. Lafferty, R. L. Mercer, and P. S. Roossin. A statistical approach to machine translation. Computational Linguistics, 16(2):79-85, June 1990.
[4] T. Dean and K. Kanazawa. Probabilistic temporal reasoning. AAAI, pages 524-528, 1988.
[5] N. Friedman, L. Getoor, D. Koller, and A. Pfeffer. Learning probabilistic relational models. In IJCAI, pages 1300-1309, 1999.
[6] D. Geiger and D. Heckerman. Knowledge representation and inference in similarity networks and Bayesian multinets. Artif. Intell., 82(1-2):45-74, 1996.
[7] L. Getoor, N. Friedman, D. Koller, and B. Taskar. Learning probabilistic models of link structure. Journal of Machine Learning Research, 3(4-5):697-707, May 2003.
[8] C. Jensen, A. Kong, and U. Kjaerulff. Blocking Gibbs sampling in very large probabilistic expert systems. International Journal of Human Computer Studies, Special Issue on Real-World Applications of Uncertain Reasoning, 1995.
[9] P. Koehn. Europarl: A multilingual corpus for evaluation of machine translation. http://www.isi.edu/koehn/publications/europarl, 2002.
[10] P. Koehn, F. Och, and D. Marcu. Statistical phrase-based translation. In NAACL/HLT 2003, 2003.
[11] S. Lauritzen. Graphical Models. Oxford Science Publications, 1996.
[12] K. Murphy. Dynamic Bayesian Networks: Representation, Inference and Learning. PhD thesis, U.C. Berkeley, Dept. of EECS, CS Division, 2002.
[13] F. J. Och and H. Ney. Improved statistical alignment models. In ACL, pages 440-447, Oct 2000.
[14] J. Pearl. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. Morgan Kaufmann, 2nd printing edition, 1988.
[15] S. Vogel, H. Ney, and C. Tillmann. HMM-based word alignment in statistical translation. In Proceedings of the 16th Conference on Computational Linguistics, pages 836-841, Morristown, NJ, USA, 1996.
Unsupervised Learning of a Probabilistic Grammar for Object Detection and Parsing

Long (Leo) Zhu
Department of Statistics, University of California at Los Angeles, Los Angeles, CA 90095
[email protected]

Yuanhao Chen
Department of Automation, University of Science and Technology of China, Hefei, Anhui 230026, P.R. China
[email protected]

Alan Yuille
Department of Statistics, University of California at Los Angeles, Los Angeles, CA 90095
[email protected]

Abstract

We describe an unsupervised method for learning a probabilistic grammar of an object from a set of training examples. Our approach is invariant to the scale and rotation of the objects. We illustrate our approach using thirteen objects from the Caltech 101 database. In addition, we learn the model of a hybrid object class where we do not know the specific object or its position, scale or pose. This is illustrated by learning a hybrid class consisting of faces, motorbikes, and airplanes. The individual objects can be recovered as different aspects of the grammar for the object class. In all cases, we validate our results by learning the probability grammars from training datasets and evaluating them on the test datasets. We compare our method to alternative approaches. The advantages of our approach are the speed of inference (under one second), the parsing of the object, and increased accuracy of performance. Moreover, our approach is very general and can be applied to a large range of objects and structures.

1 Introduction

Remarkable progress in the mathematics and computer science of probability is leading to a revolution in the scope of probabilistic models. In particular, there are exciting new probability models [1, 3, 4, 5, 6, 11] defined on structured relational systems such as graphs or grammars. These new formulations subsume more traditional models, such as Markov Random Fields (MRFs) [2], and have growing applications to natural language, machine learning, and computer vision. Although these models have enormous representational power, there are many practical drawbacks which must be overcome before using them. In particular, we need efficient algorithms to learn the models from training data and to perform inference on new examples. This problem is particularly difficult when the structure of the representation is unknown and needs to be induced from the data. In this paper we develop an algorithm called "structure induction" (or "structure pursuit") which we use to learn the probability model in an unsupervised manner from a set of training data. This algorithm proceeds by building an AND-OR graph [5] in an iterative way. The form of the resulting graph structure ensures that inference can be performed rapidly for new data.

[Figure 1 images omitted in this transcription. Panel labels: Chair 90.9%, Cougar 90.9%, Piano 96.3%, Scissors 94.9%, Panda 90.0%, Rooster 92.1%, Stapler 90.5%, Wheelchair 92.4%, Windsor Chair 92.4%, Wrench 84.6%.]

Figure 1: We have learnt probability grammars for these ten objects in the Caltech 101 database, obtaining scores over 90% for most objects. A score of 90.00% means that we have a detection rate of 90% and a false positive rate of 10% (10% = (100 − 90)%). The numbers of data examples are 62, 69, 90, 39, 38, 49, 45, 59, 56, 39, ordered left-to-right and top-to-bottom.

Our application is to the detection, recognition, and parsing of objects in images. The training data consist of a set of images where the target object is present but at an unknown location. This topic has been much studied [16] (see the technical report, Zhu, Chen and Yuille 2006, for additional references).
Our approach has the following four properties. Firstly, a wide range of applicability, which we demonstrate by learning models for 13 object categories from the Caltech-101 [16], see Figures (1, 5). Secondly, the approach is invariant to rotation and to a large range of scale of the objects. Thirdly, the approach is able to deal with object classes, which we illustrate by learning a hybrid class consisting of faces, motorbikes and airplanes. Fourthly, the inference is performed rapidly, in under a second.

2 Background

2.1 Representation, Inference and Learning

Structured models define a probability distribution on structured relational systems such as graphs or grammars. This includes many standard models of probability distributions defined on graphs: for example, graphs with fixed structure, such as MRFs [2] or Conditional Random Fields [3], or Probabilistic Context-Free Grammars (PCFGs) [4], where the structure is variable. Attempts have been made to unify these approaches under a common formulation. For example, Case-Factor Diagrams [1] have recently been proposed as a framework which subsumes both MRFs and PCFGs. In this paper, we will be concerned with models that combine probabilistic grammars with MRFs. The grammars are based on AND-OR graphs [1, 5, 6], which relate to mixtures of trees [7]. This merging of MRFs with probabilistic grammars results in structured models which have great representational power.

There has been considerable interest in inference algorithms for these structured models; for example, McAllester et al. [1] describe how dynamic programming algorithms (e.g., Viterbi and inside-outside) can be used to rapidly compute properties of interest. Our paper is concerned with the task of unsupervised learning of structured models for applications to detecting, recognizing, and representing visual objects. In this paper, we restrict ourselves to a special case of probabilistic grammars with OR nodes and MRFs. This is simpler than the full cases studied by McAllester, but is more complex than the MRF models standardly used for this problem.

For MRF models, the number of graph nodes is fixed, and structure induction consists of determining the connections between the nodes and the forms of the corresponding potentials. For these graphs, an effective strategy is feature induction [8], which is also known as feature pursuit [9]. A similar strategy is also used to learn CRFs [10]. In both cases, the learning is fully supervised. For Bayesian networks, there is work on learning the structure using the EM algorithm [12]. Learning the structure of grammars in an unsupervised way is more difficult. Klein and Manning [4] have developed unsupervised learning of PCFGs for parsing natural language, but there the structure of the grammar is specified. Zettlemoyer and Collins [11] perform similar work based on lexical learning with a lambda-calculus language. In short, to our knowledge, there is no unsupervised structure induction algorithm for a probabilistic grammar-MRF model. Moreover, our vision application requires the ability to learn the model of the target object in the presence of unknown background structure. Methods exist in the computer vision literature for achieving this for an MRF model [16], but not for probabilistic grammars.

2.2 Our Model: High-Level Description

[Figure 2 diagrams omitted in this transcription.]

Figure 2: Graphical Models.

In this paper, we consider a combination of PCFG and MRF. The leaf nodes of the graph will be image features that are described by MRFs.
Instead of using the full PCFG, we restrict the grammar to contain one OR-node. Our model contains a restricted set of grammatical rules, see figure (2). The top, triangular node is an OR node. It can have an arbitrary number of child nodes. The simplest type of child node is a histogram model (far left panel of figure (2)). We can obtain more complex models by adding MRF models in the form of triples, see figure (2) left to right. Combinations of triples can be expressed in a junction tree representation, see the sixth and seventh panels of figure (2). This representation enables rapid inference. The computational complexity of inference is bounded by the width and height of the subtrees.

In more abstract terms, we define a set of rules R(x, y) for allowable parses of input x to a parse tree y. These rules have potentials φ(x, r, t) for a production rule r ∈ R(x, y), and ψ(x, w_M, t) for the MRF models (see details in the technical report), where t are nuisance parameters (e.g. geometric transformations and missing data) and w = (w_G, w_M) are model parameters. The w_G are the grammar parameters and the w_M are the MRF parameters. We define a set W of model parameters that are allowed to be non-zero (w = 0 if w ∉ W). The structure of the model is determined by the set W. The model is defined by:

P(x, y, w, t) = P(t) P(w) P(y) P(x | y, w, t),   (1)

where

P(x | y, w, t) = (1/Z) exp{ Σ_{r ∈ R(x,y)} w_G · φ(x, r, t) + Σ_{MRF} ψ_{MRF}(x, t, w_M) },   (2)

where MRF ranges over the cliques of the MRF and Z is the normalization constant.

We now face three tasks: (I) structure learning, (II) parameter learning to estimate w, and (III) inference to estimate y.

Inference requires estimating the parse tree y from input x. The model parameters w are fixed, and the nuisance parameters are integrated out. This requires solving y* = arg max_y Σ_t P(y, t | x, w) by the EM algorithm, using dynamic programming to estimate y* efficiently. During the E step, we approximate the sum over t by a saddle point approximation.

For parameter learning, we specify a set W of parameters w which we estimate by MAP (the other w's are constrained to be zero). Hence we estimate w* = arg max_{w ∈ W} Σ_{y,t} P(w, t, y | x). This is performed by an EM algorithm, where the summation over y can be performed by dynamic programming, and the summation over t is again handled by a saddle point. The w can be calculated from sufficient statistics.

Structure learning corresponds to increasing the set of parameters w that can be non-zero. For each structure we define a score given by its fit to the data. Formally, we extend W to W′ where W ⊂ W′. (The allowed extensions are defined in the next section.) We now compute P(x | w ∈ W) = Σ_{w ∈ W, t, y} P(x, y, t | w) and P(x | w ∈ W′) = Σ_{w ∈ W′, t, y} P(x, y, t | w). This requires EM, dynamic programming, and a saddle point approximation. We refer to the model fits, P(x | w ∈ W) and P(x | w ∈ W′), as the scores for structures W and W′ respectively.

3 Brief Details of Our Model

We now give a brief description of our model. A detailed description is given in our technical report (Zhu, Chen, and Yuille 2006).

Figure 3: Triplets without orientation (left two panels). Triplets with orientation (right two panels).

3.1 The Setup of the Model

We represent the images by features {x_i : i = 1, .., N(τ)}, where N(τ) is the number of features in image τ. Each feature is represented by a pair x_i = (z_i, A_i), where z_i is the location of the feature in the image and A_i is an appearance vector.
The image features are detected by the Kadir-Brady operator [13], and their appearance is calculated by the SIFT operator [14]. These operators ensure that the features are invariant to scale, rotation, and some appearance variations. The default background model for the image defines a histogram model over the positions and appearances of the image features, see the first panel of figure (2).

Next we use triples of image features as the basic building blocks to construct a model. Our model will be constructed by adding new triplets to the existing model, as shown in the first few panels of figure (2). Each triplet is represented by a triplet model given by Gaussian distributions on spatial position and on appearance, P(x | M = 1, T) = G(z | T(μ_G, σ_G)) G(A | μ_A, σ_A), where μ_G, μ_A, σ_G, σ_A are the means and covariances of the positions and appearances. The {M_i} are missing data index variables [15], and T denotes transformations due to rotation and scaling.

The major advantage of using triplets is that they have geometrical properties which are independent of the scale and rotation of the triplet. These properties include the angles between the vertices, see figure (3). Thus we can decompose the representation of the triplet into two types of properties: (i) those which are independent of scale and rotation, and (ii) those that depend explicitly on scale and rotation. By using the invariant properties, we can perform a rapid search over triplets when position, scale, and rotation are unknown. In addition, two triplets can easily be combined through a common edge to form a more complex model, see the sixth panel of figure (2). This representation is suitable for the junction tree algorithm [2], which enables rapid inference.

For structure learning, we face the task of how to expand the set W of non-zero parameters to a new set W′. The problem is that there are many ways to expand the set, and it is computationally impossible to evaluate all of them. Our strategy is to use a clustering method, see below, to make proposals for expanding the structure. These proposals are then evaluated by model selection. Our clustering method exploits the invariance properties of triplets. We perform clustering on both the appearance and the geometrical invariants of the triplets. This gives rise to a triplet vocabulary consisting of triplets that frequently occur in the dataset. These are used to make proposals for which triplets to include in the model, and hence for how to expand the set W of non-zero parameters.

Input: training images τ = 1, .., M and the triplet vocabulary {T_a : a ∈ Ω}.
Initialize G to be the root node with the background model, and let G* = G.
Algorithm for Structure Induction:
  STEP 1:
    OR-NODE EXTENSION:
      For each T ∈ {T_a : a ∈ Ω}:
        G′ = G ∪ T (ORing)
        Update the parameters of G′ by the EM algorithm.
        If Score(G′) > Score(G*) then G* = G′.
    AND-NODE EXTENSION:
      For each image τ = 1, .., M:
        P = the highest probability parse for image τ under G.
        For each triple T in image τ such that T ∩ P ≠ ∅:
          G′ = G ∪ T (ANDing)
          Update the parameters of G′ by the EM algorithm.
          If Score(G′) > Score(G*) then G* = G′.
  STEP 2: G = G*. Go to STEP 1, stopping when the score improvement falls below a threshold.
Output: G

Figure 4: Structure Induction Algorithm

3.2 Structure Induction: Learning the Probabilistic Grammar-MRF

We now have the necessary background to describe our structure induction algorithm. The full procedure is described in the pseudocode in figure (4).
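The loop below is a minimal Python sketch of the greedy search of Figure 4. It is not the authors' implementation: the Grammar class, the stand-in scorer, and the toy data are all hypothetical, and the real system scores each candidate by a full EM parameter fit to image data, proposes AND-node extensions from parses, and uses prior means and covariances taken from the triplet vocabulary.

```python
# Minimal sketch of the structure-induction loop (Figure 4), with a toy
# scorer standing in for EM fitting plus model scoring.
import random

class Grammar:
    def __init__(self, parts=()):
        self.parts = tuple(parts)          # triplets currently in the model

    def extended(self, triplet):
        return Grammar(self.parts + (triplet,))

def em_update_and_score(grammar, data):
    """Stand-in for EM parameter estimation followed by scoring: reward
    triplets that occur often in the data, penalise model size (BIC-like)."""
    fit = sum(data.count(t) for t in grammar.parts)
    return fit - 0.5 * len(grammar.parts)

def structure_induction(data, vocabulary, threshold=1e-3, max_rounds=20):
    g = Grammar()                          # root node with background model only
    score = em_update_and_score(g, data)
    for _ in range(max_rounds):
        best, best_score = g, score
        # OR-node extension: try adding each vocabulary triplet as a child.
        # (The AND-node extension, which grows an existing triplet through a
        # shared edge, would be proposed from parses of the current model.)
        for t in vocabulary:
            if t in g.parts:
                continue
            candidate = g.extended(t)
            s = em_update_and_score(candidate, data)
            if s > best_score:
                best, best_score = candidate, s
        if best_score - score < threshold:
            break                          # score gain too small: stop growing
        g, score = best, best_score
    return g

random.seed(0)
toy_data = [random.choice("abcd") for _ in range(200)]
model = structure_induction(toy_data, vocabulary="abcdef")
print(model.parts)
```

The key design point survives the simplification: every candidate extension is re-scored after refitting, and growth stops once the best extension no longer improves the score by more than a threshold.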
Figure (2) shows an example of the structure being induced sequentially. Initially we assume that all the data is generated by the background model. In the terminology of section (2.2), this is equivalent to setting all of the model parameters w to be zero (except those for the background model). We can estimate the parameters of this model and score the model as described in section (2.2).

Next we seek to expand the structure of this model. To do this, we use the triplet vocabularies to make proposals. Since the current model is the background model, the only structure change allowed is to add a triplet model as one child of the root node (i.e. to create the background-plus-triplet model described in the previous section, see figure (2)). We consider all members of the triplet vocabulary as candidates, using their cluster means and covariances as prior probabilities on their geometry and attribute properties. Then, for all these triples, we construct the background-plus-triplet model, estimate their parameters and score them. We accept the one with the highest score as the new structure.

As the graph structure grows, we have more ways to expand the graph. We can add a new triplet as a child of the root node; this proceeds as in the previous paragraph. Or we can take two members of an existing triplet and use them to construct a new triplet. In this case, we first parse the data using the current model. Then we use the triplet vocabulary to propose possible triplets which partially overlap with the current model (and give them prior probabilities on their parameters as before). Then, for all possible extensions, we use the methods in section (2.2) to score the models. We select the one with the highest score as the new graph model. If the score increase is not sufficient, we cease building the graph model. See the structured models in figure (5).

Figure 5: Individual models learnt for faces, motorbikes and airplanes.

Table 1: Performance Comparisons

Dataset                  Size   Single Model   Hybrid Model   Constellation [16]
Faces                    435    98.0           84.0           96.4
Motorbikes               800    92.6           82.7           92.5
Airplanes                800    90.9           87.3           90.2
Faces (Rotated)          435    94.8           n/a            n/a
Faces (Rotated+Scaled)   435    92.3           n/a            n/a

4 Experimental Results

4.1 Learning Individual Object Models

In this section, we demonstrate the performance of our models for thirteen objects chosen from the Caltech-101 dataset. Each dataset was randomly split into two sets of equal size (one for training and the other for testing). K-means clustering (typically with K set to 150) was used to learn the triplet vocabularies (see Zhu, Chen, Yuille 2006 for details); a sketch of this step is given below. Each row in figure (3) corresponds to some triples in the same group. In this experiment, we did not use orientation information from the feature detector.

We illustrate our results in figure (1) and Table (1). A score of 90% means that we get a true positive rate of 90% and a false positive rate of 10%. For comparison, we show the performance of the Constellation Model [16]. (Further comparisons to alternative methods are reported in the technical report.) The models for individual object classes, learnt by the proposed algorithm, are illustrated in figure (5). Observe that the generative models have different tree-widths and depths. Each subtree of the root node defines a Markov Random Field describing one configuration of the object. The computational cost of inference, using dynamic programming, is proportional to the height of the subtree and exponential in the maximum width (only three in our case).
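As a concrete illustration of the vocabulary-learning step, the sketch below clusters rotation- and scale-invariant triplet descriptors with a small hand-rolled K-means. The interior-angle descriptor is one natural choice consistent with the invariants described above; the random appearance vectors stand in for SIFT descriptors, and none of the names here come from the paper's code.

```python
# Sketch: building a triplet vocabulary by K-means, as the text describes.
import numpy as np

def interior_angles(p):
    """Interior angles of the triangle p[0], p[1], p[2]; invariant to
    translation, rotation, and scale of the triplet."""
    out = []
    for i in range(3):
        a, b, c = p[i], p[(i + 1) % 3], p[(i + 2) % 3]
        u, v = b - a, c - a
        cosang = u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
        out.append(np.arccos(np.clip(cosang, -1.0, 1.0)))
    return np.array(out)

def kmeans(X, k, iters=50, seed=0):
    """Plain Lloyd's K-means; a library implementation would do equally well."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].copy()
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return centers, labels

rng = np.random.default_rng(0)
triplets = rng.normal(size=(500, 3, 2))    # 500 random triplets of 2-D keypoints
appearance = rng.normal(size=(500, 8))     # stand-in for SIFT descriptors
geometry = np.array([interior_angles(t) for t in triplets])
descriptors = np.hstack([geometry, appearance])
vocabulary, _ = kmeans(descriptors, k=20)  # the paper uses K around 150
print(vocabulary.shape)                    # (20, 11) cluster centres
```

Each cluster centre (with its covariance, omitted here) then serves as a vocabulary entry and as a prior when the corresponding triplet is proposed during structure induction.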
Figure 6: Parsed results: invariant to rotation and scale.

Figure 7: Hybrid model learnt for faces, motorbikes and airplanes.

The detection time is less than one second (including the processing of features and inference) for an image of size 320×240. The training time is around two hours for 250 training images.

4.2 Invariance to Rotation and Scale

This section shows that we can learn and detect objects even when the rotation (in the image) and the scale are unknown (within a range). In this experiment, orientation information output by the feature detector is used to model the geometry distributions of the triplets. The relative angle between the orientation of each feature and the orientation of the triangle edge is calculated to make the model invariant to rotation, see Figure (3). We ran the comparison experiment on the face dataset. A face model is learnt from the training images with normalized scale and orientation. We tested this model on test data with 360-degree in-plane rotation, and on another test set with rotation and scaling together. The scaling range is from 60% of the original size to 150% (i.e. 180 × 120 to 450 × 300). Table (1) shows the comparison results. The parsing results (rotation + scale) are illustrated in Figure (6).

4.3 Learning Classes of Models

In this section, we show that we can learn a model for an object class. We use a hybrid class which consists of faces, airplanes, and motorbikes. In other words, we know that one object is present in each image, but we do not know which. In the training stage, we randomly select images from the datasets of faces, airplanes, and motorbikes. Similarly, we test the hybrid model on examples selected randomly from these three datasets. The learnt hybrid model is illustrated in Figure (7). It breaks down nicely into ORs of the models for each object. Table (1) shows the performance of the hybrid model. This demonstrates that the proposed method can learn a model for a class with extremely large variation. The parsed results are shown in Figure (8).

5 Discussion

This paper showed that it is possible to perform unsupervised learning to determine a probabilistic grammar combined with Markov Random Fields. Our approach is based on structure pursuit, where the object model is built up in an iterative manner (similar to feature pursuit used for MRFs and CRFs). The building blocks of our model are triplets of features, whose invariance properties can be exploited for rapid computation.

Figure 8: Parsed results by the hybrid model (left three panels). Parsed by the standard model (right three panels).

Our application is to the detection and parsing of objects. We demonstrated: (a) that we can learn probabilistic models for a variety of different objects, (b) that our approach is invariant to scale and rotation, (c) that we can learn models for hybrid classes, and (d) that we can perform inference rapidly, in under one second.

Our approach can also be extended. By using a richer vocabulary of features we can learn a more sophisticated generative grammar which will be able to represent objects in greater detail and to deal with significant variations in viewpoint and appearance.

Acknowledgements

We gratefully acknowledge support from the W.M. Keck Foundation, NSF grant number 0413214, and NIH grant RO1 EY015261.

References

[1] D. McAllester, M. Collins, and F. Pereira. Case-factor diagrams for structured probabilistic modeling. In UAI, 2004.

[2] B.D. Ripley. Pattern Recognition and Neural Networks.
Cambridge University Press, 1996.

[3] J. Lafferty, A. McCallum, and F. Pereira. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. ICML, 2001.

[4] D. Klein and C. Manning. Natural language grammar induction using a constituent-context model. Advances in Neural Information Processing Systems 14 (NIPS 2001), 2001.

[5] R. Dechter and R. Mateescu. AND/OR search spaces for graphical models. Artificial Intelligence, 2006.

[6] H. Chen, Z.J. Xu, Z.Q. Liu, and S.C. Zhu. Composite templates for cloth modeling and sketching. Proc. IEEE Conf. on Computer Vision and Pattern Recognition, New York, June 2006.

[7] M. Meila and M.I. Jordan. Learning with mixtures of trees. Journal of Machine Learning Research, 1:1-48, 2000.

[8] S. Della Pietra, V. Della Pietra, and J. Lafferty. Inducing features of random fields. IEEE Transactions on Pattern Analysis and Machine Intelligence, 19(4):380-393, April 1997.

[9] S.C. Zhu, Y.N. Wu, and D. Mumford. Minimax entropy principle and its application to texture modeling. Neural Computation, 9(8):1627-1660, 1997.

[10] A. McCallum. Efficiently inducing features of conditional random fields. Conference on Uncertainty in Artificial Intelligence (UAI), 2003.

[11] L.S. Zettlemoyer and M. Collins. Learning to map sentences to logical form: Structured classification with probabilistic categorial grammars. Conference on Uncertainty in Artificial Intelligence (UAI), 2005.

[12] N. Friedman. The Bayesian structural EM algorithm. Fourteenth Conf. on Uncertainty in Artificial Intelligence (UAI), 1998.

[13] T. Kadir and M. Brady. Scale, saliency and image description. International Journal of Computer Vision, 45(2):83-105, November 2001.

[14] D.G. Lowe. Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision, 60(2):91-110, 2004.

[15] R.J.A. Little and D.B. Rubin. Statistical Analysis with Missing Data. Wiley, Hoboken, New Jersey, 2002.

[16] R. Fergus, P. Perona, and A. Zisserman. Object class recognition by unsupervised scale-invariant learning. Proc. of the IEEE Conf. on Computer Vision and Pattern Recognition, 2003.
Learning Structural Equation Models for fMRI

Amos J. Storkey, School of Informatics, University of Edinburgh
Stephen Lawrie, Division of Psychiatry, University of Edinburgh
Enrico Simonotto, Division of Psychiatry, University of Edinburgh
Lawrence Murray, School of Informatics, University of Edinburgh
Heather Whalley, Division of Psychiatry, University of Edinburgh
David McGonigle, Centre for Functional Imaging Studies, University of Edinburgh

Abstract

Structural equation models can be seen as an extension of Gaussian belief networks to cyclic graphs, and we show they can be understood generatively as the model for the joint distribution of the long-term average equilibrium activity of a Gaussian dynamic belief network. Most use of structural equation models in fMRI involves postulating a particular structure and comparing learnt parameters across different groups. In this paper it is argued that there are situations where priors about structure are not firm or exhaustive, and given sufficient data, it is worth investigating learning network structure as part of the approach to connectivity analysis. First we demonstrate structure learning on a toy problem. We then show that for particular fMRI data the simple models usually assumed are not supported. We show that it is possible to learn sensible structural equation models that can provide modelling benefits, but that are not necessarily going to be the same as a true causal model, and suggest that the combination of prior models and learning, or the use of temporal information from dynamic models, may provide more benefits than learning structural equations alone.

1 Introduction

Structural equation modelling (SEM) is a technique widely used in the behavioural sciences. It has also appeared as a standard approach for the analysis of what has become known as effective connectivity in the functional magnetic resonance imaging (fMRI) literature, and is still in common use despite the increasing interest in dynamical methods such as dynamic causal models [6]. Simply put, effective connectivity analysis involves looking at the possible causal influences between brain regions, given measurements of the activity of those regions.

Structural equation models are a Gaussian modelling tool, and are similar to Gaussian belief networks. In fact Gaussian belief networks can be seen as a subset of valid structural equation models. However, structural equation models do not have the same acyclicity constraints as belief networks. It should be noted that the graphical form used in this paper is at odds with traditional SEM representations, and consistent with that used for belief networks, as those will be more familiar to the expected audience.

Within the fMRI context, the use of structural equation modelling generally takes the following form. First, certain regions of interest (commonly called seeds) are chosen according to some understanding of what brain regions might be of interest or of importance. Then neurobiological knowledge is used to propose a connectivity model. This connectivity model states what regions are connected to what other regions, and the direction of the connectivity. This connectivity model is used to define a structural equation model. The parameters of this model are then typically estimated using maximum likelihood methods, and comparison of connection parameters is made across subject classes.
In this paper we consider what can be done when it is hard to specify connectivity a priori, and ask how much we can achieve by learning network structures from the fMRI data itself. The novel developments of this paper include the examination of various generative representations for structural equation models, which allow straightforward comparisons with belief networks and other models such as dynamic causal models. We implement Bayesian Information Criterion approximations to the evidence and use this in a Metropolis-Hastings sampling scheme for learning structural equation models. These models are then applied to toy data and to fMRI data, which allows the examination of the types of assumptions typically made.

1.1 Related Work: Structural Equation Models

Structural equation models and path analysis have a long history. The methods were introduced in the context of genetics in [20], and in econometrics in [7]. They have been used extensively in the social sciences in a variety of ways. Linear Gaussian structural equation models can be split into the case of path analysis [20], where all the variables are directly measurable, and structural equation models with latent variables [1], where latent variable models are allowed. Factor analysis is a special case of the latter. Furthermore, structural equation models can also be characterised by the inclusion of exogenous influences.

Structural equation models have been analysed and understood in Bayesian terms before. They form a part of the causal modelling framework of Pearl [11], and have been discussed within that context, as well as a number of others [11, 4, 13, 10]. Approaches to learning structural equation models have not played a significant part in fMRI methods. One approach is described in [3], where a genetic algorithm is used for the search. In [21], the authors look at learning Bayesian networks but do not consider cyclic networks. For dynamic causal models (rather than structural equation models) the issue of model comparison was dealt with in [12], but large scale structure learning was not considered.

In the fMRI literature, SEMs have generally been used to model "effective connectivity", or rather to model the causal relationships between different brain regions. They were first applied to imaging data by [9], and there have been many further applications [2, 5, 14]. The first analysis of data from schizophrenia studies was detailed in [15]. In fact it seems SEMs have been the most widely used model for connectivity analyses in neuroimaging. In all of the studies cited above, the underlying structure was presumed known or presumed to be one of a small number of possibilities. There has been some discussion of how best to obtain reasonable structures from neuro-anatomical data, but this approach is currently used only very rarely.

2 Why Learn SEMs?

The presumption in much fMRI connectivity analysis is that we can obtain models for activity dependence from neuro-anatomical sources. The problem with this is that it fails to account for the fact that connectivity analysis is usually done with a limited number of regions. It is highly possible that a connection from one region to another is mediated via a third region, which is not included in the SEM model.
The strength of that mediation is unknown from neuro-anatomical data and is generally ignored: most connectivity models focus only on direct anatomical connections, with the accompanying implicit assumption that there are no other regions involved in the network under study, or that these regions would contribute only minimally to the model. Furthermore, just because regions are physically connected does not mean there is any actual functional influence in a particular context. Hence it has to be accepted that neuro-anatomically derived connectivity is a first guess at best. It is not the purpose of this paper to propose that anatomical connectivity be ignored; instead it asks what happens if we go to the other extreme: can we say something about connectivity from the data? In reality, anatomical connectivity models are needed, and can be used to provide good priors for the connections, and even for the relative connection strengths. Statistically, there are huge equivalences in structural equation models that will not be determined by the data alone.

3 Understanding Structural Equation Models

In this section two generative views of structural equation modelling are presented. The idea behind structural equation modelling is that it represents causal dependence between different variables. The fact that cyclic structures are allowed in structural equation models could be seen as an implicit assumption of some underlying dynamic of which the structural equation model is an equilibrium representation. Indeed, that is commonly how effective connectivity models are interpreted in an fMRI context. Two linear models, both of which produce a structural equation model prior, are presented here. Though these models have the same statistical properties, they have different generative motivations and different non-linear extensions, so they are both potentially instructive.

3.1 The Traditional Model

The standard SEM view is that the core SEM structure is a covariance produced by the solution to a set of linear equations x = Ax + ε with Gaussian term ε. This does not have any direct generative elucidation, but can instead be thought of as relating to a deterministic dynamical system subject to uncertain fixed input. Suppose we have a dynamical system x_{t+1} = A x_t + ε, subject to some input ε, where we presume the system input is unknown and Gaussian distributed. To generate from the model, we sample ε, run the dynamical system to its fixed point, and use that fixed point as a sample of x. This fixed point is given by x = (I − A)^{-1} ε, which produces the standard SEM covariance structure for x. This requires A to be a contraction map to obtain stable fixed points. All the other aspects of the general form of SEM are either inputs to or measurements from this system.

3.2 Average Activity of a Gaussian Dynamic Bayesian Network

An alternative and potentially appealing view is that the SEM represents the distribution of the long-term activity of the nodes in a Gaussian dynamic Bayesian network (Kalman filter). Suppose we have x_t = A x_{t-1} + ε_t, where the ε_t are IID Gaussian variables and x_0, x_1, . . . is a series of real variables. This defines a Markov chain, and is the evolution equation of a Gaussian dynamic Bayesian network. Suppose we are at the equilibrium distribution of this Markov chain. Then, setting x̄ = (1/√N) Σ_{t=1}^{N} x_t for large N, we can use the Kalman filter to see that
(1/√N) Σ_{t=1}^{N} x_t = (1/√N) [ A(x_0 − x_N) + A Σ_{i=1}^{N} x_i ] + (1/√N) Σ_{t=1}^{N} ε_t.

Presuming A is a contraction map, (1/√N)[A(x_0 − x_N)] becomes negligibly small, and so x̄ ≈ A x̄ + ε, where ε is distributed identically to ε_t, due to the fact that the variance of a sum of Gaussians is the sum of the variances. The approximation becomes an equality in the large N limit. Again, this is the required form for obtaining the covariance of the SEM. This interpretation says that if we have some latent system running as a Gaussian dynamic Bayesian network, but our measuring equipment is only capable of capturing longer-term averages of the network activity, then our measurements are distributed according to an SEM. This generative interpretation is appealing in the context of fMRI acquisition. Note that in both of these interpretations it is important that A is a contraction. By formulating the generative framework we see it is important to restrict the form of connectivity model in this way.

4 Model Structure

The standard formalism for structural equation models is now outlined. A structural equation model for observational variables y, latent variables x, and sometimes latent input variables ξ and observations z of the input variables, is given by the following equations:

x = (I − A)^{-1} (Rξ + ε),   y = Bx + ν,   z = Cξ + ω,   (1)

where ξ, ε, ν and ω are Gaussian, and A is presumed to be zero-diagonal. For S = I − A, the resulting covariance for the observed variables (y, z) is given by the block matrix

[ B S^{-1} (R K_ξ R^T + K_ε) (S^{-1})^T B^T + K_ν     B S^{-1} R K_ξ C^T ]
[ C K_ξ R^T (S^{-1})^T B^T                            C K_ξ C^T + K_ω   ],   (2)

where K_ξ is the covariance of ξ, K_ε the covariance of ε, etc.

There are a number of common simplifications of this framework. The first case involves presuming no inputs and a fully visible system: we marginalise out the observations z of the input variables and set R = 0, C = 0, B = I and ν = 0. Then the covariance K_1 of y is K_1 = (I − A)^{-1} K_ε [(I − A)^{-1}]^T. The next simplest case presumes once again that there are no inputs, but that the observations are stochastic functions of the latent variables. This involves setting R = 0, C = 0 and B = I, keeping the measurement noise ν. We then have K_2 = (I − A)^{-1} K_ε [(I − A)^{-1}]^T + K_ν. If we view the observations as noisy versions of the latent variables, then K_ν is diagonal. This will be the most general case considered in this paper. Adding any of the remaining components is not particularly demanding, as it simply uses a conditional rather than an unconditional model.

Suppose we denote by K the covariance corresponding to the required model. For most of this paper we presume K = K_2. We then have the following probability for the whole data Y = {y^1, y^2, . . . , y^N}:

P(Y | K, ȳ) = ∏_j (2π)^{−m/2} |K|^{−1/2} exp( −(1/2) (y^j − ȳ)^T K^{-1} (y^j − ȳ) ),   (3)

where the observable model mean is ȳ = x̄ + μ_ν, the latent mean is x̄ = (I − A)^{-1} μ_ε, and where μ_ν and μ_ε, along with the elements of the matrix A and the covariances K_ε and K_ν, are parameters.

5 Priors, Maximum Posterior and Bayesian Information Criterion

The previous section outlined the basic model of the data given the various parameters. In this section we provide prior distributions for the parameters of the structural equation model. Independent Gaussian priors are put on the parameters:

P(A_ij | T) = (T^{1/2} / (2π)^{1/2}) exp( −(T/2) (A_ij − Ā_ij)^2 ),   (4)

with regularisation parameter T. For the purposes of this paper we take Ā_ij = 0, presume we have no particular a priori bias towards positive or negative connections, and use a uniform prior over structures.
An independent prior over connections seems reasonable, as two separate connections between different brain regions would have no a priori reason to be related; any relationship is due to functional purpose and is therefore a posteriori. The use of a uniform prior over all structures is an extreme position, which we have taken in this paper to contrast with using only one structure. In reality we would want to use neurobiologically guided priors over structures. Inverse gamma priors were also originally specified for K_ε and K_ν, along with a prior for the mean. It was found that these typically had no effect on the experiments, and they were dropped for simplicity. Hence K_ε and K_ν are optimised without regularisation, and the mean is set to zero. T is chosen by 10-fold cross-validation from a set of 10 possible values.

We can calculate all the relevant derivatives for the SEM straightforwardly, and adapt the parameters to maximize the posterior of the structural equation model. In this paper we use a conjugate gradient approach. By adding a Bayesian Information Criterion term [16], (−0.5 m log N) for m parameters and N data points, to the log posterior at the maximum posterior solution, we obtain an approximation of the evidence P(Y | M), where M encodes the structural information we are interested in and consists of indicator variables M_ij indicating a connection from node j to node i. This enables us to sample from an approximate posterior distribution over structures to find a sample which best represents the data.

6 Sampling From SEMs

In order to represent the posterior distribution over network structures, we resort to a sampling approach. Because there are no acyclicity constraints, MCMC proposals are simpler than in the comparable situation for belief networks, in that no acyclicity checking needs to be done for the proposals. A simple proposal scheme is to randomly generate a swap matrix M_S which is XORed with M. We choose highly sparse swap matrices, but to reduce the possibility of transitioning randomly about the larger graphs without ever considering smaller networks, we introduce a bias towards removing connections rather than adding connections when generating the swap matrix. This means the proposal is no longer symmetric, and so a corresponding Hastings factor needs to be included in the acceptance probability, so that the result is still a sample from the original posterior.

7 Tests On A Toy Problem

We tested the approach on a toy problem with 8 variables. We sampled 800 data points from y = (I − A)^{-1} ε + ν, for ε Gaussian with unit diagonal covariance, ν Gaussian with 0.2 diagonal covariance, and with A given by

A =
[  0      0      0     −0.26    0      0     −0.03   0    ]
[  0      0      0      0      −0.17   0.31   0.385  0    ]
[  0      0      0      0.49    0.42   0      0.47   0    ]
[ −0.36   0.27   0     −0.13    0      0      0      0    ]
[  0      0      0      0      −0.36   0.34   0      0    ]
[ −0.18   0      0.16   0.55    0     −0.03   0      0    ]
[  0      0.5    0      0      −0.08  −0.25   0      0    ]
[  0      0      0      0.25    0      0     −0.22   0    ]

This connectivity matrix is represented graphically in Figure 1a. In modelling this we used T = 10. This prior ensures that any A that is not of very low prior probability is a contraction. Contraction constraints were also added to the optimisation. Priors on other parameters were set to be broad. An annealed sampling procedure was used for the first 4000 samples from the Metropolis-Hastings Monte Carlo procedure. After that, a further 4000 samples of burn-in were used. The next 4000 samples were used for analysis. We assess which edges are common in the samples, and which sampled graphs have the highest posterior.
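A compact sketch of this toy pipeline follows: data are sampled from the SEM, each candidate structure is scored by a BIC-penalised maximum-posterior fit, and a Metropolis step accepts or rejects structure flips. The optimiser, the contraction constraints, the annealing, and the removal-biased proposal (with its Hastings factor) are simplified away, so this is an assumption-laden illustration rather than the procedure used in the paper.

```python
# Sketch of sections 5-7: SEM sampling, BIC-scored structure fits, and a
# Metropolis sampler over structures. K_eps and K_nu are held fixed.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
m = 8  # number of variables

def sample_sem(A, n, obs_var=0.2):
    """y = (I - A)^{-1} eps + nu, unit-variance eps, obs_var nu."""
    L = np.linalg.inv(np.eye(m) - A)
    eps = rng.standard_normal((n, m))
    nu = np.sqrt(obs_var) * rng.standard_normal((n, m))
    return eps @ L.T + nu

def neg_log_post(a_free, mask, Y, T=10.0, obs_var=0.2):
    """-log posterior over the free entries of A selected by `mask`."""
    A = np.zeros((m, m))
    A[mask] = a_free
    S_inv = np.linalg.inv(np.eye(m) - A)
    K = S_inv @ S_inv.T + obs_var * np.eye(m)
    _, logdet = np.linalg.slogdet(K)
    quad = np.einsum('ij,jk,ik->', Y, np.linalg.inv(K), Y)
    return 0.5 * (len(Y) * logdet + quad) + 0.5 * T * np.sum(a_free ** 2)

def bic_score(mask, Y):
    """BIC approximation to the log evidence of a structure."""
    k = int(mask.sum())
    if k == 0:
        return -neg_log_post(np.zeros(0), mask, Y)
    res = minimize(neg_log_post, 0.01 * rng.standard_normal(k), args=(mask, Y))
    return -res.fun - 0.5 * k * np.log(len(Y))

def propose(mask):
    """Flip one random off-diagonal entry. The paper biases proposals towards
    edge removal and corrects with a Hastings factor; a symmetric flip keeps
    that factor equal to one."""
    out = mask.copy()
    i, j = rng.integers(m), rng.integers(m)
    while i == j:
        i, j = rng.integers(m), rng.integers(m)
    out[i, j] = not out[i, j]
    return out

A_true = np.zeros((m, m))
A_true[0, 3], A_true[4, 1], A_true[6, 5] = -0.26, 0.5, -0.25
Y = sample_sem(A_true, 800)
mask = np.zeros((m, m), dtype=bool)
score = bic_score(mask, Y)
for _ in range(20):  # the paper runs thousands of annealed/burn-in steps
    cand = propose(mask)
    cand_score = bic_score(cand, Y)
    if np.log(rng.uniform()) < cand_score - score:  # Metropolis acceptance
        mask, score = cand, cand_score
print(int(mask.sum()), round(score, 1))
```

Keeping a histogram of the visited masks then gives the edge frequencies and highest-posterior structures analysed below.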
Figure 1b illustrates the edges which are present in more than 0.15 of the samples. It can be seen that many of the critical edges are there in most samples (indeed, some are always there). Those which are missing tend to be either low-magnitude connections or missing due to directional confusion.

Figure 1: (a) Graphical structure of the ground truth model for the toy data; (b) edges present in more than 0.15 of the cases; (c) the highest posterior structure from the sample; (d) a random sample.

The graphs for the maximum posterior sample and a random sample are shown in Figure 1. We can see that, again, in the maximum posterior sample there is a misplaced edge (the edge from 5 to 6 is replaced by one from 5 to 1), and a number are missing or have exchanged direction. The samples generally have likelihoods which are very similar to the likelihood of the true model. We can conclude from this that we can gain some information from learning SEM structures, but, as with learning any graphical model, there are many symmetries and equivalences, so it is vital not to infer too much from the learnt structures.

8 Tests On fMRI Data

The approach of this paper was tested on two different fMRI datasets. The first dataset (D1) was taken from a dataset that had previously been used to examine inter-session variance in a single subject [8, 17]. We used the auditory-paced finger-tapping task: briefly, a single subject tapped his right index finger, paced by an auditory tone (1.5 Hz). Each activation epoch was alternated with a rest epoch, in which the pacing tone was delivered to control for auditory activation. Thirteen blocks were collected per session (seven rest and six active). Each block was 24 s/6 scans long, making 78 scans in total for each of 33 sessions. The subject maintained fixation on a cross that was back-projected onto a transparent screen by an LCD video projector, as in previous experiments. The subject was a healthy 23-year-old right-handed male.

The data were acquired on a Siemens MAGNETOM Vision (Siemens, Erlangen, Germany) at 2 T. Each BOLD-EPI volume scan consisted of 48 transverse slices (in-plane matrix 64×64; voxel size 3×3×3 mm; TE = 40 ms; TR = 4.1 s). A T1-weighted high-resolution MRI of the subject (1 × 1 × 1.5 mm resolution) was acquired to facilitate anatomical localisation of the functional data. The data were processed with the statistical parametric mapping software SPM5 (Wellcome Department of Cognitive Neurology; www.fil.ion.ucl.ac.uk/spm). After removal of the first two volumes to account for T1 saturation effects, cerebral volumes were realigned to correct for both within- and between-session subject motion. The data were filtered with a 128 s high-pass filter, and an AR(1) model was used to account for serial correlation in the data. Experimental effects were estimated using session design matrices modelling the hemodynamically convolved time-course of the active movement condition, plus 6 subject movement parameters. Note that no spatial smoothing was applied to this dataset, in order to preserve single-voxel time series. Seeds were selected from significantly active voxels identified using a random effects analysis in SPM5 (one-sample t-test across 33 sessions; p < 0.05 FWE-corrected for multiple comparisons). For comparison with previous extant work, the most significant voxel in each cluster was chosen as a seed, giving 13 seeds representing 13 separate anatomical regions.
When it was obvious that a given cluster encompassed more than one distinct anatomical region, seeds were also selected for the other regions covered by the cluster. 2000 data points were used for training; the remaining 574 were reserved as a test set.

The second dataset (D2) was from a long-term study of subjects who are at genetically enhanced risk of schizophrenia. Imaging was carried out on 90 subjects at the Brain Imaging Research Centre for Scotland (Edinburgh, Scotland, UK) on a GE 1.5 T Signa scanner. A high-resolution structural scan was acquired using a 3D T1-weighted sequence (TI = 600 ms). Functional data were acquired using an EPI sequence. A total of 204 volumes were acquired, and the first four volumes of each acquisition were discarded. Preliminary analysis was carried out using SPM99. Data were first realigned to correct for head movement, normalized to the standard EPI template, and smoothed. The resulting data consist of an image-volume series of 200 time points for each of the remaining 90 patients. The voxel time courses were temporally filtered. In order to reduce task-related effects, we modelled the task conditions with standard block effects (Hayling task), all convolved with canonical hemodynamic response functions, and fitted a general linear model (which also included regressors for the estimated head movement) to the time-filtered data; the residuals of this procedure were used as the data for all the work described in this paper. The full data set was split into two halves, a training and a test set: data from 45 of the subjects were used for training and 45 for testing.

For an effective connectivity analysis, a number of brain regions (seeds) were chosen on the basis of the results of a functional connectivity study [19], and having regard to areas which may be of particular clinical interest. In total 14 regions were chosen, along with their 14 cross-hemisphere counterparts. Hence we are interested in learning a 28 by 28 connectivity matrix.

8.1 Learning SEM Structure

For both datasets, a procedure similar to the toy example was followed for learning structure from the fMRI data. The stability of the log posterior, along with estimates of cross-correlation against lag, were used as heuristics to determine convergence prior to obtaining 10000 sample points. A fully visible path analysis model (covariance K_1), where no measurement noise is included, is typical in fMRI analysis (e.g. [15] for schizophrenia studies); we found that samples from the posterior of this model were in fact so highly connected that displaying them would be infeasible. For D2 a connectivity of 350 of the 752 total possible connections was typical. However, note that only 376 connections are needed to fully specify a general covariance. Hence we can assume that in this situation the data is not suggesting any particular structure which is reasonably amenable to path analysis.

We can generalise the path analysis model by making the region activities latent variables, and allowing the measured variables to be noisy versions of those regions. In SEM terms this is equivalent to assuming the covariance structure given by K_2. A repeat of the whole procedure with this covariance results in much smaller structures, and we focus on this approach. For dataset D1, we sample posterior structures given the training data with T = 100. There is notable variation in these structures, although some key links (e.g. Left Motor Cortex (L M1) to Left Posterior Parietal Cortex (L PPC)) are included in most samples.
There is notable variation in these structures although some key links (eg Left Motor Cortex (L M1) to Left Posterior Parietal Cortex (L PPC) are included in most samples. In addition an a priori connectivity (a) (b) Figure 2: Structure for (a) the hand specified model (b) the highest posterior sample. (a) (b) (c) Figure 3: Graphical structure for (a) the highest posterior structure from the sample (b) random sample. (c) a sample from the two tier model. The regions are Inferior Frontal Gyrus, Medial Frontal Gyrus, Ant. Cingulate, Frontal Operculum, Superior Frontal Gyrus, Middle Frontal Gyrus, Superior Temporal Gyrus, Middle Temporal Gyrus, Insula, Thalamus, Amygdala Hippocampal Region, Cuneus/Precuneus, Inferior Parietal Lobule and Posterior Cerebellum. structure is proposed for the regions in the study, taking into account the task involved. This was obtained by using knowledge of neuroanatomical connectivity drawn from studies using tract-tracing in non-human primates. It was produced independent of the connectivity analysis and without knowledge of its results, but taking into account the seed locations and their corresponding activities. Note that this is a simple finger tapping motor task with seeds corresponding to the associated regions. Though not trivial, we would expect the specification to be easier and more accurate here than for more complicated cognitive tasks, due to the high number of papers using this task in functional neuroimaging. Task D1 is also of note due to its focus on repeated scanning in a single individual, thus negating any problems in seed selection that may arise from inter-subject spatial variance. These two cases described above are specified as different hypothesised models. We denote the hand-specified structure MH and we select the maximum a posteriori sample ML (for ?Learnt Model?) as a potential alternative. The two structures are illustrated in Figure 2. The maximum a posteriori parameters are then estimated for the two models using the same conjugate gradient procedure on the same dataset. These two models are then used predictively on the remaining unseen test data. We compute the predictive log-likelihoods for each model. We find that the best predictive log-likelihoods for each approach are the same (3SF) for both models. They are also the same as the predictive likelihood using the full sample covariance, which given the large data sizes used is well specified. Both these models perform better than other random models with equivalent numbers of connections. In reality learnt models are going to be used in new situations and situations with less data. One test of the appropriateness of a model is to assess its predictive capability when trained on little data. By estimating the model parameters on 100 data points, instead of 2000, we find that the learnt model performs very slightly better than the hand specified model (log odds ratio of 63 on a 574 point test set), and both perform better than the full covariance (log odds of 292). This indicates that both MH and ML are providing salient reduced representations which capture useful characteristics of the data. We also ran tests on D2. Maximum posterior samples and a random sample are illustrated in Figure 3. Note that although these samples appear to still be quite highly connected, they in fact have about 130 connections. Even so this is significantly greater than the idealised connectivity structures typically used in most studies. 
One further approach is to assume a fully connected structure, but where the connectivity is in two categories: we place priors on connectivity with the same values of T_ij as before for the strong connections, and much larger values for the weaker connections. When this is added to the form of the model (where we make the incorrect but practical assumption that the BIC approximation still holds for the stronger connections), we obtain even simpler structures. Following this procedure, we find that models of the form of Figure 3c are typical samples from the posterior, where only the larger connections are shown. Again, connections such as those between the Cuneus/Precuneus and the Superior Frontal Gyrus, the thalamic connections, and some of the cross-hemispheric connections are amongst those that would be expected. This approach is related to recent work on the use of sparse priors for effective connectivity [18].

9 Future Directions

This work demonstrates that if we learn structural equation models from data, we find there is little evidence for the simple forms of path analysis model in common use in the fMRI literature. We suggest that learning connectivity can be a reasonable complement to current procedures where prior specification is hard. Learning on its own does discover useful parameterised representations, but these parameterisations are not the same as reasonable prior specifications. This is unsurprising, due to the statistical equivalence of many SEM structures. It should be expected that combining learnt structures with prior anatomical models will help in the specification of more accurate connectivity assumptions, as it will reduce the number of equivalences and focus on more reasonable structural forms. Furthermore, future comparisons can be made using a sample of reasonable models instead of a single a priori chosen model. We would also expect the major gains in learning models to come from a focus on dynamical networks, which do not suffer from specificity problems. Even if the level of temporal information is small, any temporal information provides handles for inferring causality that are unavailable with static equilibrium models.

References

[1] K.A. Bollen. Structural Equations with Latent Variables. John Wiley and Sons, 1989.

[2] C. Buchel, J.T. Coull, and K.J. Friston. The predictive value of changes in effective connectivity for human learning. Science, 283:1528-1541, 1999.

[3] E. Bullmore, B. Horwitz, G. Honey, M. Brammer, S. Williams, and T. Sharma. How good is good enough in path analysis of fMRI data? NeuroImage, 11:289-301, 2000.

[4] D. Dash. Restructuring dynamic causal systems in equilibrium. In Proc. Uncertainty in AI 2005, 2005.

[5] K.J. Friston and C. Buchel. Attentional modulation of effective connectivity from V2 to V5/MT in humans. Proceedings of the National Academy of Sciences, 97:7591-7596, 2000.

[6] K.J. Friston, L. Harrison, and W.D. Penny. Dynamic causal modelling. NeuroImage, 19:1273-1302, 2003.

[7] T. Haavelmo. The statistical implications of a system of simultaneous equations. Econometrica, 11:1-12, 1943.

[8] D. McGonigle, A. Howseman, B. Athwal, K.J. Friston, R. Frackowiak, and A. Holmes. Variability in fMRI: An examination of intersession differences. NeuroImage, 11:708-734, 2000.

[9] A.R. McIntosh and F. Gonzalez-Lima. Structural equation modelling and its application to network analysis in functional brain imaging. Human Brain Mapping, 2:2-22, 1994.

[10] P. Spirtes, C. Glymour, and R. Scheines. Causation, Prediction and Search.
MIT Press, 2nd edition, 2001.

[11] J. Pearl. Causality. Cambridge University Press, 2000.

[12] W.D. Penny, K.E. Stephan, A. Mechelli, and K.J. Friston. Comparing dynamic causal models. NeuroImage, 22:1157-1172, 2004.

[13] T. Richardson. A discovery algorithm for directed cyclic graphs. In Proceedings of the 12th Conference on Uncertainty in Artificial Intelligence, 1996.

[14] J. Rowe, K.E. Stephan, K. Friston, R. Frackowiak, A. Lees, and R. Passingham. Attention to action in Parkinson's disease. Brain, 125:276-289, 2002.

[15] R. Schlosser, T. Gesierich, B. Kauffman, G. Vucurevic, S. Hunsche, J. Gawehn, and P. Stoeter. Altered effective connectivity during working memory performance in schizophrenia: a study with fMRI and structural equation modeling. NeuroImage, 19:751-763, 2003.

[16] G. Schwarz. Estimating the dimension of a model. Annals of Statistics, 6:461-464, 1978.

[17] S.M. Smith, C.F. Beckmann, N. Ramnani, M.W. Woolrich, P.R. Bannister, M. Jenkinson, P.M. Matthews, and D. McGonigle. Variability in fMRI: A re-examination of intersession differences. Human Brain Mapping, 24:248-257, 2005.

[18] P.A. Valdes-Sosa, J.M. Sanchez-Bornot, A. Lage-Castellanos, M. Vega-Hernandez, J. Bosch-Bayard, L. Melie-Garcia, and E. Canales-Rodriguez. Estimating brain functional connectivity with sparse multivariate autoregression. Philosophical Transactions of the Royal Society of London B: Biological Sciences, 360:969-981, 2005.

[19] H.C. Whalley, E. Simonotto, I. Marshall, D.G.C. Owens, N.H. Goddard, E.C. Johnstone, and S.M. Lawrie. Functional disconnectivity in subjects at high genetic risk of schizophrenia. Brain, 128:2097-2108, 2005.

[20] S. Wright. Correlation and causation. Journal of Agricultural Research, 20:557-585, 1921.

[21] X. Zheng and J.C. Rajapakse. Learning functional structure from fMR images. NeuroImage, 31:1601-1613, 2006.
2,230
3,023
Unified Inference for Variational Bayesian Linear Gaussian State-Space Models

David Barber, IDIAP Research Institute, rue du Simplon 4, Martigny, Switzerland ([email protected])
Silvia Chiappa, IDIAP Research Institute, rue du Simplon 4, Martigny, Switzerland ([email protected])

Abstract

Linear Gaussian State-Space Models are widely used, and a Bayesian treatment of their parameters is therefore of considerable interest. The approximate Variational Bayesian method applied to these models is an attractive approach, used successfully in applications ranging from acoustics to bioinformatics. The most challenging aspect of implementing the method is performing inference on the hidden state sequence of the model. We show how to convert the inference problem so that standard Kalman Filtering/Smoothing recursions from the literature may be applied. This is in contrast to previously published approaches based on Belief Propagation. Our framework both simplifies and unifies the inference problem, so that future applications may be more easily developed. We demonstrate the elegance of the approach on Bayesian temporal ICA, with an application to finding independent dynamical processes underlying noisy EEG signals.

1 Linear Gaussian State-Space Models

Linear Gaussian State-Space Models (LGSSMs)¹ are fundamental in time-series analysis [1, 2, 3]. In these models the observations $v_{1:T}$² are generated from an underlying dynamical system on $h_{1:T}$ according to

$$v_t = B h_t + \eta^v_t, \quad \eta^v_t \sim \mathcal{N}(0_V, \Sigma_V), \qquad h_t = A h_{t-1} + \eta^h_t, \quad \eta^h_t \sim \mathcal{N}(0_H, \Sigma_H),$$

where $\mathcal{N}(\mu, \Sigma)$ denotes a Gaussian with mean $\mu$ and covariance $\Sigma$, and $0_X$ denotes an $X$-dimensional zero vector. The observation $v_t$ has dimension $V$ and the hidden state $h_t$ has dimension $H$. Probabilistically, the LGSSM is defined by

$$p(v_{1:T}, h_{1:T} \mid \theta) = p(v_1 \mid h_1)\, p(h_1) \prod_{t=2}^{T} p(v_t \mid h_t)\, p(h_t \mid h_{t-1}),$$

with $p(v_t \mid h_t) = \mathcal{N}(B h_t, \Sigma_V)$, $p(h_t \mid h_{t-1}) = \mathcal{N}(A h_{t-1}, \Sigma_H)$ and $p(h_1) = \mathcal{N}(\mu, \Sigma)$, where $\theta = \{A, B, \Sigma_H, \Sigma_V, \mu, \Sigma\}$ denotes the model parameters. Because of the widespread use of these models, a Bayesian treatment of the parameters is of considerable interest [4, 5, 6, 7, 8]. An exact implementation of the Bayesian LGSSM is formally intractable [8], and recently a Variational Bayesian (VB) approximation has been studied [4, 5, 6, 7, 9]. The most challenging part of implementing the VB method is performing inference over $h_{1:T}$, and previous authors have developed their own specialized routines, based on Belief Propagation, since standard LGSSM inference routines appear, at first sight, not to be applicable.

A key contribution of this paper is to show how the Variational Bayesian treatment of the LGSSM can be implemented using standard LGSSM inference routines. Based on the insight we provide, any standard inference method may be applied, including those specifically designed to improve numerical stability [2, 10, 11]. In this article, we describe the predictor-corrector and Rauch-Tung-Striebel recursions [2], and also suggest a small modification that reduces computational cost. The Bayesian LGSSM is particularly of interest when strong prior constraints are needed to find adequate solutions. One such case is EEG signal analysis, where we wish to extract sources that evolve independently through time. Since EEG is particularly noisy [12], a prior that encourages sources to have preferential dynamics is advantageous.

¹ Also called Kalman Filters/Smoothers, or Linear Dynamical Systems.
² $v_{1:T}$ denotes $v_1, \ldots, v_T$.
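To make the generative model concrete, here is a minimal Python sketch that draws a joint sample from the LGSSM defined above; the function name and interface are illustrative and not part of any published code.

```python
import numpy as np

def sample_lgssm(A, B, Sigma_H, Sigma_V, mu, Sigma, T, rng=None):
    """Draw (h_{1:T}, v_{1:T}) from the LGSSM defined above (illustrative sketch)."""
    rng = np.random.default_rng() if rng is None else rng
    H, V = A.shape[0], B.shape[0]
    h = np.zeros((T, H))
    v = np.zeros((T, V))
    h[0] = rng.multivariate_normal(mu, Sigma)                # p(h_1) = N(mu, Sigma)
    v[0] = rng.multivariate_normal(B @ h[0], Sigma_V)        # p(v_t | h_t) = N(B h_t, Sigma_V)
    for t in range(1, T):
        h[t] = rng.multivariate_normal(A @ h[t-1], Sigma_H)  # p(h_t | h_{t-1}) = N(A h_{t-1}, Sigma_H)
        v[t] = rng.multivariate_normal(B @ h[t], Sigma_V)
    return h, v
```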
This application is discussed in Section 4, and demonstrates the ease of applying our VB framework.

2 Bayesian Linear Gaussian State-Space Models

In the Bayesian treatment of the LGSSM, instead of considering the model parameters $\theta$ as fixed, we define a prior distribution $p(\theta \mid \hat\theta)$, where $\hat\theta$ is a set of hyperparameters. Then

$$p(v_{1:T} \mid \hat\theta) = \int_\theta p(v_{1:T} \mid \theta)\, p(\theta \mid \hat\theta). \qquad (1)$$

In a full Bayesian treatment we would define additional prior distributions over the hyperparameters $\hat\theta$. Here we take instead the ML-II ("evidence") framework, in which the optimal set of hyperparameters is found by maximizing $p(v_{1:T} \mid \hat\theta)$ with respect to $\hat\theta$ [6, 7, 9].

For the parameter priors, here we define Gaussians on the columns of $A$ and $B$³:

$$p(A \mid \alpha, \Sigma_H) \propto \prod_{j=1}^{H} e^{-\frac{\alpha_j}{2} (A_j - \hat A_j)^{\mathsf T} \Sigma_H^{-1} (A_j - \hat A_j)}, \qquad p(B \mid \beta, \Sigma_V) \propto \prod_{j=1}^{H} e^{-\frac{\beta_j}{2} (B_j - \hat B_j)^{\mathsf T} \Sigma_V^{-1} (B_j - \hat B_j)},$$

which has the effect of biasing the transition and emission matrices towards the desired forms $\hat A$ and $\hat B$. The conjugate priors for general inverse covariances $\Sigma_H^{-1}$ and $\Sigma_V^{-1}$ are Wishart distributions [7]⁴. In the simpler case assumed here of diagonal covariances, these become Gamma distributions [5, 7]. The hyperparameters are then $\hat\theta = \{\alpha, \beta\}$⁵.

Variational Bayes

Optimizing Eq. (1) with respect to $\hat\theta$ is difficult due to the intractability of the integrals. Instead, in VB, one considers the lower bound [6, 7, 9]⁶

$$\mathcal{L} = \log p(v_{1:T} \mid \hat\theta) \;\ge\; H_q(\theta, h_{1:T}) + \big\langle \log p(\theta \mid \hat\theta) \big\rangle_{q(\theta)} + \big\langle E(h_{1:T}, \theta) \big\rangle_{q(\theta, h_{1:T})} \;\equiv\; \mathcal{F},$$

where $E(h_{1:T}, \theta) \equiv \log p(v_{1:T}, h_{1:T} \mid \theta)$, $H_d(x)$ signifies the entropy of the distribution $d(x)$, and $\langle\cdot\rangle_{d(x)}$ denotes expectation with respect to $d(x)$. The key approximation in VB is $q(\theta, h_{1:T}) \approx q(\theta)\, q(h_{1:T})$, from which one may show that, for optimality of $\mathcal{F}$,

$$q(h_{1:T}) \propto e^{\langle E(h_{1:T}, \theta) \rangle_{q(\theta)}}, \qquad q(\theta) \propto p(\theta \mid \hat\theta)\, e^{\langle E(h_{1:T}, \theta) \rangle_{q(h_{1:T})}}.$$

These coupled equations need to be iterated to convergence. The updates for the parameter posterior $q(\theta)$ are straightforward and are given in Appendices A and B. Once converged, the hyperparameters are updated by maximizing $\mathcal{F}$ with respect to $\hat\theta$, which leads to simple update formulae [7]. Our main concern is the update for $q(h_{1:T})$, for which this paper makes a departure from treatments previously presented.

3 Unified Inference on $q(h_{1:T})$

Optimally, $q(h_{1:T})$ is Gaussian since, up to a constant, $\langle E(h_{1:T}, \theta) \rangle_{q(\theta)}$ is quadratic in $h_{1:T}$⁷:

$$-\frac{1}{2} \sum_{t=1}^{T} \Big\{ \big\langle (v_t - B h_t)^{\mathsf T} \Sigma_V^{-1} (v_t - B h_t) \big\rangle_{q(B, \Sigma_V)} + \big\langle (h_t - A h_{t-1})^{\mathsf T} \Sigma_H^{-1} (h_t - A h_{t-1}) \big\rangle_{q(A, \Sigma_H)} \Big\}. \qquad (2)$$

In addition, optimally, $q(A \mid \Sigma_H)$ and $q(B \mid \Sigma_V)$ are Gaussians (see Appendix A), so we can easily carry out the averages in Eq. (2). The further averages over $q(\Sigma_H)$ and $q(\Sigma_V)$ are also easy due to conjugacy. Whilst this defines the distribution $q(h_{1:T})$, quantities such as $q(h_t)$, required for example for the parameter updates (see the Appendices), need to be inferred from this distribution. Clearly, in the non-Bayesian case the averages over the parameters are not present, and the above simply represents the posterior distribution of an LGSSM whose visible variables have been clamped to their evidential states; in that case, inference can be performed using any standard LGSSM routine. Our aim, therefore, is to represent the averaged Eq. (2) directly as the posterior distribution $\tilde q(h_{1:T} \mid \tilde v_{1:T})$ of an LGSSM, for some suitable parameter settings.

³ More general Gaussian priors may be more suitable depending on the application.
⁴ For expositional simplicity, we do not put priors on $\mu$ and $\Sigma$.
⁵ For simplicity, we keep the parameters of the Gamma priors fixed.
⁶ Strictly, we should write $q(\cdot \mid v_{1:T})$ throughout. We omit the dependence on $v_{1:T}$ for notational convenience.
⁷ For simplicity of exposition, we ignore the first time-point here.
Mean + Fluctuation Decomposition

A useful decomposition is to write

$$\big\langle (v_t - B h_t)^{\mathsf T} \Sigma_V^{-1} (v_t - B h_t) \big\rangle_{q(B, \Sigma_V)} = \underbrace{(v_t - \langle B \rangle h_t)^{\mathsf T} \langle \Sigma_V^{-1} \rangle (v_t - \langle B \rangle h_t)}_{\text{mean}} + \underbrace{h_t^{\mathsf T} S_B\, h_t}_{\text{fluctuation}},$$

and similarly

$$\big\langle (h_t - A h_{t-1})^{\mathsf T} \Sigma_H^{-1} (h_t - A h_{t-1}) \big\rangle_{q(A, \Sigma_H)} = \underbrace{(h_t - \langle A \rangle h_{t-1})^{\mathsf T} \langle \Sigma_H^{-1} \rangle (h_t - \langle A \rangle h_{t-1})}_{\text{mean}} + \underbrace{h_{t-1}^{\mathsf T} S_A\, h_{t-1}}_{\text{fluctuation}},$$

where the parameter covariances are $S_B \equiv \langle B^{\mathsf T} \Sigma_V^{-1} B \rangle - \langle B \rangle^{\mathsf T} \langle \Sigma_V^{-1} \rangle \langle B \rangle = V \bar H_B^{-1}$ and $S_A \equiv \langle A^{\mathsf T} \Sigma_H^{-1} A \rangle - \langle A \rangle^{\mathsf T} \langle \Sigma_H^{-1} \rangle \langle A \rangle = H \bar H_A^{-1}$ (with $\bar H_A$ and $\bar H_B$ defined in Appendix A). The mean terms simply represent a clamped LGSSM with averaged parameters. However, the extra contributions from the fluctuations mean that Eq. (2) cannot be written as a clamped LGSSM with averaged parameters alone. In order to deal with these extra terms, our idea is to treat the fluctuations as arising from an augmented visible variable, for which Eq. (2) can then be considered as a clamped LGSSM.

Inference Using an Augmented LGSSM

To represent Eq. (2) as an LGSSM $\tilde q(h_{1:T} \mid \tilde v_{1:T})$, we may augment $v_t$ and $B$ as⁸

$$\tilde v_t = \mathrm{vert}(v_t, 0_H, 0_H), \qquad \tilde B = \mathrm{vert}(\langle B \rangle, U_A, U_B),$$

where $U_A$ is the Cholesky factor of $S_A$, so that $U_A^{\mathsf T} U_A = S_A$, and similarly $U_B$ is the Cholesky factor of $S_B$. The equivalent LGSSM $\tilde q(h_{1:T} \mid \tilde v_{1:T})$ is then completed by specifying⁹

$$\tilde A \equiv \langle A \rangle, \quad \tilde \Sigma_H \equiv \langle \Sigma_H^{-1} \rangle^{-1}, \quad \tilde \Sigma_V \equiv \mathrm{diag}\big(\langle \Sigma_V^{-1} \rangle^{-1}, I_H, I_H\big), \quad \tilde \mu \equiv \mu, \quad \tilde \Sigma \equiv \Sigma.$$

The validity of this parameter assignment can be checked by showing that, up to negligible constants, the exponent of this augmented LGSSM has the same form as Eq. (2)¹⁰. Now that the problem has been written as an LGSSM $\tilde q(h_{1:T} \mid \tilde v_{1:T})$, standard inference routines in the literature may be applied to compute $q(h_t \mid v_{1:T}) = \tilde q(h_t \mid \tilde v_{1:T})$ [1, 2, 11]¹¹.

⁸ The notation $\mathrm{vert}(x_1, \ldots, x_n)$ stands for vertically concatenating the arguments $x_1, \ldots, x_n$.
⁹ Strictly, we need a time-dependent emission, $\tilde B_t = \tilde B$ for $t = 1, \ldots, T-1$; for time $T$, $\tilde B_T$ has the Cholesky factor $U_A$ replaced by $0_{H,H}$.
¹⁰ There are several ways of achieving a similar augmentation. We chose this one since, in the non-Bayesian limit $U_A = U_B = 0_{H,H}$, no numerical instabilities are introduced.
¹¹ Note that, since the augmented LGSSM $\tilde q(h_{1:T} \mid \tilde v_{1:T})$ is designed to match the fully clamped distribution $q(h_{1:T} \mid v_{1:T})$, the filtered posterior $\tilde q(h_t \mid \tilde v_{1:t})$ does not correspond to $q(h_t \mid v_{1:t})$.
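As a concrete illustration of this construction, the sketch below assembles the augmented quantities $(\tilde v_t, \tilde B, \tilde \Sigma_V)$ from the posterior means and fluctuation covariances. All names are ours, and the time-$T$ zeroing of $U_A$ (footnote 9) is noted but left to the caller.

```python
import numpy as np
from scipy.linalg import block_diag

def build_augmented_lgssm(v, B_mean, S_A, S_B, Sigma_V_bar):
    """Augmented observations/emission of Section 3 (sketch).
    v is (T, V); S_A, S_B are the parameter-fluctuation covariances;
    Sigma_V_bar denotes <Sigma_V^{-1}>^{-1}.  Note: at t = T the block U_A
    inside B_aug should be replaced by zeros (footnote 9)."""
    T, _ = v.shape
    H = S_A.shape[0]
    U_A = np.linalg.cholesky(S_A).T        # upper factor, so U_A^T U_A = S_A
    U_B = np.linalg.cholesky(S_B).T        # likewise U_B^T U_B = S_B
    B_aug = np.vstack([B_mean, U_A, U_B])              # B~ = vert(<B>, U_A, U_B)
    v_aug = np.hstack([v, np.zeros((T, 2 * H))])       # v~_t = vert(v_t, 0_H, 0_H)
    Sigma_V_aug = block_diag(Sigma_V_bar, np.eye(H), np.eye(H))
    return v_aug, B_aug, Sigma_V_aug
```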
t) h t t t t+1 end for end procedure For completeness, we decided to describe the standard predictor-corrector form of the Kalman Filter, together with the Rauch-Tung-Striebel Smoother [2]. These are given in Algorithm 1, where q?(ht |? v1:T ) is computed by calling the FORWARD and BACKWARD procedures. We present two variants of the FORWARD pass. Either we may call procedure FORWARD in ? B, ? ? ?H, ? ?V , ? ? and the augmented visible variables v?t in which Algorithm 1 with parameters A, ?, ? we use steps 1a, 2a, 5a and 6a. This is exactly the predictor-corrector form of a Kalman Filter [2]. Otherwise, in order to reduce the computational cost, we may call procedure FORWARD with the ? ? ? hBi , ? ? H , ??1 ?1 , ? ? and the original visible variable vt in which we use steps parameters A, ?, ? V T 1b (where UAB UAB ? SA +SB ), 2b, 5b and 6b. The two algorithms are mathematically equivalent. Computing q(ht |v1:T ) = q?(ht |? v1:T ) is then completed by calling the common BACKWARD pass. The important point here is that the reader may supply any standard Kalman Filtering/Smoothing routine, and simply call it with the appropriate parameters. In some parameter regimes, or in very long time-series, numerical stability may be a serious concern, for which several stabilized algorithms have been developed over the years, for example the square-root forms [2, 10, 11]. By converting the problem to a standard form, we have therefore unified and simplified inference, so that future applications may be more readily developed12 . 3.1 Relation to Previous Approaches An alternative approach to the one above, and taken in [5, 7], is to write the posterior as log q(h1:T ) = T X ?t (ht?1 , ht ) + const. t=2 for suitably defined quadratic forms ?t (ht?1 , ht ). Here the potentials ?t (ht?1 , ht ) encode the averaging over the parameters A, B, ?H , ?V . The approach taken in [7] is to recognize this as a 12 The computation of the log-likelihood bound does not require any augmentation. pairwise Markov chain, for which the Belief Propagation recursions may be applied. The approach in [5] is based on a Kullback-Leibler minimization of the posterior with a chain structure, which is algorithmically equivalent to Belief Propagation. Whilst mathematically valid procedures, the resulting algorithms do not correspond to any of the standard forms in the Kalman Filtering/Smoothing literature, whose properties have been well studied [14]. 4 An Application to Bayesian ICA A particular case for which the Bayesian LGSSM is of interest is in extracting independent source signals underlying a multivariate timeseries [5, 15]. This will demonstrate how the approach developed in Section 3 makes VB easily to apply. The sources si are modeled as independent in the following sense: p(si1:T , sj1:T ) = p(si1:T )p(sj1:T ), for i 6= j, i, j = 1, . . . , C. Independence implies block diagonal transition and state noise matrices A, ?H and ?, where each block c has dimension Hc . A one dimensional source sct for each independent dynamical subsystem is then formed from sct = 1Tc hct , where 1c is a unit vector and hct is the state of dynamical system c. Combining the sources, we can write st = P ht , where P = diag(1T1 , . . . , 1TC ), ht = vert(h1t , . . . , hC t ). The resulting emission matrix is constrained to be of the form B = W P , where W is the V ? C mixing matrix. This means that the observations are formed from linearly mixing the sources, vt = W st + ?tv . 
4 An Application to Bayesian ICA

A particular case for which the Bayesian LGSSM is of interest is extracting independent source signals underlying a multivariate time series [5, 15]. This will demonstrate how the approach developed in Section 3 makes VB easy to apply. The sources $s^i$ are modeled as independent in the following sense: $p(s^i_{1:T}, s^j_{1:T}) = p(s^i_{1:T})\, p(s^j_{1:T})$ for $i \ne j$, $i, j = 1, \ldots, C$. Independence implies block-diagonal transition and state-noise matrices $A$, $\Sigma_H$ and $\Sigma$, where each block $c$ has dimension $H_c$. A one-dimensional source $s^c_t$ for each independent dynamical subsystem is then formed from $s^c_t = \mathbf{1}_c^{\mathsf T} h^c_t$, where $\mathbf{1}_c$ is a unit vector and $h^c_t$ is the state of dynamical system $c$. Combining the sources, we can write $s_t = P h_t$, where $P = \mathrm{diag}(\mathbf{1}_1^{\mathsf T}, \ldots, \mathbf{1}_C^{\mathsf T})$ and $h_t = \mathrm{vert}(h^1_t, \ldots, h^C_t)$. The resulting emission matrix is constrained to be of the form $B = W P$, where $W$ is the $V \times C$ mixing matrix. This means that the observations are formed by linearly mixing the sources, $v_t = W s_t + \eta^v_t$. The graphical structure of this model is presented in Fig. 1. To encourage redundant components to be removed, we place a zero-mean Gaussian prior on $W$. In this case, we do not define a prior for the parameters $\Sigma_H$ and $\Sigma_V$, which are instead considered as hyperparameters. More details of the model are given in [15].

[Figure 1: The graphical structure of the LGSSM for ICA.]

The constraint $B = W P$ requires a minor modification from Section 3, as we discuss below.

Inference on $q(h_{1:T})$

A small modification of the mean + fluctuation decomposition for $B$ occurs, namely

$$\big\langle (v_t - B h_t)^{\mathsf T} \Sigma_V^{-1} (v_t - B h_t) \big\rangle_{q(W)} = (v_t - \langle B \rangle h_t)^{\mathsf T} \Sigma_V^{-1} (v_t - \langle B \rangle h_t) + h_t^{\mathsf T} P^{\mathsf T} S_W P\, h_t,$$

where $\langle B \rangle \equiv \langle W \rangle P$ and $S_W = V \bar H_W^{-1}$. The quantities $\langle W \rangle$ and $\bar H_W$ are obtained as in Appendix A.1 with the replacement $h_t \to P h_t$. To represent the above as an LGSSM, we augment $v_t$ and $B$ as

$$\tilde v_t = \mathrm{vert}(v_t, 0_H, 0_C), \qquad \tilde B = \mathrm{vert}(\langle B \rangle, U_A, U_W P),$$

where $U_W$ is the Cholesky factor of $S_W$. The equivalent LGSSM is then completed by specifying $\tilde A \equiv \langle A \rangle$, $\tilde \Sigma_H \equiv \Sigma_H$, $\tilde \Sigma_V \equiv \mathrm{diag}(\Sigma_V, I_H, I_C)$, $\tilde \mu \equiv \mu$, $\tilde \Sigma \equiv \Sigma$, and inference for $q(h_{1:T})$ is performed using Algorithm 1. This demonstrates the elegance and unity of the approach in Section 3, since no new algorithm needs to be developed to perform inference, even in this special constrained parameter case.

4.1 Demonstration

As a simple demonstration, we used an LGSSM to generate 3 sources $s^c_t$ with random $5 \times 5$ transition matrices $A^c$, $\mu = 0_H$ and $\Sigma \equiv \Sigma_H \equiv I_H$. The sources were mixed into three observations $v_t = W s_t + \eta^v_t$, for $W$ chosen with elements from a zero-mean unit-variance Gaussian distribution, and $\Sigma_V = I_V$. We then trained a Bayesian LGSSM with 5 sources and $7 \times 7$ transition matrices $A^c$. To bias the model to find the simplest sources, we used $\hat A^c \equiv 0_{H_c, H_c}$ for all sources. In Fig. 2a and Fig. 2b we see the original sources and the noisy observations, respectively. In Fig. 2c we see the sources estimated by our method after convergence of the hyperparameter updates. Two of the 5 sources have been removed, and the remaining three are a reasonable estimate of the original sources.

[Figure 2: (a) Original sources $s_t$. (b) Observations resulting from mixing the original sources, $v_t = W s_t + \eta^v_t$, $\eta^v_t \sim \mathcal{N}(0, I)$. (c) Recovered sources using the Bayesian LGSSM. (d) Sources found with the MAP LGSSM.]

[Figure 3: (a) Original raw EEG recordings from 4 channels. (b-e) 16 sources $s_t$ estimated by the Bayesian LGSSM.]

Another possible approach for introducing prior knowledge is to use a Maximum a Posteriori (MAP) procedure, by adding a prior term to the original log-likelihood: $\log p(v_{1:T} \mid A, W, \Sigma_H, \Sigma_V, \mu, \Sigma) + \log p(A \mid \alpha) + \log p(W \mid \beta)$. However, it is not clear how to reliably find the hyperparameters $\alpha$ and $\beta$ in this case. One solution is to estimate them by optimizing the new objective function jointly with respect to the parameters and hyperparameters (the so-called joint MAP estimation; see for example [16]). A typical result of using this joint MAP approach on the artificial data is presented in Fig. 2d. The joint MAP does not estimate the hyperparameters well, and the incorrect number of sources is identified.

4.2 Application to EEG Analysis

In Fig. 3a we plot three seconds of EEG data recorded from 4 channels (located in the right hemisphere) while a person is performing imagined movement of the right hand.
As is typical in EEG, each channel shows drift terms below 1 Hz, which correspond to artifacts of the instrumentation, together with 50 Hz mains contamination; these mask the rhythmic activity related to the mental task, centered mainly at 10 and 20 Hz [17]. We would therefore like a method which enables us to extract components in these information-rich 10 and 20 Hz frequency bands. Standard ICA methods such as FastICA do not find satisfactory sources based on raw "noisy" data, and preprocessing with band-pass filters is usually required. Additionally, in EEG research, flexibility in the number of recovered sources is important, since there may be many independent oscillators of interest underlying the observations, and we would like some way to automatically determine their effective number. To preferentially find sources at particular frequencies, we specified a block-diagonal matrix $\hat A^c$ for each source $c$, where each block is a $2 \times 2$ rotation matrix at the desired frequency. We defined the following 16 groups of frequencies (in Hz): [0.5], [0.5], [0.5], [0.5]; [10,11], [10,11], [10,11], [10,11]; [20,21], [20,21], [20,21], [20,21]; [50], [50], [50], [50]. The temporal evolution of the sources obtained after training the Bayesian LGSSM is given in Fig. 3(b-e), grouped by frequency range. The Bayesian LGSSM removed 4 unnecessary sources from the mixing matrix $W$, that is, one [10,11] Hz and three [20,21] Hz sources. The first 4 sources contain dominant low-frequency drift; sources 5, 6 and 8 contain [10,11] Hz activity, while source 10 contains activity centered at [20,21] Hz. Of the 4 sources initialized to 50 Hz, only 2 retained 50 Hz activity, while the $A^c$ of the other two changed to model other frequencies present in the EEG. This method demonstrates the usefulness and applicability of the VB method in a real-world situation.
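A minimal sketch of how such a frequency-biased prior mean $\hat A^c$ might be assembled from $2 \times 2$ rotation blocks. The sampling rate `fs` is an assumed input, since the recording rate is not stated in the text, and the function name is ours.

```python
import numpy as np
from scipy.linalg import block_diag

def rotation_block_prior(freqs_hz, fs):
    """Block-diagonal prior mean built from one 2x2 rotation block per target
    frequency, biasing the source towards oscillation at those frequencies
    (sketch; fs = sampling rate in Hz, an assumption)."""
    blocks = []
    for f in freqs_hz:
        w = 2.0 * np.pi * f / fs                  # phase advance per sample
        blocks.append(np.array([[np.cos(w), -np.sin(w)],
                                [np.sin(w),  np.cos(w)]]))
    return block_diag(*blocks)

# e.g. rotation_block_prior([10.0, 11.0], fs=128.0) gives a 4x4 prior mean
# (the 128 Hz rate here is purely illustrative)
```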
5 Conclusion

We considered the application of Variational Bayesian learning to Linear Gaussian State-Space Models. This is an important class of models with widespread application, and finding a simple way to implement this approximate Bayesian procedure is of considerable interest. The most demanding part of the procedure is inference of the hidden states of the model. Previously, this has been achieved using Belief Propagation, which differs from inference in the Kalman Filtering/Smoothing literature, for which highly efficient and stabilized procedures exist. A central contribution of this paper is to show how inference can be written using the standard Kalman Filtering/Smoothing recursions by augmenting the original model. Additionally, a minor modification to the standard Kalman Filtering routine may be applied for computational efficiency. We demonstrated the elegance and unity of our approach by showing how to easily apply a Variational Bayes analysis of temporal ICA. Specifically, our Bayesian ICA approach successfully extracts independent processes underlying EEG signals, biased towards preferred frequency ranges. We hope that this simple and unifying interpretation of Variational Bayesian LGSSMs may therefore facilitate further application to related models.

A Parameter Updates for A and B

A.1 Determining $q(B \mid \Sigma_V)$

By examining $\mathcal{F}$, the contribution of $q(B \mid \Sigma_V)$ can be interpreted as the negative KL divergence between $q(B \mid \Sigma_V)$ and a Gaussian. Hence, optimally, $q(B \mid \Sigma_V)$ is a Gaussian. The covariance $[\Sigma_B]_{ij,kl} \equiv \big\langle (B_{ij} - \langle B_{ij} \rangle)(B_{kl} - \langle B_{kl} \rangle) \big\rangle$ (averages with respect to $q(B \mid \Sigma_V)$) is given by

$$[\Sigma_B]_{ij,kl} = [\bar H_B^{-1}]_{jl}\, [\Sigma_V]_{ik}, \qquad \text{where } [\bar H_B]_{jl} \equiv \sum_{t=1}^{T} \big\langle h^j_t h^l_t \big\rangle_{q(h_t)} + \beta_j \delta_{jl}.$$

The mean is given by $\langle B \rangle = N_B \bar H_B^{-1}$, where $[N_B]_{ij} \equiv \sum_{t=1}^{T} \big\langle h^j_t \big\rangle_{q(h_t)} v^i_t + \beta_j \hat B_{ij}$.

A.2 Determining $q(A \mid \Sigma_H)$

Optimally, $q(A \mid \Sigma_H)$ is a Gaussian with covariance

$$[\Sigma_A]_{ij,kl} = [\bar H_A^{-1}]_{jl}\, [\Sigma_H]_{ik}, \qquad \text{where } [\bar H_A]_{jl} \equiv \sum_{t=1}^{T-1} \big\langle h^j_t h^l_t \big\rangle_{q(h_t)} + \alpha_j \delta_{jl}.$$

The mean is given by $\langle A \rangle = N_A \bar H_A^{-1}$, where $[N_A]_{ij} \equiv \sum_{t=2}^{T} \big\langle h^j_{t-1} h^i_t \big\rangle_{q(h_{t-1:t})} + \alpha_j \hat A_{ij}$.

B Covariance Updates

By specifying a Wishart prior for the inverse of the covariances, conjugate update formulae are possible. In practice, it is more common to specify diagonal inverse covariances, for which the corresponding priors are simply Gamma distributions [7, 5]. For this simple diagonal case, the explicit updates are given below.

Determining $q(\Sigma_V)$

For the constraint $\Sigma_V^{-1} = \mathrm{diag}(\rho)$, where each diagonal element follows a Gamma prior $\mathrm{Ga}(b_1, b_2)$ [7], $q(\rho)$ factorizes and the optimal updates are

$$q(\rho_i) = \mathrm{Ga}\!\left(b_1 + \frac{T}{2},\; b_2 + \frac{1}{2}\Big\{ \sum_{t=1}^{T} (v^i_t)^2 - [G_B]_{ii} + \sum_j \beta_j \hat B_{ij}^2 \Big\}\right), \qquad \text{where } G_B \equiv N_B \bar H_B^{-1} N_B^{\mathsf T}.$$

Determining $q(\Sigma_H)$

Analogously, for $\Sigma_H^{-1} = \mathrm{diag}(\tau)$ with prior $\mathrm{Ga}(a_1, a_2)$ [5], the updates are

$$q(\tau_i) = \mathrm{Ga}\!\left(a_1 + \frac{T-1}{2},\; a_2 + \frac{1}{2}\Big\{ \sum_{t=2}^{T} \big\langle (h^i_t)^2 \big\rangle - [G_A]_{ii} + \sum_j \alpha_j \hat A_{ij}^2 \Big\}\right), \qquad \text{where } G_A \equiv N_A \bar H_A^{-1} N_A^{\mathsf T}.$$

Acknowledgments

This work is supported by the European DIRAC Project FP6-0027787. This paper only reflects the authors' views and funding agencies are not liable for any use that may be made of the information contained herein.

References

[1] Y. Bar-Shalom and X.-R. Li. Estimation and Tracking: Principles, Techniques and Software. Artech House, 1998.
[2] M. S. Grewal and A. P. Andrews. Kalman Filtering: Theory and Practice Using MATLAB. John Wiley and Sons, Inc., 2001.
[3] R. H. Shumway and D. S. Stoffer. Time Series Analysis and Its Applications. Springer, 2000.
[4] M. J. Beal, F. Falciani, Z. Ghahramani, C. Rangel, and D. L. Wild. A Bayesian approach to reconstructing genetic regulatory networks with hidden factors. Bioinformatics, 21:349-356, 2005.
[5] A. T. Cemgil and S. J. Godsill. Probabilistic phase vocoder and its application to interpolation of missing values in audio signals. In 13th European Signal Processing Conference, 2005.
[6] H. Valpola and J. Karhunen. An unsupervised ensemble learning method for nonlinear dynamic state-space models. Neural Computation, 14:2647-2692, 2002.
[7] M. J. Beal. Variational Algorithms for Approximate Bayesian Inference. Ph.D. thesis, Gatsby Computational Neuroscience Unit, University College London, 2003.
[8] M. Davy and S. J. Godsill. Bayesian harmonic models for musical signal analysis (with discussion). In J. O. Bernardo, J. O. Berger, A. P. Dawid, and A. F. M. Smith, editors, Bayesian Statistics VII. Oxford University Press, 2003.
[9] D. J. C. MacKay. Ensemble learning and evidence maximisation. Unpublished manuscript: www.variational-bayes.org, 1995.
[10] M. Morf and T. Kailath. Square-root algorithms for least-squares estimation. IEEE Transactions on Automatic Control, 20:487-497, 1975.
[11] P. Park and T. Kailath. New square-root smoothing algorithms. IEEE Transactions on Automatic Control, 41:727-732, 1996.
[12] E. Niedermeyer and F. Lopes da Silva. Electroencephalography: Basic Principles, Clinical Applications and Related Fields. Lippincott Williams and Wilkins, 1999.
[13] S. Roweis and Z. Ghahramani. A unifying review of linear Gaussian models. Neural Computation, 11:305-345, 1999.
[14] M. Verhaegen and P. Van Dooren. Numerical aspects of different Kalman filter implementations. IEEE Transactions on Automatic Control, 31:907-917, 1986.
[15] S. Chiappa and D. Barber. Bayesian linear Gaussian state-space models for biosignal decomposition. Signal Processing Letters, 14, 2007.
[16] S. S. Saquib, C. A. Bouman, and K. Sauer. ML parameter estimation for Markov random fields with applications to Bayesian tomography. IEEE Transactions on Image Processing, 7:1029-1044, 1998.
[17] G. Pfurtscheller and F. H. Lopes da Silva. Event-related EEG/MEG synchronization and desynchronization: basic principles. Clinical Neurophysiology, pages 1842-1857, 1999.
Max-margin classification of incomplete data

Gal Chechik¹, Geremy Heitz², Gal Elidan¹, Pieter Abbeel¹, Daphne Koller¹
¹ Department of Computer Science, Stanford University, Stanford CA, 94305
² Department of Electrical Engineering, Stanford University, Stanford CA, 94305
Email for correspondence: [email protected]

Abstract

We consider the problem of learning classifiers for structurally incomplete data, where some objects have a subset of features inherently absent due to complex relationships between the features. The common approach for handling missing features is to begin with a preprocessing phase that completes the missing features, and then use a standard classification procedure. In this paper we show how incomplete data can be classified directly, without any completion of the missing features, using a max-margin learning framework. We formulate this task using a geometrically inspired objective function, and discuss two optimization approaches: the linearly separable case is written as a set of convex feasibility problems, and the non-separable case has a non-convex objective that we optimize iteratively. By avoiding the preprocessing phase in which the data is completed, these approaches offer considerable computational savings. More importantly, we show that by elegantly handling complex patterns of missing values, our approach is both competitive with other methods when the values are missing at random and outperforms them when the missing values have non-trivial structure. We demonstrate our results on two real-world problems: edge prediction in metabolic pathways, and automobile detection in natural images.

1 Introduction

In the traditional formulation of supervised learning, data instances are viewed as vectors of features in some high-dimensional space. However, in many real-world tasks, data instances have a complex pattern of missing features. While features may sometimes be missing due to measurement noise or corruption, different samples often have different sets of observable features due to inherent properties of the instances. For example, in the case of recognizing objects in natural images, an object is often classified using a set of image patches corresponding to parts of the object (like the license plate for cars); but some images may not contain all parts, either because a part was not captured in the image or because the specific instance does not have this part in the first place. In other scenarios, some features cannot even be defined for all instances. Such situations arise when the objects to be learned are organized based on a known graph structure, since their features may rely on local properties of the graph. For example, we might wish to classify the attributes of a web page given the attributes of neighboring web pages [8]. In analyzing genomic data, we may wish to predict the edges in networks of interacting proteins or chemical reactions [9, 15]. In these cases, the local neighborhood of an instance in the graph often varies drastically, and it has already been observed that this variation could introduce statistical biases [8]. In the web-page task, for instance, a useful feature is the most common topic of other sites that point to a given page. When a page has no such parents, however, this feature is meaningless and should be considered structurally absent.
The common approach to classification with missing features is imputation, a two-phase procedure where the values of the missing attributes are first filled in during a preprocessing phase, after which a standard classifier is applied to the completed data [10]. Most imputation techniques make the most sense when the features are missing due to noise, especially in the setting of missing at random (MAR, when the missingness pattern is conditionally independent of the unobserved features given the observations) or missing completely at random (MCAR, when it is independent of both observed and unobserved measurements). In common practice of applying imputation, missing attributes in continuous data are often filled with zeros, or with the average over all of the data instances, or using the k nearest neighbors (kNN) of each instance to find a plausible value for its missing features. A second family of imputation methods builds probabilistic generative models of the features using raw maximum likelihood or algorithms such as expectation maximization (EM) [4]. Such model-based methods allow the designer to introduce prior knowledge and are extremely useful when priors can be explicitly modeled. These methods work very well in MAR settings, because they assume that the missing features are generated by the same model that generates the observed features. However, model-based approaches can be computationally expensive, and require significant prior knowledge about the data. More importantly, they will produce meaningless completions for features that are structurally absent. As an extreme example, consider two subpopulations of instances (e.g., animals and buildings) having no overlapping features (e.g., body parts and architectural aspects), in which filling missing values (e.g., the body parts of buildings) is clearly meaningless and may harm classification performance. As a result, for structurally absent features, it would be useful if we could avoid unnecessary prediction of hypothetical undefined values and classify instances directly. We approach this problem directly from the geometric interpretation of the classification task as finding a separating hyperplane in feature space. We view instances with different feature sets as lying in subspaces of the full feature space, and suggest a modified optimization objective, within the framework of support vector machines (SVMs), that explicitly considers the subspace of each instance. We show how the linearly separable case can be efficiently solved using convex optimization (second-order cone programming, SOCP). The objective of the non-separable case is non-convex, and we propose an iterative procedure that is found to converge in practice. These approaches may be viewed as model-free methods for handling missing data in cases where the MAR assumption fails to hold. We evaluate the performance of our approach in two real-world applications: prediction of missing enzymes in a metabolic network, and automobile detection in natural images. In both tasks, features may be inherently absent due to the mechanisms described above, and our methods are found to be superior to other simple imputation methods.

2 Max-Margin Formulation for Missing Features

Let $x_1, \ldots, x_n$ be a set of samples with binary labels $y_i \in \{-1, +1\}$. Each sample $x_i$ is characterized by a subset of features $\mathcal{F}(x_i)$, from a full set $\mathcal{F}$ of size $d$. A sample that has all features, $\mathcal{F}(x_i) = \mathcal{F}$, is viewed as a vector in $\mathbb{R}^d$, where the $i$th coordinate corresponds to the $i$th feature.
A sample $x_i$ with partially valid features can be viewed as embedded in the relevant subspace $\mathbb{R}^{|\mathcal{F}(x_i)|} \subset \mathbb{R}^d$. For simplicity of notation, we treat each $x_i$ as if it were a vector in $\mathbb{R}^d$ where only its $\mathcal{F}(x_i)$ entries are valid, and define the inner product with another vector as $w x = \sum_{k: f_k \in \mathcal{F}(x_i)} w_k x_k$. Importantly, since instances share features, the learned classifier must be consistent across instances, assigning the same weight to a given feature in different samples, even if those instances do not lie in the same subspace. In the classical SVM approach [14, 13], a linear classifier $w$ is optimized to maximize the margin, defined as $\min_i y_i(w x_i + b)/\|w\|$, and the learning problem is reduced to the quadratic constrained optimization problem

$$\min_{w, \xi, b}\; \frac{1}{2}\|w\|^2 + C \sum_{i=1}^{n} \xi_i \qquad \text{s.t.}\quad y_i(w x_i + b) \ge 1 - \xi_i, \quad i = 1, \ldots, n, \qquad (1)$$

where $b$ is a threshold, the $\xi_i$ are slack variables necessary for the case when the training instances are not linearly separable, and $C$ is the error penalty. Eq. (1) can be extended to nonlinear classifiers using kernels [13].

[Figure 1: The margin is incorrectly scaled when a sample that has missing features is treated as if the missing features have a value of zero. In this example, the margin of a sample that only has one feature (the $x$ dimension) is measured both in the higher-dimensional space ($\rho_2$) and in the lower one ($\rho_1$). If all features are assumed to exist, and we give the missing feature (along the $y$ axis) a value of zero, the margin $\rho_2$ measured in the higher-dimensional space is shorter than the margin $\rho_1$ measured in the relevant subspace.]

Consider now learning such a classifier in the presence of missing data. At first glance, it may appear that since the $x$'s only affect the optimization through inner products with $w$, missing features can merely be skipped (or, equivalently, replaced with zeros), thereby preserving the values of the inner products. However, this does not properly normalize the different entries of $w$, and damages classification accuracy. The reason is illustrated in Fig. 1, where a single sample in $\mathbb{R}^2$ has one valid and one missing feature. Due to the missing feature, measuring the margin in the full space, $\rho_2$, underestimates the correct geometric margin of the sample in the valid space, $\rho_1$. This is different from the case where the feature exists but is unknown, in which the sample's margin could be either over- or under-estimated. In the next sections, we explore how Eq. (1) can be solved while properly taking this normalization into account. We start by reminding the reader of the geometric interpretation of the SVM.

3 Geometric interpretation

The derivation of the SVM classifier [14] is motivated by the goal of finding a hyperplane that maximally separates the positive examples from the negative, as measured by the geometric margin $\rho(w) = \min_i \frac{y_i\, w x_i}{\|w\|}$. The task of maximizing the margin $\rho(w)$,

$$\max_w \rho(w) = \max_w \left( \min_i \frac{y_i\, w x_i}{\|w\|} \right), \qquad (2)$$

is transformed into the quadratic programming problem of Eq. (1) in two steps. First, $\|w\|$ is taken out of the minimization, yielding $\max_w \frac{1}{\|w\|} \big( \min_i y_i\, w x_i \big)$. Then, the following invariance is used: for every solution, there exists a solution that achieves the same target function value, but with a margin that equals 1. This allows us to write the SVM problem as a constrained optimization problem: $\max_w \|w\|^{-1}$ s.t. $y_i(w x_i) \ge 1$. This is equivalent to minimizing $\|w\|^2$ with the same constraints, which equals the SVM problem of Eq. (1). In the case of missing features, this derivation no longer optimizes the correct geometric margin (Fig. 1). To address this problem, we treat the margin of each instance in its own subspace, defining the instance margin of the $i$th instance as $\rho_i(w) = \frac{y_i\, w^{(i)} x_i}{\|w^{(i)}\|}$, where $\|w^{(i)}\| = \sqrt{\sum_{k: f_k \in \mathcal{F}(x_i)} w_k^2}$.
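To make the instance-margin definition concrete, here is a small sketch that evaluates $\rho_i(w)$ when missing features are encoded as NaN (our encoding choice; the bias $b$ is included as in Eq. (6) below):

```python
import numpy as np

def instance_margins(X, y, w, b=0.0):
    """Per-instance geometric margins rho_i = y_i (w.x_i + b) / ||w^(i)||, with
    inner products and norms restricted to each instance's valid features;
    NaN encodes a structurally missing feature (sketch; assumes every
    instance has at least one valid feature with nonzero weight)."""
    valid = ~np.isnan(X)                        # (n, d) mask of F(x_i)
    Xz = np.where(valid, X, 0.0)                # zeroed entries drop out of w.x
    scores = Xz @ w + b
    norms = np.sqrt((valid * w**2).sum(axis=1)) # ||w^(i)|| over F(x_i)
    return y * scores / norms
```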
The geometric margin is, as before, the minimum over all instance margins, yielding a new optimization problem

$$\max_w \min_i \frac{y_i\, w^{(i)} x_i}{\|w^{(i)}\|}. \qquad (3)$$

Unfortunately, since different margin terms are normalized by different norms $\|w^{(i)}\|$, we can no longer take the denominator out of the minimization as above. In addition, each of the terms $y_i\, w^{(i)} x_i / \|w^{(i)}\|$ is non-convex in $w$, which is difficult to optimize directly in an efficient way. We now discuss two approaches for solving this problem. In the linearly separable case, the optimization problem of Eq. (3) is equivalent to

$$\max_{w, \rho}\; \rho \qquad \text{s.t.}\quad y_i\, w^{(i)} x_i \ge \rho\, \|w^{(i)}\|, \quad i = 1, \ldots, n. \qquad (4)$$

This is a convex feasibility problem for any fixed value of $\rho$, a real scalar that corresponds to the margin. It can be solved efficiently using a bisection search over $\rho \in \mathbb{R}^+$, where in each iteration we solve a convex second-order cone program (SOCP) [11]. Unfortunately, extending this formulation to the non-separable case while preserving the geometric-margin interpretation makes the problem non-convex (this is discussed elsewhere).

A second approach to solving Eq. (3) is to treat each instance margin individually. We represent each of the norms $\|w^{(i)}\|$ as a scaling of the full norm by defining scaling coefficients $s_i = \|w^{(i)}\| / \|w\|$, and rewriting Eq. (3) as

$$\max_w \min_i \frac{y_i\, w x_i}{\|w^{(i)}\|} = \max_w \min_i \frac{1}{s_i} \left( \frac{y_i\, w x_i}{\|w\|} \right), \qquad s_i = \frac{\|w^{(i)}\|}{\|w\|}. \qquad (5)$$

The $s_i$ factors are scalars, and had we known them, we could have solved a standard SVM problem. Unfortunately, they depend on $w^{(i)}$ and are unknown. This formalism allows us to use again the invariance to the rescaling of $\|w\|$ and to rewrite the task as a constrained optimization problem over $s_i$ and $w$. In the non-separable case, Eq. (5) becomes

$$\min_{w, b, \xi, s}\; \frac{1}{2}\|w\|^2 + C \sum_i \xi_i \qquad \text{s.t.}\quad \frac{1}{s_i}\, y_i(w x_i + b) \ge 1 - \xi_i, \quad s_i = \frac{\|w^{(i)}\|}{\|w\|}, \quad i = 1, \ldots, n. \qquad (6)$$

This constrained optimization problem is no longer a QP. In fact, due to the normalization constraint, it is not even convex in $w$. One solution is a projected gradient approach, in which one iterates between steps in the direction of the gradient of the Lagrangian and projections onto the constrained space, obtained by recomputing $s_i = \|w^{(i)}\| / \|w\|$. For the right choices of step sizes, such approaches are guaranteed to converge to local minima [2]. We can use a faster iterative algorithm based on the fact that the problem is a QP for any given set of $s_i$'s, and iterate between (1) solving a QP for $w$ given the $s_i$, and (2) using the resulting $w$ to calculate new $s_i$'s. This algorithm differs from a projected gradient approach in that, rather than taking a series of small gradient steps, it takes bigger leaps and projects back onto the constrained space after each step. Since the convergence of this iterative algorithm is not guaranteed, we used cross validation to choose an early stopping point, and found that the best solutions were obtained within 2-5 steps. Typically, the objective improved over the first 1-3 iterations; then, in about 75% of the cases, the objective oscillated, while in the remaining cases the algorithm converged to a fixed point. It is easy to see that a fixed point of this iterative procedure achieves an optimal solution of Eq. (6), since it achieves a minimal $\|w\|$ while obeying the $s_i$ constraints. As a result, when this algorithm converges, the solution is also guaranteed to be a locally optimal solution of the original problem, Eq. (3).
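Here is a sketch of this iterative scheme using the CVXPY modeling package to solve the inner QP of Eq. (6) for fixed $s_i$. The step count and NaN encoding are our choices, and in practice the stopping point would be selected by validation, as described above.

```python
import numpy as np
import cvxpy as cp

def iterative_missing_svm(X, y, C=1.0, n_steps=5, eps=1e-8):
    """Iterate between (1) the QP of Eq. (6) with the s_i held fixed and
    (2) the update s_i <- ||w^(i)|| / ||w||.  NaN marks a missing feature
    (sketch, not the authors' original implementation)."""
    valid = ~np.isnan(X)
    Xz = np.where(valid, X, 0.0)
    n, d = X.shape
    s = np.ones(n)                                 # initial scaling factors
    for _ in range(n_steps):
        w, b = cp.Variable(d), cp.Variable()
        xi = cp.Variable(n, nonneg=True)
        margins = cp.multiply(y, Xz @ w + b)       # y_i (w.x_i + b)
        prob = cp.Problem(
            cp.Minimize(0.5 * cp.sum_squares(w) + C * cp.sum(xi)),
            [cp.multiply(1.0 / s, margins) >= 1 - xi])
        prob.solve()
        w_val = w.value
        s = np.sqrt((valid * w_val**2).sum(axis=1)) / max(np.linalg.norm(w_val), eps)
        s = np.maximum(s, eps)                     # guard against empty overlap
    return w_val, float(b.value), s
```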
The power of the SVM approach can be largely attributed to the flexibility and efficiency of nonlinear classification through the use of kernels. The dual of the above QP can be kernelized as in a standard SVM, yielding

$$\max_{\alpha \in \mathbb{R}^n}\; \sum_{i=1}^{n} \alpha_i - \frac{1}{2} \sum_{i,j=1}^{n} y_i y_j\, \frac{\alpha_i \alpha_j}{s_i s_j}\, K(x_i, x_j) \qquad \text{s.t.}\quad 0 \le \alpha_i \le C\,; \quad \sum_{i=1}^{n} \alpha_i y_i = 0, \qquad (7)$$

where $K(x_i, x_j)$ is the kernel function that simulates an inner product in the higher-dimensional feature space. Classification of new samples is obtained by calculating the margin $\rho(x_{new}) = \sum_j y_j \alpha_j \frac{1}{s_j} K(x_j, x_{new}) \frac{1}{s_{new}}$. Kernels in this formulation operate over vectors with missing features, hence we have to develop kernels that handle them correctly. Fortunately, many kernels depend on their inputs only through their inner product, and in this case there is an easy procedure to construct a modified kernel that takes missing values into account. For example, for a polynomial kernel $K(x_i, x_j) = (\langle x_i, x_j \rangle + 1)^d$, define $K'(x_i, x_j) = (\langle x_i, x_j \rangle_{\mathcal F} + 1)^d$, with the inner product calculated over the jointly valid features, $\langle x_i, x_j \rangle_{\mathcal F} = \sum_{k: f_k \in \mathcal{F}(x_i) \cap \mathcal{F}(x_j)} x_{ik}\, x_{jk}$. This can easily be proved to be a kernel.
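A sketch of this kernel construction, again with NaN encoding missing features; zeroing the invalid entries makes the plain matrix product sum over exactly the jointly valid coordinates:

```python
import numpy as np

def poly_kernel_missing(Xa, Xb, degree=2):
    """K'(x_i, x_j) = (<x_i, x_j>_F + 1)^degree, with the inner product
    restricted to features valid in both instances (NaN = missing; sketch)."""
    Xa0 = np.where(np.isnan(Xa), 0.0, Xa)
    Xb0 = np.where(np.isnan(Xb), 0.0, Xb)
    # a zeroed entry kills its term, so Xa0 @ Xb0.T sums only over the
    # coordinates that are valid in both x_i and x_j
    return (Xa0 @ Xb0.T + 1.0) ** degree
```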
[Figure 2: Car classification results. (a) An easy instance where all local features are approximately in agreement. (b) A hard instance where local features are divided into two distinct groups; this instance was correctly classified by the "geometric margin" approach but misclassified by all other methods. (c) Classification accuracy of the different methods for the task of object recognition in real images. Error bars are standard errors of the mean (SEM) over the five cross-validation sets.]

4 Experiments

We evaluated our approaches in three different missingness scenarios. First, as a sanity check, we explored performance when features are missing at random, in a series of five standard UCI benchmarks, and also in a large digit recognition task using MNIST data. In this setting our methods performed as well as other approaches (or slightly better). The full details of these experiments are provided in a longer version of this work. Second, we study a visual object recognition application where some features are missing because they cannot be located in the image. Finally, we apply our methods to a problem of biological network completion, where the missingness pattern of the features is determined by the known structure of the network. For all applications, we compare our iterative algorithm with five common approaches for completing missing features:

1. Zero: Missing values were set to zero.
2. Mean: Missing values were set to the average feature values.
3. Aggregated Flags: Features were annotated with an explicit additional feature noting whether a feature is valid or missing. To reduce the number of added features, we added a single flag for each group of features that were valid or invalid together across all instances. For example, in the vision application, all features of a landmark candidate are grouped together, since they are all invalid if the match is wrong (see below).
4. kNN: Missing features were set to the mean value obtained from the K nearest-neighbor instances; neighborhood was measured using a Euclidean distance in the subspace relevant to each pair of samples. The number of neighbors was varied as K = 3, 5, 10, 20, and the best result is the one reported.
5. EM: A generative model in the spirit of [4]. A Gaussian mixture model is learned by iterating between (1) learning a GMM model of the filled data and (2) re-filling missing values using the cluster means, weighted by the posterior probability that a cluster generated the sample. Covariances were assumed spherical. The number of clusters was varied as K = 3, 5, 10, 15, 20, and the best result is the one reported.
6. Geometric margin: Our non-separable approach described in Sec. 3.

In all of the experiments, we used a 5-fold cross-validation procedure and evaluated performance using a testing set that was not used during training. In addition, 20% of the training set was used for choosing optimization parameters, such as the kernel type, its parameters, and an early stopping point for the iterative algorithm.

4.1 Visual object recognition

We now consider a visual object recognition task in which instances have structurally missing features. In this task we attempt to determine whether an object from a certain class (automobiles) is present in a given input image. The task of classifying images based on the object class that they contain has seen much work in recent years [1, 5], and discriminative approaches have typically produced very good results [5, 12]. Features in these methods are commonly constructed from regions of interest (patches) in the image. These patches typically cover "landmarks" of the object, like the trunk or a headlight of a car. A typical set of patches includes several candidates for any object part, and some images may have more candidates for a given part than others. For example, a trunk may not be found in a picture of a hatchback car, hence all its corresponding features are considered to be structurally missing from that image. Our object model contains a set of "landmarks", for which we find several matches in a given image (details are omitted due to lack of space). Fig. 2 shows examples of matches for the front-windshield landmark. Because of the noisy matching, the highest-scoring match often does not correspond to the true landmark, and the number of high-quality matches (features) varies in practice. It is in precisely such a scenario that we expect our proposed algorithm to be effective. In some cases, landmark models could provide confidence levels for each match. These could in principle be used as additional features to help the classifiers give more weight to better matches, and are expected to improve classification when the confidence measure is reliable. While this is a potentially useful approach for the current application, this paper takes a different approach: it does not use any soft confidence values but rather treats the low-confidence matches as wrong, removing them from the data. Concretely, we located up to 10 candidate patches (21×21 pixels) that were promising (likelihood above a given threshold) for each of the 19 landmarks in the car model. For each candidate, we compute the first 10 principal-component coefficients of the image patch, and concatenate these coefficients to form the image feature vector. If the number of patches for a given landmark is less than ten, we consider the rest to be structurally absent.
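A sketch of the resulting feature construction: up to 10 candidates per landmark, 10 PCA coefficients each, with absent candidates encoded as NaN. The dimensions come from the text; the interface and encoding are ours.

```python
import numpy as np

N_LANDMARKS, N_CANDIDATES, N_COEFFS = 19, 10, 10   # values from the text

def image_feature_vector(candidates_per_landmark):
    """candidates_per_landmark: list of 19 arrays, each (k, 10), holding the
    PCA coefficients of the k <= 10 promising patches for one landmark.
    Candidates beyond k are structurally absent and encoded as NaN (sketch)."""
    x = np.full((N_LANDMARKS, N_CANDIDATES, N_COEFFS), np.nan)
    for l, coeffs in enumerate(candidates_per_landmark):
        coeffs = np.asarray(coeffs, dtype=float)
        k = min(len(coeffs), N_CANDIDATES)
        if k > 0:
            x[l, :k] = coeffs[:k]
    return x.ravel()     # 19 * 10 * 10 = 1900 entries
```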
We evaluated performance for this task using two levels of a 5-fold cross-validation procedure, as explained above. We compared several kernels and report results using the kernel that fared best on the validation set, which was usually a second-order polynomial kernel. Fig. 2c compares the accuracy of the different methods. We found the geometric approach to be significantly superior to all other methods.

To further evaluate our method, we qualitatively examined the classification results for several images across the various methods. Fig. 2a shows the top 10 matches for the front-windshield landmark for a representative "easy" test instance where all local features are approximately in agreement. This instance was correctly classified by all methods. In contrast, Fig. 2b shows a representative "hard" test instance where local features cluster into two different groups. In this case, the cluster of bad matches was automatically excluded, yielding missing features, and our geometric approach was the only method able to classify the instance correctly.

4.2 Metabolic pathway reconstruction

As a final application, we consider the problem of predicting missing enzymes in metabolic pathways, a long-standing and important challenge in computational biology [15, 9]. Instances in this task have missing features due to the structure of the biochemical network. Cells use a complex network of chemical reactions to produce their chemical building blocks (Fig. 3). Each reaction transforms a set of molecular compounds (called substrates) into another set of molecules (products), and requires the presence of an enzyme to catalyze the reaction. It is often unknown which enzyme catalyzes a given reaction, and it is desirable to predict the identity of such missing enzymes computationally.

Our approach for predicting missing enzymes is based on the observation that enzymes in local network neighborhoods usually participate in related functions. As a result, neighboring enzyme pairs have non-trivial correlations over their features that reflect their functional relations. Importantly, different types of neighborhood relations between enzyme pairs lead to different relations between their properties. For example, an enzyme in a linear chain depends on the preceding enzyme's product as its substrate; hence it is expected that the corresponding genes are co-expressed [9, 15]. On the other hand, enzymes in forking motifs (same substrate, different products) often have anti-correlated expression profiles [7]. To preserve the distinction between different neighbor relations, we defined a set of network motifs, including forks, funnels, and linear chains. Each enzyme is represented as a vector of features that measure its relatedness to each of its neighbors. A feature vector has structurally missing entries if the enzyme does not have all types of neighbors. For example, the enzyme PHA2 in Fig. 3 does not have a neighbor of type fork, and therefore all features assigned to such a neighbor are absent in the representation of the reaction "Prephenate → Phenylpyruvate".
Figure 3: Left: A fragment of the full metabolic pathway network in S. cerevisiae. Chemical reactions (arrows) transform a set of molecular compounds into other compounds; small molecules like CO2 were omitted from this drawing for clarity. Reactions are catalyzed by enzymes (boxed names, e.g., ARO7), but in some cases these enzymes are unknown. The network imposes various neighborhood relations between enzymes assigned to reactions, like linear chains (ARO7, PHA2), forks (TRP2, ARO7), and funnels (ARO9, PHA2). Top right: Classification accuracy (classification error for geom, zero, mean, flag-agg, and kNN) for the compared methods; the classification task is to identify whether a candidate enzyme is in the right "neighborhood". Error bars are SEMs over 5 cross-validation sets. Bottom right: ROC curves (true positives vs. false positives) for the same task.

We used the metabolic network of S. cerevisiae, as reconstructed by Palsson and colleagues [3], after removing 14 metabolic currencies and reactions with unknown enzymes, leaving 1265 directed reactions. We used three data types: (1) A compendium of 645 gene-expression experiments; each experimental condition k contributed one feature, the point-wise Pearson correlation x_i(k) x_j(k) / (||x_i|| ||x_j||), where x_i is the vector of expression levels across conditions. (2) The protein-domain content of each enzyme as found by the Prosite database; each domain k contributed one feature, the point-wise symmetric D_KL measure x_i(k) log( x_i(k) / ((x_j(k) + x_i(k))/2) ) + x_j(k) log( x_j(k) / ((x_j(k) + x_i(k))/2) ). (3) The cellular localization of the protein [6]; each cellular localization contributed one feature, the point-wise Hamming distance. In total, the feature-vector length was about 3900.

Pathway reconstruction requires that we rank candidate enzymes by their potential to match a reaction. As a first step towards this goal, we train a binary classifier to predict whether an enzyme fits its neighborhood. We created a set of positive examples from the reactions with known enzymes (≈ 520 reactions), and also created negative examples by plugging impostor genes into "wrong" neighborhoods. We trained an SVM classifier using a 5-fold cross-validation procedure as described above. Figure 3 shows the classification error of the different methods in the gene-filling task. The geometric-margin approach achieves significantly better performance in this task. kNN achieved very poor performance compared to all other methods. One reason could be that the Euclidean distance is inappropriate for the current task and that a more elaborate distance measure needs to be developed for this type of data. Learning metrics is a complicated task in general, and more so in the current problem, since the feature vectors contain entries of several different types, making it unlikely that a naive distance measure would work well.

Finally, the resulting classifier is used for predicting missing enzymes, by ranking all candidate enzymes according to their match to a given neighborhood. Evaluating the quality of ranking on known enzymes (cross-validation) shows that it significantly outperforms previous approaches [9] (not shown here due to space limitations). We attribute this to the ability of the current approach to preserve different types of network neighbors as separate features, in spite of creating missing values.
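For concreteness, the following is a minimal sketch, written by us under the definitions above, of the three point-wise similarity features; the small eps guard is our own addition for numerical safety.

```python
import numpy as np

def pearson_feature(xi, xj):
    # point-wise term x_i(k) x_j(k) / (||x_i|| ||x_j||) per condition k
    return xi * xj / (np.linalg.norm(xi) * np.linalg.norm(xj))

def sym_dkl_feature(xi, xj, eps=1e-12):
    # point-wise symmetric D_KL against the average profile (x_i + x_j)/2
    m = (xi + xj) / 2.0
    return (xi * np.log((xi + eps) / (m + eps))
            + xj * np.log((xj + eps) / (m + eps)))

def hamming_feature(xi, xj):
    # point-wise Hamming distance between binary localization vectors
    return (xi != xj).astype(float)
```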
5 Discussion

We presented a novel method for max-margin training of classifiers in the presence of missing features, where the pattern of missing features is an inherent part of the domain. Instead of completing missing features as a preprocessing phase, we developed a max-margin learning objective based on a geometric interpretation of the margin when different instances essentially lie in different spaces. Using two challenging real-life problems, we showed that our method is significantly superior when the pattern of missing features has structure.

The standard treatment of missing features is based on the concept that missing features exist but are unobserved. This assumption underlies the approach of completing features before the data is used in classification. This paper focuses on a different scenario, in which features are inherently absent. In such cases, it is not clear why we should guess hypothetical values for undefined features, since the completed values are filled in based on other observed values and do not add information to our classifiers. In fact, by completing features that are not supposed to be part of an instance, we may be confusing the learning algorithm by presenting it with a problem that may be harder than the one we actually need to solve.

Interestingly, the problem of classifying with missing features is related to another problem, where individual reliability measures are available for the features of each instance separately. This is a common case in the analysis of scientific measurements, where the reliability of each experiment could be provided separately. For example, DNA microarray experiments have inherent measures of experimental noise levels, and biological variability is often estimated using replicates. This problem can be viewed in the same framework described in this paper: the geometric margin must be defined separately for each instance, since the different noise levels distort the relative scale of each coordinate of each instance separately. Relative to this setting, the completely missing and fully valid features discussed in this paper are extreme points on the spectrum of reliability. It will be interesting to see which aspects of the geometric formulation discussed in this paper can be extended to this new problem.

Acknowledgement: This paper was supported by NSF grant DBI-0345474.

References
[1] A. Berg, T. Berg, and J. Malik. Shape matching and object recognition using low distortion correspondence. In CVPR, 2005.
[2] Paul H. Calamai and Jorge J. Moré. Projected gradient methods for linearly constrained problems. Math. Program., 39(1):93-116, 1987.
[3] J. Forster, I. Famili, P. Fu, B. Palsson, and J. Nielsen. Genome-scale reconstruction of the Saccharomyces cerevisiae metabolic network. Genome Research, 13(2):244-253, February 2003.
[4] Z. Ghahramani and M.I. Jordan. Supervised learning from incomplete data via an EM approach. In J.D. Cowan, G. Tesauro, and J. Alspector, editors, NIPS, volume 6, pages 120-127, 1994.
[5] K. Grauman and T. Darrell. Pyramid match kernels: Discriminative classification with sets of image features. In ICCV, 2005.
[6] W.K. Huh, J.V. Falvo, L.C. Gerke, A.S. Carroll, R.W. Howson, J.S. Weissman, and E.K. O'Shea. Global analysis of protein localization in budding yeast. Nature, 425:686-691, 2003.
[7] J. Ihmels, R. Levy, and N. Barkai. Principles of transcriptional control in the metabolic network of Saccharomyces cerevisiae. Nature Biotechnology, 22:86-92, 2003.
[8] D. Jensen and J. Neville. Linkage and autocorrelation cause feature selection bias in relational learning. In ICML, 2002.
[9] P. Kharchenko, D. Vitkup, and G.M. Church. Filling gaps in a metabolic network using expression information. Bioinformatics, 20:I178-I185, 2003.
[10] R.J.A. Little and D.B. Rubin. Statistical Analysis with Missing Data. Wiley, NY, 1987.
[11] M.S. Lobo, L. Vandenberghe, S. Boyd, and H. Lebret. Applications of second-order cone programming.
Linear Algebra and its Applications, 284:193-228, 1998.
[12] A. Quattoni, M. Collins, and T. Darrell. Conditional random fields for object recognition. In L.K. Saul, Y. Weiss, and L. Bottou, editors, NIPS 17, pages 1097-1104, 2005.
[13] B. Schölkopf and A.J. Smola. Learning with Kernels: Support Vector Machines, Regularization, Optimization and Beyond. MIT Press, Cambridge, MA, 2002.
[14] V.N. Vapnik. The Nature of Statistical Learning Theory. Springer-Verlag, 1995.
[15] J.P. Vert and Y. Yamanishi. Supervised graph inference. In L.K. Saul, Y. Weiss, and L. Bottou, editors, NIPS 17, pages 1433-1440, 2004.
Accelerated Variational Dirichlet Process Mixtures

Kenichi Kurihara, Dept. of Computer Science, Tokyo Institute of Technology, Tokyo, Japan ([email protected])
Max Welling, Bren School of Information and Computer Science, UC Irvine, Irvine, CA 92697-3425 ([email protected])
Nikos Vlassis, Informatics Institute, University of Amsterdam, The Netherlands ([email protected])

Abstract

Dirichlet Process (DP) mixture models are promising candidates for clustering applications where the number of clusters is unknown a priori. Due to computational considerations these models are unfortunately unsuitable for large-scale data-mining applications. We propose a class of deterministic accelerated DP mixture models that can routinely handle millions of data-cases. The speedup is achieved by incorporating kd-trees into a variational Bayesian algorithm for DP mixtures in the stick-breaking representation, similar to that of Blei and Jordan (2005). Our algorithm differs in the use of kd-trees and in the way we handle truncation: we only assume that the variational distributions are fixed at their priors after a certain level. Experiments show that speedups relative to the standard variational algorithm can be significant.

1 Introduction

Evidenced by three recent workshops¹, nonparametric Bayesian methods are gaining popularity in the machine learning community. In each of these workshops computational efficiency was mentioned as an important direction for future research. In this paper we propose computational speedups for Dirichlet Process (DP) mixture models [1, 2, 3, 4, 5, 6, 7], with the purpose of improving their applicability to modern-day data-mining problems, where millions of data-cases are no exception.

Our approach is related to, and complements, the variational mean-field algorithm for DP mixture models of Blei and Jordan [7]. In this approach, the intractable posterior of the DP mixture is approximated with a factorized variational finite (truncated) mixture model with T components that is optimized to minimize the KL distance to the posterior. However, a downside of their model is that the variational families are not nested over T, and locating an optimal truncation level T may be difficult (see Section 3). In this paper we propose an alternative variational mean-field algorithm, called VDP (Variational DP), in which the variational families are nested over T. In our model we allow for an unbounded number of components for the variational mixture, but we tie the variational distributions after level T to their priors. Our algorithm proceeds in a greedy manner by starting with T = 1 and releasing components when this improves (significantly) the KL bound. Releasing is most effectively done by splitting a component into two children and updating them to convergence. Our approach essentially resolves the issue in [7] of searching for an optimal truncation level of the variational mixture (see Section 4).

Additionally, a significant contribution is that we incorporate kd-trees into the VDP algorithm as a way to speed up convergence [8, 9]. A kd-tree structure recursively partitions the data space into a number of nodes, where each node contains a subset of the data-cases. Following [9], for a given tree expansion we tie together the responsibility over mixture components of all data-cases contained in each outer node of the tree.

¹ http://aluminum.cse.buffalo.edu:8079/npbayes/nipsws05/topics ; http://www.cs.toronto.edu/~beal/npbayes/ ; http://www2.informatik.hu-berlin.de/~bickel/npb-workshop.html
By caching certain sufficient statistics in each node of the kd-tree, we then achieve computational gains, while the variational approximation becomes a function of the depth of the tree at which one operates (see Section 6). The resulting Fast-VDP algorithm provides an elegant way to trade off computational resources against accuracy. We can always release new components from the pool and split kd-tree nodes as long as we have computational resources left. Our setup guarantees that this will always (at least in theory) improve the KL bound (in practice, local optima may force us to reject certain splits; see Section 7). As we empirically demonstrate in Section 8, a kd-tree can offer significant speedups, allowing our algorithm to handle millions of data-cases. As a result, Fast-VDP is the first algorithm entertaining an unbounded number of clusters that is practical for modern-day data-mining applications.

2 The Dirichlet Process Mixture in the Stick-Breaking Representation

A DP mixture model in the stick-breaking representation can be viewed as possessing an infinite number of components with random mixing weights [4]. In particular, the generative model of a DP mixture assumes:

- An infinite collection of components H = {η_i}_{i=1}^∞ that are independently drawn from a prior p_η(η_i | λ) with hyperparameters λ.
- An infinite collection of "stick lengths" V = {v_i}_{i=1}^∞, v_i ∈ [0, 1] for all i, that are independently drawn from a prior p_v(v_i | α) with hyperparameters α. They define the mixing weights {π_i}_{i=1}^∞ of the mixture as π_i(V) = v_i Π_{j=1}^{i−1} (1 − v_j), for i = 1, …, ∞.
- An observation model p_x(x | η) that generates a datum x from component η.

Given a dataset X = {x_n}_{n=1}^N, each data-case x_n is assumed to be generated by first drawing a component label z_n = k ∈ {1, …, ∞} from the infinite mixture with probability p_z(z_n = k | V) ≡ π_k(V), and then drawing x_n from the corresponding observation model p_x(x_n | η_k). We will denote by Z = {z_n}_{n=1}^N the set of all labels, by W = {H, V, Z} the set of all latent variables of the DP mixture, and by Θ = {α, λ} the hyperparameters. In clustering problems we are mainly interested in computing the posterior over data labels p(z_n | X, Θ), as well as the predictive density p(x | X, Θ) = ∫_{H,V} p(x | H, V) Σ_Z p(W | X, Θ), which are both intractable since p(W | X, Θ) cannot be computed analytically.

3 Variational Inference in Dirichlet Process Mixtures

For variational inference, the intractable posterior p(W | X, Θ) of the DP mixture can be approximated with a parametric family of factorized variational distributions q(W; φ) of the form

  q(W; φ) = [ Π_{i=1}^L q_{v_i}(v_i; φ_i^v) q_{η_i}(η_i; φ_i^η) ] Π_{n=1}^N q_{z_n}(z_n)    (1)

where q_{v_i}(v_i; φ_i^v) and q_{η_i}(η_i; φ_i^η) are parametric models with parameters φ_i^v and φ_i^η (one parameter per i), and q_{z_n}(z_n) are discrete distributions over the component labels (one distribution per n).

Blei and Jordan [7] define an explicit truncation level L ≡ T for the variational mixture in (1) by setting q_{v_T}(v_T = 1) = 1 and assuming that data-cases assign zero responsibility to components with index higher than the truncation level T, i.e., q_{z_n}(z_n > T) = 0. Consequently, in their model only components of the mixture up to level T need to be considered.
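Before continuing with the inference derivation, note that the generative process of Section 2 is easy to simulate. The following sketch is our own illustration (Gaussian observation model, Beta(1, α) sticks, and a finite truncation used only for simulation, all chosen arbitrarily):

```python
import numpy as np

def sample_dp_mixture(N, alpha=1.0, trunc=100, rng=np.random.default_rng(0)):
    """Draw N points from a (truncated) stick-breaking DP mixture
    with 1-D Gaussian components; trunc approximates the infinite model."""
    v = rng.beta(1.0, alpha, size=trunc)                 # stick lengths v_i
    # pi_i = v_i * prod_{j<i} (1 - v_j)
    pi = v * np.concatenate(([1.0], np.cumprod(1.0 - v)[:-1]))
    eta = rng.normal(0.0, 5.0, size=trunc)               # component means
    z = rng.choice(trunc, size=N, p=pi / pi.sum())       # labels z_n
    x = rng.normal(eta[z], 1.0)                          # x_n ~ p_x(x | eta_{z_n})
    return x, z
```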
Variational inference then consists in estimating a set of T parameters {φ_i^v, φ_i^η}_{i=1}^T and a set of N distributions {q_{z_n}(z_n)}_{n=1}^N, collectively denoted by φ, that minimize the Kullback-Leibler divergence D[q(W; φ) || p(W | X, Θ)] between the true posterior and the variational approximation, or, equivalently, that minimize the free energy F(φ) = E_q[log q(W; φ)] − E_q[log p(W, X | Θ)]. Since each distribution q_{z_n}(z_n) has nonzero support only for z_n ≤ T, minimizing F(φ) results in a set of update equations for φ that involve only finite sums [7].

However, explicitly truncating the variational mixture as above has the undesirable property that the variational family with truncation level T is not contained within the variational family with truncation level T + 1, i.e., the families are not nested.² The result is that there may be an optimal finite truncation level T for q, which contradicts the intuition that the more components we allow in q, the better the approximation should be (reaching its best when T → ∞). Moreover, locating a near-optimal truncation level may be difficult, since F as a function of T may exhibit local minima (see Fig. 4 in [7]).

4 Variational Inference with an Infinite Variational Model

Here we propose a slightly different variational model for q that allows families over T to be nested. In our setup, q is given by (1) where we let L go to infinity but tie the parameters of all models after a specific level T (with T ≤ L). In particular, we impose the condition that for all components with index i > T the variational distributions for the stick lengths q_{v_i}(v_i) and the variational distributions for the components q_{η_i}(η_i) are equal to their corresponding priors, i.e., q_{v_i}(v_i; φ_i^v) = p_v(v_i | α) and q_{η_i}(η_i; φ_i^η) = p_η(η_i | λ).

In our model we define the free energy F as the limit F = lim_{L→∞} F_L, where F_L is the free energy defined by q in (1) and a truncated DP mixture at level L (justified by the almost-sure convergence of an L-truncated Dirichlet process to an infinite Dirichlet process as L → ∞ [6]). Using the parameter-tying assumption for i > T, the free energy reads

  F = Σ_{i=1}^T { E_{q_{v_i}}[ log( q_{v_i}(v_i; φ_i^v) / p_v(v_i | α) ) ] + E_{q_{η_i}}[ log( q_{η_i}(η_i; φ_i^η) / p_η(η_i | λ) ) ] }
      + Σ_{n=1}^N E_q[ log( q_{z_n}(z_n) / ( p_z(z_n | V) p_x(x_n | η_{z_n}) ) ) ].    (2)

In our scheme T defines an implicit truncation level of the variational mixture, since there are no free parameters to optimize beyond level T. As in [7], the free energy F is a function of T parameters {φ_i^v, φ_i^η}_{i=1}^T and N distributions {q_{z_n}(z_n)}_{n=1}^N. However, contrary to [7], data-cases may now assign nonzero responsibility to components beyond level T, and therefore each q_{z_n}(z_n) must have infinite support (which requires computing infinite sums in the various quantities of interest). An important implication of our setup is that the variational families are now nested with respect to T (since for i > T, q_{v_i}(v_i) and q_{η_i}(η_i) can always revert to their priors), and as a result it is guaranteed that as we increase T there exist solutions that decrease F. This is an important result because it allows for optimization with adaptive T, starting from T = 1 (see Section 7).

From the last term of (2) we directly see that the q_{z_n}(z_n) that minimizes F is given by

  q_{z_n}(z_n = i) = exp(S_{n,i}) / Σ_{j=1}^∞ exp(S_{n,j})    (3)

where

  S_{n,i} = E_{q_V}[ log p_z(z_n = i | V) ] + E_{q_{η_i}}[ log p_x(x_n | η_i) ].    (4)

² We thank David Blei for pointing this out.
Minimization of F over φ_i^v and φ_i^η can be carried out by direct differentiation of (2) for particular choices of models for q_{v_i} and q_{η_i} (see Section 5). Using q_{z_n} from (3), the free energy (2) reads

  F = Σ_{i=1}^T { E_{q_{v_i}}[ log( q_{v_i}(v_i; φ_i^v) / p_v(v_i | α) ) ] + E_{q_{η_i}}[ log( q_{η_i}(η_i; φ_i^η) / p_η(η_i | λ) ) ] }
      − Σ_{n=1}^N log Σ_{i=1}^∞ exp(S_{n,i}).    (5)

Evaluation of F requires computing the infinite sum Σ_{i=1}^∞ exp(S_{n,i}) in (5). The difficult part is Σ_{i=T+1}^∞ exp(S_{n,i}). Under the parameter-tying assumption for i > T, most terms of S_{n,i} in (4) factor out of the infinite sum as constants (since they do not depend on i), except for the term Σ_{j=T+1}^{i−1} E_{p_v}[log(1 − v)] = (i − 1 − T) E_{p_v}[log(1 − v)]. From the above, the infinite sum can be shown to be

  Σ_{i=T+1}^∞ exp(S_{n,i}) = exp(S_{n,T+1}) / ( 1 − exp( E_{p_v}[log(1 − v)] ) ).    (6)

Using the variational q(W) as an approximation to the true posterior p(W | X, Θ), the required posterior over data labels can be approximated by p(z_n | X, Θ) ≈ q_{z_n}(z_n). Although q_{z_n}(z_n) has infinite support, in practice it suffices to use the individual q_{z_n}(z_n = i) for the finite part i ≤ T, and the cumulative q_{z_n}(z_n > T) for the infinite part. Finally, using the parameter-tying assumption for i > T, and the identity Σ_{i=1}^∞ π_i(V) = 1, the predictive density p(x | X, Θ) can be approximated by

  p(x | X, Θ) ≈ Σ_{i=1}^T E_{q_V}[π_i(V)] E_{q_{η_i}}[p_x(x | η_i)] + ( 1 − Σ_{i=1}^T E_{q_V}[π_i(V)] ) E_{p_η}[p_x(x | η)].    (7)

Note that all quantities of interest, such as the free energy (5) and the predictive distribution (7), can be computed analytically even though they involve infinite sums.

5 Solutions for the exponential family

The results in the previous section apply independently of the choice of models for the DP mixture. In this section we provide analytical solutions for models in the exponential family. In particular we assume that p_v(v_i | α) = Beta(α_1, α_2) and q_{v_i}(v_i; φ_i^v) = Beta(φ_{i,1}^v, φ_{i,2}^v), and that p_x(x | η), p_η(η | λ), and q_{η_i}(η_i; φ_i^η) are given by

  p_x(x | η) = h(x) exp{ η^T x − a(η) }    (8)
  p_η(η | λ) = h(η) exp{ λ_1 η + λ_2 (−a(η)) − a(λ) }    (9)
  q_{η_i}(η_i; φ_i^η) = h(η_i) exp{ φ_{i,1}^η η_i + φ_{i,2}^η (−a(η_i)) − a(φ_i^η) }.    (10)

In this case, the probabilities q_{z_n}(z_n = i) are given by (3), with S_{n,i} computed from (4) using

  E_{q_{v_i}}[log v_i] = Ψ(φ_{i,1}^v) − Ψ(φ_{i,1}^v + φ_{i,2}^v)    (11)
  E_{q_{v_j}}[log(1 − v_j)] = Ψ(φ_{j,2}^v) − Ψ(φ_{j,1}^v + φ_{j,2}^v)    (12)
  E_{q_{η_i}}[log p_x(x_n | η_i)] = E_{q_{η_i}}[η_i]^T x_n − E_{q_{η_i}}[a(η_i)]    (13)

where Ψ(·) is the digamma function. The optimal parameters φ^v, φ^η can be found to be

  φ_{i,1}^v = α_1 + Σ_{n=1}^N q_{z_n}(z_n = i),     φ_{i,2}^v = α_2 + Σ_{n=1}^N Σ_{j=i+1}^∞ q_{z_n}(z_n = j)    (14)
  φ_{i,1}^η = λ_1 + Σ_{n=1}^N q_{z_n}(z_n = i) x_n,  φ_{i,2}^η = λ_2 + Σ_{n=1}^N q_{z_n}(z_n = i).    (15)

The update equations are similar to those in [7], except that we have used Beta(α_1, α_2) instead of Beta(1, α), and φ_{i,2}^v involves an infinite sum Σ_{j=i+1}^∞ q_{z_n}(z_n = j), which can be computed using (3) and (6). In [7] the corresponding sum is finite, since q_{z_n}(z_n) is truncated at T.

Note that the VDP algorithm operates in a space where component labels are distinguishable, i.e., if we permute the labels, the total probability of the data changes. Since the average a priori mixture weights of the components are ordered by their size, the optimal labelling of the a posteriori variational components is also ordered according to cluster size. Hence, we have incorporated a reordering step of components according to approximate size after each optimization step in our final algorithm (a feature that was not present in [7]).
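To illustrate how the infinite sums stay tractable, the following sketch is our own; it assumes the Beta stick expectations of (11)-(12) and takes the data-likelihood terms of (13) as precomputed inputs, including one extra column for the prior-expected term shared by all components beyond T.

```python
import numpy as np
from scipy.special import digamma

def responsibilities(loglik, phi_v, alpha):
    """loglik: (N, T+1) array; column i < T holds E_q[log p_x(x_n | eta_i)],
    the last column the corresponding expectation under the prior p_eta.
    phi_v: (T, 2) Beta parameters of the stick distributions.
    Returns (N, T) responsibilities and the (N,) tail mass q(z_n > T)."""
    T = phi_v.shape[0]
    Elog_v = digamma(phi_v[:, 0]) - digamma(phi_v.sum(1))        # Eq. (11)
    Elog_1mv = digamma(phi_v[:, 1]) - digamma(phi_v.sum(1))      # Eq. (12)
    # E[log pi_i(V)] = E[log v_i] + sum_{j<i} E[log(1 - v_j)]
    Elog_pi = Elog_v + np.concatenate(([0.0], np.cumsum(Elog_1mv)[:-1]))
    S = loglik[:, :T] + Elog_pi                                   # S_{n,i}, i <= T
    # Components i > T use prior sticks; the tail is a geometric series, Eq. (6).
    Eprior_logv = digamma(alpha[0]) - digamma(alpha[0] + alpha[1])
    Eprior_log1mv = digamma(alpha[1]) - digamma(alpha[0] + alpha[1])
    S_tail = loglik[:, T] + Eprior_logv + np.sum(Elog_1mv)        # S_{n,T+1}
    tail = np.exp(S_tail) / (1.0 - np.exp(Eprior_log1mv))
    Z = np.exp(S).sum(1) + tail
    return np.exp(S) / Z[:, None], tail / Z
```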
6 Accelerated inference using a kd-tree

In this section we show that we can achieve accelerated inference for large datasets when we store the data in a kd-tree [10] and cache data sufficient statistics in each node of the kd-tree [8]. A kd-tree is a binary tree in which the root node contains all points and each child node contains a subset of the data points contained in its parent node, where points are separated by a (typically axis-aligned) hyperplane. Each point in the set is contained in exactly one node, and the set of outer nodes of a given expansion of the kd-tree forms a partition of the data set.

Suppose the kd-tree containing our data X is expanded to some level. Following [9], to achieve accelerated update equations we constrain all x_n in an outer node A to share the same q_{z_n}(z_n) ≡ q_{z_A}(z_A). We can then show that, under this constraint, the q_{z_A}(z_A) that minimizes F is given by

  q_{z_A}(z_A = i) = exp(S_{A,i}) / Σ_{j=1}^∞ exp(S_{A,j})    (16)

where S_{A,i} is computed as in (4) using (11)-(13), with (13) replaced by E_{q_{η_i}}[η_i]^T ⟨x⟩_A − E_{q_{η_i}}[a(η_i)], and ⟨x⟩_A denotes the average over all data x_n contained in node A. Similarly, if |n_A| is the number of data-cases in node A, the optimal parameters can be shown to be

  φ_{i,1}^v = α_1 + Σ_A |n_A| q_{z_A}(z_A = i),     φ_{i,2}^v = α_2 + Σ_A |n_A| Σ_{j=i+1}^∞ q_{z_A}(z_A = j)    (17)
  φ_{i,1}^η = λ_1 + Σ_A |n_A| q_{z_A}(z_A = i) ⟨x⟩_A,  φ_{i,2}^η = λ_2 + Σ_A |n_A| q_{z_A}(z_A = i).    (18)

Finally, using q_{z_A}(z_A) from (16), the free energy (5) reads

  F = Σ_{i=1}^T { E_{q_{v_i}}[ log( q_{v_i}(v_i; φ_i^v) / p_v(v_i | α) ) ] + E_{q_{η_i}}[ log( q_{η_i}(η_i; φ_i^η) / p_η(η_i | λ) ) ] }
      − Σ_A |n_A| log Σ_{i=1}^∞ exp(S_{A,i}).    (19)

The infinite sums in (17) and (19) can be computed from (6) with S_{n,T+1} replaced by S_{A,T+1}. Note that the cost of each update cycle is O(T |A|), with |A| the number of outer nodes, which can be a significant improvement over the O(T N) cost when not using a kd-tree. (The cost of building the kd-tree is O(N log N), but this is amortized over multiple optimization steps.) Note that by refining the tree (expanding outer nodes) the free energy F cannot increase. This allows us to control the trade-off between computational resources and approximation: we can always choose to descend the tree until our computational resources run out, and the level of approximation will be directly tied to F (deeper levels will mean lower F).
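The per-node quantities needed above are just zeroth- and first-order statistics. A minimal caching scheme could look as follows; this is our sketch, and the node layout, field names, and leaf-size threshold are illustrative assumptions.

```python
import numpy as np

class KDNode:
    """Caches |n_A| and <x>_A so that an update cycle costs O(T |A|)
    over the outer nodes A of an expansion instead of O(T N)."""
    def __init__(self, X, depth=0, min_size=50):
        self.n = len(X)                       # |n_A|
        self.mean = X.mean(axis=0)            # <x>_A
        self.left = self.right = None
        if self.n > min_size:
            d = depth % X.shape[1]            # axis-aligned split dimension
            order = np.argsort(X[:, d])
            half = self.n // 2
            self.left = KDNode(X[order[:half]], depth + 1, min_size)
            self.right = KDNode(X[order[half:]], depth + 1, min_size)

def outer_nodes(node, level):
    """Return the frontier of the tree expanded to a given level."""
    if level == 0 or node.left is None:
        return [node]
    return outer_nodes(node.left, level - 1) + outer_nodes(node.right, level - 1)
```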
7 The algorithm

The proposed framework is quite general and allows flexibility in the design of an algorithm. Below we show in pseudocode the algorithm that we used in our experiments (for DP Gaussian mixtures). Input is a dataset X = {x_n}_{n=1}^N that is already stored in a kd-tree structure. Output is a set of parameters {φ_i^v, φ_i^η}_{i=1}^T and a value for T. From these we can compute the responsibilities q_{z_n} using (3).

1. Set T = 1. Expand the kd-tree to some initial level (e.g. four).
2. Sample a number of "candidate" components c according to size Σ_A |n_A| q_{z_A}(z_A = c), and split the component that leads to the maximal reduction of F_T. For each candidate c do:
   (a) Expand one level deeper the outer nodes of the kd-tree that assign to c the highest responsibility q_{z_A}(z_A = c) among all components.
   (b) Split c into two components, i and j, through the bisector of its principal component. Initialize the responsibilities q_{z_A}(z_A = i) and q_{z_A}(z_A = j).
   (c) Update only S_{A,i}, φ_i^v, φ_i^η and S_{A,j}, φ_j^v, φ_j^η for the new components i and j, keeping all other parameters as well as the kd-tree expansion fixed.
3. Update S_{A,t}, φ_t^v, φ_t^η for all t ≤ T + 1, while expanding the kd-tree and reordering components.
4. If F_{T+1} > F_T − ε then halt; else set T := T + 1 and go to step 2.

In the above algorithm, the number of sampled candidate components in step 2 can be tuned according to the desired cost/accuracy tradeoff. In our experiments we used 10 candidate components. In step 2b we initialized the responsibilities by q_{z_A}(z_A = i) = 1 = 1 − q_{z_A}(z_A = j) if ⟨x⟩_A is closer to i than to j (according to distance to the expected first moment). In order to speed up the partial updates in step 2c, we additionally set q_{z_A}(z_A = k) = 0 for all k ≠ i, j (so all responsibility is shared between the two new components). In step 3 we reordered components every cycle and expanded the kd-tree every three update cycles, controlling the expansion by the relative change of q_{z_A}(z_A) between a node and its children (alternatively, one can measure the change of F_{T+1}). Finally, in step 2c we monitored convergence of the partial updates through F_{T+1}, which can be efficiently computed by adding/subtracting terms involving the new/old components.
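Step 2b is easy to state in code. The sketch below is ours and, for simplicity, operates on the data points assigned to a component rather than on kd-tree node averages as in the actual algorithm; variable names are arbitrary.

```python
import numpy as np

def split_component(X_c):
    """Split the data of one component through the bisector of its
    principal component; returns hard initial responsibilities."""
    mu = X_c.mean(axis=0)
    # leading eigenvector of the sample covariance = principal component
    _, vecs = np.linalg.eigh(np.cov(X_c.T))
    u = vecs[:, -1]
    side = (X_c - mu) @ u > 0          # which side of the bisecting hyperplane
    q_i = side.astype(float)           # q(z = i) in {0, 1}
    return q_i, 1.0 - q_i              # q(z = j) = 1 - q(z = i)
```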
8 Experiments

In this section we demonstrate VDP, and its kd-tree extension Fast-VDP, on synthetic and real datasets. In all experiments we assumed a Gaussian observation model p_x(x | η) and a Gaussian-inverse-Wishart for p_η(η | λ) and q_{η_i}(η_i; φ_i^η).

Synthetic datasets. As argued in Section 4, an important advantage of VDP over the "BJ" algorithm of [7] is that in VDP the variational families are nested over T, which ensures that the free energy is a monotone decreasing function of T and therefore allows for an adaptive T (starting with the trivial initialization T = 1). On the contrary, BJ optimizes the parameters for fixed T (and potentially minimizes the resulting free energy over different values of T), which requires a nontrivial initialization step for each T. Clearly, both the total runtime and the quality of the final solution of BJ depend largely on its initialization step, which makes a direct comparison of VDP with BJ difficult. Still, to get a feeling for the relative performance of VDP, Fast-VDP, and BJ, we applied all three algorithms to a synthetic dataset containing 1000 to 5000 data-cases sampled from 10 Gaussians in 16 dimensions, in which the free parameters of BJ were set exactly as described in [7] (20 initialization trials and T = 20). VDP and Fast-VDP were also executed until T = 20. In Fig. 1 we show the speedup factors and free-energy ratios³ among the three algorithms. Fast-VDP was approximately 23 times faster than BJ, and three times faster than VDP, on 5000 data-cases. Moreover, Fast-VDP and VDP were always better than BJ in terms of free energy.

Figure 1: Relative runtimes and free energies of Fast-VDP, VDP, and BJ (speedup factors and free-energy ratios for 1000 to 5000 data-cases).

In a second synthetic set of experiments we compared the speedup of Fast-VDP over VDP. We sampled data from 10 Gaussians in dimension D with component separation⁴ c. Using default values of N = 10,000 data-cases, dimensionality D = 16, and separation c = 2, we varied each of them, one at a time. In Fig. 2 we show the speedup factor (top) and the free-energy ratio (bottom) between the two algorithms. Note that the latter is always worse for Fast-VDP, since it is an approximation to VDP (a ratio closer to one means a better approximation). Fig. 2-left illustrates that the speedup of Fast-VDP over VDP is at least linear in N, as expected from the update equations in Section 6. The speedup factor was approximately 154 for one million data-cases, while the free-energy ratio was almost constant over N. Fig. 2-center shows an interesting dependence of speed on dimensionality, with D = 64 giving the largest speedup. The three plots in Fig. 2 are in agreement with similar plots in [8, 9].

Figure 2: Speedup factors (top) and free-energy ratios (bottom) between Fast-VDP and VDP, as functions of the number of data-cases (in thousands), the dimensionality, and the c-separation.

Real datasets. In this experiment we applied VDP and Fast-VDP to clustering image data. We used the MNIST dataset (http://yann.lecun.com/exdb/mnist/), which consists of 60,000 images of the digits 0-9 in 784 dimensions (28 by 28 pixels). We first applied PCA to reduce the dimensionality of the data to 50. Fast-VDP found 96 clusters in 3,379 seconds with free energy F = 1.759 × 10^7, while VDP found 88 clusters in 72,037 seconds with free energy 1.684 × 10^7. The speedup was 21 and the free-energy ratio was 1.044. The mean images of the discovered components are illustrated in Fig. 3. The results of the two algorithms seem qualitatively similar, while Fast-VDP computed its results much faster than VDP.

Figure 3: Clustering results of Fast-VDP and VDP, with a speedup of 21. The clusters are ordered according to size (from top left to bottom right).

In a second real-data experiment we clustered documents from citeseer (http://citeseer.ist.psu.edu). The dataset has 30,696 documents, with a vocabulary size of 32,473 words. Each document is represented by the counts of words in its abstract. We preprocessed the dataset by Latent Dirichlet Allocation [12] with 200 topics⁵. We subsequently transformed the topic counts y_{j,k} (the count value of the k-th topic in the j-th document) into x_{j,k} = log(1 + y_{j,k}) to better fit a normal distribution. In this problem the elapsed times of Fast-VDP and VDP were 335 seconds and 2,256 seconds, respectively, hence a speedup of 6.7. The free-energy ratio was 1.040. Fast-VDP found five clusters, while VDP found six clusters. Table 1 shows the three most frequent topics in each cluster. Although the two algorithms found a different number of clusters, we can see that clusters B and F found by VDP are similar, whereas Fast-VDP did not distinguish between these two. Table 2 shows the words included in these topics, showing that the documents are well clustered.

³ The free-energy ratio is defined as 1 + (F_A − F_B)/|F_B|, where A and B are either Fast-VDP, VDP, or BJ.
⁴ A Gaussian mixture is c-separated if for each pair (i, j) of components we have ||m_i − m_j||² ≥ c² D max(σ_max^i, σ_max^j), where σ_max^i denotes the maximum eigenvalue of the covariance of component i [11].
⁵ We thank David Newman for this preprocessing.

Table 1: The three most frequent topics in each cluster (in descending order). Fast-VDP found five clusters, a-e, while VDP found six clusters, A-F.

                              Fast-VDP                         VDP
  cluster                 a    b    c    d    e     A    B    C    D    E    F
  1st most frequent topic 81   73   35   49   76    81   73   35   76   49   73
  2nd most frequent topic 102  174  50   92   4     102  40   50   4    92   174
  3rd most frequent topic 59   40   110  94   129   59   174  110  129  94   40

Table 2: Words in the most frequent topic of each cluster.

  cluster   topic   words
  a, A      81      economic, policy, countries, bank, growth, firm, public, trade, market, ...
  b, B, F   73      traffic, packet, tcp, network, delay, rate, bandwidth, buffer, end, loss, ...
  c, C      35      algebra, algebras, ring, algebraic, ideal, field, lie, group, theory, ...
  d, E      49      motion, tracking, camera, image, images, scene, stereo, object, ...
  e, D      76      grammar, semantic, parsing, syntactic, discourse, parser, linguistic, ...

9 Conclusions

We described VDP, a variational mean-field algorithm for Dirichlet Process mixtures, and its fast extension Fast-VDP, which utilizes kd-trees to achieve speedups. Our contribution is twofold: First, we extended the framework of [7] to allow for nested variational families and an adaptive truncation level for the variational mixture. Second, we showed how kd-trees can be employed in the framework, offering significant speedups, thus extending related results for finite mixture models [8, 9]. To our knowledge, the VDP algorithm is the first nonparametric Bayesian approach to large-scale data mining. Future work includes extending our approach to other models in the stick-breaking representation (e.g., priors of the form p_{v_i}(v_i | a_i, b_i) = Beta(a_i, b_i)), as well as alternative DP mixture representations such as the Chinese restaurant process [3].

Acknowledgments: We thank Dave Newman for sharing code and David Blei for helpful comments. This material is based upon work supported by ONR under Grant No. N00014-06-1-0734 and the National Science Foundation under Grant No. 0535278.

References
[1] T. Ferguson. A Bayesian analysis of some nonparametric problems. Ann. Statist., 1:209-230, 1973.
[2] C. Antoniak. Mixtures of Dirichlet processes with applications to Bayesian nonparametric problems. Ann. Statist., 2(6):1152-1174, 1974.
[3] D. Aldous. Exchangeability and related topics. In École d'été de Probabilités de Saint-Flour XIII, 1983.
[4] J. Sethuraman. A constructive definition of Dirichlet priors. Statist. Sinica, 4:639-650, 1994.
[5] C.E. Rasmussen. The infinite Gaussian mixture model. In NIPS 12. MIT Press, 2000.
[6] H. Ishwaran and M. Zarepour. Exact and approximate sum-representations for the Dirichlet process. Can. J. Statist., 30:269-283, 2002.
[7] D.M. Blei and M.I. Jordan. Variational inference for Dirichlet process mixtures. Journal of Bayesian Analysis, 1(1):121-144, 2005.
[8] A.W. Moore. Very fast EM-based mixture model clustering using multiresolution kd-trees. In NIPS 11. MIT Press, 1999.
[9] J.J. Verbeek, J.R.J. Nunnink, and N. Vlassis. Accelerated EM-based clustering of large data sets. Data Mining and Knowledge Discovery, 13(3):291-307, 2006.
[10] J.L. Bentley. Multidimensional binary search trees used for associative searching. Commun. ACM, 18(9):509-517, 1975.
[11] S. Dasgupta. Learning mixtures of Gaussians. In IEEE Symp. on Foundations of Computer Science, 1999.
[12] D.M. Blei, A.Y. Ng, and M.I. Jordan. Latent Dirichlet allocation. Journal of Machine Learning Research, 3:993-1022, 2003.
Blind source separation for over-determined delayed mixtures

Lars Omlor, Martin Giese*
Laboratory for Action Representation and Learning, Department of Cognitive Neurology, Hertie Institute for Clinical Brain Research, University of Tübingen, Germany
* WWW home page: http://www.uni-tuebingen.de/uni/knv/arl/index.html

Abstract

Blind source separation, i.e. the extraction of unknown sources from a set of given signals, is relevant for many applications. A special case of this problem is dimension reduction, where the goal is to approximate a given set of signals by superpositions of a minimal number of sources. Since in this case the signals outnumber the sources, the problem is over-determined. Most popular approaches for addressing this problem are based on purely linear mixing models. However, many applications, like the modeling of acoustic signals, EMG signals, or movement trajectories, require temporal shift-invariance of the extracted components. This case has only rarely been treated in the computational literature, and specifically for the case of dimension reduction almost no algorithms have been proposed. We present a new algorithm for the solution of this problem, which is based on a time-frequency transformation (Wigner-Ville distribution) of the generative model. We show that this algorithm outperforms classical source separation algorithms for linear mixtures, and also a related method for mixtures with delays. In addition, applying the new algorithm to trajectories of human gaits, we demonstrate that it is suitable for the extraction of spatio-temporal components that are easier to interpret than components extracted with other classical algorithms.

1 Introduction

Blind source separation techniques, such as Independent Component Analysis (ICA), have received great interest in many domains, including neuroscience [3; 19; 2], machine learning [12; 11], and speech and signal processing [25]. A variety of algorithms have been proposed for different types of mixing models. Many studies have focused on instantaneous mixing, where target signals are modeled by the linear superposition of a number of source signals separately for each point in time. Another set of studies has treated convolutive mixing, where signals result from the superposition of filtered source signals (see [9] and [6] for review). Much less explored are algorithms for anechoic mixing. In this case, signals are approximated by linear combinations of source signals with time delays. Classical cases of anechoic mixing arise in electrical engineering, when signals from multiple antennas are received asynchronously, or in acoustics, when sound signals are recorded with multiple microphones, resulting in different running times. A few algorithms have been proposed for the solution of under-determined anechoic mixing problems, where the number of sources exceeds the number of signals [8; 4; 25; 22]. A method that treats the case of equal numbers of signals and sources, based on joint diagonalization of spectral matrices, has been proposed by Yeredor [24]. Almost no work exists on over-determined anechoic mixing problems, where the number of source signals is smaller than the number of original signals, which is the case most important for dimension reduction problems. Most existing methods for the solution of under-determined problems cannot be transferred to the over-determined case, because they involve additional assumptions about the data (e.g., specific spatial structure [20]) or the solution (e.g.,
sparseness [4]). One approach employed for under-determined anechoic mixtures is based on the assumption of small delays and a linearization of the mixture model [5]. While this original method cannot be transferred to our problem, since it requires additional assumptions about the spatial structure of the data, preliminary work in [1] applies the same basic approximation to the over-determined case.

In this paper we present a new algorithm for the solution of the over-determined anechoic mixing problem, which makes no further assumptions about the size of the delays. The proposed method is derived by applying methods from stochastic time-frequency analysis. We tested the novel algorithm on two different test data sets: human movement trajectories and synthetic mixtures of acoustic signals. We demonstrate that the method results in more accurate solutions with fewer sources than classical methods for instantaneous mixing (like PCA and normal ICA). Also, we demonstrate that our algorithm outperforms the SOBIDS algorithm in [1] for anechoic mixtures. In addition, we demonstrate that the method seems suitable for the extraction of biologically meaningful components from human movement data.

2 Source separation for over-determined delayed mixtures

2.1 Delayed mixture problem

In the following we assume that m signals x_i(t), 1 ≤ i ≤ m, have been observed. These signals are approximated by a linear combination of n source signals s_j(t), 1 ≤ j ≤ n, with temporal delays τ_{ij}. In the case of anechoic mixing, signals and sources obey the relationship

  x_i(t) = Σ_{j=1}^n α_{ij} · s_j(t − τ_{ij}),   i = 1, …, m.    (1)

In the over-determined case the signals outnumber the sources, i.e., m ≥ n. Equation (1) is a special case of a convolutive mixture problem, where the filter kernels are given by delta functions. However, treatment as a general deconvolution problem would neglect the special structure of the convolutive kernel, which is given by a weighted sum of delta pulses:

  x_i = Σ_{j=1}^n ( α_{ij} δ(t − τ_{ij}) ) * s_j,   i = 1, …, m.    (2)

Nevertheless, this formulation suggests treating the problem within the framework of harmonic analysis. Since normal Fourier transformation of equation (1) results in frequency-dependent mixtures of complex phase terms, time-frequency analysis turns out to be a more appropriate framework for the separation of sources in the above mixture models.

2.2 Wigner-Ville Spectrum

In signal processing and acoustics a variety of time-frequency representations have been proposed, ranging from linear and multilinear to nonlinear transformations. Due to their close connections to energy and correlation measures, bilinear or quadratic distributions seem especially appealing. A very popular quadratic representation is the Wigner distribution, together with its modifications included in Cohen's class [7]. While the Wigner-Ville Spectrum is usually defined as a deterministic integral transform, it can also be extended to the analysis of (nonstationary) random processes, resulting in the following definition of the Wigner-Ville spectrum (WVS). Assuming that x(t) is a random process and x̃(t) = x̄(−t) the reversed conjugated process, the WVS can be defined as [16]:

  W_x(t, ω) = ∫ E{ x(t + τ/2) x̃(t − τ/2) } e^{−2πiωτ} dτ.    (3)

The WVS is basically a 2-D function defined over the time-frequency plane, which can loosely be interpreted as a time-frequency distribution of the mean energy of x(t). The definition (3) implies many useful properties [16], three of which are particularly useful for the following derivation. If F denotes the Fourier transform and T_τ, M_f the time- and frequency-shift operators, e.g. (T_τ M_f x)(t) := e^{−2πif(t−τ)} x(t − τ), then the WVS has the following properties:

1. Time-frequency shift covariance:
   W_{T_τ M_f x}(t, ω) = W_x(t − τ, ω − f)    (4)

2. Marginal properties:
   ∫ W_x(t, ω) dt = E{ |Fx|² }    (5)
   ∫ W_x(t, ω) dω = E{ |x|² }    (6)

3. Mean group delay:
   t_x(ω) := ∫ t W_x(t, ω) dt / ∫ W_x(t, ω) dt    (7)

Since the group delay (7) is not uniquely defined in the stochastic case [15], the last property gives a natural definition. Consistent with the standard definition for deterministic signals, the group delay for the deterministic case can be rewritten as

  t_x(ω) = ∫ t W_x(t, ω) dt / ∫ W_x(t, ω) dt = −(1/2π) ∂ arg((Fx)(ω)) / ∂ω.
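For intuition, the following sketch is ours: a discrete-time generator for the anechoic mixture (1) and an FFT-based estimate of the group delay from the unwrapped Fourier phase, which is the quantity manipulated in the derivation below. The circular shift and the finite-difference phase derivative are simplifying assumptions of the example.

```python
import numpy as np

def anechoic_mix(S, A, Tau):
    """x_i(t) = sum_j A[i, j] * s_j(t - Tau[i, j]) with integer delays.
    S: (n, T) sources; A, Tau: (m, n). Uses circular shifts for simplicity."""
    m, n = A.shape
    X = np.zeros((m, S.shape[1]))
    for i in range(m):
        for j in range(n):
            X[i] += A[i, j] * np.roll(S[j], int(Tau[i, j]))
    return X

def group_delay(x):
    """t_x(omega) = -(1/2 pi) d/d omega arg((F x)(omega)), with the kernel
    e^{-2 pi i omega tau}; estimated by finite differences over FFT bins."""
    phase = np.unwrap(np.angle(np.fft.rfft(x)))
    d_omega = np.fft.rfftfreq(len(x))[1]          # bin spacing, cycles/sample
    return -np.gradient(phase, 2.0 * np.pi * d_omega) / (2.0 * np.pi) * (2.0 * np.pi)
```

(The last line simplifies to -np.gradient(phase, d_omega) / (2.0 * np.pi); it is written out to make the -(1/2π) ∂/∂ω convention explicit.)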
2.2 Wigner-Ville Spectrum

In signal processing and acoustics a variety of time-frequency representations have been proposed, ranging from linear and multilinear to nonlinear transformations. Due to their close connections to energy and correlation measures, bilinear or quadratic distributions are particularly appealing. A very popular quadratic representation is the Wigner distribution, together with its modifications that are included in Cohen's class [7]. While the Wigner-Ville spectrum is usually defined as a deterministic integral transform, it can also be extended to the analysis of (nonstationary) random processes, resulting in the following definition for the Wigner-Ville spectrum (WVS). Assuming that x(t) is a random process, with the bar denoting complex conjugation, the WVS can be defined as [16]:

    W_x(t, \nu) = \int \mathrm{E}\Big\{ x\big(t + \tfrac{\tau}{2}\big)\, \bar{x}\big(t - \tfrac{\tau}{2}\big) \Big\}\, e^{-2\pi i \nu \tau}\, d\tau    (3)

The WVS is basically a 2-D function defined over the time-frequency plane, which can loosely be interpreted as a time-frequency distribution of the mean energy of x(t).

The definition (3) implies many useful properties [16], three of which are particularly useful for the following derivation. If F denotes the Fourier transform and T_tau, M_f the time and frequency shift operators, e.g. (T_\tau M_f x)(t) := e^{2\pi i f (t-\tau)} x(t - \tau), then the WVS has the following properties:

1. Time-frequency shift covariance:

    W_{T_\tau M_f x}(t, \nu) = W_x(t - \tau, \nu - f)    (4)

2. Marginal properties:

    \int W_x(t, \nu)\, dt = \mathrm{E}\{ |Fx|^2 \}    (5)
    \int W_x(t, \nu)\, d\nu = \mathrm{E}\{ |x|^2 \}    (6)

3. Mean group delay:

    t_x(\nu) := \frac{\int t\, W_x(t, \nu)\, dt}{\int W_x(t, \nu)\, dt}    (7)

Since the group delay is not uniquely defined in the stochastic case [15], the last property gives a natural definition. Consistent with the standard definition for deterministic signals, the group delay in the deterministic case can be rewritten as:

    t_x(\nu) = \frac{\int t\, W_x(t, \nu)\, dt}{\int W_x(t, \nu)\, dt} = -\frac{1}{2\pi}\, \frac{\partial \arg((Fx)(\nu))}{\partial \nu}

2.3 Application to the delayed mixture model

The original mixture model can be rewritten in the time-frequency domain by computing the WVS of both sides of (1):

    W_{x_i}(t, \nu) = \sum_{j,k=1}^{n} \alpha_{ij} \alpha_{ik} \int \mathrm{E}\Big\{ s_j\big(t + \tfrac{\tau}{2} - \tau_{ij}\big)\, \bar{s}_k\big(t - \tfrac{\tau}{2} - \tau_{ik}\big) \Big\}\, e^{-2\pi i \nu \tau}\, d\tau    (8)

Assuming that the source signals s_j are statistically independent, equation (8) can be further simplified, since all cross terms of the form E{s_i s_j} with i != j vanish (assuming additionally E{s_i} = 0). Together with the shift covariance of the distribution, this leads to the central equation:

    W_{x_i}(t, \nu) = \sum_{j=1}^{n} |\alpha_{ij}|^2\, W_{s_j}(t - \tau_{ij}, \nu), \qquad i = 1, \ldots, m    (9)

Several existing algorithms for the solution of general convolutive or instantaneous mixture problems have exploited the 2-D structure of expressions equivalent or similar to equation (9), e.g. by time-frequency masking [25] or joint diagonalization [24; 1]. However, the 2-D representation of a 1-D random process is redundant and may sometimes conceal the structure of the underlying data due to interference [10]. Furthermore, the increased amount of data often results in prohibitive computational costs for large-scale problems. This redundancy can be avoided by projecting the WVS back onto several 1-D random processes. The simplest projections are obtained by computing the zero- and first-order moments of equation (9). These moments can be computed analytically using equations (5) and (7), resulting in:

    \mathrm{E}\{ |Fx_i|^2 \} = \int W_{x_i}(t, \nu)\, dt = \sum_j |\alpha_{ij}|^2 \int W_{s_j}(t - \tau_{ij}, \nu)\, dt = \sum_j |\alpha_{ij}|^2\, \mathrm{E}\{ |Fs_j|^2 \}    (10)

and analogously

    \mathrm{E}\{ |Fx_i(\nu)|^2 \}\, t_{x_i}(\nu) = \sum_j |\alpha_{ij}|^2\, \mathrm{E}\{ |Fs_j|^2 \}\, \big( t_{s_j}(\nu) + \tau_{ij} \big)    (11)

Equation (10) defines an instantaneous mixture problem with non-negativity constraints. Such problems can be treated with standard non-negative matrix factorization [14] or ICA [11; 12] approaches. After solving equation (10), the remaining unknowns in equation (11) are the delays and the group delay (complex phase). By successive solution of these two equations the expected values of all unknown parameters can be determined. So instead of resolving (9), it is sufficient to solve (10) and (11) by successive iteration.
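Equation (10) says that the time delays drop out of the magnitude spectra, so the mixture power spectra are (in expectation) instantaneous non-negative combinations of the source power spectra. A quick numerical check, reusing the hypothetical arrays x, sources, and alpha from the mixture sketch above; the equality holds in expectation, so finite-sample cross terms and the zero-padding edge effects leave a small residual:

    import numpy as np

    Px = np.abs(np.fft.rfft(x, axis=1)) ** 2         # power spectra of the mixtures
    Ps = np.abs(np.fft.rfft(sources, axis=1)) ** 2   # power spectra of the sources
    Px_model = (alpha ** 2) @ Ps                     # right-hand side of equation (10)
    rel_err = np.linalg.norm(Px - Px_model) / np.linalg.norm(Px)
    print(f"relative deviation from (10): {rel_err:.3f}")  # small, not exactly zero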
This suggests the following two-step algorithm to estimate the unknowns in the model (1).

2.4 Two-step algorithm

The last result implies the following algorithm for the estimation of the sources, delays, and the mixing matrix in (1):

1. Compute |Fx_i|^2 and solve

    |Fx_i|^2(\nu) = \sum_{j=1}^{n} |\alpha_{ij}|^2\, |Fs_j|^2(\nu)    (12)

   e.g. using non-negative matrix factorization. For our implementation we applied an algorithm for Bayesian non-negative ICA [11].

2. Initialize tau_ij = 0 and iterate the following steps:

   (a) Numerically solve

    -\frac{1}{2\pi}\, |Fx_i(\nu)|^2\, \frac{\partial}{\partial \nu} \arg\{Fx_i\} = \sum_j |\alpha_{ij}|^2\, |Fs_j|^2 \left( -\frac{1}{2\pi}\, \frac{\partial}{\partial \nu} \arg\{Fs_j\} + \tau_{ij} \right)    (13)

       for the term (partial/partial nu) arg{Fs_j}. Integrate the solution to obtain Fs_j.

   (b) Exploiting the knowledge of the sources s_j, it is possible to update the mixing and the delay matrix through optimization of the following cost function, which is derived from (1), where S(\tau_j) = (s_k(t_i - \tau_{jk}))_{i,k} and A_j = (\alpha_{ij})_i:

    [\hat{\tau}_j, \hat{A}_j] = \operatorname{argmin}_{[\tau_j, A_j]} \| x_j - A_j\, S(\tau_j) \|^2    (14)

       This minimization is accomplished following [21], assuming uncorrelatedness of the sources and independence of the time delays.

   (c) Update tau_ij and go back to (a), until convergence is achieved.

A schematic implementation of this two-step procedure is sketched below.
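The following skeleton sketches the structure of the two-step algorithm; it is not the authors' implementation. Step 1 is worked out using scikit-learn's NMF as a crude stand-in for the Bayesian non-negative ICA of [11]; step 2 is only stubbed out, since the phase estimation (13) and the refit (14) following [21] require substantially more machinery.

    import numpy as np
    from sklearn.decomposition import NMF  # stand-in for the Bayesian non-negative ICA of [11]

    def two_step_skeleton(x, n_sources, n_iter=10):
        """Skeleton of the two-step algorithm of Section 2.4 (a sketch only)."""
        X = np.fft.rfft(x, axis=1)
        Px = np.abs(X) ** 2

        # Step 1: |Fx_i|^2(nu) = sum_j |alpha_ij|^2 |Fs_j|^2(nu)   (equation (12))
        nmf = NMF(n_components=n_sources, init="nndsvda", max_iter=500)
        alpha_sq = nmf.fit_transform(Px)   # estimates of |alpha_ij|^2
        Ps = nmf.components_               # estimates of |Fs_j|^2
        alpha, tau = np.sqrt(alpha_sq), np.zeros_like(alpha_sq)

        # Step 2: initialize tau = 0 and iterate (a)-(c).
        for _ in range(n_iter):
            ...  # (a) solve (13) for the source phase derivatives; integrate to get Fs_j
            ...  # (b) refit alpha and tau via the least-squares problem (14), following [21]
        return alpha, tau, Ps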
3 Test data sets

For comparing the novel algorithm with related methods we used two different types of data sets. A first set of data was generated artificially by mixing different sound sources, varying the mixing weights and the delays. This data was non-periodic and enabled us to validate the accuracy of the reconstruction of the source signals. The second data set consisted of human movement trajectories of emotional gaits that were recorded using motion capture. The gait trajectories were periodic and served for testing the suitability of the new method for extracting biologically interpretable movement components.

Our first data set consisted of synthetically generated delayed mixtures, generated from segments of speech and sound signals taken from an ICA benchmark data set described in [6]. The data basis contained in total 14 signals, with a length of 8000 time points each. In order to obtain statistically representative results, data sets were recomputed 20 times with random selection of the source signals, and/or of the mixing and delay matrices. Three types of mixtures were generated:

(I) Mixtures of 2 source segments with random mixing and delay matrices (2 x 2), each resulting in two simulated signals x_{1,2}. This data set was used to compare the new method with PCA and (fast) ICA [12]. Data set (I) was included to show that the new algorithm is also able to address the even-determined case (n = m).

(II) Mixtures of 2 randomly selected segments from the speech data basis using the constant mixing matrix A = [1, 2; 3, 1; 10, 5; 1, 2; 1, 1] and the constant delay matrix T = (\tau_{ij})_{ij} = [0, 4000; 2500, 5000; 100, 200; 1, 1; 500, 333]. This data set was used to compare the new method with PCA, ICA, and the SOBIDS algorithm [1], which requires at least twice as many signals as sources. Data set (II), with fixed mixing and delay matrices, was included since completely random generation sometimes produced degenerate anechoic mixtures (instantaneous mixtures or ill-conditioned mixing matrices).

(III) A third data set was generated by mixing two randomly selected source segments with random mixing matrices and random delay matrices.

To compare the performance of the different algorithms we used a performance measure M, defined by the maximum of the cross-correlations between extracted sources s_extract,j and original sources s_orig,j (after appropriate matching of the individual sources, since the recovered sources are not ordered in a specific manner):

    M = \frac{1}{n} \sum_{j=1}^{n} \max_\tau \big| \mathrm{E}\{ s_{\mathrm{extract},j}(t)\, s_{\mathrm{orig},j}(t + \tau) \} \big|

The second test data set consisted of movement trajectories of human actors walking neutrally or with different emotional styles (happy, angry, sad, and fearful). The movements were recorded using a VICON 612 motion capture system with 7 cameras, obtaining the 3D positions of 41 passive markers on the bodies of the actors. We recorded trajectories from 13 lay actors, repeating each walking style three times per actor. A hierarchical kinematic body model (skeleton) with 17 joints was fitted to the marker positions, and joint angles were computed. Rotations between adjacent body segments were described as Euler angles, defining flexion, abduction, and rotation about the connecting joints. The data for the unsupervised learning procedure included only the flexion angles of the hip, knee, elbow, shoulder, and the clavicle, since the other angles had relatively high noise levels. From each trajectory only one gait cycle was extracted, which was time-normalized. This resulted in a data set with 1950 samples, each with a length of 100 time points.

3.1 Results

Delayed mixtures of sound sources: Figure 1 shows the results for the extraction of the sound sources from the data sets (I)-(III). The bar plots show means and standard deviations of the performance measure M over twenty simulations. On all data sets our new method shows an overall performance measure above 80%, while PCA and ICA show performances between 50 and 60%. The SOBIDS algorithm reaches a performance level of 72%. For a more accurate statistical comparison of the different methods we used a one-way repeated-measures ANOVA on the measure M. There was a significant effect for all three data sets. Post-hoc comparison using the Least Significant Difference (LSD) test [18] revealed that our new method was significantly better than PCA and ICA for all three data sets. Our method significantly outperformed the SOBIDS algorithm for data set (III). For data set (II) the difference between these two algorithms was not significant, due to the increased overall variability in this data set. The better performance of our algorithm compared to PCA and ICA results from the appropriate modeling of time delays. The better performance compared to the SOBIDS algorithm might be explained by the fact that that algorithm requires the assumption of small delays.

Figure 1: Comparison of different blind source separation algorithms for synthetic mixtures of sound signals with delays (data sets I-III, see text). The stars indicate significant (p < 0.001) differences compared to the new algorithm.

Human gait trajectories: With the second data set we tested whether the proposed novel algorithm is suitable for the extraction of interpretable source signals from human movement data. By performing normal ICA separately on individual joint trajectories and comparing the extracted sources, we had observed before that such sources are often very similar except for time shifts between different joints. This motivates the hypothesis that (1) might provide an appropriate generative model for such gait trajectories. This hypothesis is confirmed by the data presented in Figure 2, which shows the approximation quality (explained variance) for different numbers of extracted sources, comparing the new algorithm with four alternatives: PCA, (fast) ICA [12], Bayesian positive ICA [11], and SOBIDS. The new method outperforms all other methods, in particular the methods without time delays. Specifically, the new algorithm is capable of approximating 97% of the trajectory data with only 3 sources, while PCA and ICA require more than 6 sources to achieve the same level of accuracy.
Figure 2: Comparison of different blind source separation algorithms. Explained variance is shown for different numbers of extracted sources.

In order to test whether the novel algorithm results in source signals that provide useful interpretations of biological data, we modeled all trajectories in our gait data sets by linear superpositions of the extracted sources and analyzed the resulting mixture matrices A. To extract weight components that are specific for individual emotional styles we modeled the mixture matrices applying sparse linear regression. The (vectorized) weights of the individual gait trajectories for emotion j, defining the vector a_j, were approximated by the sum of a component a_0 (containing the weights that characterize neutral walking) and an emotion-specific contribution. Formally, this multi-linear regression model can be written as

    a_j \approx a_0 + C\, e_j    (15)

where C is a weight matrix that determines the emotion-specific contributions to the mixing weights. Its columns are given by the differences between the weights for the different emotional styles (happy, sad, fearful, and angry) and the weights for neutral walking. In order to obtain easily interpretable results, the matrix C was sparsified by L1 norm minimization. The solution of the linear regression problem was obtained by minimizing the following cost function (with lambda > 0) using quadratic programming:

    E(C) = \sum_j \| a_j - a_0 - C\, e_j \|^2 + \lambda \sum_{i,j} |C_{ij}|    (16)

This regression basically computes the mean differences between the weights for neutral walking and for emotional walking. The sparsification automatically separates important and less important features. The concentration of the variance into a few important predictors simplifies the interpretation. A minimal sketch of this sparse regression step is given below.
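The following sketch solves a problem of the form (16) using scikit-learn's Lasso as a stand-in for the quadratic-programming solver described above. The data shapes, the synthetic weights, and the choice of a_0 as the grand mean are illustrative assumptions; note also that sklearn's Lasso scales the quadratic term by 1/(2n), so its alpha corresponds to lambda only up to that factor. For one-hot emotion codes e_j the problem decouples into one shrinkage problem per style.

    import numpy as np
    from sklearn.linear_model import Lasso

    # Hypothetical shapes: n_trials gait trials, each with a (vectorized) weight
    # vector a_j of length d; e_j one-hot codes one of 4 emotional styles.
    rng = np.random.default_rng(1)
    d, n_styles, n_trials = 15, 4, 200
    A = rng.standard_normal((n_trials, d))        # stacked weight vectors a_j
    styles = rng.integers(0, n_styles, n_trials)
    E = np.eye(n_styles)[styles]                  # one-hot emotion codes e_j

    a0 = A.mean(axis=0)                           # neutral-walking component (illustrative)
    lam = 0.1
    C = np.zeros((d, n_styles))
    for k in range(n_styles):
        rows = styles == k
        model = Lasso(alpha=lam, fit_intercept=False)
        model.fit(E[rows][:, [k]], A[rows] - a0)  # regress residuals on the style indicator
        C[:, k] = model.coef_.ravel()             # sparse, emotion-specific weight changes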
Figure 3 shows a gray-level plot of the matrix C, illustrating the weight differences compared to neutral walking for the four different emotional styles and the different joint angles. Positive elements of the matrix indicate cases where the joint amplitudes for the emotional gait are increased compared to normal walking. Negative elements are indicated by white triangles in the lower left corner of the individual cells of the plot. They correspond to cases where the joint angle amplitudes for the emotional walk are reduced compared to normal walking. The + and - signs in the figure summarize data from psychophysical experiments that have investigated kinematic features important for the perception of emotional gaits [17; 23]. Plus signs indicate cases where (perceived) increases of the joint amplitudes compared to normal walking were correlated with the perception of the corresponding emotion, and minus signs cases where a (perceived) reduction of the joint angle amplitudes was correlated with the perception of the corresponding emotion. Comparison between these psychophysical results and the elements of the matrix C (Figure 3a) shows a very close match between the weight changes and the features that are important for the perception of emotions from gaits. This implies that the novel algorithm extracts features that can be meaningfully interpreted in a biological sense. Figure 3b shows the results of the same analysis for sources that had been extracted with PCA, matching the numbers of non-zero elements of the estimated matrix C. For gait trajectories the source signals from PCA and ICA are virtually identical; the results in Figure 3 would therefore be unchanged for ICA. In either case, the match is significantly worse than for the sources extracted with the novel algorithm (panel a). In addition, the signs of the matrix elements often do not match the signs of the amplitude changes in the psychophysical experiments. This implies that the new algorithm extracts spatio-temporal components from human gait data that are more easily interpretable than components extracted with PCA.

Figure 3: Elements of the weight matrix C, encoding emotion-specific deviations from neutral walking, for different degrees of freedom. Negative elements are indicated by white triangles in the lower left corners of the cells. Kinematic features that have been shown to be important for the perception of emotions from gait in psychophysical experiments are indicated by the plus and minus signs. (Details see text.)

3.2 Conclusion

We present a new algorithm for the solution of over-determined blind source separation problems for mixtures of sources with delays. The proposed method has been derived by application of a time-frequency transformation to the mixture model, resulting in a two-step algorithm that combines positive ICA with another iterative optimization step. We demonstrate that the developed algorithm outperforms other source separation algorithms, with and without time delays, on synthetic data sets defined by delayed mixtures of speech signals, and also on real data sets obtained by motion capture of human full-body movements. For human movements we also demonstrate that, at least for the case of human gait, the new algorithm provides more compact and interpretable representations than the alternative methods we tested. To our knowledge the proposed algorithm is the first that solves over-determined delayed mixing problems without specific additional assumptions about the structure of the delay matrix, e.g. limited sizes of the delays. In contrast to non-negative matrix factorization with delays [2], the proposed method is applicable to non-positive signals and sources. Future work will focus on testing the algorithm with a broader range of data sets, including in particular non-periodic human movements. In addition, it seems possible to extend the proposed method to multi-dimensional translation vectors (delays), making it applicable to the learning of translation-invariant features in two-dimensional images.

Acknowledgments

This work was supported by HFSP, DFG, the Volkswagenstiftung and the EU FP6 Project "COBOL". We thank C.L. Roether for help with the trajectory acquisition and the psychological interpretation of the data, and W. Ilg for support with the motion capturing.

References

[1] J. Ashtar, et al. (2004) A novel approach to blind separation of delayed sources in linear mixtures. 7th Semester Signal Processing, Aalborg University.
[2] A. d'Avella, E. Bizzi (2005) Shared and specific muscle synergies in natural motor behaviors. Proc Natl Acad Sci U S A 102(8) 3076-3081.
[3] A.J. Bell, T.J. Sejnowski (1995) An information-maximization approach to blind separation and blind deconvolution. Neural Computation 7 1129-1159.
[4] P. Bofill (2003) Underdetermined blind separation of delayed sound sources in the frequency domain. Neurocomputing 55 627-641.
[5] A. Celik, et al. (2005) Gradient flow independent component analysis in micropower VLSI. Advances in Neural Information Processing Systems 18 187-194.
[6] A. Cichocki, S. Amari (2002) Adaptive Blind Signal and Image Processing. John Wiley, Chichester.
[7] L. Cohen (1995) Time-Frequency Analysis. Prentice-Hall, Englewood Cliffs, NJ.
[8] B. Emile, P. Comon (1998) Estimation of time delays between unknown colored signals. Signal Processing 69 93-100.
[9] P.D. O'Grady, B.A. Pearlmutter, S.T. Rickard (2005) Survey of sparse and non-sparse methods in source separation. International Journal of Imaging Systems and Technology (IJIST) 15, special issue on blind source separation and deconvolution in imaging and image processing.
[10] F. Hlawatsch, P. Flandrin (1997) The interference structure of the Wigner distribution and related time-frequency signal representations. In: The Wigner Distribution: Theory and Applications in Signal Processing. Elsevier, Amsterdam, 59-133.
[11] P. Hojen-Sorensen, O. Winther, L. Hansen (2002) Mean field approaches to independent component analysis. Neural Computation 14 889-918.
[12] A. Hyvärinen, E. Oja (1997) A fast fixed-point algorithm for independent component analysis. Neural Computation 9 1483-1492.
[13] Y. Ivanenko, R. Poppele, F. Lacquaniti (2004) Five basic muscle activation patterns account for muscle activity during human locomotion. J Physiol 556(Pt 1) 267-282.
[14] D.D. Lee, H.S. Seung (1999) Learning the parts of objects by non-negative matrix factorization. Nature 401.
[15] W. Martin (1982) Time-frequency analysis of random signals. Proc. IEEE Int. Conf. on Acoustics, Speech and Signal Processing, 1325-1328.
[16] G. Matz, F. Hlawatsch (2003) Wigner distributions (nearly) everywhere: Time-frequency analysis of signals, systems, random processes, signal spaces, and frames. Signal Processing 83 1355-1378, special section "From Signal Processing Theory to Implementation" on the occasion of the 65th birthday of W. Mecklenbräuker.
[17] M. de Meijer (1989) The contribution of general features of body movement to the attribution of emotions. Journal of Nonverbal Behavior 13 247-268.
[18] R.G. Miller Jr. (1981) Simultaneous Statistical Inference. Springer, New York, NY, 2nd edition.
[19] R. Vigário, V. Jousmäki, M. Hämäläinen, R. Hari, E. Oja (1998) Independent component analysis for identification of artifacts in magnetoencephalographic recordings. Advances in Neural Information Processing Systems 10 229-235.
[20] R. Roy, T. Kailath (1989) ESPRIT: Estimation of signal parameters via rotational invariance techniques. IEEE Trans. Acoust., Speech, Sig. Proc. 37 984-995.
[21] A. Swindlehurst (1998) Time delay and spatial signature estimation using known asynchronous signals. IEEE Trans. on Sig. Proc. ASSP-33, no. 6, 1461-1470.
[22] K. Torkkola (1996) Blind separation of delayed sources based on information maximization. Proc. ICASSP '96, 3509-3512.
[23] H.G. Wallbott (1998) Bodily expression of emotion. European Journal of Social Psychology 28 879-896.
[24] A. Yeredor (2003) Time-delay estimation in mixtures. Proc. IEEE Int. Conf. on Acoustics, Speech, and Signal Processing, vol. 5, 237-240.
[25] Ö. Yılmaz, S. Rickard (2004) Blind separation of speech mixtures via time-frequency masking. IEEE Transactions on Signal Processing 52 1830-1847.
A Novel Gaussian Sum Smoother for Approximate Inference in Switching Linear Dynamical Systems

David Barber and Bertrand Mesot
IDIAP Research Institute
Martigny 1920, Switzerland
david.barber/[email protected]

Abstract

We introduce a method for approximate smoothed inference in a class of switching linear dynamical systems, based on a novel form of Gaussian Sum smoother. This class includes the switching Kalman Filter and the more general case of switch transitions dependent on the continuous latent state. The method improves on the standard Kim smoothing approach by dispensing with one of the key approximations, thus making fuller use of the available future information. Whilst the only central assumption required is projection to a mixture of Gaussians, we show that an additional conditional independence assumption results in a simpler but stable and accurate alternative. Unlike the alternative unstable Expectation Propagation procedure, our method consists only of a single forward and backward pass and is reminiscent of the standard smoothing "correction" recursions in the simpler linear dynamical system. The algorithm performs well on both toy experiments and in a large scale application to noise robust speech recognition.

1 Switching Linear Dynamical System

The Linear Dynamical System (LDS) [1] is a key temporal model in which a latent linear process generates the observed series. For complex time-series which are not well described globally by a single LDS, we may break the time-series into segments, each modeled by a potentially different LDS. This is the basis for the Switching LDS (SLDS) [2, 3, 4, 5] where, for each time t, a switch variable s_t in 1, ..., S describes which of the LDSs is to be used. The observation (or "visible") vector v_t in R^V is linearly related to the hidden state h_t in R^H with additive noise eta:

    v_t = B(s_t)\, h_t + \eta^v(s_t) \quad \Leftrightarrow \quad p(v_t | h_t, s_t) = \mathcal{N}(B(s_t) h_t, \Sigma_v(s_t))    (1)

where N(mu, Sigma) denotes a Gaussian distribution with mean mu and covariance Sigma. The transition dynamics of the continuous hidden state h_t is linear:

    h_t = A(s_t)\, h_{t-1} + \eta^h(s_t) \quad \Leftrightarrow \quad p(h_t | h_{t-1}, s_t) = \mathcal{N}(A(s_t) h_{t-1}, \Sigma_h(s_t))    (2)

The switch s_t may depend on both the previous s_{t-1} and h_{t-1}. This is an augmented SLDS (aSLDS), and defines the model

    p(v_{1:T}, h_{1:T}, s_{1:T}) = \prod_{t=1}^{T} p(v_t | h_t, s_t)\, p(h_t | h_{t-1}, s_t)\, p(s_t | h_{t-1}, s_{t-1})

The standard SLDS [4] considers only switch transitions p(s_t | s_{t-1}). At time t = 1, p(s_1 | h_0, s_0) simply denotes the prior p(s_1), and p(h_1 | h_0, s_1) denotes p(h_1 | s_1). The aim of this article is to address how to perform inference in the aSLDS. In particular, we desire the filtered estimate p(h_t, s_t | v_{1:t}) and the smoothed estimate p(h_t, s_t | v_{1:T}), for any 1 <= t <= T. Both filtered and smoothed inference in the SLDS is intractable, scaling exponentially with time [4].

Figure 1: The independence structure of the aSLDS, with switch variables s_1, ..., s_4, hidden states h_1, ..., h_4, and observations v_1, ..., v_4. Square nodes denote discrete variables, round nodes continuous variables. In the SLDS links from h to s are not normally considered.

As a concrete reference for the generative model (1)-(2), a sampling sketch is given below.
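The following minimal numpy sketch draws a sequence from a standard SLDS according to equations (1)-(2). The uniform switch prior and zero-mean initial state are assumptions for this sketch; in the aSLDS the switch transition would additionally depend on h_{t-1}.

    import numpy as np

    def sample_slds(A, B, Sh, Sv, P, T, rng):
        """Draw (s, h, v) from a standard SLDS, equations (1)-(2).

        A, B:   lists of per-switch dynamics/emission matrices
        Sh, Sv: lists of per-switch noise covariances
        P:      S x S switch transition matrix, P[i, j] = p(s_t = j | s_{t-1} = i)
        """
        S, H, V = len(A), A[0].shape[0], B[0].shape[0]
        s = np.zeros(T, dtype=int)
        h = np.zeros((T, H))
        v = np.zeros((T, V))
        s[0] = rng.integers(S)                          # uniform prior p(s_1)
        h[0] = rng.multivariate_normal(np.zeros(H), Sh[s[0]])
        v[0] = rng.multivariate_normal(B[s[0]] @ h[0], Sv[s[0]])
        for t in range(1, T):
            s[t] = rng.choice(S, p=P[s[t - 1]])
            h[t] = rng.multivariate_normal(A[s[t]] @ h[t - 1], Sh[s[t]])
            v[t] = rng.multivariate_normal(B[s[t]] @ h[t], Sv[s[t]])
        return s, h, v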
2 Expectation Correction

Our approach to approximating p(h_t, s_t | v_{1:T}) mirrors the Rauch-Tung-Striebel "correction" smoother for the simpler LDS [1]. The method consists of a single forward pass to recursively find the filtered posterior p(h_t, s_t | v_{1:t}), followed by a single backward pass to correct this into a smoothed posterior p(h_t, s_t | v_{1:T}). The forward pass we use is equivalent to standard Assumed Density Filtering (ADF) [6]. The main contribution of this paper is a novel form of backward pass, based only on collapsing the smoothed posterior to a mixture of Gaussians. Together with the ADF forward pass, we call the method Expectation Correction, since it corrects the moments found from the forward pass. A more detailed description of the method, including pseudocode, is given in [7].

2.1 Forward Pass (Filtering)

Readers familiar with ADF may wish to continue directly to Section 2.2. Our aim is to form a recursion for p(s_t, h_t | v_{1:t}), based on a Gaussian mixture approximation of p(h_t | s_t, v_{1:t}). Without loss of generality, we may decompose the filtered posterior as

    p(h_t, s_t | v_{1:t}) = p(h_t | s_t, v_{1:t})\, p(s_t | v_{1:t})    (3)

The exact representation of p(h_t | s_t, v_{1:t}) is a mixture with O(S^t) components. We therefore approximate it with a smaller I-component mixture

    p(h_t | s_t, v_{1:t}) \approx \sum_{i_t=1}^{I} p(h_t | i_t, s_t, v_{1:t})\, p(i_t | s_t, v_{1:t})

where p(h_t | i_t, s_t, v_{1:t}) is a Gaussian parameterized with mean f(i_t, s_t) and covariance F(i_t, s_t). To find a recursion for these parameters, consider

    p(h_{t+1} | s_{t+1}, v_{1:t+1}) = \sum_{s_t, i_t} p(h_{t+1} | s_t, i_t, s_{t+1}, v_{1:t+1})\, p(s_t, i_t | s_{t+1}, v_{1:t+1})    (4)

Evaluating p(h_{t+1} | s_t, i_t, s_{t+1}, v_{1:t+1}). We find p(h_{t+1} | s_t, i_t, s_{t+1}, v_{1:t+1}) by first computing the joint distribution p(h_{t+1}, v_{t+1} | s_t, i_t, s_{t+1}, v_{1:t}), which is a Gaussian with covariance and mean elements

    \Sigma_{hh} = A(s_{t+1}) F(i_t, s_t) A^{\mathsf{T}}(s_{t+1}) + \Sigma_h(s_{t+1}), \qquad \Sigma_{vv} = B(s_{t+1}) \Sigma_{hh} B^{\mathsf{T}}(s_{t+1}) + \Sigma_v(s_{t+1}),
    \Sigma_{vh} = B(s_{t+1}) \Sigma_{hh}, \qquad \mu_v = B(s_{t+1}) A(s_{t+1}) f(i_t, s_t), \qquad \mu_h = A(s_{t+1}) f(i_t, s_t)    (5)

and then conditioning on v_{t+1}.(1) For the case S = 1, this gives the usual Kalman filter recursions [1].

(1) p(x|y) is a Gaussian with mean \mu_x + \Sigma_{xy} \Sigma_{yy}^{-1} (y - \mu_y) and covariance \Sigma_{xx} - \Sigma_{xy} \Sigma_{yy}^{-1} \Sigma_{yx}.

Evaluating p(s_t, i_t | s_{t+1}, v_{1:t+1}). The mixture weight in (4) can be found from the decomposition

    p(s_t, i_t | s_{t+1}, v_{1:t+1}) \propto p(v_{t+1} | i_t, s_t, s_{t+1}, v_{1:t})\, p(s_{t+1} | i_t, s_t, v_{1:t})\, p(i_t | s_t, v_{1:t})\, p(s_t | v_{1:t})    (6)

The first factor in (6), p(v_{t+1} | i_t, s_t, s_{t+1}, v_{1:t}), is a Gaussian with mean mu_v and covariance Sigma_vv, as given in (5). The last two factors, p(i_t | s_t, v_{1:t}) and p(s_t | v_{1:t}), are given from the previous iteration. Finally, p(s_{t+1} | i_t, s_t, v_{1:t}) is found from

    p(s_{t+1} | i_t, s_t, v_{1:t}) = \langle p(s_{t+1} | h_t, s_t) \rangle_{p(h_t | i_t, s_t, v_{1:t})}    (7)

where the angle brackets denote expectation with respect to the subscripted distribution. In the SLDS, (7) is replaced by the Markov transition p(s_{t+1} | s_t). In the aSLDS, however, (7) will generally need to be computed numerically.

Closing the recursion. We are now in a position to calculate (4). For each setting of the variable s_{t+1}, we have a mixture of I x S Gaussians, which we numerically collapse back to I Gaussians to form

    p(h_{t+1} | s_{t+1}, v_{1:t+1}) \approx \sum_{i_{t+1}=1}^{I} p(h_{t+1} | i_{t+1}, s_{t+1}, v_{1:t+1})\, p(i_{t+1} | s_{t+1}, v_{1:t+1})

Any method of choice may be used to collapse a mixture to a smaller mixture; our code simply repeatedly merges low-weight components. In this way the new mixture coefficients p(i_{t+1} | s_{t+1}, v_{1:t+1}), i_{t+1} in 1, ..., I, are defined, completing the description of how to form a recursion for p(h_{t+1} | s_{t+1}, v_{1:t+1}) in (3). A recursion for the switch variable is given by

    p(s_{t+1} | v_{1:t+1}) \propto \sum_{s_t, i_t} p(v_{t+1} | s_{t+1}, i_t, s_t, v_{1:t})\, p(s_{t+1} | i_t, s_t, v_{1:t})\, p(i_t | s_t, v_{1:t})\, p(s_t | v_{1:t})

where all terms have been computed during the recursion for p(h_{t+1} | s_{t+1}, v_{1:t+1}). A sketch of a single Gaussian branch of this forward step is given below.
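The following numpy sketch works out one Gaussian branch of the ADF forward pass: for a fixed component (i_t, s_t) and switch value s_{t+1}, it forms the joint statistics of equation (5) and then conditions on the new observation as in footnote (1). The mixture bookkeeping, (6)-(7), and the collapse are omitted here.

    import numpy as np
    from scipy.stats import multivariate_normal

    def forward_step(f, F, A, B, Sh, Sv, v_next):
        """One Gaussian branch of the ADF forward pass.

        (f, F):        filtered mean/covariance of component (i_t, s_t)
        A, B, Sh, Sv:  parameters selected for s_{t+1}
        Returns the updated (f', F') and the log of the likelihood factor
        p(v_{t+1} | i_t, s_t, s_{t+1}, v_{1:t})."""
        mu_h = A @ f                                  # predicted hidden mean
        S_hh = A @ F @ A.T + Sh                       # predicted hidden covariance
        mu_v = B @ mu_h                               # predicted observation mean
        S_vv = B @ S_hh @ B.T + Sv                    # predicted observation covariance
        S_hv = S_hh @ B.T                             # Cov(h_{t+1}, v_{t+1})
        G = S_hv @ np.linalg.inv(S_vv)                # Kalman-style gain
        f_new = mu_h + G @ (v_next - mu_v)            # conditioned mean (footnote 1)
        F_new = S_hh - G @ S_hv.T                     # conditioned covariance
        loglik = multivariate_normal.logpdf(v_next, mean=mu_v, cov=S_vv)
        return f_new, F_new, loglik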
The likelihood p(v_{1:T}) may be found by recursing p(v_{1:t+1}) = p(v_{t+1} | v_{1:t}) p(v_{1:t}), where

    p(v_{t+1} | v_{1:t}) = \sum_{i_t, s_t, s_{t+1}} p(v_{t+1} | i_t, s_t, s_{t+1}, v_{1:t})\, p(s_{t+1} | i_t, s_t, v_{1:t})\, p(i_t | s_t, v_{1:t})\, p(s_t | v_{1:t})

2.2 Backward Pass (Smoothing)

The main contribution of this paper is to find a suitable way to "correct" the filtered posterior p(s_t, h_t | v_{1:t}) obtained from the forward pass into a smoothed posterior p(s_t, h_t | v_{1:T}). We derive this for the case of a single Gaussian representation; the extension to the mixture case is straightforward and presented in [7]. We approximate the smoothed posterior p(h_t | s_t, v_{1:T}) by a Gaussian with mean g(s_t) and covariance G(s_t), and our aim is to find a recursion for these parameters. A useful starting point for a recursion is

    p(h_t, s_t | v_{1:T}) = \sum_{s_{t+1}} p(s_{t+1} | v_{1:T})\, p(h_t | s_t, s_{t+1}, v_{1:T})\, p(s_t | s_{t+1}, v_{1:T})

The term p(h_t | s_t, s_{t+1}, v_{1:T}) may be computed as

    p(h_t | s_t, s_{t+1}, v_{1:T}) = \int_{h_{t+1}} p(h_t | h_{t+1}, s_t, s_{t+1}, v_{1:t})\, p(h_{t+1} | s_t, s_{t+1}, v_{1:T})    (8)

The recursion therefore requires p(h_{t+1} | s_t, s_{t+1}, v_{1:T}), which we can write as

    p(h_{t+1} | s_t, s_{t+1}, v_{1:T}) \propto p(h_{t+1} | s_{t+1}, v_{1:T})\, p(s_t | s_{t+1}, h_{t+1}, v_{1:t})    (9)

The difficulty here is that the functional form of p(s_t | s_{t+1}, h_{t+1}, v_{1:t}) is not squared exponential in h_{t+1}, so that p(h_{t+1} | s_t, s_{t+1}, v_{1:T}) will not be Gaussian.(2) One possibility would be to approximate the non-Gaussian p(h_{t+1} | s_t, s_{t+1}, v_{1:T}) by a Gaussian (or mixture thereof) by minimizing the Kullback-Leibler divergence between the two, or performing moment matching in the case of a single Gaussian. A simpler alternative (which forms "standard" EC) is to make the assumption p(h_{t+1} | s_t, s_{t+1}, v_{1:T}) ~ p(h_{t+1} | s_{t+1}, v_{1:T}), where p(h_{t+1} | s_{t+1}, v_{1:T}) is already known from the previous backward recursion. Under this assumption, the recursion becomes

    p(h_t, s_t | v_{1:T}) \approx \sum_{s_{t+1}} p(s_{t+1} | v_{1:T})\, p(s_t | s_{t+1}, v_{1:T})\, \langle p(h_t | h_{t+1}, s_t, s_{t+1}, v_{1:t}) \rangle_{p(h_{t+1} | s_{t+1}, v_{1:T})}    (10)

(2) In the exact calculation, p(h_{t+1} | s_t, s_{t+1}, v_{1:T}) is a mixture of Gaussians, see [7]. However, since in (9) the two terms will only be approximately computed during the recursion, our approximation to p(h_{t+1} | s_t, s_{t+1}, v_{1:T}) will not be a mixture of Gaussians.

Evaluating the average of p(h_t | h_{t+1}, s_t, s_{t+1}, v_{1:t}) over p(h_{t+1} | s_{t+1}, v_{1:T}). This average is a Gaussian in h_t, whose statistics we now compute. First we find p(h_t | h_{t+1}, s_t, s_{t+1}, v_{1:t}), which may be obtained from the joint distribution

    p(h_t, h_{t+1} | s_t, s_{t+1}, v_{1:t}) = p(h_{t+1} | h_t, s_{t+1})\, p(h_t | s_t, v_{1:t})    (11)

which itself can be found by a forward dynamics step from the filtered estimate p(h_t | s_t, v_{1:t}). The statistics for the marginal p(h_t | s_t, s_{t+1}, v_{1:t}) are simply those of p(h_t | s_t, v_{1:t}), since s_{t+1} carries no extra information about h_t. The remaining statistics are the mean of h_{t+1}, the covariance of h_{t+1}, and the cross-covariance between h_t and h_{t+1}, which are given by

    \langle h_{t+1} \rangle = A(s_{t+1}) f_t(s_t), \qquad \Sigma_{t+1,t+1} = A(s_{t+1}) F_t(s_t) A^{\mathsf{T}}(s_{t+1}) + \Sigma_h(s_{t+1}), \qquad \Sigma_{t+1,t} = A(s_{t+1}) F_t(s_t)

Given the statistics of (11), we may now condition on h_{t+1} to find p(h_t | h_{t+1}, s_t, s_{t+1}, v_{1:t}). Doing so effectively constitutes a reversal of the dynamics,

    h_t = \overleftarrow{A}(s_t, s_{t+1})\, h_{t+1} + \overleftarrow{\eta}(s_t, s_{t+1})

where \overleftarrow{A}(s_t, s_{t+1}) and \overleftarrow{\eta}(s_t, s_{t+1}) \sim \mathcal{N}(\overleftarrow{m}(s_t, s_{t+1}), \overleftarrow{\Sigma}(s_t, s_{t+1})) are easily found using conditioning.
Averaging the above reversed dynamics over p(h_{t+1} | s_{t+1}, v_{1:T}), we find that the average of p(h_t | h_{t+1}, s_t, s_{t+1}, v_{1:t}) is a Gaussian with statistics

    \mu_t = \overleftarrow{A}(s_t, s_{t+1})\, g(s_{t+1}) + \overleftarrow{m}(s_t, s_{t+1}), \qquad \Sigma_{t,t} = \overleftarrow{A}(s_t, s_{t+1})\, G(s_{t+1})\, \overleftarrow{A}^{\mathsf{T}}(s_t, s_{t+1}) + \overleftarrow{\Sigma}(s_t, s_{t+1})

These equations directly mirror the standard RTS backward pass [1].

Evaluating p(s_t | s_{t+1}, v_{1:T}). The main departure of EC from previous methods is in treating the term

    p(s_t | s_{t+1}, v_{1:T}) = \langle p(s_t | h_{t+1}, s_{t+1}, v_{1:t}) \rangle_{p(h_{t+1} | s_{t+1}, v_{1:T})}    (12)

The term p(s_t | h_{t+1}, s_{t+1}, v_{1:t}) is given by

    p(s_t | h_{t+1}, s_{t+1}, v_{1:t}) = \frac{p(h_{t+1} | s_{t+1}, s_t, v_{1:t})\, p(s_t, s_{t+1} | v_{1:t})}{\sum_{s'_t} p(h_{t+1} | s_{t+1}, s'_t, v_{1:t})\, p(s'_t, s_{t+1} | v_{1:t})}    (13)

Here p(s_t, s_{t+1} | v_{1:t}) = p(s_{t+1} | s_t, v_{1:t}) p(s_t | v_{1:t}), where p(s_{t+1} | s_t, v_{1:t}) occurs in the forward pass, (7). In (13), p(h_{t+1} | s_{t+1}, s_t, v_{1:t}) is found by marginalizing (11). Computing the average of (13) with respect to p(h_{t+1} | s_{t+1}, v_{1:T}) may be achieved by any numerical integration method desired. A simple approximation is to evaluate the integrand at the mean value of the averaging distribution p(h_{t+1} | s_{t+1}, v_{1:T}). More sophisticated methods (see [7]), such as sampling from the Gaussian p(h_{t+1} | s_{t+1}, v_{1:T}), have the advantage that covariance information is used.(3)

Closing the recursion. We have now computed both the continuous and discrete factors in (8), which we wish to use to write the smoothed estimate in the form p(h_t, s_t | v_{1:T}) = p(s_t | v_{1:T}) p(h_t | s_t, v_{1:T}). The distribution p(h_t | s_t, v_{1:T}) is readily obtained from the joint (8) by conditioning on s_t to form the mixture

    p(h_t | s_t, v_{1:T}) = \sum_{s_{t+1}} p(s_{t+1} | s_t, v_{1:T})\, p(h_t | s_t, s_{t+1}, v_{1:T})

which may then be collapsed to a single Gaussian (the mixture case is discussed in [7]; a sketch of this collapse operation is given below). The smoothed posterior p(s_t | v_{1:T}) is given by

    p(s_t | v_{1:T}) = \sum_{s_{t+1}} p(s_{t+1} | v_{1:T})\, \langle p(s_t | h_{t+1}, s_{t+1}, v_{1:t}) \rangle_{p(h_{t+1} | s_{t+1}, v_{1:T})}    (14)

(3) This is a form of exact sampling, since drawing samples from a Gaussian is easy. This should not be confused with meaning that this use of sampling renders EC a sequential Monte Carlo scheme.
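The collapse operations used throughout EC reduce a Gaussian mixture to fewer components. The simplest instance, matching the first two moments with a single Gaussian, can be sketched as follows; the array shapes are illustrative.

    import numpy as np

    def collapse_to_gaussian(weights, means, covs):
        """Moment-match a Gaussian mixture to a single Gaussian.

        weights: (K,), means: (K, H), covs: (K, H, H)."""
        w = np.asarray(weights) / np.sum(weights)
        mu = np.einsum("k,kh->h", w, means)                    # mixture mean
        diff = means - mu                                      # component offsets
        cov = np.einsum("k,khg->hg", w, covs) \
            + np.einsum("k,kh,kg->hg", w, diff, diff)          # within + between spread
        return mu, cov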
2.3 Relation to other methods

The EC backward pass is closely related to Kim's method [8]. In both EC and Kim's method, the approximation p(h_{t+1} | s_t, s_{t+1}, v_{1:T}) ~ p(h_{t+1} | s_{t+1}, v_{1:T}) is used to form a numerically simple backward pass. The other "approximation" in EC is to numerically compute the average in (14). In Kim's method, however, an update for the discrete variables is formed by replacing the required term in (14) by

    \langle p(s_t | h_{t+1}, s_{t+1}, v_{1:t}) \rangle_{p(h_{t+1} | s_{t+1}, v_{1:T})} \approx p(s_t | s_{t+1}, v_{1:t})    (15)

Since p(s_t | s_{t+1}, v_{1:t}) is proportional to p(s_{t+1} | s_t) p(s_t | v_{1:t}) / p(s_{t+1} | v_{1:t}), this can be computed simply from the filtered results alone. The fundamental difference between EC and Kim's method is therefore that the approximation (15) is not required by EC. The EC backward pass thus makes fuller use of the future information, resulting in a recursion which intimately couples the continuous and discrete variables. The resulting effect on the quality of the approximation can be profound, as we will see in the experiments.

The Expectation Propagation (EP) algorithm makes the central assumption of collapsing the posteriors to a Gaussian family [5]; the collapse is defined by a consistency criterion on overlapping marginals. In our experiments, we take the approach in [9] of collapsing to a single Gaussian. Ensuring consistency requires frequent translations between moment and canonical parameterizations, which is the origin of potentially severe numerical instability [10]. In contrast, EC works largely with moment parameterizations of Gaussians, for which relatively few numerical difficulties arise. Unlike EP, EC is not based on a consistency criterion, and a subtle issue arises about possible inconsistencies in the forward and backward approximations of EC. For example, under the conditional independence assumption in the backward pass, p(h_T | s_{T-1}, s_T, v_{1:T}) ~ p(h_T | s_T, v_{1:T}), which is in contradiction to (5), which states that the approximation to p(h_T | s_{T-1}, s_T, v_{1:T}) will depend on s_{T-1}. Such potential inconsistencies arise because of the approximations made, and should not be considered as separate approximations in themselves. Rather than using a global (consistency) objective, EC attempts to faithfully approximate the exact forward and backward propagation routines. For this reason, as in the exact computation, only a single forward and backward pass are required in EC.

In [11] a related dynamics reversal is proposed. However, the singularities resulting from incorrectly treating p(v_{t+1:T} | h_t, s_t) as a density are heuristically finessed. In [12] a variational method approximates the joint distribution p(h_{1:T}, s_{1:T} | v_{1:T}) rather than the marginal inference p(h_t, s_t | v_{1:T}). This is a disadvantage when compared to other methods that directly approximate the marginal. Sequential Monte Carlo methods (particle filters) [13] are essentially mixture-of-delta-function approximations. Whilst potentially powerful, these typically suffer in high-dimensional hidden spaces, unless techniques such as Rao-Blackwellization are applied. ADF is generally preferable to particle filtering, since in ADF the approximation is a mixture of non-trivial distributions, and is therefore better able to represent the posterior.

3 Demonstration

Testing EC in a problem with a reasonably long temporal sequence T is important, since numerical instabilities may not be apparent in time-series of just a few points. To do this, we sequentially generate hidden and visible states from a given model, here with H = 3, S = 2, V = 1; see Figure 2 for full details of the experimental setup. Then, given only the parameters of the model and the visible observations (but not any of the hidden states h_{1:T}, s_{1:T}), the task is to infer p(h_t | s_t, v_{1:T}) and p(s_t | v_{1:T}). Since the exact computation is exponential in T, a simple alternative is to assume that the original sample states s_{1:T} are the "correct" inferences, and compare our most probable smoothed estimates argmax_{s_t} p(s_t | v_{1:T}) with the assumed correct sample s_t. We chose conditions that, from the viewpoint of classical signal processing, are difficult, with changes in the switches occurring at a much higher rate than the typical frequencies in the signal v_t. For EC we use the mean approximation for the numerical integration of (12). We included the Particle Filter merely for a point of comparison with ADF, since particle filters are not designed to approximate the smoothed estimate; 1000 particles were used, with Kitagawa resampling.

Figure 2: The number of errors in estimating p(s_t | v_{1:T}) for a binary switch (S = 2) over a time series of length T = 100; 50 errors therefore corresponds to random guessing. Plotted are histograms of the errors over 1000 experiments; the x-axes are cut off at 20 errors to improve visualization of the results. (PF) Particle Filter. (RBPF) Rao-Blackwellized PF. (EP) Expectation Propagation. (ADFS) Assumed Density Filtering using a single Gaussian. (KimS) Kim's smoother using the results from ADFS. (ECS) Expectation Correction using a single Gaussian (I = J = 1). (ADFM) ADF using a mixture of I = 4 Gaussians. (KimM) Kim's smoother using the results from ADFM. (ECM) Expectation Correction using a mixture with I = J = 4 components. Setup: S = 2, V = 1 (scalar observations), T = 100, with zero output bias; A(s) = 0.9999 * orth(randn(H, H)), B(s) = randn(V, H), H = 3, Sigma_h(s) = I_H, Sigma_v(s) = 0.1 I_V, p(s_{t+1} | s_t) proportional to 1_{SxS} + I_S; at time t = 1 the switch prior is uniform, with h_1 drawn from N(10 * randn(H, 1), I_H).
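For reference, the toy setup in the Figure 2 caption can be transcribed to numpy as below; it can be fed directly to the sample_slds sketch given earlier. Reading "orth" as the orthogonal factor of a QR decomposition is an assumption about the MATLAB-style notation in the caption.

    import numpy as np

    rng = np.random.default_rng(0)
    H, S, V, T = 3, 2, 1, 100
    A = [0.9999 * np.linalg.qr(rng.standard_normal((H, H)))[0] for _ in range(S)]
    B = [rng.standard_normal((V, H)) for _ in range(S)]
    Sh = [np.eye(H) for _ in range(S)]          # hidden noise covariance I_H
    Sv = [0.1 * np.eye(V) for _ in range(S)]    # visible noise covariance 0.1 I_V
    P = np.ones((S, S)) + np.eye(S)
    P /= P.sum(axis=1, keepdims=True)           # p(s_{t+1} | s_t) proportional to 1 + I
    h1 = rng.multivariate_normal(10 * rng.standard_normal(H), np.eye(H))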
For the Rao-Blackwellized Particle Filter [13], 500 particles were used, with Kitagawa resampling. We found that EP(4) was numerically unstable and often struggled to converge. To encourage convergence, we used the damping method in [9], performing 20 iterations with a damping factor of 0.5. Nevertheless, the disappointing performance of EP is most likely due to conflicts resulting from numerical instabilities introduced by the frequent conversions between moment and canonical representations. The best filtered results are given by ADF, since it is better able to represent the variance in the filtered posterior than the sampling methods. Unlike Kim's method, EC makes good use of the future information to clean up the filtered results considerably. One should bear in mind that both EC and Kim's method use the same ADF filtered results. This demonstrates that EC may dramatically improve on Kim's method, so that the small amount of extra work in making a numerical approximation of p(s_t | s_{t+1}, v_{1:T}), (12), may bring significant benefits. We found similar conclusions for experiments with an aSLDS [7].

4 Application to Noise Robust ASR

Here we briefly present an application of the SLDS to robust Automatic Speech Recognition (ASR), for which the intractable inference is performed by EC; this also serves to demonstrate how EC scales to a large application. Fuller details are given in [14]. The standard approach to noise robust ASR is to provide a set of noise-robust features to a standard Hidden Markov Model (HMM) classifier, which is based on modeling the acoustic feature vector. For example, the method of Unsupervised Spectral Subtraction (USS) [15] provides state-of-the-art performance in this respect. Incorporating noise models directly into such feature-based HMM systems is difficult, mainly because the explicit influence of the noise on the features is poorly understood. An alternative is to model the raw speech signal directly, such as the SAR-HMM model [16], for which, under clean conditions, isolated spoken digit recognition performs well. However, the SAR-HMM performs poorly under noisy conditions, since no explicit noise processes are taken into account by the model. The approach we take here is to extend the SAR-HMM to include an explicit noise process, so that the observed signal v_t is modeled as a noise-corrupted version of a clean hidden signal v_t^h:

    v_t = v^h_t + \tilde{\eta}_t, \qquad \tilde{\eta}_t \sim \mathcal{N}(0, \tilde{\sigma}^2)

(4) Generalized EP [5], which groups variables together, improves on the results, but is still far inferior to the EC results presented here (Onno Zoeter, personal communication).
Noise variance:  0       1e-7    1e-6    1e-5    1e-4    1e-3
SNR (dB):        26.5    26.3    25.1    19.7    10.6    0.7
HMM:             100.0%  100.0%  90.9%   86.4%   59.1%   9.1%
SAR-HMM:         97.0%   79.8%   56.7%   22.2%   9.7%    9.1%
AR-SLDS:         96.8%   96.8%   96.4%   94.8%   84.0%   61.2%

Table 1: Comparison of the recognition accuracy of three models when the test utterances are corrupted by various levels of Gaussian noise.

The dynamics of the clean signal is modeled by a switching AR process

    v^h_t = -\sum_{r=1}^{R} c_r(s_t)\, v^h_{t-r} + \eta^h_t(s_t), \qquad \eta^h_t(s_t) \sim \mathcal{N}(0, \sigma^2(s_t))

where s_t in {1, ..., S} denotes which of a set of AR coefficients c_r(s_t) are to be used at time t, and eta_t^h(s_t) is the so-called innovation noise. When sigma^2(s_t) -> 0, this model reproduces the SAR-HMM of [16], a specially constrained HMM. Hence inference and learning for the SAR-HMM are tractable and straightforward. For the case sigma^2(s_t) > 0, the model can be recast as an SLDS. To do this we define h_t as a vector which contains the R most recent clean hidden samples

    h_t = \begin{bmatrix} v^h_t & \cdots & v^h_{t-R+1} \end{bmatrix}^{\mathsf{T}}    (16)

and we set A(s_t) to be an R x R matrix whose first row contains the AR coefficients -c_r(s_t), with a shifted-down identity matrix below. For example, for a third-order (R = 3) AR process,

    A(s_t) = \begin{bmatrix} -c_1(s_t) & -c_2(s_t) & -c_3(s_t) \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix}    (17)

The hidden covariance matrix Sigma_h(s) has all elements zero, except the top-left entry, which is set to the innovation variance. To extract the first component of h_t we use the (switch-independent) 1 x R projection matrix B = [1 0 ... 0]. The (switch-independent) visible scalar noise variance is given by Sigma_v = sigma-tilde^2. (A code sketch of this embedding is given at the end of this section.) A well-known issue with raw speech signal models is that the energy of a signal may vary from one speaker to another or because of a change in recording conditions. For this reason the innovation Sigma_h is adjusted by maximizing the likelihood of an observed sequence with respect to the innovation covariance, a process called Gain Adaptation [16].

4.1 Training and Evaluation

Following [16], we trained a separate SAR-HMM for each of the eleven digits (0-9 and "oh") from the TI-DIGITS database [17]. The training set for each digit was composed of 110 single-digit utterances down-sampled to 8 kHz, each one pronounced by a male speaker. Each SAR-HMM was composed of ten states with a left-right transition matrix. Each state was associated with a 10th-order AR process, and the model was constrained to stay an integer multiple of K = 140 time steps (0.0175 seconds) in the same state. We refer the reader to [16] for a detailed explanation of the training procedure used with the SAR-HMM. An AR-SLDS was built for each of the eleven digits by copying the parameters of the corresponding trained SAR-HMM, i.e., the AR coefficients c_r(s) are copied into the first row of the hidden transition matrix A(s) and the same discrete transition distribution p(s_t | s_{t-1}) is used. The models were then evaluated on a test set composed of 112 corrupted utterances of each of the eleven digits, each pronounced by different male speakers than those used in the training set. The recognition accuracy obtained by the models on the corrupted test sets is presented in Table 1. As expected, the performance of the SAR-HMM rapidly decreases with noise. The feature-based HMM with USS has high accuracy only for high SNR levels. In contrast, the AR-SLDS achieves a recognition accuracy of 61.2% at an SNR close to 0 dB, while the performance of the two other methods is equivalent to random guessing (9.1%).
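As a concrete reference for the AR-to-SLDS embedding above, the following sketch builds A(s) and Sigma_h(s) from equations (16)-(17); the example coefficients are hypothetical.

    import numpy as np

    def ar_to_slds_transition(c):
        """Build the R x R transition matrix A(s) of equation (17) from the AR
        coefficients c = [c_1, ..., c_R] of one switch state: first row -c_r,
        shifted-down identity below."""
        R = len(c)
        A = np.zeros((R, R))
        A[0, :] = -np.asarray(c)
        A[1:, :-1] = np.eye(R - 1)      # shift: copies v_{t-r} one slot down
        return A

    def ar_innovation_cov(R, sigma2):
        """Hidden covariance Sigma_h(s): zero except the innovation variance
        in the top-left entry."""
        Sh = np.zeros((R, R))
        Sh[0, 0] = sigma2
        return Sh

    # Example for a third-order process as in (17):
    A = ar_to_slds_transition([0.5, -0.2, 0.1])   # hypothetical coefficients
    B = np.eye(1, A.shape[0])                     # projection [1 0 ... 0]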
Whilst other inference methods may also perform well in this case, we found that EC performs admirably, without numerical instabilities, even for time-series with several thousand time-steps.

5 Discussion

We presented a method for approximate smoothed inference in an augmented class of switching linear dynamical systems. Our approximation is based on the idea that, due to the forgetting which commonly occurs in Markovian models, a finite number of mixture components may provide a reasonable approximation. Clearly, in systems with very long correlation times our method may require too many mixture components to produce a satisfactory result, although we are unaware of other techniques that would cope well in that case. The main benefit of EC over Kim smoothing is that future information is more accurately dealt with. Whilst EC is not as general as EP, EC carefully exploits the properties of singly-connected distributions, such as the aSLDS, to provide a numerically stable procedure. We hope that the ideas presented here may therefore help facilitate the practical application of dynamic hybrid networks.

Acknowledgements

This work is supported by the EU Project FP6-0027787. This paper only reflects the authors' views, and funding agencies are not liable for any use that may be made of the information contained herein.

References

[1] Y. Bar-Shalom and X.-R. Li. Estimation and Tracking: Principles, Techniques and Software. Artech House, Norwood, MA, 1998.
[2] V. Pavlovic, J. M. Rehg, and J. MacCormick. Learning switching linear models of human motion. In Advances in Neural Information Processing Systems (NIPS 13), pages 981-987, 2001.
[3] A. T. Cemgil, B. Kappen, and D. Barber. A generative model for music transcription. IEEE Transactions on Audio, Speech and Language Processing, 14(2):679-694, 2006.
[4] U. N. Lerner. Hybrid Bayesian Networks for Reasoning about Complex Systems. PhD thesis, Stanford University, 2002.
[5] O. Zoeter. Monitoring non-linear and switching dynamical systems. PhD thesis, Radboud University Nijmegen, 2005.
[6] T. Minka. A family of algorithms for approximate Bayesian inference. PhD thesis, MIT Media Lab, 2001.
[7] D. Barber. Expectation Correction for smoothed inference in switching linear dynamical systems. Journal of Machine Learning Research, 7:2515-2540, 2006.
[8] C.-J. Kim. Dynamic linear models with Markov-switching. Journal of Econometrics, 60:1-22, 1994.
[9] T. Heskes and O. Zoeter. Expectation Propagation for approximate inference in dynamic Bayesian networks. In A. Darwiche and N. Friedman, editors, Uncertainty in Artificial Intelligence, pages 216-223, 2002.
[10] S. Lauritzen and F. Jensen. Stable local computation with conditional Gaussian distributions. Statistics and Computing, 11:191-203, 2001.
[11] G. Kitagawa. The two-filter formula for smoothing and an implementation of the Gaussian-sum smoother. Annals of the Institute of Statistical Mathematics, 46(4):605-623, 1994.
[12] Z. Ghahramani and G. E. Hinton. Variational learning for switching state-space models. Neural Computation, 12(4):963-996, 1998.
[13] A. Doucet, N. de Freitas, and N. Gordon. Sequential Monte Carlo Methods in Practice. Springer, 2001.
[14] B. Mesot and D. Barber. Switching linear dynamical systems for noise robust speech recognition. IDIAP-RR 08, 2006.
[15] G. Lathoud, M. Magimai-Doss, B. Mesot, and H. Bourlard. Unsupervised spectral subtraction for noise-robust ASR. In Proceedings of ASRU 2005, pages 189-194, November 2005.
[16] Y. Ephraim and W. J. J. Roberts.
Revisiting autoregressive hidden Markov modeling of speech signals. IEEE Signal Processing Letters, 12(2):166-169, February 2005.
[17] R. G. Leonard. A database for speaker independent digit recognition. In Proceedings of ICASSP 84, volume 3, 1984.
A selective attention multi-chip system with dynamic synapses and spiking neurons

Chiara Bartolozzi
Institute of Neuroinformatics, UNI-ETH Zurich
Winterthurerstr. 190, 8057, Switzerland
chiara@ini.phys.ethz.ch

Giacomo Indiveri
Institute of Neuroinformatics, UNI-ETH Zurich
Winterthurerstr. 190, 8057, Switzerland
giacomo@ini.phys.ethz.ch

Abstract

Selective attention is the strategy used by biological sensory systems to solve the problem of limited parallel processing capacity: salient subregions of the input stimuli are serially processed, while non-salient regions are suppressed. We present a mixed-mode analog/digital Very Large Scale Integration implementation of a building block for a multi-chip neuromorphic hardware model of selective attention. We describe the chip's architecture and its behavior when it is part of a multi-chip system with a spiking retina as input, and show how it can be used to implement flexible models of bottom-up attention in real time.

1 Introduction

Biological systems interact with the outside world in real time, reacting to complex stimuli within a few milliseconds. This is a highly demanding computational task that requires either very high speed sequential computation or fast massively parallel processing. Real systems, however, have to cope with limited resources. Biological systems solve this issue by sequentially allocating computational resources to small regions of the input stimuli and analyzing them in parallel, a strategy known as selective attention that takes advantage of both sequential and parallel processing. A wise approach to the design of artificial systems that need to interact with the real world in real time is to take inspiration from the strategies developed by biological systems.

The psychophysical study of selective attention distinguished two complementary strategies for the selection of salient regions of the input stimuli: one depending on the physical (bottom-up) characteristics of the input, the other depending on its semantic (top-down) and task-related properties. Much of the applied research has focused on modeling the bottom-up aspect of selective attention. As a consequence, several software [1, 2, 3] and hardware models [4, 5, 6, 7] based on the concepts of saliency map, winner-takes-all (WTA) competition, and inhibition of return (IOR) [8] have been proposed.

We focus on hardware implementations of such selective attention systems on compact, low-power, analogue VLSI chips. Previous implementations focused either on very abstract object/attention WTA architectures with dedicated single-chip solutions [6], or on very detailed models of spike-based competitive networks [9]; we propose a multi-chip solution that combines the advantages of spike-based communication of signals across chips with a dedicated and compact WTA architecture for implementing competition among a large number of elements in parallel. Specifically, we present a new chip with 32 × 32 cells that can sequentially select the most active regions of the input stimuli: the Selective Attention Chip (SAC). It is a transceiver chip employing a spike-based representation (AER, Address-Event Representation [10]). Its input signals topographically encode the local conspicuousness of the input over the entire visual scene. Its output signals can be used in real time to drive motors of active vision systems or to select subregions of images captured by wide field-of-view cameras.
The AER communication protocol and the 2-D structure of the network make it particularly suitable for processing signals from silicon spiking retinas.

The basic circuits of the chip we present have already been proposed in [11]. The chip we present here comprises improvements in the basic circuits, and additional dynamic components that will be described in Section 3. The chip's improvements over previous implementations arise from the design of new AER interfacing circuits, both for the input decoding stage and the output arbitration, and of new synaptic circuits: the Diff-Pair Integrator (DPI) described in [12]. The DPI is a compact log-domain circuit that reproduces the time course of biological post-synaptic currents. Besides having easily and independently tunable gain and time constant, it produces mean currents proportional to the input frequencies, which are more suitable as input for the current-mode WTA cell employed as the core computational unit of the SAC. This new circuit allows the analysis of the properties of the chip, including the effect of introducing additional dynamic properties into the circuits, such as Short-Term Depression (STD) [13, 14] in the input synapses and spike-frequency adaptation in the output Integrate-and-Fire (I&F) neurons [15].

In the next sections we describe the chip's architecture and present experimental results from a two-chip system comprising the SAC and a silicon "transient" retina that produces spikes in response to temporal changes in scene contrast.

2 The Selective Attention Chip

We fabricated a prototype of the SAC in standard AMS 0.35 µm CMOS technology. The chip comprises an array of 32 × 32 pixels; each one is 90 × 45 µm², and the whole chip, with AER digital interface and pads, occupies an area of 10 mm². The basic functionality of the SAC is to scan the input in order of decreasing activity. The chip input and output signals are asynchronous digital pulses (spikes) that use the Address-Event Representation (AER) [16]. The input spikes to each pixel are translated into a current (see Iex of Fig. 1) by a circuit that models the dynamics of a biological excitatory synapse [12]. A current-mode hysteretic Winner-Take-All (WTA) competitive cell compares the input currents of each pixel; the winning cell sources a constant current to the corresponding output leaky Integrate-and-Fire (I&F) neuron [15]. The spiking neuron in the array then signals which pixel is winning the competition for saliency, and therefore the pixel that receives the highest input frequency. The output spikes of the I&F neuron are also sent to a feedback inhibitory synapse (see Fig. 1) that subtracts current (Iior) from the input node of the WTA cell; the net input current to the winner pixel is thereby decreased, and a new pixel can eventually be selected. This self-inhibition mechanism is known as Inhibition of Return (IOR) and allows the network to select sequentially the most salient regions of input images, reproducing the attentional scan path.

[Figure 1: Block diagram of a basic cell of the 32 × 32 selective attention architecture: the AER input drives an excitatory synapse (Iex) and an inhibitory IOR synapse (Iior), which feed a hysteretic WTA cell coupled to its nearest neighbors; the WTA output drives an I&F neuron whose spikes form the AER output and the IOR feedback.]

This basic functionality of the SAC is augmented by the introduction of dynamic properties such as Short-Term Depression (STD) in the input synapses and spike-frequency adaptation in the output neuron.
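To make the scan-before-suppress loop above concrete, the following sketch is a minimal software caricature of the SAC selection dynamics, assuming a discrete-time update in place of the analog circuit: input rates stand in for the synapse currents Iex, a hysteresis bonus stands in for the winner's self-excitation, and an accumulating, decaying inhibition stands in for Iior. All rate values, gains, and time constants are made-up illustration parameters, not measurements from the chip.

```python
import numpy as np

def attention_scan(rates, steps=8, dt=1e-3, hysteresis=0.1,
                   ior_gain=0.3, ior_tau=0.1):
    """Toy discrete-time analogue of the SAC selection loop (hypothetical
    parameters). Returns the sequence of winning pixels (the scan path)."""
    inhibition = np.zeros_like(rates, dtype=float)   # stands in for Iior
    winner, path = None, []
    for _ in range(steps):
        net = rates - inhibition                     # Iex - Iior per pixel
        if winner is not None:
            net[winner] += hysteresis                # winner self-excitation
        winner = np.unravel_index(np.argmax(net), net.shape)
        path.append(winner)
        inhibition[winner] += ior_gain               # winner spikes charge IOR
        inhibition *= np.exp(-dt / ior_tau)          # all IOR currents decay
    return path

rates = 0.2 * np.random.rand(8, 8)                   # background activity
rates[2, 3], rates[5, 6] = 3.0, 2.5                  # two salient pixels
print(attention_scan(rates))
```

Run on a map with two salient patches, the loop first locks onto the strongest one (the hysteresis bonus prevents chattering between near-ties), then IOR charge builds up until the second patch wins, reproducing the attentional scan path described in Section 2.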
STD is a property, observed in physiological recordings [17], of synapses that decrease their efficacy when they receive consecutive stimulations. In our synapse the effect of a single spike on the integrated current depends on a voltage, the synaptic weight. The initial weight of the synapse is set by an external voltage reference; then, as the synapse receives spikes, the effective synaptic weight decreases. STD is a local gain control that increases sensitivity to changes in the input and makes the synapse insensitive to constant stimulation.

Spike-frequency adaptation is another property of neurons that, when stimulated with constant input, decrease their output firing rate with time. The spiking frequency of the silicon I&F neuron is monotonic with its input current; the neuron's adaptation mechanism decreases its firing rate over time [15]. We exploit this property to decrease the output bandwidth of the SAC.

The SAC has been designed with tunable parameters that allow us to modify the strength of the synaptic contributions, the dynamics of synaptic short-term depression and of neuronal adaptation, as well as the spatial extent of competition and the dynamics of IOR. All these parameters enrich the dynamics of the network, which can be exploited to model the complex selective attention scan path.

3 Multi-Chip Selective Attention System

The SAC uses the asynchronous AER SCX (Silicon Cortex) protocol, which allows multiple AER chips to communicate using spikes, just like the cortex, and can be used in multi-chip systems with multiple senders and multiple receivers [18, 19]. Using this representation the SAC can exchange data while processing signals in parallel, in real time [20]. The communication protocol used and the SAC's bidimensional architecture make it particularly suitable for processing visual inputs coming from artificial spiking retinas.

We built a two-chip system, connecting a silicon retina [21] to the SAC input. The retina is an AER asynchronous imager that responds to contrast variations; it has 64 × 64 pixels that respond to on and off transients. A dedicated PCI-AER board [18] connects the retina to the SAC via a look-up table that maps the activity of the 64 × 64 pixels of the retina to the 32 × 32 pixels of the SAC. In this setup the mapping is linear, grouping 4 retina pixels to 1 SAC pixel; more complex mappings, such as a foveal mapping, will be tested in the future. The board also allows us to monitor the activity of both chips on a Linux desktop.

4 Experimental Data

We performed preliminary experiments with the two-chip setup described in the previous section. We stimulated the retina with two black squares flashing at 6 Hz on a white background, shown on an LCD screen using the Matlab PsychoToolbox [22], as illustrated in Fig. 2. In Fig. 3 we show the response of the two chips to this stimulus: each dot represents the mean firing rate of the corresponding pixel of each chip. The pixels of the retina that focus on the black squares are active and show a high mean firing rate; some other pixels in the array have spontaneous activity. To show the mapping between the retina and the SAC we performed a control experiment: we turned off the competition and the IOR, and also disabled STD and the neuronal adaptation; in this way, all the pixels that receive input activity are active.
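A minimal sketch of the short-term depression mechanism just described, in discrete event time rather than analog circuitry; the depression factor, recovery time constant, and spike times below are invented for illustration (spike-frequency adaptation can be modeled the same way, with an adaptation current that grows with each output spike and leaks away between spikes).

```python
import numpy as np

def std_synapse(spike_times, w0=1.0, depress=0.8, tau_rec=0.2):
    """Short-term depression: each presynaptic spike scales the effective
    weight down; between spikes the weight recovers toward its set point w0.
    Returns the effective weight applied to each spike (all parameters are
    hypothetical illustration values)."""
    w, t_prev, out = w0, None, []
    for t in spike_times:
        if t_prev is not None:
            # exponential recovery toward w0 during the inter-spike interval
            w = w0 + (w - w0) * np.exp(-(t - t_prev) / tau_rec)
        out.append(w)
        w *= depress          # use-dependent depression
        t_prev = t
    return out

# A constant-rate input quickly loses impact (local gain control) ...
print(std_synapse(np.arange(0.0, 0.5, 0.05)))
# ... while a pause lets the synapse recover its sensitivity to change.
print(std_synapse([0.0, 0.05, 0.10, 1.0]))
```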
All the pixels that receive input from the retina pixels stimulated by the black squares are active; moreover, the spontaneous activity (noise) of the other pixels is "cleaned", thanks to the filtering property of the input synapses.

In the next figures we show the response of the system to the stimulus described above while changing the settings of the SAC. In all the figures the top and bottom boxes show raster plots, respectively of the retina and of the SAC: each dot corresponds to a spike emitted by a pixel (or neuron) (y axis) at a certain time (x axis). The middle trace shows the voltage Vnet, which is proportional to the total input current (Iex − Iior of Fig. 1) of the WTA cell that receives input from one of the most active pixels of the retina.

[Figure 2: Multi-chip system. The retina (top-right box) is stimulated with an LCD screen; its output is sent to the SAC (bottom-right box) via the PCI-AER board (bottom-left box). The activity of the two chips is monitored via the PCI-AER board on a Linux desktop.]

[Figure 3: Response of the two chips to an image. (a) The silicon retina is stimulated, via an LCD screen, with two flashing (6 Hz) black squares on a white background (see Fig. 2); shown is the mean firing output of each retina pixel. The pixels corresponding to the black squares have a higher firing rate than the others; some retina pixels fire spontaneously at lower frequencies. (b) The activity of the retina is the input of the SAC: the 64 × 64 pixels of the retina are mapped with a 4:1 ratio to the 32 × 32 pixels of the SAC. Shown is the mean firing rate of the SAC pixels in response to the retinal stimulation when the Winner-Takes-All competition is disabled. In this case the SAC output reflects the input, with some suppression of the noisy pixels due to the filtering properties of the input synaptic circuits.]

In Fig. 4(a) we show the same data as in Fig. 3: the retina sends many spikes every time the squares appear on and disappear from the screen; with these settings the WTA input node receives only the excitatory current from the input synapse, as shown by the increase of the voltage Vnet in correspondence with the retinal spikes. Since in our control experiment there is no competition, all the stimulated pixels are active, as shown in the SAC raster plot. In Fig. 4(b) we show the effect of introducing spike-frequency adaptation: in this case the output frequency of each neuron decreases, reducing the output bandwidth and the AER-bus traffic.

In Fig. 5 we show the effect of competition and Inhibition of Return. When we turn on the WTA competition, only one pixel is selected at any time, and therefore only one neuron is firing, as shown in the raster plot of Fig. 5(a); on the node Vnet we can observe that when the corresponding neuron is winning there is an extra input current, because Vnet does not reset to its resting value when the synapse is inactive. This positive current implements a form of self-excitation that gives hysteretic properties to the network dynamics and stabilizes the WTA network.
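The 4:1 retina-to-SAC routing used throughout these experiments is just an address translation in the PCI-AER look-up table. The sketch below is a hypothetical reconstruction, assuming events carry (x, y) pixel addresses and that the linear mapping groups each 2×2 retina block onto one SAC pixel; the actual table format on the board may differ.

```python
def build_lut(retina_side=64, sac_side=32):
    """Hypothetical look-up table for the PCI-AER mapper: each 2x2 block
    of retina addresses is routed to one SAC address (the 4:1 linear
    mapping of the setup above)."""
    lut = {}
    for y in range(retina_side):
        for x in range(retina_side):
            lut[(x, y)] = (x * sac_side // retina_side,
                           y * sac_side // retina_side)
    return lut

lut = build_lut()
assert lut[(0, 0)] == lut[(1, 1)] == (0, 0)   # one 2x2 block -> one SAC pixel
assert lut[(63, 63)] == (31, 31)
```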
[Figure 4: Time response to black squares flashing on a white background, with the same stimulation and setup as in Fig. 3. The top panel shows the raster plot of the retina output; one dot corresponds to a spike produced by one pixel at a specific time. The retina produces events every time the squares appear on or disappear from the screen. The middle panel shows the voltage Vnet of the input node of the WTA cell corresponding to the synapse that receives input from one of the most active retina pixels. The bottom panel shows the raster plot of the SAC neurons. (a) The "control" experiment (same as in Fig. 3): competition, IOR, and all other features of the SAC are turned off, and the chip output reproduces the input, with some suppression of pixels receiving very low activity from the retina, thanks to the filtering properties of the input synapses. In the middle plot Vnet reflects the effect of the input current from the synapse alone, which integrates the spikes received from the corresponding retina pixel. Since the lateral inhibitory connections are switched off, there is no competition, and all the output I&F neurons corresponding to the stimulated input synapses are active. (b) Adding spike-frequency adaptation to the previous settings decreases the output firing rate of the neurons, reducing the bandwidth of the SAC output.]

If we turn on the inhibitory synapse (Fig. 5(b)), as soon as the neuron starts to fire, the inhibitory current decreases the total input current of the corresponding WTA cell: the voltage Vnet reflects this mechanism, as it is reset to its resting value even before the input from the retina ceases. The WTA cell is then deselected and the output neuron stops firing, while another neuron is selected and starts firing, as shown in the SAC raster plot. The inhibitory synapse time constant is tunable, and when it is slow the effect of inhibition lasts for hundreds of milliseconds after the I&F neuron has stopped firing; in this way we prevent that pixel from being reselected immediately, and we can obtain scan paths visiting many different pixels.

[Figure 5: Response of the system with WTA competition and Inhibition of Return. The setup, stimulus, and figure content are the same as in Fig. 4. (a) With the WTA competition and the hysteretic self-excitation turned on, the retina activity is unchanged; the node Vnet now reflects the input current from the synapse plus the contribution of the hysteretic current: when the monitored pixel wins the competition for saliency, a current is fed into the input node, and Vnet does not reset to its resting value when the synapse is inactive. Only one neuron is active in the whole chip at any time; when it loses the competition for saliency, the hysteretic current fades away and another neuron begins spiking. (b) Turning on the inhibitory synapse that implements self-inhibition (IOR) subtracts the inhibitory current from the input node (see text), so that, with the same input as before, Vnet returns to its resting level much faster. The raster plot shows how this mechanism deselects the current winner and allows other inputs to be selected.]

5 Conclusions

In this paper we presented a neuromorphic device implementing a Winner-Take-All network comprising dynamic synapses and adaptive neurons. This device is designed to be part of a multi-chip system for selective attention: via an AER communication system it can be interfaced to silicon spiking retinas and to software implementations of associative memories. We built a multi-chip system comprising the SAC and a silicon transient retina. The real-time measurements allowed by the physical realization of the system are certainly a powerful method for exploring the network behavior by changing its parameters. Preliminary experiments confirmed the basic functionality of the SAC and the robustness of the system; the analysis will be extended with a systematic study of STD, IOR, adaptation, and lateral excitatory coupling among nearby cells.

References

[1] L. Itti, E. Niebur, and C. Koch. A model of saliency-based visual attention for rapid scene analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence, 20(11):1254–1259, 1998.

[2] H. Bosch, R. Milanese, and A. Labbi. Object segmentation by attention-induced oscillations. In Proc. IEEE Int. Joint Conf. Neural Networks, volume 2, pages 1167–1171, 1998.
[3] S. Baluja and D. A. Pomerleau. Expectation-based selective attention for the visual monitoring and control of a robot vehicle. Robotics and Autonomous Systems Journal, 22:329–344, 1997.

[4] V. Brajovic and T. Kanade. Computational sensor for visual tracking with attention. IEEE Journal of Solid State Circuits, 33(8):1199–1207, August 1998.

[5] T. K. Horiuchi and C. Koch. Analog VLSI-based modeling of the primate oculomotor system. Neural Computation, 11(1):243–265, January 1999.

[6] T. G. Morris, T. K. Horiuchi, and S. P. DeWeerth. Object-based selection within an analog VLSI visual attention system. IEEE Transactions on Circuits and Systems II, 45(12):1564–1572, 1998.

[7] G. Indiveri. Modeling selective attention using a neuromorphic analog VLSI device. Neural Computation, 12(12):2857–2880, December 2000.

[8] C. Koch and S. Ullman. Shifts in selective visual attention: towards the underlying neural circuitry. Human Neurobiology, 4(4):219–227, 1985.

[9] E. Chicca. A Neuromorphic VLSI System for Modeling Spike-Based Cooperative Competitive Neural Networks. PhD thesis, ETH Zürich, Zürich, Switzerland, April 2006.

[10] E. Chicca, A. M. Whatley, V. Dante, P. Lichtsteiner, T. Delbruck, P. Del Giudice, R. J. Douglas, and G. Indiveri. A multi-chip pulse-based neuromorphic infrastructure and its application to a model of orientation selectivity. IEEE Transactions on Circuits and Systems I, Regular Papers, 2006. (in press).

[11] C. Bartolozzi and G. Indiveri. Selective attention implemented with dynamic synapses and integrate-and-fire neurons. NeuroComputing, special issue on Brain Inspired Cognitive Systems, 2005. In press.

[12] C. Bartolozzi and G. Indiveri. Silicon synaptic homeostasis. In Brain Inspired Cognitive Systems 2006, 2006.

[13] C. Rasche and R. Hahnloser. Silicon synaptic depression. Biological Cybernetics, 84(1):57–62, 2001.

[14] M. Boegerhausen, P. Suter, and S.-C. Liu. Modeling short-term synaptic depression in silicon. Neural Computation, 15(2):331–348, Feb 2003.
[15] G. Indiveri. A low-power adaptive integrate-and-fire neuron circuit. In Proc. IEEE International Symposium on Circuits and Systems, pages IV-820–IV-823. IEEE, May 2003.

[16] M. Mahowald. An Analog VLSI System for Stereoscopic Vision. Kluwer, Boston, MA, 1994.

[17] L. Abbott, K. Sen, J. Varela, and S. Nelson. Synaptic depression and cortical gain control. Science, 275(5297):220–223, 1997.

[18] V. Dante and P. Del Giudice. The PCI-AER interface board. In A. Cohen, R. Douglas, T. Horiuchi, G. Indiveri, C. Koch, T. Sejnowski, and S. Shamma, editors, 2001 Telluride Workshop on Neuromorphic Engineering Report, pages 99–103, 2001. http://www.ini.unizh.ch/telluride/previous/report01.pdf.

[19] S. R. Deiss, R. J. Douglas, and A. M. Whatley. A pulse-coded communications infrastructure for neuromorphic systems. In W. Maass and C. M. Bishop, editors, Pulsed Neural Networks, chapter 6, pages 157–178. MIT Press, 1998.

[20] G. Indiveri. A neuromorphic VLSI device for implementing 2-D selective attention systems. IEEE Transactions on Neural Networks, 12(6):1455–1463, November 2001.

[21] P. Lichtsteiner, C. Posch, and T. Delbrück. A 128×128 120 dB 30 mW asynchronous vision sensor that responds to relative intensity change. In 2006 IEEE ISSCC Digest of Technical Papers, pages 508–509. IEEE, 2006.

[22] D. H. Brainard. The Psychophysics Toolbox. Spatial Vision, 10:433–436, 1997.

Acknowledgments

This work was inspired by interactions with the participants of the Neuromorphic Engineering Workshop (http://www.ini.unizh.ch/telluride) and was supported by the EU grants ALAVLSI (IST-2001-38099) and DAISY (FP6-2005-015803), and by the Italian CNR (Centro Nazionale delle Ricerche) fellowship 203.22. The silicon retina was provided by Patrick Lichtsteiner. We further wish to thank Matthias Oster and Dylan Muir for providing AER software tools.
Learning to Model Spatial Dependency: Semi-Supervised Discriminative Random Fields

Chi-Hoon Lee
Department of Computing Science, University of Alberta
chihoon@cs.ualberta.ca

Feng Jiao
Department of Computing Science, University of Waterloo
fjiao@cs.uwaterloo.ca

Shaojun Wang*
Department of Computer Science and Engineering, Wright State University
shaojun.wang@wright.edu

Dale Schuurmans, Russell Greiner
Department of Computing Science, University of Alberta
{dale, greiner}@cs.ualberta.ca

* Work done while at University of Alberta.

Abstract

We present a novel, semi-supervised approach to training discriminative random fields (DRFs) that efficiently exploits labeled and unlabeled training data to achieve improved accuracy in a variety of image processing tasks. We formulate DRF training as a form of MAP estimation that combines conditional log-likelihood on labeled data, given a data-dependent prior, with a conditional entropy regularizer defined on unlabeled data. Although the training objective is no longer concave, we develop an efficient local optimization procedure that produces classifiers that are more accurate than ones based on standard supervised DRF training. We then apply our semi-supervised approach to train DRFs to segment both synthetic and real data sets, and demonstrate significant improvements over supervised DRFs in each case.

1 Introduction

Random field models are a popular probabilistic framework for representing complex dependencies in natural image data. The two predominant types of random field models correspond to generative versus discriminative graphical models, respectively. Classical Markov random fields (MRFs) [2] follow a traditional generative approach, where one models the joint probability of the observed image along with the hidden label field over the pixels. Discriminative random fields (DRFs) [11, 10], on the other hand, directly model the conditional probability over the pixel label field given an observed image. In this sense, a DRF is equivalent to a conditional random field [12] defined over a 2-D lattice. Following the basic tenet of Vapnik [18], it is natural to anticipate that learning an accurate joint model should be more challenging than learning an accurate conditional model. Indeed, recent experimental evidence shows that DRFs tend to produce more accurate image labeling models than MRFs in many applications, such as gesture recognition [15] and object detection [11, 10, 19, 17].

Although DRFs tend to produce superior pixel labellings to MRFs, partly by relaxing the assumption of conditional independence of observed images given the labels, the approach relies more heavily on supervised training. DRF training typically uses labeled image data where each pixel label has been assigned. However, it is considerably more difficult to obtain labeled data for image analysis than for other classification tasks, such as document classification, since hand-labeling the individual pixels of each image is much harder than assigning class labels to objects like text documents.

Recently, semi-supervised training has taken on an important new role in many application areas due to the abundance of unlabeled data. Consequently, many researchers are now working on developing semi-supervised learning techniques for a variety of approaches, including generative models [14], self-learning [5], co-training [3], information-theoretic regularization [6, 8], and graph-based transduction [22, 23, 24].
However, most of these techniques have been developed for univariate classification problems, or class-label classification with a structured input [22, 23, 24]. Unfortunately, semi-supervised learning for structured classification problems, where the prediction variables are interdependent in complex ways, has not been as widely studied, with few exceptions [1, 9]. Current work on semi-supervised learning for structured predictors [1, 9] has focused primarily on simple sequence prediction tasks where learning and inference can be efficiently performed using standard dynamic programming. Unfortunately, the problem we address is more challenging, since the spatial correlations in a 2-D grid structure create numerous dependency cycles. That is, our graphical model structure prevents exact inference from being feasible. Kumar et al. [10] and Vishwanathan et al. [19] argue that learning a model in the context of approximate inference creates a greater risk of over-fitting and over-estimation.

In this paper, we extend the work on semi-supervised learning for sequence predictors [1, 9], particularly the CRF-based approach [9], to semi-supervised learning of DRFs. There are several advantages of our approach to semi-supervised DRFs. (1) We inherit the standard advantage of discriminative conditional versus joint model training, while still being able to exploit unlabeled data. (2) The use of unlabeled data enhances our ability to avoid parameter over-fitting and over-estimation in grid-based random fields, even when using a learner that relies only on approximate inference methods. (3) We are still able to model spatial correlations in a 2-D lattice, despite the fact that this introduces dependency cycles in the model. That is, our semi-supervised training procedure can be interpreted as a MAP estimator, where the parameter prior for the model on labeled data is governed by the conditional entropy of the model on unlabeled data. This allows us to learn local potentials that capture spatial correlations while often avoiding local over-estimation.

We demonstrate the robustness of our model by applying it to a pixel denoising problem on synthetic images, and also to the challenging real-world problem of segmenting tumors in magnetic resonance images. In each case, we obtain significant improvements over current baselines based on standard DRF training.

2 Semi-Supervised DRFs (SSDRFs)

We formulate a new semi-supervised DRF training principle based on the standard supervised formulation of [11, 10]. Let $\mathbf{x}$ be an observed input image, represented by $\mathbf{x} = \{x_i\}_{i \in S}$, where $S$ is the set of observed image pixels (nodes). Let $\mathbf{y} = \{y_i\}_{i \in S}$ be the joint set of labels over all pixels of an image. For simplicity we assume each component $y_i \in \mathbf{y}$ ranges over binary classes $\mathcal{Y} = \{-1, 1\}$. For example, $\mathbf{x}$ might be a magnetic resonance image of a brain and $\mathbf{y}$ a realization of a joint labeling over all pixels that indicates whether each pixel is normal or tumor. In this case, $\mathcal{Y}$ would be the set of pre-defined pixel categories (e.g., tumor versus non-tumor).

A DRF is a conditional random field defined on the pixel labels, conditioned on the observation $\mathbf{x}$. More explicitly, the joint distribution over the labels $\mathbf{y}$ given the observations $\mathbf{x}$ is written

$$p_\theta(\mathbf{y} \mid \mathbf{x}) = \frac{1}{Z_\theta(\mathbf{x})} \exp\Big( \sum_{i \in S} \Phi_w(y_i, \mathbf{x}) + \sum_{i \in S} \sum_{j \in N_i} \Psi_\nu(y_i, y_j, \mathbf{x}) \Big) \qquad (1)$$

Here $N_i$ denotes the neighboring pixels of $i$.
$\Phi_w(y_i, \mathbf{x}) = \log \sigma(y_i w^\top h_i(\mathbf{x}))$ denotes the node potential at pixel $i$, which quantifies the belief that the class label is $y_i$ for the pre-defined feature vector $h_i(\mathbf{x})$, where $\sigma(t) = \frac{1}{1 + e^{-t}}$. $\Psi_\nu(y_i, y_j, \mathbf{x}) = y_i y_j \nu^\top \mu_{ij}(\mathbf{x})$ is an edge potential that captures spatial correlations among neighboring pixels (here, the ones at positions $i$ and $j$), where $\mu_{ij}(\mathbf{x})$ is the pre-defined feature vector associated with observation $\mathbf{x}$. $Z_\theta(\mathbf{x})$ is the normalizing factor, also known as the (conditional) partition function:

$$Z_\theta(\mathbf{x}) = \sum_{\mathbf{y}} \exp\Big( \sum_{i \in S} \Phi_w(y_i, \mathbf{x}) + \sum_{i \in S} \sum_{j \in N_i} \Psi_\nu(y_i, y_j, \mathbf{x}) \Big) \qquad (2)$$

Finally, $\theta = (w, \nu)$ are the model parameters. When the edge potentials are set to zero, a DRF yields a standard logistic regression classifier. The potentials in a DRF can use properties of the observed image, and thereby relax the conditional independence assumption of MRFs. Moreover, the edge potentials in a DRF can smooth discontinuities between heterogeneous class pixels, and also correct errors made by the node potentials.

Assume we have a set of independent labeled images, $D^l = \{(\mathbf{x}^{(1)}, \mathbf{y}^{(1)}), \dots, (\mathbf{x}^{(M)}, \mathbf{y}^{(M)})\}$, and a set of independent unlabeled images, $D^u = \{\mathbf{x}^{(M+1)}, \dots, \mathbf{x}^{(T)}\}$. Our goal is to build a DRF model from the combined set of labeled and unlabeled examples, $D^l \cup D^u$.

The standard supervised DRF training procedure is based on maximizing the log of the posterior probability of the labeled examples in $D^l$:

$$CL(\theta) = \sum_{k=1}^{M} \log P(\mathbf{y}^{(k)} \mid \mathbf{x}^{(k)}) - \frac{\nu^\top \nu}{2\tau^2} \qquad (3)$$

A Gaussian prior over the edge parameters $\nu$ is assumed, together with a uniform prior over the parameters $w$. Here $p(\nu) = \mathcal{N}(\nu; 0, \tau^2 I)$, where $I$ is the identity matrix. The hyperparameter $\tau^2$ adds a regularization term; in effect, the Gaussian prior introduces a form of regularization that limits over-fitting on rare features and avoids degeneracy in the case of correlated features.

There are a few issues with the supervised learning criterion (3). First, the value of $\tau^2$ is critical to the final result, and unfortunately selecting an appropriate $\tau^2$ is a non-trivial task, which in turn makes the learning procedure more challenging and costly [13]. Second, the Gaussian prior is data-independent, and is not associated a priori with either the unlabeled or the labeled observations.

Inspired by the work in [8] and [9], we propose a semi-supervised learning algorithm for DRFs that makes full use of the available data by exploiting a form of entropy regularization as a prior over the parameters on $D^u$. Specifically, for a semi-supervised DRF, we attempt to find $\theta$ that maximizes the following objective function:

$$RL(\theta) = \sum_{m=1}^{M} \log p_\theta(\mathbf{y}^{(m)} \mid \mathbf{x}^{(m)}) + \gamma \sum_{m=M+1}^{T} \sum_{\mathbf{y}} p_\theta(\mathbf{y} \mid \mathbf{x}^{(m)}) \log p_\theta(\mathbf{y} \mid \mathbf{x}^{(m)}) \qquad (4)$$

The first term of (4) is the conditional likelihood over the labeled data set $D^l$, and the second term is a conditional entropy prior over the unlabeled data set $D^u$, weighted by a tradeoff parameter $\gamma$. The resulting estimate is then formulated as a MAP estimate. The goal of the objective (4) is to minimize the uncertainty over possible configurations of the parameters. That is, minimizing the conditional entropy over unlabeled instances gives the algorithm more confidence that the hypothetical labellings for the unlabeled data are consistent with the supervised labels, as greater certainty on the estimated labellings coincides with greater conditional likelihood on the supervised labels, and vice versa.
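To ground the notation just introduced, here is a small NumPy sketch of the node potential $\Phi_w$, the edge potential $\Psi_\nu$, and the resulting unnormalized log-score of a labeling. The data layout (per-pixel feature vectors h[i], per-edge features mu[(i, j)], a neighbors table) is an assumed representation for illustration, not the authors' code.

```python
import numpy as np

def sigma(t):
    return 1.0 / (1.0 + np.exp(-t))

def node_potential(y_i, h_i, w):
    """Phi_w(y_i, x) = log sigma(y_i * w^T h_i(x)), for y_i in {-1, +1}."""
    return np.log(sigma(y_i * (w @ h_i)))

def edge_potential(y_i, y_j, mu_ij, v):
    """Psi_v(y_i, y_j, x) = y_i * y_j * v^T mu_ij(x)."""
    return y_i * y_j * (v @ mu_ij)

def unnormalized_log_joint(y, h, mu, neighbors, w, v):
    """Sum of all node and edge potentials, i.e. log p(y|x) + log Z(x),
    following the double sum over i and j in N_i of Eq. (1)."""
    s = sum(node_potential(y[i], h[i], w) for i in range(len(y)))
    s += sum(edge_potential(y[i], y[j], mu[(i, j)], v)
             for i in range(len(y)) for j in neighbors[i])
    return s

# With v = 0 every edge term vanishes and each pixel reduces to an
# independent logistic regression classifier, as noted above.
```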
This criterion has been shown to be effective for univariate classification [8] and for chain-structured CRFs [9]; here we apply it to the 2-D lattice case.

3 Parameter Estimation

Several factors constrain the form of the training algorithm. Because of the overhead and the risk of divergence, it is not practical to employ a Newton method; iterative scaling is not possible because the updates no longer have a closed form. Although the criticisms of gradient descent in principle are well taken, it is the most practical approach we can adopt to optimize the semi-supervised MAP formulation (4), and it allows us to improve on standard supervised DRF training.

To formulate a local optimization procedure, we need to compute the gradient of the objective (4) with respect to the parameters. Unfortunately, because of the nonlinear mapping function $\sigma(\cdot)$, we are not able to represent the gradient of the objective function as compactly as [9], which expressed the gradient as a product of the covariance matrix of the features and the parameter vector $\theta$. Nevertheless, it is straightforward to show that the derivative of the objective function with respect to the node parameters $w$ is given by

$$\frac{\partial}{\partial w} RL(\theta) = \sum_{m=1}^{M} \sum_{i \in S^m} \Big( y_i^{(m)} \big(1 - \sigma(y_i^{(m)} w^\top h_i(\mathbf{x}^{(m)}))\big) - \sum_{\mathbf{y}} p_\theta(\mathbf{y} \mid \mathbf{x}^{(m)})\, y_i \big(1 - \sigma(y_i w^\top h_i(\mathbf{x}^{(m)}))\big) \Big) h_i(\mathbf{x}^{(m)})$$
$$+\ \gamma \sum_{m=M+1}^{T} \sum_{i \in S^m} \Big( \sum_{\mathbf{y}} p_\theta(\mathbf{y} \mid \mathbf{x}^{(m)}) \Big[ \Phi_w(y_i, \mathbf{x}) + \sum_{j \in N_i} \Psi_\nu(y_i, y_j, \mathbf{x}) \Big] y_i \big(1 - \sigma(y_i w^\top h_i(\mathbf{x}^{(m)}))\big)$$
$$-\ \Big[ \sum_{\mathbf{y}} p_\theta(\mathbf{y} \mid \mathbf{x}^{(m)}) \Big( \Phi_w(y_i, \mathbf{x}) + \sum_{j \in N_i} \Psi_\nu(y_i, y_j, \mathbf{x}) \Big) \Big] \Big[ \sum_{\mathbf{y}} p_\theta(\mathbf{y} \mid \mathbf{x}^{(m)})\, y_i \big(1 - \sigma(y_i w^\top h_i(\mathbf{x}^{(m)}))\big) \Big] \Big) h_i(\mathbf{x}^{(m)}) \qquad (5)$$

where the first term in (5) is the gradient of the supervised component of the DRF over labeled data, and the second term is the gradient of the conditional entropy prior of the DRF over unlabeled data. (The derivative of the objective with respect to the edge parameters $\nu$ is computed analogously.)

Given the lattice structure of the joint labels, it is intractable to compute the exact expectation terms in the above derivatives. It is also intractable to compute the conditional partition function $Z_\theta(\mathbf{x})$. Therefore, as in standard supervised DRFs, we need to incorporate some form of approximation. Following [2, 11, 10], we incorporate the pseudo-likelihood approximation, which assumes that the joint conditional distribution can be approximated as a product of the local posterior probabilities given the neighboring nodes and the observation:

$$p_\theta(\mathbf{y} \mid \mathbf{x}) \approx \prod_{i \in S} p_\theta(y_i \mid y_{N_i}, \mathbf{x}) \qquad (6)$$

$$p_\theta(y_i \mid y_{N_i}, \mathbf{x}) = \frac{1}{z_i(\mathbf{x})} \exp\Big( \Phi_w(y_i, \mathbf{x}) + \sum_{j \in N_i} \Psi_\nu(y_i, y_j, \mathbf{x}) \Big) \qquad (7)$$

Using the factored approximation in (7), we can reformulate the training objective as

$$RL_{PL}(\theta) = \sum_{m=1}^{M} \sum_{i=1}^{S^m} \log p_\theta(y_i^{(m)} \mid y_{N_i}^{(m)}, \mathbf{x}^{(m)}) + \gamma \sum_{m=M+1}^{T} \sum_{i=1}^{S^m} \sum_{y_i} p_\theta(y_i \mid y_{N_i}, \mathbf{x}^{(m)}) \log p_\theta(y_i \mid y_{N_i}, \mathbf{x}^{(m)}) \qquad (8)$$

Here the derivative of the second term in (8), with respect to the potential parameters $w$ and $\nu$, can be reformulated as a factored conditional entropy, yielding

$$\frac{\partial}{\partial w} RL_{PL}(\theta) = \sum_{m=1}^{M} \sum_{i \in S^m} \Big( y_i^{(m)} \big(1 - \sigma(y_i^{(m)} w^\top h_i(\mathbf{x}^{(m)}))\big) - \sum_{y_i} p_\theta(y_i \mid y_{N_i}, \mathbf{x}^{(m)})\, y_i \big(1 - \sigma(y_i w^\top h_i(\mathbf{x}^{(m)}))\big) \Big) h_i(\mathbf{x}^{(m)})$$
$$+\ \gamma \sum_{m=M+1}^{T} \sum_{i \in S^m} \Big( \sum_{y_i} p_\theta(y_i \mid y_{N_i}, \mathbf{x}^{(m)}) \Big[ \Phi_w(y_i, \mathbf{x}) + \sum_{j \in N_i} \Psi_\nu(y_i, y_j, \mathbf{x}) \Big] y_i \big(1 - \sigma(y_i w^\top h_i(\mathbf{x}^{(m)}))\big)$$
$$-\ \Big[ \sum_{y_i} p_\theta(y_i \mid y_{N_i}, \mathbf{x}^{(m)}) \Big( \Phi_w(y_i, \mathbf{x}) + \sum_{j \in N_i} \Psi_\nu(y_i, y_j, \mathbf{x}) \Big) \Big] \Big[ \sum_{y_i} p_\theta(y_i \mid y_{N_i}, \mathbf{x}^{(m)})\, y_i \big(1 - \sigma(y_i w^\top h_i(\mathbf{x}^{(m)}))\big) \Big] \Big) h_i(\mathbf{x}^{(m)}) \qquad (9)$$

Note that $\frac{\partial}{\partial \nu} RL_{PL}(\theta)$ is computed analogously.
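A minimal sketch of the local conditional of Eq. (7) and the entropy-regularized pseudo-likelihood objective of Eq. (8), building on the potential functions sketched earlier. The data accessors h_of, mu_of, nbrs_of and the cached label guesses for the unlabeled images are assumptions of this sketch (the factored entropy conditions on current neighbor-label estimates); in practice one would differentiate this objective as in Eq. (9), or hand it to an automatic differentiator. The default gamma = 0.2 matches the hand-tuned value reported below in Section 5.

```python
import numpy as np

def local_conditional(i, y, h, mu, neighbors, w, v):
    """p(y_i | y_{N_i}, x) of Eq. (7): normalize the node plus incident
    edge potentials over the two candidate labels for pixel i."""
    def score(y_i):
        s = np.log(1.0 / (1.0 + np.exp(-y_i * (w @ h[i]))))      # Phi_w
        s += sum(y_i * y[j] * (v @ mu[(i, j)]) for j in neighbors[i])
        return s
    s_neg, s_pos = score(-1), score(+1)
    z = np.logaddexp(s_neg, s_pos)                               # log z_i(x)
    return {-1: np.exp(s_neg - z), +1: np.exp(s_pos - z)}

def rl_pl(labeled, unlabeled, h_of, mu_of, nbrs_of, w, v, gamma=0.2):
    """Eq. (8): pseudo-log-likelihood on labeled pixels, plus gamma times
    sum_y p log p (i.e. minus the entropy) on unlabeled pixels, so that
    maximizing the objective favors confident labellings."""
    obj = 0.0
    for x, y in labeled:
        for i in range(len(y)):
            p = local_conditional(i, y, h_of(x), mu_of(x), nbrs_of(x), w, v)
            obj += np.log(p[y[i]])
    for x, y_guess in unlabeled:        # y_guess: current label estimates
        for i in range(len(y_guess)):
            p = local_conditional(i, y_guess, h_of(x), mu_of(x),
                                  nbrs_of(x), w, v)
            obj += gamma * sum(q * np.log(q) for q in p.values() if q > 0)
    return obj
```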
Assuming this factorization, the true conditional entropy and the feature expectations can be computed in terms of local conditional distributions. This allows us to efficiently approximate the global conditional entropy over unlabeled data. Note that there may be an over-smoothing issue associated with the pseudo-likelihood approximation, as mentioned in [10, 19]. However, due to the fast and stable performance of this approximation in the supervised case [2, 10], we still employ it, and show below that the over-smoothing effect is mitigated by the data-dependent prior in our MAP objective (4).

4 Inference

As a result of our formulation, the learning method is tightly coupled with the inference steps. That is, for the unlabeled data $X_U$, each time we compute the local conditional covariance in (9), we perform inference steps for each node $i$ and its neighboring nodes $N_i$. Our inference is based on iterated conditional modes (ICM) [2], given by

$$y_i^* = \arg\max_{y_i \in \mathcal{Y}} P(y_i \mid y_{N_i}, X) \qquad (10)$$

where, for each position $i$, we assume that the labels of all of its neighbors $y_{N_i}$ are fixed. We could alternatively compute the marginal conditional probability $P(y_i \mid X) = \sum_{y_{S \setminus i}} P(y_i, y_{S \setminus i} \mid X)$ for each node using the sum-product algorithm (i.e., loopy belief propagation), which iteratively propagates the belief of each node to its neighbors. Clearly, there is a range of approximation methods available, each entailing a different accuracy-complexity tradeoff. However, we have found that ICM yields good performance on the tasks below, and it is probably one of the simplest possible alternatives.

5 Experiments

Using standard supervised DRF models, Kumar and Hebert [11, 10] reported interesting experimental results for joint classification tasks on a 2-D lattice, where a DRF model represents an image. Since labeling image data is expensive and tedious, we believe that better results can be obtained by formulating a MAP estimation of DRFs that also uses the abundant unlabeled image data. In this section, we present a series of experiments on synthetic and real data sets using our novel semi-supervised DRFs (SSDRFs). To evaluate our model, we compare the results with those of maximum likelihood estimation of supervised DRFs [11]. There is a major reason why we consider the standard MLE DRF from [11] instead of the parameter-regularized DRFs from [10]: we want to show the difference between the ML and MAP principles without using any regularization term, whose selection can itself be problematic [10, 13].

To quantify the performance of each model, we used the Jaccard score $J = \frac{TP}{TP + FP + FN}$, where $TP$ denotes true positives, $FP$ false positives, and $FN$ false negatives. Although many accuracy measures are available, we used this score to penalize false negatives, since many imaging tasks are very imbalanced: only a small percentage of pixels are in the "positive" class. The tradeoff parameter $\gamma$ was hand-tuned on one held-out data set and then held fixed at 0.2 for all of the experiments.

5.1 Synthetic image sets

Our primary goal in using synthetic data sets was to demonstrate how well the different models classify pixels, as a binary classification over a 2-D lattice, in the presence of noise. We generated 18 synthetic data sets, each with its own shape. The intensities of the pixels in each image were independently corrupted by noise generated from a Gaussian $\mathcal{N}(0, 1)$. Figure 1 shows the results of using supervised DRFs as well as semi-supervised DRFs.
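Both the ICM update of Eq. (10) and the Jaccard score can be stated in a few lines. The sketch below assumes a flat pixel indexing and a local_cond callback like the one sketched in Section 3; it is an illustration rather than the authors' implementation.

```python
import numpy as np

def icm(local_cond, y_init, n_pixels, sweeps=5):
    """Iterated conditional modes, Eq. (10): greedily set each pixel to the
    label maximizing its local conditional, holding its neighbors fixed."""
    y = list(y_init)
    for _ in range(sweeps):
        changed = False
        for i in range(n_pixels):
            p = local_cond(y, i)       # {-1: p(-1|y_Ni,x), +1: p(+1|y_Ni,x)}
            best = max(p, key=p.get)
            if best != y[i]:
                y[i], changed = best, True
        if not changed:                # converged to a local maximum
            break
    return y

def jaccard(pred, truth, positive=1):
    """J = TP / (TP + FP + FN), the score of Section 5 (assumes the ground
    truth contains at least one positive pixel)."""
    pred, truth = np.asarray(pred), np.asarray(truth)
    tp = np.sum((pred == positive) & (truth == positive))
    fp = np.sum((pred == positive) & (truth != positive))
    fn = np.sum((pred != positive) & (truth == positive))
    return tp / float(tp + fp + fn)
```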
[10, 19] reported over-smoothing effects from the local approximation of PL, while our experiments indicate that the over-smoothing is caused not only by the PL approximation, but also by the sensitivity of the regularization to its parameters. Using our semi-supervised DRF as a MAP formulation, however, we dramatically improve the performance over the standard supervised DRF. Note that the first row in Figure 1 shows good results from the standard DRF, while over-smoothed outputs are presented in the last row. Although the ML approach may learn proper parameters from some of the data sets, its performance is unfortunately not consistent, since the standard DRF tends to over-estimate the edge potential. For instance, the last row shows that an over-estimated DRF segments almost all pixels into a single class when complicated edges and structures place non-target areas within the target area, while the semi-supervised DRF's performance is not degraded at all. Overall, by learning more statistics from the unlabeled data, our model dominates the standard DRF in most cases. This is because our MAP formulation avoids over-estimating the potentials and uses the edge potential to correct the errors made by the node potential. Figure 2(a) shows the results over all 18 synthetic data sets; each point above the diagonal line indicates that SSDRF produced a higher Jaccard score for a data set. Note also that our model converged stably as we increased the ratio (nU/nL) of unlabeled data in our learning, as shown in Figure 2(b), where nU denotes the number of unlabeled images and nL the number of labeled images. Similar results have also been reported for the simple single-variable classification task [8].

[Figure 1: Outputs from the synthetic data sets. From left to right: testing instance, ground truth, logistic regression (LR), DRF, and SSDRF; each output panel is annotated with its Jaccard score.]

[Figure 2: Accuracy and convergence. (a) Accuracy from DRF (x axis) and SSDRF (y axis) for all 18 synthetic data sets. (b) Log-likelihood values (y axis) for a testing image as the ratio of unlabeled instances (x axis) increases for SSDRF.]

5.2 Brain Tumor Segmentation

We applied our semi-supervised DRF model to the challenging real-world problem of segmenting tumors in medical images. Our goal here is to classify each pixel of a magnetic resonance (MR) image into a pre-defined category: tumor or non-tumor. This is a very important, yet notoriously difficult, task in surgical planning and radiation therapy, which currently involves a significant amount of manual work by human medical experts. We applied three models to the classification of 9 studies of brain tumor MR images. For each study $i$ (each study involves a number, typically 21, of images of a single patient, here parallel axial slices through the head), we divided the MR images into $D_i^L$, $D_i^U$, and $D_i^S$, where each MR image (a.k.a. slice) has three modalities available: T1, T2, and T1-contrast. Note that each modality for each slice has 66,564 pixels. As with much of the related work on automatic brain tumor segmentation (such as [7, 21]), our training is based on patient-specific data, where the training MR images for a classifier are obtained from the patient to be tested. Note that the training sets and testing sets for a classifier are disjoint. Specifically, LR and DRF take $D_i^L$ as the training set with $D_i^U$ and $D_i^S$ as testing sets, while SSDRF takes $D_i^L$ and $D_i^U$ for training and $D_i^U$ and $D_i^S$ for testing.
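For readers who want to reproduce the flavor of the synthetic experiments, here is a hypothetical generator in the spirit of Section 5.1: a binary shape (the ground-truth label field) observed through i.i.d. N(0, 1) noise. The 18 shapes themselves are not specified in the paper, so the square mask below is an invented example.

```python
import numpy as np

def make_synthetic(shape_mask, noise_std=1.0, seed=0):
    """Binary label field y in {-1, +1} from a boolean shape mask, observed
    through independent Gaussian noise (as in the synthetic data sets)."""
    rng = np.random.default_rng(seed)
    labels = np.where(shape_mask, 1, -1)
    image = labels + noise_std * rng.standard_normal(shape_mask.shape)
    return image, labels

# e.g. a centered square on a 32x32 grid
mask = np.zeros((32, 32), dtype=bool)
mask[10:22, 10:22] = True
x, y = make_synthetic(mask)
```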
We segmented the "enhancing" tumor area, the region that appears hyper-intense after injection of the contrast agent (we also included non-enhancing areas contained within the enhancing contour). Tables 1 and 2 present the Jaccard scores when testing on $D_i^U$ and $D_i^S$ for each study $p_i$, respectively. While the standard supervised DRF improves over its degenerate model LR by 1%, the semi-supervised DRF improves over the supervised DRF by 11%, which is significant at p < 0.00566 under a paired t test. Considering that MR images contain much noise and that the three modalities are not consistent among slices of the same patient, our improvement is considerable. Figure 3 shows the segmentation results, overlaying the testing slices with the segmented outputs from the three models. Each row demonstrates the segmentation for one slice, where the white blob areas correspond to the enhancing tumor area.

Table 1: Jaccard scores when testing on $D_i^U$.

Study     LR      DRF     SSDRF
p1        53.84   59.81   59.81
p2        83.24   83.65   84.67
p3        30.72   30.17   75.76
p4        72.04   76.16   79.02
p5        73.26   73.59   75.25
p6        88.39   89.61   87.01
p7        69.33   69.91   75.60
p8        58.49   58.89   73.03
p9        60.85   56.49   83.91
Average   65.57   66.48   77.12

Table 2: Jaccard scores when testing on $D_i^S$.

Study     LR      DRF     SSDRF
p1        68.01   68.75   68.75
p2        69.61   69.73   70.06
p3        23.11   21.90   71.13
p4        56.52   63.07   68.40
p5        51.38   52.36   51.29
p6        85.65   86.35   85.43
p7        66.71   68.68   70.27
p8        44.92   45.36   73.09
p9        21.11   20.16   38.06
Average   54.11   55.15   66.27

[Figure 3: Segmentation overlays, from left to right: human expert, LR, DRF, and SSDRF.]

6 Conclusion

We have proposed a new semi-supervised learning algorithm for DRFs, formulated as MAP estimation with the conditional entropy over unlabeled data acting as a data-dependent prior regularizer. Our approach is motivated by the information-theoretic argument [8, 16] that unlabeled examples can provide the most benefit when classes have small overlap. We introduced a simple approximation for this new learning procedure that exploits the local conditional probabilities to efficiently compute the derivative of the objective function.

We applied this new approach to image pixel classification tasks. By exploiting the availability of auxiliary unlabeled data, we are able to improve on the performance of the state-of-the-art supervised DRF approach. Our semi-supervised DRF approach shares all of the benefits of standard DRF training, including the ability to exploit arbitrary potentials in the presence of dependency cycles, while improving accuracy through the use of unlabeled data. The main drawback is the increased training time involved in computing the derivative of the conditional entropy over unlabeled data. Nevertheless, the algorithm is efficient enough to be trained on unlabeled data sets, and it obtains a significant improvement in classification accuracy over standard supervised training of DRFs as well as over i.i.d. logistic regression classifiers. To further improve accuracy, we may apply loopy belief propagation [20] or graph cuts [4] as the inference tool; since our model tightly couples inference steps with learning, the proper choice of inference algorithm will most likely further improve segmentation.

Acknowledgments

This research is supported by the Alberta Ingenuity Centre for Machine Learning, the Cross Cancer Institute, and NSERC.
We gratefully acknowledge many helpful suggestions from members of the Brain Tumor Analysis Project, including Dr. A. Murtha and Dr. J. Sander.

References

[1] Y. Altun, D. McAllester, and M. Belkin. Maximum margin semi-supervised learning for structured variables. In NIPS 18, 2006.

[2] J. Besag. On the statistical analysis of dirty pictures. Journal of the Royal Statistical Society, Series B, 48(3):259–302, 1986.

[3] A. Blum and T. Mitchell. Combining labeled and unlabeled data with co-training. In COLT, 1998.

[4] Y. Boykov, O. Veksler, and R. Zabih. Fast approximate energy minimization via graph cuts. In ICCV (1), pages 377–384, 1999.

[5] G. Celeux and G. Govaert. A classification EM algorithm for clustering and two stochastic versions. Comput. Stat. Data Anal., 14(3):315–332, 1992.

[6] A. Corduneanu and T. Jaakkola. Data dependent regularization. In O. Chapelle, B. Schölkopf, and A. Zien, editors, Semi-Supervised Learning. MIT Press, 2006.

[7] C. Garcia and J. A. Moreno. Kernel based method for segmentation and modeling of magnetic resonance images. LNCS, 3315:636–645, Oct 2004.

[8] Y. Grandvalet and Y. Bengio. Semi-supervised learning by entropy minimization. In NIPS 17, 2004.

[9] F. Jiao, S. Wang, C. Lee, R. Greiner, and D. Schuurmans. Semi-supervised conditional random fields for improved sequence segmentation and labeling. In COLING/ACL, 2006.

[10] S. Kumar and M. Hebert. Discriminative fields for modeling spatial dependencies in natural images. In NIPS 16, 2003.

[11] S. Kumar and M. Hebert. Discriminative random fields: A discriminative framework for contextual interaction in classification. In CVPR, 2003.

[12] J. Lafferty, F. Pereira, and A. McCallum. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In ICML, 2001.

[13] C. Lee, R. Greiner, and O. Zaïane. Efficient spatial classification using decoupled conditional random fields. In 10th European Conference on Principles and Practice of Knowledge Discovery in Databases, pages 272–283, 2006.

[14] K. Nigam, A. McCallum, S. Thrun, and T. Mitchell. Text classification from labeled and unlabeled documents using EM. Machine Learning, 39(2/3):103–134, 2000.

[15] A. Quattoni, M. Collins, and T. Darrell. Conditional random fields for object recognition. In NIPS 17, 2004.

[16] S. Roberts, R. Everson, and I. Rezek. Maximum certainty data partitioning, 2000.

[17] A. Torralba, K. Murphy, and W. Freeman. Contextual models for object detection using boosted random fields. In NIPS 17, 2004.

[18] V. Vapnik. Statistical Learning Theory. John Wiley, 1998.

[19] S. V. N. Vishwanathan, N. Schraudolph, M. Schmidt, and K. Murphy. Accelerated training of conditional random fields with stochastic gradient methods. In ICML, 2006.

[20] J. Yedidia, W. Freeman, and Y. Weiss. Generalized belief propagation. In NIPS 13, pages 689–695, 2000.

[21] J. Zhang, K. Ma, M. H. Er, and V. Chong. Tumor segmentation from magnetic resonance imaging by learning via one-class support vector machine. Intl. Workshop on Advanced Image Technology, 2004.

[22] D. Zhou, O. Bousquet, T. Navin Lal, J. Weston, and B. Schölkopf. Learning with local and global consistency. In NIPS 16, 2004.

[23] D. Zhou, J. Huang, and B. Schölkopf. Learning from labeled and unlabeled data on a directed graph. In ICML, 2005.

[24] X. Zhu, Z. Ghahramani, and J. Lafferty. Semi-supervised learning using Gaussian fields and harmonic functions. In ICML, 2003.
ART2/BP architecture for adaptive estimation of dynamic processes

Einar Sørheim*
Department of Computer Science, UNIK, Kjeller
University of Oslo, N-2007 Norway

Abstract

The goal has been to construct a supervised artificial neural network that learns an unknown mapping incrementally. As a result, a network consisting of a combination of ART2 and backpropagation is proposed, called an "ART2/BP" network. The ART2 network is used to build and focus a supervised backpropagation network. The ART2/BP network has the advantage of being able to dynamically expand itself in response to input patterns containing new information. Simulation results show that the ART2/BP network outperforms a classical maximum likelihood method for the estimation of a discrete dynamic and nonlinear transfer function.

1 INTRODUCTION

Most current neural network architectures, such as backpropagation, require a cyclic presentation of the entire training set to converge. They are thus not very well suited for adaptive estimation tasks where the training vectors arrive one by one, and where the network may never see the same training vector twice. The ART2/BP network system is an attempt to construct a network that works well on these problems. The main features of our ART2/BP are:

- it implements incremental supervised learning;
- it is dynamically self-expanding;
- learning of a novel training pattern does not wash away the memory of previous training patterns;
- short convergence time for learning a new pattern.

*e-mail address: [email protected]@ifi.uio.no

2 BACKGROUND

Adaptive estimation of nonlinear functions requires some basic features of the estimation algorithm.

1. Incremental learning. The input/output pairs arrive at the estimation machine one by one. By accumulating the input/output pairs into a training set and rerunning the training procedure at every arrival of a new input/output pair, one could use a conventional method. Obvious disadvantages would however be:
- the huge learning time required as the size of the training set increases;
- an upper limit, N, on the number of elements in the training set will have to be set. The training set will then be a gliding horizon of the N last input/output pairs, and information prior to the N last input/output pairs will be lost.

2. Plasticity. Learning of a new input/output pair should not wash away the memory of previously learned non-conflicting input/output pairs. With most existing feedforward supervised nets this is hard to accomplish, though some efforts have been made (Otwell 90). Some networks, like the ART family and RCN (Ryan 1988), are plastic, but they are self-organizing, not supervised.

To summarize: we need a supervised network that learns incrementally the mapping of an unknown system and that can be used to predict future outputs. The system in question maps analog vectors to analog vectors.

3 COMBINED ARCHITECTURE

In the proposed network architecture an ART2 network controls a BP network, see Figure 1. The BP network consists of many relatively small subnetworks, where the subnets are specialized on one particular domain of the input space. ART2 controls how the input space is divided among the subnets and the total number of subnets needed. The ART2 network analyzes the input part of the input/output pairs as they arrive at the system. For a given input pattern i, ART2 finds the category G_i which has the closest resemblance to i. If this resemblance is good enough, i is of category G_i and the LTM weights of G_i are updated.
The BP subnetwork BP_i, connected to G_i, is consequently activated, and relearning of BP_i is done. The learning set consists of a "representative" set of the neighbouring subnets' patterns and a small number of the previous patterns belonging to category G_i. To summarize, the ART2/BP algorithm goes as follows:

1. Send the input vector to the ART2 network.
2. ART2 classification.
3. If in learning mode, adjust the ART2 LTM weights of the winning node.
4. Send the input to the backpropagation network connected to the winning ART2 node.
5. If in learning mode:
- find a representative training set;
- do epoch learning on the training set.
Otherwise:
- compute the output of the selected backpropagation network.
6. Go to 1 for a new input vector.

The ART2/BP neural network can be used for adaptive estimation of nonlinear dynamic processes. The mapping to be estimated is then

    y(t + δt) = f(u(t), y(t)),  u(t) ∈ R^m, y(t) ∈ R^n.    (1)

The input/output pairs will be io = [u(t), y(t), y(t + δt)]; denote the input part of io by i = [u(t), y(t)] and the output part by o = y(t + δt).

4 ART2 MODIFIED

ART2 was developed by Carpenter & Grossberg, see (Carpenter 1987) and (Carpenter 1988). ART2 categorizes arbitrary sequences of analog input patterns, and the categories can be of arbitrary coarseness. For a detailed description of ART2, see (Carpenter 1987).

4.1 MODIFICATION

In the standard ART2 algorithm, input vectors (patterns) are normalized. For this application it is not desirable to classify parallel vectors of different magnitude as belonging to the same category. By adding an extra element to the input vector, where this element is simply a constant,

    i_{n+1} = c,    (2)

the new input vector becomes

    ĩ = [i_1, ..., i_n, c].    (3)

From a scaled version of ĩ, ĩ' = a·ĩ, the original vector i can easily be found as

    i_j = c · ĩ'_j / ĩ'_{n+1},  j = 1, ..., n,    (4)
Approximation theory gives several interesting techniques for approximation/interpolation of multidimensional functions such as Radial Basis Functions and Hyper Basis Functions, for further detail see (Poggio 90). These methods requires a representative training set where the input part determines the location of centers in the input space. The ART2 alg<r rithm can be used for determining these centers in an adaptive way and thus making possible an incremental version of the approximation theory techniques. This idea has not been tested yet, but is an interesting concept for further research. ART2IBP Architecture for Adaptive Estimation of Dynamic Processes 6 LEARNING Learning in ART2/BP is a two stage process. First the input patterns is sent to the ART2 network for categorizing and learning. ART2 will then activate the BP subnetwork that is a local expert on patterns of the same category as the input pattern, and learning of this subnetwork will occur. A training set that is representative for the domain of the input space has to be found. Let a small number of the last categorized input/output pairs be allocated to its corresponding subnet to provide a part of the training set. Denote such a set as LJOc, (C being the category). Define the location ofF2 node J to be its bottom-up weights ;J. Let the current input i~ define an origin, then find the F2 nodes closest to origin in each n-ant of the input space. Call this set of nodes N~ and the set oflast input/output pairs stored in these nodes N JO~. The training set is then chosen to be: T~ N _IO~ U LJO~ Before training, the elements in T~ are scaled to increase accuracy and to accelerate learning. BP-Iearning is then performed, the stopping criteria being a fixed error term or a maximum number of iterations. = 7 ESTIMATION In estimation mode learning in the network is turned off. Given an input thenetwork will produce an output that hopefully will be close to the output of the real system. The ART2-network selects a winning node in the same way as described before but now the reset assembly is not activated. Then the input is fed to the corresponding BP subnetwork and its output is used as an estimate of the original functions output. Because each subnetwork is scaled to cover the domain of the input space made up by the complex hull Co(T~) of its training set T~, the entire ART2/BP network will cover the complex hull C o(T) C ~n+m where: T= {set of all previous fs used to train the network} Good estimation/prediction can thus be expected if i ( Co(T). This means that if the input vector i lies in a domain of the input space that has not been previously explored by the elements in the training set, the network will generalize poorly. 8 EXAMPLE The ART2/BP network has been used to estimate a dynamic model of a tank filled with liquid. The liquid level is sampled every 6t time interval and the ART2/BP network is used to estimate the discrete dynamic nonlinear transfer function of the liquid level as a function of inlet liquid flow and previous liquid level. That is, we want to find a good estimate j(.,.) of: y(t + 6t) u(t) y(t) f( u(t), y(t? inlet liquid flow at time t liquid level at time t (7) 173 174 Sorheim o. 2 0.1 5 o. 1 0.0 5 ',J , '\ .1 An .~ r VS] ' j -0.0 5 \ w' ~J WH I,~ II! .~, .A. e- ~Yce:::::::::J 150 2100 250 = I 300 , -0. 1 black line: ARMA model estimation error (y(t + 6t) - YARMA(t + 6t)) grey line: ART2/BP estimation error (y(t + 6t) - YART2/BP(t + 6t? 
Figure 1: Comparison of the estimation error of the ARMA model and the ART2/BP network To increase the nonlinearities of the transfer function, the area of the tank varies with a step function of the liquid level. The BP subnetworks have 2 input nodes, 1 hidden layer with 2 neurons and a single neuron output layer. In the simulations p 0.04 and the last three categorized input/output pairs are stored at every subnetwork. As the input space is 2-dimensional giving 4 neighbouring nodes the maximum size of the training set 7 input/output pairs. After a learning period of 1000 samples with random inlet flow, three test cases are run with the network in estimation mode. The network had then formed about 140 categories. The same set of simulation data is also run through an offline maximum likelihood method to estimate a linear ARMA model of the plant, see (Ljung 1983). / = Figure 1 shows the simulation results of the three test cases where : samples 1-100 : random input flow. samples 101-200 : constant input flow at a low level. samples 201-300 : constant input flow at a high level. In Figure 1, the estimation errors of the two methods are compared. For the first 100 samples with stochastic input flow, the estimation error variance of the ART2IBP Architecture for Adaptive Estimation of Dynamic ftocesses ART2/BP network is roughly a factor 10 less than that of the ARMA-model. The performance of ART2/BP is also significantly better for the constant input flow cases, here the ARMA model has an error of -- 0.02 while the ART2/BP-error is - 0.002. The overall improvement in estimation error is a reduction of roughly 0.1 . Also keep in mind that ART2/BP is compared to an offline maximum likelihood method while ART2/BP clearly is an online method. The online version of the maximum likelihood would most probably have given a worse performance than the offline version. 9 CONCLUSION/COMMENTS The proposed ART2/BP neural network architecture offers some unique features compared to backpropagation. It provides incremental learning and can be applied to truly adaptive estimation tasks. In our example it also outperforms a classical maximum likelihood method for the estimation of a discrete dynamic nonlinear transfer function. Future work will be the investigation of ART2/BP's properties for multistep-ahead prediction of dynamic nonlinear transfer functions, and embedding ART2/BP in a neural adaptive controller. Acknow ledgments Special thanks to Steve Lehar at Boston University for providing me with his ART2 simulation program. It proved to be crucial for getting a quick start on ART2 and understanding the concept. References Carpenter, G.A. & Grossberg, S. (1987). ART2: Self-organization of stable category recognition codes for analog input patterns. Applied Optics pp 4919-4930. Carpenter, G.A. & Grossberg, S. (1988). The ART of adaptive pattern recognition by a self-organizing neural network. Computer 21 pp 77-88. Fahlman, S.E. (1988). Faster-Learning Variations on Back-Propagation: An Empirical Study. Proceedings of the 1988 Connectionist Models Summer School. Morgan Kaufmann. Ljung, L. & S~derstr~m (1983). Theory and practice of recursive identification. The MIT press, Cambridge, MA. Otwell, K. (1990). Incremental backpropagation learning from novelty-based orthogonalization. Proceedings IJNN90 . Poggio, T., Girosi, F. (1990). Networks for Approximation and Learning. Proceedings of the IEEE,Vol. 78, No.9. Rumelhart, D.E., Hinton, G.E., & Williams, R.J. (1986). 
Parallel Distributed Processing: Explorations in the microstructure of Cognition, Vol. 1. The MIT Press, Cambridge, MA.
Ryan, T. W. (1988). The resonance correlation network. Proceedings IJNN88.
Approximate Correspondences in High Dimensions

Kristen Grauman
Department of Computer Sciences
University of Texas at Austin
[email protected]

Trevor Darrell
CS and AI Laboratory
Massachusetts Institute of Technology
[email protected]

Abstract

Pyramid intersection is an efficient method for computing an approximate partial matching between two sets of feature vectors. We introduce a novel pyramid embedding based on a hierarchy of non-uniformly shaped bins that takes advantage of the underlying structure of the feature space and remains accurate even for sets with high-dimensional feature vectors. The matching similarity is computed in linear time and forms a Mercer kernel. Whereas previous matching approximation algorithms suffer from distortion factors that increase linearly with the feature dimension, we demonstrate that our approach can maintain constant accuracy even as the feature dimension increases. When used as a kernel in a discriminative classifier, our approach achieves improved object recognition results over a state-of-the-art set kernel.

1 Introduction

When a single data object is described by a set of feature vectors, it is often useful to consider the matching or "correspondence" between two sets' elements in order to measure their overall similarity or recover the alignment of their parts. For example, in computer vision, images are often represented as collections of local part descriptions extracted from regions or patches (e.g., [11, 12]), and many recognition algorithms rely on establishing the correspondence between the parts from two images to quantify similarity between objects or localize an object within the image [2, 3, 7]. Likewise, in text processing, a document may be represented as a bag of word-feature vectors; for example, Latent Semantic Analysis can be used to recover a "word meaning" subspace on which to project the co-occurrence count vectors for every word [9]. The relationship between documents may then be judged in terms of the matching between the sets of local meaning features. The critical challenge, however, is to compute the correspondences between the feature sets in an efficient way. The optimal correspondences, those that minimize the matching cost, require cubic time to compute, which quickly becomes prohibitive for sizeable sets and makes processing realistic large data sets impractical. Due to the optimal matching's complexity, researchers have developed approximation algorithms to compute close solutions for a fraction of the computational cost [4, 8, 1, 7]. However, previous approximations suffer from distortion factors that increase linearly with the dimension of the features, and they fail to take advantage of structure in the feature space. In this paper we present a new algorithm for computing an approximate partial matching between point sets that can remain accurate even for sets with high-dimensional feature vectors, and benefits from taking advantage of the underlying structure in the feature space. The main idea is to derive a hierarchical, data-dependent decomposition of the feature space that can be used to encode feature sets as multi-resolution histograms with non-uniformly shaped bins. For two such histograms (pyramids), the matching cost is efficiently calculated by counting the number of features that intersect in each bin, and weighting these match counts according to geometric estimates of inter-feature distances.
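For reference, the optimal least-cost partial matching that such approximations target can be computed exactly, at cubic cost, with the Hungarian algorithm; a minimal sketch of this baseline (ours, using scipy) is below.

import numpy as np
from scipy.optimize import linear_sum_assignment

def optimal_partial_match_cost(X, Y):
    """Exact least-cost partial matching: each point of the smaller set is
    assigned to a unique point of the larger set (Hungarian algorithm)."""
    cost = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=2)  # pairwise L2
    rows, cols = linear_sum_assignment(cost)                      # optimal assignment
    return cost[rows, cols].sum()

print(optimal_partial_match_cost(np.random.rand(5, 3), np.random.rand(8, 3)))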
Our method allows for partial matchings, which means that the input sets can have varying numbers of features in them, and outlier features from the larger set can be ignored with no penalty to the matching cost. The matching score is computed in time linear in the number of features per set, and it forms a Mercer kernel suitable for use within existing kernel-based algorithms. In this paper we demonstrate how, unlike previous set matching approximations (including our original pyramid match algorithm [7]), the proposed approach can maintain consistent accuracy as the dimension of the features within the sets increases. We also show how the data-dependent hierarchical decomposition of the feature space produces more accurate correspondence fields than a previous approximation that uses a uniform decomposition. Finally, using our matching measure as a kernel in a discriminative classifier, we achieve improved object recognition results over a state-of-the-art set kernel on a benchmark data set.

2 Related Work

Several previous matching approximation methods have also considered a hierarchical decomposition of the feature space to reduce matching complexity, but all suffer from distortion factors that scale linearly with the feature dimension [4, 8, 1, 7]. In this work we show how to alleviate this decline in accuracy for high-dimensional data by tuning the hierarchical decomposition according to the particular structure of the data, when such structure exists. We build on our pyramid match algorithm [7], a partial matching approximation that also uses histogram intersection to efficiently count matches implicitly formed by the bin structures. However, in contrast to [7], our use of data-dependent, non-uniform bins and a more precise weighting scheme results in matchings that are consistently accurate for structured, high-dimensional data. The idea of partitioning a feature space with vector quantization (VQ) is fairly widely used in practice; in the vision literature in particular, VQ has been used to establish a vocabulary of prototypical image features, from "textons" to the "visual words" of [16]. A variant of the pyramid match applied to spatial features was shown to be effective for matching quantized features in [10]. More recently, the authors of [13] have shown that a tree-structured vector quantization (TSVQ [5]) of image features provides a scalable means of indexing into a very large feature vocabulary. The actual tree structure employed is similar to the one constructed in this work; however, whereas the authors of [13] are interested in matching individual features to one another to access an inverted file, our approach computes approximate correspondences between sets of features. Note the distinction between the problem we are addressing, approximate matchings between sets, and the problem of efficiently identifying approximate or exact nearest neighbor feature vectors (e.g., via k-d trees): in the former, the goal is a one-to-one correspondence between sets of vectors, whereas in the latter, a single vector is independently matched to a nearby vector.

3 Approach

The main contribution of this work is a new very efficient approximate bipartite matching method that measures the correspondence-based similarity between unordered, variable-sized sets of vectors, and can optionally extract an explicit correspondence field. We call our algorithm the vocabulary-guided (VG) pyramid match, since the histogram pyramids are defined by the "vocabulary"
or structure of the feature space, and the pyramids are used to count implicit matches. The basic idea is to first partition the given feature space into a pyramid of non-uniformly shaped regions based on the distribution of a provided corpus of feature vectors. Point sets are then encoded as multi-resolution histograms determined by that pyramid, and an efficient intersection-based computation between any two histogram pyramids yields an approximate matching score for the original sets. The implicit matching version of our method estimates the inter-feature distances based on their respective distances to the bin centers. To produce an explicit correspondence field between the sets, we use the pyramid construct to divide-and-conquer the optimal matching computation. As our experiments will show, the proposed algorithm in practice provides a good approximation to the optimal partial matching, but is orders of magnitude faster to compute.

Preliminaries: We consider a feature space F of d-dimensional vectors, F ⊆ R^d. The point sets our algorithm matches will come from the input space S, which contains sets of feature vectors drawn from F: S = {X | X = {x_1, ..., x_m}}, where each x_i ∈ F, and the value m = |X| may vary across instances of sets in S. Throughout the text we will use the terms feature, vector, and point interchangeably to refer to the elements within a set.

(a) Uniform bins (b) Vocabulary-guided bins

Figure 1: Rather than carve the feature space into uniformly-shaped partitions (left), we let the vocabulary (structure) of the feature space determine the partitions (right). As a result, the bins are better concentrated on decomposing the space where features cluster, particularly for high-dimensional feature spaces. These figures depict the grid boundaries for two resolution levels for a 2-D feature space. In both (a) and (b), the left plot contains the coarser resolution level, and the right plot contains the finer one. Features are red points, bin centers are larger black points, and blue lines denote bin boundaries.

A partial matching between two point sets is an assignment that maps all points in the smaller set to some subset of the points in the larger (or equally-sized) set. Given point sets X and Y, where m = |X|, n = |Y|, and m ≤ n, a partial matching M(X, Y; π) = {(x_1, y_{π_1}), ..., (x_m, y_{π_m})} pairs each point in X to some unique point in Y according to the permutation of indices specified by π = [π_1, ..., π_m], 1 ≤ π_i ≤ n, where π_i specifies which point y_{π_i} ∈ Y is matched to x_i ∈ X, for 1 ≤ i ≤ m. The cost of a partial matching is the sum of the distances between matched points: C(M(X, Y; π)) = Σ_{x_i ∈ X} ||x_i − y_{π_i}||_2. The optimal partial matching M(X, Y; π*) uses the assignment π* that minimizes this cost: π* = argmin_π C(M(X, Y; π)). It is this matching that we wish to efficiently approximate. In Section 3.2 we describe how our algorithm approximates the cost C(M(X, Y; π*)); for a small increase in computational cost we can also extract explicit correspondences to estimate π* itself.

3.1 Building Vocabulary-Guided Pyramids

The first step is to generate the structure of the vocabulary-guided (VG) pyramid to define the bin placement for the multi-resolution histograms used in the matching. This is a one-time process performed before any matching takes place. We would like the bins in the pyramid to follow the feature distribution and concentrate partitions where the features actually fall.
To accomplish this, we perform hierarchical clustering on a sample of representative feature vectors from F. We randomly select some example feature vectors from the feature type of interest to form the representative feature corpus, and perform hierarchical k-means clustering with the Euclidean distance to build the pyramid tree. Other hierarchical clustering techniques, such as agglomerative clustering, are also possible and do not change the operation of the method. For this unsupervised clustering process there are two parameters: the number of levels in the tree, L, and the branching factor, k. The initial corpus of features is clustered into k top-level groups, where group membership is determined by the Voronoi partitioning of the feature corpus according to the k cluster centers. Then the clustering is repeated recursively L − 1 times on each of these groups, filling out a tree with L total levels containing k^i bins (nodes) at level i, where levels are counted from the root (i = 0) to the leaves (i = L − 1). The bins are irregularly shaped and sized, and their boundaries are determined by the Voronoi cells surrounding the cluster centers. (See Figure 1.) For each bin in the VG pyramid we record its diameter, which we estimate empirically based on the maximal inter-feature distance between any points from the initial feature corpus that were assigned to it. Once we have constructed a VG pyramid, we can embed point sets from S as multi-resolution histograms. A point's placement in the pyramid is determined by comparing it to the appropriate k bin centers at each of the L pyramid levels. The histogram count is incremented for the bin (among the k choices) that the point is nearest to in terms of the same distance function used to cluster the initial corpus. We then push the point down the tree and continue to increment finer-level counts only along the branch (bin center) that is chosen at each level. So a point is first assigned to one of the top-level clusters, then it is assigned to one of its children, and so on recursively. This amounts to a total of kL distances that must be computed between a point and the pyramid's bin centers. Given the bin structure of the VG pyramid, a point set X is mapped to its pyramid: Ψ(X) = [H_0(X), ..., H_{L−1}(X)], with H_i(X) = [⟨p, n, d⟩_1, ..., ⟨p, n, d⟩_{k^i}], where H_i(X) is a k^i-dimensional histogram associated with level i in the pyramid, p ∈ Z^i for entries in H_i(X), and 0 ≤ i < L. Each entry in this histogram is a triple ⟨p, n, d⟩ giving the bin index, the bin count, and the bin's points' maximal distance to the bin center, respectively. Storing the VG pyramid itself requires space for O(k^L) d-dimensional feature vectors, i.e., all of the cluster centers. However, each point set's histogram is stored sparsely, meaning only O(mL) nonzero bin counts are maintained to encode the entire pyramid for a set with m features. This is an important point: we do not store O(k^L) counts for every point set; H_i(X) is represented by at most m triples having n > 0. We achieve a sparse implementation as follows: each vector in a set is pushed through the tree as described above. At every level i, we record a ⟨p, n, d⟩ triple describing the nonzero entry for the current bin. The vector p = [p_1, ..., p_i], p_j ∈ [1, k], denotes the indices of the clusters traversed from the root so far, n ∈ Z+ denotes the count for the bin (initially 1), and d ∈ R denotes the distance computed between the inserted point and the current bin's center.
Upon reaching the leaf level, p is an L-dimensional path-vector indicating which of the k bins were chosen at each level, and every path-vector uniquely identifies some bin on the pyramid. Initially, an input set with m features yields a total of mL such triples: there is one nonzero entry per level per point, and each has n = 1. Then each of the L lists of entries is sorted by the index vectors (p in the triple), and they are collapsed to a list of sorted nonzero entries with unique indices: when two or more entries with the same index are found, they are replaced with a single entry with the same index for p, the summed counts for n, and the maximum distance for d. The sorting is done in linear time using integer sorting algorithms. Maintaining the maximum distance of any point in a bin to the bin center will allow us to efficiently estimate inter-point distances at the time of matching, as described in Section 3.2.

3.2 Vocabulary-Guided Pyramid Match

Given two point sets' pyramid encodings, we efficiently compute the approximate matching score using a simple weighted intersection measure. The VG pyramid's multi-resolution partitioning of the feature space is used to direct the matching. The basic intuition is to start collecting groups of matched points from the bottom of the pyramid up, i.e., from within increasingly larger partitions. In this way, we will first consider matching the closest points (at the leaves), and as we climb to the higher-level clusters in the pyramid we will allow increasingly further points to be matched. We define the number of new matches within a bin to be a count of the minimum number of points either of the two input sets contributes to that bin, minus the number of matches already counted by any of its child bins. A weighted sum of these counts yields an approximate matching score. Let n_ij(X) denote the element n from ⟨p, n, d⟩_ij, the j-th bin entry of histogram H_i(X), and let c_h(n_ij(X)) denote the element n for the h-th child bin of that entry, 1 ≤ h ≤ k. Similarly, let d_ij(X) refer to the element d from the same triple. Given point sets X and Y, we compute the matching score via their pyramids Ψ(X) and Ψ(Y) as follows:

    C(Ψ(X), Ψ(Y)) = Σ_{i=0}^{L−1} Σ_{j=1}^{k^i} w_ij [ min(n_ij(X), n_ij(Y)) − Σ_{h=1}^{k} min(c_h(n_ij(X)), c_h(n_ij(Y))) ].    (1)

The outer sum loops over the levels in the pyramids; the second sum loops over the bins at a given level, and the innermost sum loops over the children of a given bin. The first min term reflects the number of matchable points in the current bin, and the second min term tallies the number of matches already counted at finer resolutions (in child bins). Note that as the leaf nodes have no children, when i = L − 1 the last sum is zero. All matches are new at the leaves. The matching scores are normalized according to the size of the input sets in order to not favor larger sets. The number of new matches calculated for a bin is weighted by w_ij, an estimate of the distance between points contained in the bin (footnote 1). With a VG pyramid match there are two alternatives for the distance estimate: (a) weights based on the diameters of the pyramid's bins, or (b) input-dependent weights based on the maximal distances of the points in the bin to its center.
Option (a) is a conservative estimate of the actual inter-point distances in the bin if the corpus of features used to build the pyramid is representative of the feature space; its advantages are that it provides a guaranteed Mercer kernel (see below) and eliminates the need to store a distance d in the entry triples. Option (b)'s input-specific weights estimate the distance between any two points in the bin as the sum of the stored maximal to-center distances from either input set: w_ij = d_ij(X) + d_ij(Y). This weighting gives a true upper bound on the furthest any two points could be from one another, and it has the potential to provide tighter estimates of inter-feature distances (as we confirm experimentally below); however, we cannot guarantee this weighting will yield a Mercer kernel. Just as we encode the pyramids sparsely, we derive a means to compute intersections in Eqn. 1 without ever traversing the entire pyramid tree. Given two sparse lists H_i(X) and H_i(Y) which have been sorted according to the bin indices, we obtain the minimum counts in linear time by moving pointers down the lists and processing only those nonzero entries that share an index, making the time required to compute a matching between two pyramids O(mL). A key aspect of our method is that we obtain a measure of matching quality between two point sets without computing pair-wise distances between their features: an O(m²) savings over sub-optimal greedy matchings. Instead, we exploit the fact that the points' placement in the pyramid reflects their distance from one another. The only inter-feature distances computed are the kL distances needed to insert a point into the pyramid, and this small one-time cost is amortized every time we re-use a histogram to approximate another matching against a different point set. We first suggested the idea of using histogram intersection to count implicit matches in a multiresolution grid in [7]. However, in [7], bins are constructed to uniformly partition the space, bin diameters exponentially increase over the levels, and intersections are weighted indistinguishably across an entire level. In contrast, here we have developed a pyramid embedding that partitions according to the distribution of features, and weighting schemes that allow more precise approximations of the inter-feature costs. As we will show in Section 4, our VG pyramid match remains accurate and efficient even for high-dimensional feature spaces, while the uniform-bin pyramid match is limited in practice to relatively low-dimensional features. For the increased accuracy our method provides, there are some complexity trade-offs versus [7], which does not require computing any distances to place the points into bins; their uniform shape and size allow points to be placed directly via division by bin size. On the other hand, sorting the bin indices with the VG method has a lower complexity, since the values only range to k, the branch factor, which is typically much smaller than the aspect ratio that bounds the range in [7]. In addition, as we show in Section 4, in practice the cost of extracting an explicit correspondence field using the uniform-bin pyramid in high dimensions approaches the cubic cost of the optimal measure, whereas it remains linear with the proposed approach, assuming features are not uniformly distributed.

Footnote 1: To use our matching as a cost function, the weights are set as the distance estimates; to use it as a similarity measure or kernel, the weights are set as (some function of) the inverse of the distance estimates.
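To make Sections 3.1 and 3.2 concrete, the following is a minimal Python sketch (our own illustration, not the authors' released code): hierarchical k-means builds the vocabulary-guided tree, point sets are embedded as sparse per-level histograms of (path, count, max-distance) entries, and the weighted intersection of Eqn. 1 is computed over shared bins using option (b)'s input-specific weights as inverse-distance similarities. The branching factor, depth, and use of scikit-learn's KMeans are illustrative assumptions.

import numpy as np
from collections import defaultdict
from sklearn.cluster import KMeans

def build_vg_tree(corpus, k=3, L=3):
    """Hierarchical k-means: returns {path_tuple: center} for every tree node."""
    centers = {}
    def recurse(points, path, level):
        if level >= L or len(points) < k:
            return
        km = KMeans(n_clusters=k, n_init=4).fit(points)
        for j in range(k):
            centers[path + (j,)] = km.cluster_centers_[j]
            recurse(points[km.labels_ == j], path + (j,), level + 1)
    recurse(corpus, (), 0)
    return centers

def embed(X, centers, k=3, L=3):
    """Sparse multi-resolution histogram: per level, {path: (count, max_dist)}."""
    hist = [defaultdict(lambda: (0, 0.0)) for _ in range(L)]
    for x in X:
        path = ()
        for level in range(L):
            kids = [path + (j,) for j in range(k) if path + (j,) in centers]
            if not kids:                        # this branch was too small to split
                break
            dmin, path = min((np.linalg.norm(x - centers[p]), p) for p in kids)
            n, d = hist[level][path]
            hist[level][path] = (n + 1, max(d, dmin))
    return hist

def vg_pyramid_match(pyr_x, pyr_y, eps=1e-8):
    """Weighted intersection of Eqn. (1); option (b) weights 1/(d(X)+d(Y))."""
    L, score = len(pyr_x), 0.0
    for i in range(L):
        for path in set(pyr_x[i]) & set(pyr_y[i]):        # bins both sets occupy
            (nx, dx), (ny, dy) = pyr_x[i][path], pyr_y[i][path]
            new = min(nx, ny)
            if i + 1 < L:                                  # subtract matches already
                for c in set(pyr_x[i + 1]) & set(pyr_y[i + 1]):  # counted in children
                    if c[:-1] == path:
                        new -= min(pyr_x[i + 1][c][0], pyr_y[i + 1][c][0])
            score += new / (dx + dy + eps)                 # option (b) similarity weight
    return score

corpus = np.random.rand(500, 8)                            # toy 8-D feature corpus
tree = build_vg_tree(corpus)
A, B = np.random.rand(40, 8), np.random.rand(40, 8)
print(vg_pyramid_match(embed(A, tree), embed(B, tree)))

Sorting path indices into flat per-level lists, as the paper does, would replace the dictionary intersections here with linear-time pointer scans; dictionaries simply keep the sketch short, and the set-size normalization is omitted.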
Our approximation can be used to compare sets of vectors in any case where the presence of low-cost correspondences indicates their similarity (e.g., nearest-neighbor retrieval). We can also employ the measure as a kernel function for structured inputs. According to Mercer's theorem, a kernel is p.s.d. if and only if it corresponds to an inner product in some feature space [15]. We can rewrite Eqn. 1 as

    C(Ψ(X), Ψ(Y)) = Σ_{i=0}^{L−1} Σ_{j=1}^{k^i} (w_ij − p_ij) min(n_ij(X), n_ij(Y)),

where p_ij refers to the weight associated with the parent bin of the j-th node at level i. Since the min operation is p.d. [14], and since kernels are closed under summation and scaling by a positive constant [15], we have that the VG pyramid match is a Mercer kernel if w_ij ≥ p_ij. This inequality holds if every child bin receives a similarity weight that is greater than its parent bin's, or rather that every child bin has a distance estimate that is less than that of its parent. Indeed this is the case for weighting option (a), where w_ij is inversely proportional to the diameter of the bin: it holds by definition of the hierarchical clustering, since the diameter of a subset of points must be less than or equal to the diameter of all those points. We cannot make this guarantee for weighting option (b). In addition to scalar matching scores, we can optionally extract explicit correspondence fields through the pyramid. In this case, the VG pyramid decomposes the required matching computation into a hierarchy of smaller matchings. Upon encountering a bin with a nonzero intersection, the optimal matching is computed between only those features from the two sets that fall into that particular bin. All points that are used in that per-bin matching are then flagged as matched and may not take part in subsequent matchings at coarser resolutions of the pyramid.

4 Results

In this section, we provide results to empirically demonstrate our matching's accuracy and efficiency on real data, and we compare it to a pyramid match using a uniform partitioning of the feature space. In addition to directly evaluating the matching scores and correspondence fields, we show that our method leads to improved object recognition performance when used as a kernel within a discriminative classifier.
[Figure 2 plots. Left panel, "Ranking quality over feature dimensions": Spearman rank correlation with the optimal match (R) versus feature dimension d for the uniform-bin pyramid, the VG pyramid with input-specific weights, and the VG pyramid with global weights. Right panels, scatter plots of approximation ranks versus optimal ranks: top row, uniform-bin pyramid match (d = 8, R = 0.86; d = 128, R = 0.78); bottom row, VG pyramid match with input-specific weights (d = 8, R = 0.92; d = 128, R = 0.95).]

Figure 2: Comparison of optimal and approximate matching rankings on image data. Left: The set rankings produced with the VG pyramid match are consistently accurate for increasing feature dimensions, while the accuracy with uniform bins degrades about linearly in the feature dimension. Right: Example rankings for both approximations at d = [8, 128].

Approximate Matching Scores: In these experiments, we extracted local SIFT [11] features from images in the ETH-80 database, producing an unordered set of about m = 256 vectors for every example. In this case, F is the space of SIFT image features. We sampled some features from 300 of the images to build the VG pyramid, and 100 images were used to test the matching. In order to test across varying feature dimensions, we also used some training features to establish a PCA subspace that was used to project features onto varying numbers of bases. For each feature dimension, we built a VG pyramid with k = 10 and L = 5, encoded the 100 point sets as pyramids, and computed the pair-wise matching scores with both our method and the optimal least-cost matching. If our measure is approximating the optimal matching well, we should find the ranking we induce to be highly correlated with the ranking produced by the optimal matching for the same data. In other words, the images should be sorted similarly by either method. Spearman's rank correlation coefficient R provides a good quantitative measure to evaluate this:

    R = 1 − 6 Σ_{i=1}^{N} D_i² / (N(N² − 1)),

where D is the difference in rank for the N corresponding ordinal values assigned by the two measures. The left plot in Figure 2 shows the Spearman correlation scores against the optimal measure for both our method (with both weighting options) and the approximation in [7], for varying feature dimensions, over the 10,000 pair-wise matching scores for the 100 test sets. Due to the randomized elements of the algorithms, for each method we have plotted the mean and standard deviation of the correlation for 10 runs on the same data. While the VG pyramid match remains consistently accurate for high feature dimensions (R = 0.95 with input-specific weights), the accuracy of the uniform bins degrades rapidly for dimensions over 10. The ranking quality of the input-specific weighting scheme (blue diamonds) is somewhat stronger than that of the "global" bin-diameter weighting scheme (green squares). The four plots on the right of Figure 2 display the actual ranks computed for both approximations for two of the 26 dimensions summarized in the left plot. The black diagonals denote the optimal performance, where the approximate rankings would be identical to the optimal ones; higher Spearman correlations have points clustered more tightly along this diagonal. For the low-dimensional features, the methods perform fairly comparably; however, for the full 128-D features, the VG pyramid match is far superior (rightmost column). The optimal measure requires about 1.25 s per match, while our approximation is about 2500x faster at 5 × 10⁻⁴ s per match. Computing the pyramid structure from the feature corpus took about three minutes in Matlab; this is a one-time offline cost. For a pyramid matching to work well, the gradation in bin sizes up the pyramid must be such that at most levels of the pyramid we can capture distinct groups of points to match within the bins. That is, unless all the points in two sets are equidistant, the bin placement must allow us to match very near points at the finest resolutions, and gradually add matches that are more distant at coarser resolutions. In low dimensions, both uniform and data-dependent bins can achieve this. In high dimensions, however, uniform bin placement and exponentially increasing bin diameters fail to capture such a gradation: once any features from different point sets are close enough to
For a pyramid matching to work well, the gradation in bin sizes up the pyramid must be such that at most levels of the pyramid we can capture distinct groups of points to match within the bins. That is, unless all the points in two sets are equidistant, the bin placement must allow us to match very near points at the finest resolutions, and gradually add matches that are more distant at coarser resolutions. In low dimensions, both uniform or data-dependent bins can achieve this. In high dimensions, however, uniform bin placement and exponentially increasing bin diameters fail to capture such a gradation: once any features from different point sets are close enough to d=8 150 100 50 0 ?50 0 2 4 6 250 150 100 50 0 ?50 0 8 2 Pyramid level 4 d = 128 d = 13 Vocabulary?guided bins Uniform bins 200 6 8 Number of new matches formed 200 Number of new matches formed Number of new matches formed Number of new matches formed d=3 Vocabulary?guided bins Uniform bins 250 Vocabulary?guided bins Uniform bins 250 200 150 100 50 0 ?50 0 2 Pyramid level 4 6 Vocabulary?guided bins Uniform bins 250 200 150 100 50 0 ?50 0 8 2 Pyramid level 4 6 8 Pyramid level Figure 3: Number of new matches formed at each pyramid level for either uniform (dashed red) or VG (solid blue) bins for increasing feature dimensions. Points represent mean counts per level for 10,000 matches. In low dimensions, both partition styles gradually collect matches up the pyramid. In high dimensions with uniform partitions, points begin sharing a bin ?all at once?; in contrast, the VG bins still accrue new matches consistently across levels since the decomposition is tailored to where points cluster in the feature space. x 10 Error in approximate correspondence fields Mean error per match 1.4 1.2 1 0.8 Uniform bins, random per Uniform bins, optimal per Vocab.?guided bins, random per Vocab.?guided bins, optimal per 0.6 0.4 0 20 40 60 80 Feature dimension (d ) 100 120 Computation time Mean time per match (s) (LOG SCALE) 5 1.6 Optimal Uniform bins, random per Uniform bins, optimal per Vocab.?guided bins, random per Vocab.?guided bins, optimal per 1 10 0 10 ?1 10 ?2 10 ?3 10 0 20 40 60 80 100 120 Feature dimension (d ) Figure 4: Comparison of correspondence field errors (left) and associated computation times (right). This figure is best viewed in color. (Note that errors level out with d for all methods due to PCA.) match (share bins), the bins are so large that almost all of them match. The matching score is then approximately the number of points weighted by a single bin size. In contrast, when we tailor the feature space partitions to the distribution of the data, even in high dimensions the match counts increase gradually across levels, thereby yielding more discriminating implicit matches. Figure 3 confirms this intuition, again using the ETH-80 image data from above. Approximate Correspondence Fields: For the same image data, we ran the explicit matching variant of our method and compared the induced correspondences to those produced by the globally optimal measure. For comparison, we also applied the same variant to pyramids with uniform bins. We measure the error of an approximate ? by the sum of the errors at every link in the field: Pmatching ? E (M (X, Y; ? ? ) , M (X, Y; ? ? )) = xi ?X ||y??i ? y?i? ||2 . Figure 4 compares the correspondence field error and computation times for the VG and uniform pyramids. 
For each approximation, there are two variations tested: in one, an optimal assignment is computed for all points in the same bin; in the other, a random assignment is made. The left plot shows the mean error per match for each method, and the right plot shows the corresponding mean time required to compute those matches. The computation times are as we would expect: the optimal matching is orders of magnitude more expensive than the approximations. Using the random assignment variation, both approximations have negligible costs, since they simply choose any combination of points within a bin. However, in high dimensions, the time required by the uniform-bin pyramid with the optimal per-bin matching approaches the time required by the optimal matching itself. This occurs for similar reasons as the poorer matching score accuracy exhibited by the uniform bins, both in the left plot and above in Figure 2: since most or all of the points begin to match at a certain level, the pyramid does not help to divide-and-conquer the computation, and for high dimensions, the optimal matching in its entirety must be computed. In contrast, the expense of the VG pyramid matching remains steady and low, even for high dimensions, since data-dependent pyramids better divide the matching labor into the natural segments in the feature space. For similar reasons, the errors are comparable for the optimal per-bin variation with either the VG or uniform bins: the VG bins divide the computation so it can be done inexpensively, while the uniform bins divide the computation poorly and must compute it expensively, but about as accurately. Likewise, the error for the uniform bins when using a per-bin random assignment is very high for any but the lowest dimensions (red line on left plot), since such a large number of points are being randomly assigned to one another. In contrast, the VG bins actually result in similar errors whether the points in a bin are matched optimally or randomly (blue and pink lines on left plot). This again indicates that tuning the pyramid bins to the data's distribution achieves a much more suitable breakdown of the computation, even in high dimensions.

Realizing Improvements in Recognition: Finally, we have experimented with the VG pyramid match within a discriminative classifier for an object recognition task. We trained an SVM with our matching as the kernel (a usage sketch appears after the table below) to recognize the four categories in the Caltech-4 benchmark data set. We trained with 200 images per class and tested with all the remaining images. We extracted features using both the Harris and MSER [12] detectors and the 128-D SIFT [11] descriptor. We also generated lower-dimensional (d = 10) features using PCA. To form a Mercer kernel, the weights were set according to each bin diameter A_ij: w_ij = e^(−A_ij/σ), with σ set automatically as the mean distance between a sample of features from the training set. The table shows our improvements over the uniform-bin pyramid match kernel. The VG pyramid match is more accurate and requires minor additional computation.

Pyramid matching method   Mean recognition rate/class (d=128 / d=10)   Time/match (s) (d=128 / d=10)
Vocabulary-guided bins    99.0 / 97.7                                  6.1e-4 / 6.2e-4
Uniform bins              64.9 / 96.5                                  1.5e-3 / 5.7e-4
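As a usage illustration (not the authors' code), a matching score like the one above plugs into a kernel classifier through a precomputed Gram matrix, which scikit-learn's SVC supports directly; the toy two-class data and the reuse of the earlier embed/vg_pyramid_match/tree sketches are illustrative assumptions.

from sklearn.svm import SVC

def gram(sets_a, sets_b, tree):
    """Gram matrix of VG pyramid match similarities between two lists of sets."""
    pa = [embed(s, tree) for s in sets_a]
    pb = [embed(s, tree) for s in sets_b]
    return np.array([[vg_pyramid_match(x, y) for y in pb] for x in pa])

# toy two-class problem: each "image" is a set of 8-D features
train_sets = [np.random.rand(30, 8) + (i % 2) for i in range(20)]
labels = [i % 2 for i in range(20)]
K_train = gram(train_sets, train_sets, tree)       # square train-vs-train matrix
clf = SVC(kernel="precomputed").fit(K_train, labels)

test_sets = [np.random.rand(30, 8) + (i % 2) for i in range(6)]
K_test = gram(test_sets, train_sets, tree)         # rows: test sets, cols: train sets
print(clf.predict(K_test))

The symmetric min-based score plays the role of the kernel; in practice one would also normalize each entry by the input set sizes, as the paper prescribes.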
Our near-perfect performance on this data set is comparable to that reached by others in the literature; the real significance of the result is that it distinguishes what can be achieved with a VG pyramid embedding as opposed to the uniform histograms used in [7], particularly for high-dimensional features. In addition, here the optimal matching requires 0.31s per match, over 500x the cost of our method.

Conclusion: We have introduced a linear-time method to compute a matching between point sets that takes advantage of the underlying structure in the feature space and remains consistently accurate and efficient for high-dimensional inputs on real image data. Our results demonstrate the strength of the approximation empirically, compare it directly against an alternative state-of-the-art approximation, and successfully use it as a Mercer kernel for an object recognition task. We have commented most on potential applications in vision and text, but in fact it is a generic matching measure that can be applied whenever it is meaningful to compare sets by their correspondence.

Acknowledgments: We thank Ben Kuipers for suggesting the use of Spearman's rank correlation.

References
[1] P. Agarwal and K. R. Varadarajan. A Near-Linear Algorithm for Euclidean Bipartite Matching. In Symposium on Computational Geometry, 2004.
[2] S. Belongie, J. Malik, and J. Puzicha. Shape Matching and Object Recognition Using Shape Contexts. IEEE Trans. on Pattern Analysis and Machine Intelligence, 24(24):509-522, April 2002.
[3] A. Berg, T. Berg, and J. Malik. Shape Matching and Object Recognition using Low Distortion Correspondences. In Proc. IEEE Conf. on Comp. Vision and Pattern Recognition, San Diego, CA, June 2005.
[4] M. Charikar. Similarity Estimation Techniques from Rounding Algorithms. In Proceedings of the 34th Annual ACM Symposium on Theory of Computing, 2002.
[5] A. Gersho and R. Gray. Vector Quantization and Signal Compression. Springer, 1992.
[6] K. Grauman. Matching Sets of Features for Efficient Retrieval and Recognition. PhD thesis, MIT, 2006.
[7] K. Grauman and T. Darrell. The Pyramid Match Kernel: Discriminative Classification with Sets of Image Features. In Proc. IEEE Int. Conf. on Computer Vision, Beijing, China, Oct 2005.
[8] P. Indyk and N. Thaper. Fast Image Retrieval via Embeddings. In 3rd International Workshop on Statistical and Computational Theories of Vision, Nice, France, Oct 2003.
[9] T. K. Landauer, P. W. Foltz, and D. Laham. Introduction to LSA. Discourse Processes, 25:259-284, 1998.
[10] S. Lazebnik, C. Schmid, and J. Ponce. Beyond Bags of Features: Spatial Pyramid Matching for Recognizing Scene Categories. In Proc. IEEE Conf. on Comp. Vision and Pattern Recognition, June 2006.
[11] D. Lowe. Distinctive Image Features from Scale-Invariant Keypoints. International Journal of Computer Vision, 60(2):91-110, Jan 2004.
[12] J. Matas, O. Chum, M. Urban, and T. Pajdla. Robust Wide Baseline Stereo from Maximally Stable Extremal Regions. In British Machine Vision Conference, Cardiff, UK, Sept. 2002.
[13] D. Nister and H. Stewenius. Scalable Recognition with a Vocabulary Tree. In Proc. IEEE Conf. on Comp. Vision and Pattern Recognition, New York City, NY, June 2006.
[14] F. Odone, A. Barla, and A. Verri. Building Kernels from Binary Strings for Image Matching. IEEE Trans. on Image Processing, 14(2):169-180, Feb 2005.
[15] J. Shawe-Taylor and N. Cristianini. Kernel Methods for Pattern Analysis. Cambridge Univ. Press, 2004.
[16] J. Sivic and A. Zisserman. Video Google: A Text Retrieval Approach to Object Matching in Videos. In Proc. IEEE Int. Conf. on Computer Vision, Nice, Oct 2003.
Predicting spike times from subthreshold dynamics of a neuron

Ryota Kobayashi, Department of Physics, Kyoto University, Kyoto 606-8502, Japan. [email protected]
Shigeru Shinomoto, Department of Physics, Kyoto University, Kyoto 606-8502, Japan. [email protected]

Abstract

It has been established that a neuron reproduces highly precise spike response to identical fluctuating input currents. We wish to accurately predict the firing times of a given neuron for any input current. For this purpose we adopt a model that mimics the dynamics of the membrane potential, and then take a cue from its dynamics for predicting the spike occurrence for a novel input current. It is found that the prediction is significantly improved by observing the state space of the membrane potential and its time derivative(s) in advance of a possible spike, in comparison to simply thresholding an instantaneous value of the estimated potential.

1 Introduction

Since Hodgkin and Huxley [1] described the ionic flux across the neuronal membrane with four nonlinear differential equations more than half a century ago, continuous efforts have been made either to extract an essence of the nonlinear dynamical aspect by simplifying the model, or to construct ever more realistic models by including more ionic channels in the model. In the simplification proposed by FitzHugh [2] and Nagumo et al [3], the number of equations is reduced to two: the fast and slow variables which minimally represent the excitable dynamics. The leaky integrate-and-fire model [4], originally proposed far in advance of the Hodgkin-Huxley model, consists of only one variable that corresponds to the membrane potential, with a voltage resetting mechanism. Those simplified models have been successful not only in extracting the essence of the dynamics, but also in reducing the computational cost of studying the large-scale dynamics of an assembly of neurons. In contrast to such taste for simplification, there are also a number of studies that pursue realism by developing multi-compartment models and installing newly found ionic channels. User-friendly simulation platforms, such as NEURON [5] and GENESIS [6], enable experimental neurophysiologists to casually reproduce their experimental results or to explore potentially interesting phenomena for a new experiment to be performed. Though those simulators have been successful in reproducing qualitative aspects of neuronal responses to various conditions, quantitative reproduction as well as prediction for novel experiments appears to be difficult to realize [7]. The difficulty is due to the complexity of the model accompanied by a large number of undetermined free parameters. Even if a true model of a particular neuron is included in the family of models, it is practically difficult to explore the true parameters in the high-dimensional space of parameters that dominate the nonlinear dynamics. Recently it was suggested by Kistler et al [8, 9] to extend the leaky integrate-and-fire model so that the real membrane dynamics of any neuron can be adopted. The so-called "spike response model" has been successful not only in reproducing the data but also in predicting the spike timing for a novel input current [8, 9, 10, 11]. The details of an integration kernel are learned easily from the sample data. The fairly precise prediction achieved by such a simple model indicates that the spike occurrence is determined principally by the subthreshold dynamics.
In other words, the highly nonlinear dynamics of a neuron can be decomposed into two simple, predictable processes: a relatively simple subthreshold dynamics, and the dynamics of an action potential of a nearly fixed shape (Fig. 1).

[Figure 1: schematic in the (V, dV/dt) plane separating the nearly fixed action-potential trajectory from the predictable subthreshold dynamics.]
Figure 1: The highly nonlinear dynamics of a neuron is decomposed into two simple, predictable processes.

In this paper, we propose a framework for improving the prediction of spike times by paying close attention to the transfer between the two predictable processes mentioned above. It is assumed in the original spike response model that a spike occurs if the membrane potential exceeds a certain threshold [9]. We revised this rule to maximally utilize the information of a higher-dimensional state space, consisting of not only the instantaneous membrane potential, but also its time derivative(s). Such a subthreshold state can provide cues for the occurrence of a spike, but with a certain time difference. For the purpose of exploring the optimal time shift, we propose a method of maximizing the mutual information between the subthreshold state and the occurrence of a spike. By employing the linear filter model [12] and the spike response model [9] for mimicking the subthreshold voltage response of a neuron, we examine how much the present framework may improve the prediction for simulation data of the fast-spiking model [13].

2 Methods

The response of a neuron is precisely reproduced when presented with identical fluctuating input currents [14]. This implies that the neuronal membrane potential V(t) is determined by the past input current {I(t)}, or
$$V(t) = F(\{I(t)\}), \qquad (1)$$
where F({I(t)}) represents a functional of a time-dependent current I(t). A rapid swing in the polarity of the membrane potential is called a "spike." The occurrence of a spike could be defined practically by measuring the membrane potential V(t) exceeding a certain threshold,
$$V(t) > V_{\mathrm{th}}. \qquad (2)$$
The time of each spike could be defined either as the first time the threshold is exceeded, or as the peak of the action potential that follows the crossing. Kistler et al [8] and Jolivet et al [10, 11] proposed a method of mimicking the membrane dynamics of a target neuron with the simple spike response model in which an input current is linearly integrated. The leaky integrate-and-fire model can be regarded as an example of the spike response model [9]; the differential equation can be rewritten as an integral equation in which the membrane potential is given as the integral of the past input current with an exponentially decaying kernel. The spike response model is an extension of the leaky integrate-and-fire model, where the integrating kernel is adaptively determined by the data, and the after-hyperpolarizing potential is added subsequently to every spike. It is also possible to further include terms that reduce the responsiveness and increase the threshold after an action potential takes place. Even in the learning stage, no model is able to perfectly reproduce the output V(t) of a target neuron for a given input I(t). We will denote the output of the model (in the lower case) as
$$v(t) = f_k(\{I(t)\}), \qquad (3)$$
where k represents a set of model parameters. The model parameters are learned by mimicking sample input-output data. This is achieved by minimizing the integrated square error,
$$E_k = \int \left( V(t) - v(t) \right)^2 dt. \qquad (4)$$
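For concreteness, a small Python sketch (ours, not the authors') of the threshold rule in Eq. (2), returning the first crossing time of each excursion above the threshold:

```python
import numpy as np

def detect_spikes(V, V_th, dt):
    """Spike times defined as upward crossings of V_th (Eq. 2), taking the
    first sample of each excursion above threshold."""
    above = np.asarray(V) > V_th
    crossings = np.where(~above[:-1] & above[1:])[0] + 1
    return crossings * dt
```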
2.1 State space method

As the output of the model v(t) is not identical to the true membrane potential of the target neuron V(t), a spike occurrence cannot be determined accurately by simply applying the same threshold rule Eq. (2) to v(t). In this paper, we suggest revising the spike generation rule so that a spike occurrence is best predicted from the model potential v(t). Suppose that we have adjusted the parameters of $f_k(\{I(t)\})$ so that the output of the model {v(t)} best approximates the membrane potential {V(t)} of a target neuron for a given set of currents {I(t)}. If the sample data set {I(t), V(t)} employed in learning is large enough, the spike occurrence can be predicted by estimating an empirical probability of a spike being generated at the time t, given a time-dependent orbit of an estimated output, {v(t)}, as
$$p_{\mathrm{spike}}(t \mid \{v(t)\}). \qquad (5)$$
In a practical experiment, however, the amount of collectable data is insufficient for estimating the spiking probability with respect to any orbit of v(t). In place of such exhaustive examination, we suggest utilizing state space information such as the time derivatives of the model potential at a certain time. The spike occurrence at time t could be predicted from the m-dimensional state space information $\vec{v} \equiv (v, v', \ldots, v^{(m-1)})$, as observed at a time s before t, as
$$p_{\mathrm{spike}}(t \mid \vec{v}_{t-s}), \qquad (6)$$
where $\vec{v}_{t-s} \equiv (v(t-s), v'(t-s), \ldots, v^{(m-1)}(t-s))$.

2.2 Determination of the optimal time shift

The time shift s introduced in the spike time prediction, Eq. (6), is chosen to make the prediction more reliable. We propose optimizing the time shift s by maximizing the mutual information between the state space information $\vec{v}_{t-s}$ and the presence or absence of a spike in a time interval $(t - \Delta t/2,\, t + \Delta t/2]$, which is denoted as $z_t = 1$ or $0$. The mutual information [15] is given as
$$MI(z_t; \vec{v}_{t-s}) = MI(\vec{v}_{t-s}; z_t) = H(\vec{v}_{t-s}) - H(\vec{v}_{t-s} \mid z_t), \qquad (7)$$
where
$$H(\vec{v}_{t-s}) = -\int d\vec{v}_{t-s}\; p(\vec{v}_{t-s}) \log p(\vec{v}_{t-s}), \qquad (8)$$
$$H(\vec{v}_{t-s} \mid z_t) = -\sum_{z_t \in \{0,1\}} \int d\vec{v}_{t-s}\; p(\vec{v}_{t-s} \mid z_t)\, p(z_t) \log p(\vec{v}_{t-s} \mid z_t). \qquad (9)$$
Here, $p(\vec{v}_{t-s} \mid z_t)$ is the probability, given the presence or absence of a spike at a time in $(t - \Delta t/2,\, t + \Delta t/2]$, of the state being $\vec{v}_{t-s}$, a time s before the spike. With the time difference s optimized, we then obtain the empirical probability of the spike occurrence at the time t, given the state space information at the time t - s, using the Bayes theorem,
$$p_{\mathrm{spike}}(t \mid \vec{v}_{t-s}) \propto p(z_t = 1 \mid \vec{v}_{t-s}) = \frac{p(\vec{v}_{t-s} \mid z_t)\, p(z_t)}{p(\vec{v}_{t-s})}. \qquad (10)$$

3 Results

We evaluated our state space method of predicting spike times by applying it to target data obtained for a fast-spiking neuron model proposed by Erisir et al [13] (see Appendix). In this virtual experiment, two fluctuating currents characterized by the same mean and standard deviation are injected into the (model) fast-spiking neuron to obtain two sets of input-output data {I(t), V(t)}. A predictive model was trained using one sample data set, and then its predictive performance for the other sample data was evaluated. Each input current is generated by the Ornstein-Uhlenbeck process. We have tested two kinds of fluctuating currents characterized by different means and standard deviations: (Currents I) mean μ = 1.5 [μA], standard deviation σ = 1.0 [μA], and fluctuation time scale τ = 2 [msec]; (Currents II) mean μ = 0.0 [μA], standard deviation σ = 4.0 [μA], and fluctuation time scale τ = 2 [msec]. For each set with these statistics, we derived two independent sequences of I(t).
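A minimal sketch of the time-shift selection via Eqs. (7)-(9), using empirical histograms over a quantized two-dimensional state space; the binning scheme and function names are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def mutual_information(state, z, n_bins=20):
    """MI between a quantized 2-D state (v, v') and the binary spike indicator
    z, written as sum_z p(z) sum_c p(c|z) log[p(c|z)/p(c)] = H(C) - H(C|Z)."""
    idx = [np.digitize(col, np.histogram_bin_edges(col, n_bins)[1:-1])
           for col in state.T]                       # quantize each dimension
    cell = idx[0] * n_bins + idx[1]                  # joint cell index
    p_z = np.bincount(z.astype(int), minlength=2) / len(z)
    p_c = np.bincount(cell, minlength=n_bins ** 2) / len(cell)
    mi = 0.0
    for zv in (0, 1):
        if p_z[zv] == 0:
            continue
        p_cz = np.bincount(cell[z == zv], minlength=n_bins ** 2) / (z == zv).sum()
        nz = p_cz > 0
        mi += p_z[zv] * np.sum(p_cz[nz] * np.log(p_cz[nz] / p_c[nz]))
    return mi

def optimal_time_shift(v, dv, z, max_shift):
    """Scan shifts s and return the one maximizing MI(z_t; state_{t-s})."""
    mis = [mutual_information(np.stack([v[:-s], dv[:-s]], axis=1), z[s:])
           for s in range(1, max_shift)]
    return 1 + int(np.argmax(mis))
```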
In this study we adopted the linear filter model and the spike response model as prediction models.

[Figure 2: panel A plots the estimated kernel K(t) against t (ms); panel B plots the mutual information against the time shift s (ms).]
Figure 2: A: The estimated kernel. B: The mutual information between the estimated potential and the occurrence of a spike.

We briefly describe here the results for the linear filter model [12],
$$v(t) = \int_0^{\infty} K(t')\, I(t - t')\, dt' + v_0. \qquad (11)$$
The model parameters k consist of the shape of the kernel K(t) and the constant $v_0$. In learning the target sample data {I(t), V(t)}, these parameters are adjusted to minimize the integrated square error, Eq. (4). Figure 2A depicts the shape of the kernel K(t) estimated from the target sample data {I(t), V(t)} obtained from the virtual experiment of the fast-spiking neuron model. Based on the voltage estimation v(t) with respect to sample data, we compute the empirical probabilities $p(\vec{v}_{t-s})$, $p(\vec{v}_{t-s} \mid z_t)$ and $p(z_t)$ for two-dimensional state space information $\vec{v}_{t-s} \equiv (v(t-s), v'(t-s))$. In computing empirical probabilities, we quantized the two-dimensional phase space $\vec{v} \equiv (v, v')$, and the time. In the discretized time, we counted the occurrence of a spike, $z_t = 1$, for the bins in which the true membrane potential V(t) exceeded a reasonable threshold $V_{\mathrm{th}}$. With a sufficiently small time step (we adopted $\Delta t = 0.1$ [msec]), a single spike is transformed into a succession of spike occurrences $z_t = 1$. The mutual information computed according to Eq. (7) is depicted in Fig. 2B, whose maximum position of $s \approx 2$ [msec] determines the optimal time shift. A spike is predicted if the estimated probability $p_{\mathrm{spike}}(t \mid \vec{v}_{t-s})$ of Eq. (10) exceeds a certain threshold value. Though it would be more efficient to use the systematic method suggested by Paninski et al [16], we determined the threshold value empirically so that the coincidence factor Γ described in the following is maximized. Figure 3 compares a naive thresholding method and our state space method, in reference to the original spike times. It is observed from this figure that the prediction of the state space method is more accurate and robust than that of the thresholding method.

[Figure 3: traces of the target potential V(t), the model potential v(t), and p_spike(t) over t = 3000-3500 ms, with vertical arrows marking predicted spikes.]
Figure 3: Comparison of the spike time predictions. (Top): The target membrane potential V(t). (Middle): Prediction by thresholding the model potential. (Bottom): Prediction by the present state space method. Vertical arrows represent the predicted spikes.

Figure 4 depicts an orbit in the state space of (V, V') of a target neuron for an instance of the spike generation, and the orbit of the predictive model in the state space of (v, v') that mimics it. The predictive model can mimic the target orbit in the subthreshold region, but fails to catch the spiking orbit in the suprathreshold region. The spike occurrence is predicted by estimating the conditional probability, Eq. (10), given the state (v, v') of the predictive model. Contour lines of the probability in Figure 4C are not parallel to the dv/dt-axis. The contour lines for higher probabilities of spiking resemble an ad hoc "dynamic spike threshold" introduced by Azouz and Gray [17]: namely, v drops with dv/dt along the contour lines. Contrastingly, the contour lines for lower probabilities of spiking are inversely curved: v increases with dv/dt along the contour lines. In the present framework, the state space information corresponding to the relatively low probability of spiking is effectively used for predicting spike times.
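Since v(t) in Eq. (11) is linear in K and $v_0$, the discretized model can be fit by ordinary least squares. A hedged sketch, using our own discretization rather than the authors' code:

```python
import numpy as np

def fit_linear_filter(I, V, L):
    """Least-squares fit of v[t] = sum_{k<L} K[k] I[t-k] + v0 (Eq. 11),
    minimizing sum_t (V[t] - v[t])^2 as in Eq. (4)."""
    I, V = np.asarray(I, float), np.asarray(V, float)
    X = np.array([I[t - L + 1 : t + 1][::-1] for t in range(L - 1, len(I))])
    X = np.hstack([X, np.ones((len(X), 1))])   # extra column for the offset v0
    coef, *_ = np.linalg.lstsq(X, V[L - 1:], rcond=None)
    return coef[:-1], coef[-1]                 # kernel K, offset v0

def predict_potential(I, K, v0):
    """Model potential v(t) as the convolution of the input with the kernel."""
    return np.convolve(I, K, mode="full")[: len(I)] + v0
```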
Prediction performance is compared with the benchmark of Kistler et al [8], the "coincidence factor,"
$$\Gamma(\Delta) = \frac{N_{\mathrm{coinc}} - \langle N_{\mathrm{coinc}} \rangle}{\tfrac{1}{2}(N_{\mathrm{data}} + N_{\mathrm{model}})} \cdot \frac{1}{1 - 2\nu\Delta}, \qquad (12)$$
where $N_{\mathrm{data}}$ and $N_{\mathrm{model}}$ respectively represent the numbers of spikes in the original data and the prediction model, $N_{\mathrm{coinc}}$ is the number of coincident spikes with the precision of Δ, and $\langle N_{\mathrm{coinc}} \rangle = 2\nu\Delta N_{\mathrm{data}}$ is the expected number of coincidences between the data and Poisson spikes with rate ν. Δ is chosen as 2 [msec] in accordance with Jolivet et al [10]. (A computational sketch of this metric is given after the Summary.)

Table 1: The coincidence factors evaluated for two methods of prediction based on the linear filter model.

    method         Currents I   Currents II
    thresholding   0.272        0.567
    state space    0.430        0.666

[Figure 4: A: an orbit in the (V, V') state space of the target neuron; B: magnified view; C: the mimicking orbit in the model's (v, v') state space, with contours of the spike probability.]
Figure 4: A: An orbit in the state space of (V, V') of a target neuron for an instance of the spike generation (from 3240 to 3270 [msec] of Fig. 3). B: Magnified view. C: The orbit in the state space of (v, v') of the predictive model that mimics the target neuron. Contours represent the probability of spike occurrence computed with the Bayes formula, Eq. (10). The dashed lines represent the threshold adopted in the naive thresholding method (Fig. 3, Middle). Three points a, b, and c in the spaces of (V, V') and (v, v') represent the states at identical times, respectively, t = 3242, 3252 and 3253 [msec].

Table 2: The coincidence factors evaluated for two methods of prediction based on the spike response model.

    method         Currents I   Currents II
    thresholding   0.501        0.805
    state space    0.641        0.842

The coincidence factors evaluated for a simple thresholding method and the state space method based on the linear filter model are summarized in Table 1, and those based on the spike response model are summarized in Table 2. It is observed that the prediction is significantly improved by our state space method. It should be noted, however, that a model with the same set of parameters does not perform well over a range of inputs generated with different mean and variance: the model parameterized with Currents I does not effectively predict the spikes of the neuron for Currents II, and vice versa. Nevertheless, our state space method exhibits better prediction than the naive thresholding strategy, if the statistics of the different inputs are relatively similar.

4 Summary

We proposed a method of evaluating the probability of the spike occurrence by observing the state space of the membrane potential and its time derivative(s) in advance of the possible spike time. It is found that the prediction is significantly improved by the state space method compared to the prediction obtained by simply thresholding an instantaneous value of the estimated potential. It is interesting to apply our method to biological data and categorize neurons based on their spiking mechanisms. The state space method developed here is a rather general framework that may be applicable to any nonlinear phenomena composed of locally predictable dynamics. The generalization of linear filter analysis developed here has a certain similarity to the Linear-Nonlinear-Poisson (LNP) model [18, 19]. It would be interesting to generalize the present method of analysis to a wider range of phenomena, such as the analysis of the coding of the visual system [19, 20].
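As promised above, a sketch of the coincidence factor of Eq. (12); the nearest-spike counting convention used here is a simplification of the exact one-to-one coincidence count, and the function is our own illustration:

```python
import numpy as np

def coincidence_factor(data_spikes, model_spikes, delta=2.0, duration=None):
    """Gamma of Eq. (12): count model spikes within +/- delta of some data
    spike, subtract the chance level 2*nu*delta*N_data for a Poisson train
    with the model's mean rate nu, and normalize."""
    data = np.sort(np.asarray(data_spikes, float))
    model = np.sort(np.asarray(model_spikes, float))
    idx = np.searchsorted(data, model)
    left = data[np.clip(idx - 1, 0, len(data) - 1)]
    right = data[np.clip(idx, 0, len(data) - 1)]
    n_coinc = np.sum(np.minimum(np.abs(model - left), np.abs(model - right)) <= delta)
    if duration is None:
        duration = max(data[-1], model[-1])
    nu = len(model) / duration                 # mean rate of the model train
    expected = 2 * nu * delta * len(data)      # <N_coinc> for Poisson spikes
    norm = 0.5 * (len(data) + len(model))
    return (n_coinc - expected) / norm / (1 - 2 * nu * delta)
```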
Acknowledgments

This study is supported in part by Grants-in-Aid for Scientific Research to SS from the Ministry of Education, Culture, Sports, Science and Technology of Japan (16300068, 18020015) and the 21st Century COE "Center for Diversity and Universality in Physics", and to RK from the Foundation For C & C Promotion.

Appendix: Fast-spiking neuron model

The fast-spiking neuron model proposed by Erisir et al [13] was used in this contribution as a (virtual) target experiment. The details of the model were adjusted to Jolivet et al [10] to allow the direct comparison of the performances. Specifically, the model is described as
$$C \frac{du(t)}{dt} = -[I_{\mathrm{Na}} + I_{\mathrm{K1}} + I_{\mathrm{K2}} + I_{\mathrm{L}}] + I^{\mathrm{ext}}(t), \qquad (13)$$
$$I_{\mathrm{Na}} = g_{\mathrm{Na}}\, m^3 h\, (u - E_{\mathrm{Na}}), \qquad (14)$$
$$I_{\mathrm{K1}} = g_{\mathrm{K1}}\, n_1^4\, (u - E_{\mathrm{K}}), \qquad I_{\mathrm{K2}} = g_{\mathrm{K2}}\, n_2^2\, (u - E_{\mathrm{K}}), \qquad (15)$$
$$I_{\mathrm{L}} = g_{\mathrm{L}}\, (u - E_{\mathrm{L}}), \qquad (16)$$
where the gate variables $x \in \{n_1, n_2, m, h\}$ obey differential equations of the form
$$\frac{dx}{dt} = \alpha_x(u)(1 - x) - \beta_x(u)\, x, \qquad (17)$$
whose parameters $\alpha_x(u)$ and $\beta_x(u)$ are functions of u, as listed in Table 3.

Table 3: The parameters for the fast-spiking model. The membrane capacitance is C = 1.0 [μF/cm²].

    Channel  Var.  alpha_x(u)                                    beta_x(u)                                        g_x (mS/cm²)  E_x (mV)
    Na       m     (-3020 + 40u) / (1 - exp(-(u - 75.5)/13.5))   1.2262 / exp(u/42.248)                           112.5         74
    Na       h     0.0035 / exp(u/24.186)                        (0.8712 + 0.017u) / (1 - exp(-(51.25 + u)/5.2))  --            --
    K1       n1    0.014(44 + u) / (1 - exp(-(44 + u)/2.3))      0.0043 / exp((44 + u)/34)                        0.225         -90.0
    K2       n2    (u - 95) / (1 - exp(-(u - 95)/11.8))          0.025 / exp(u/22.22)                             225.0         -90.0
    L        --    --                                            --                                               0.25          -70

References

[1] Hodgkin, A.L. & Huxley, A.F. (1952) J. Physiol. 117:500-544.
[2] FitzHugh, R. (1961) Biophys. J. 1:445-466.
[3] Nagumo, J., Arimoto, S. & Yoshizawa, S. (1962) Proc. IRE 50:2061-2070.
[4] Lapicque, L. (1907) J. Physiol. Pathol. Gen. 9:620-635.
[5] Hines, M.L. & Carnevale, N.T. (1997) Neural Comp. 9:1179-1209.
[6] Bower, J.M. & Beeman, D. (1995) The Book of GENESIS: Exploring Realistic Neural Models with the GEneral NEural SImulation System. New York: Springer-Verlag.
[7] Tsubo, Y., Kaneko, T. & Shinomoto, S. (2004) Neural Networks 17:165-173.
[8] Kistler, W., Gerstner, W. & van Hemmen, J.L. (1997) Neural Comp. 9:1015-1045.
[9] Gerstner, W. & Kistler, W. (2002) Spiking Neuron Models: Single Neurons, Populations, Plasticity. Cambridge: Cambridge Univ. Press.
[10] Jolivet, R., Lewis, T.J. & Gerstner, W. (2004) J. Neurophysiol. 92:959-976.
[11] Jolivet, R., Rauch, A., Lüscher, H.R. & Gerstner, W. (2006) Integrate-and-fire models with adaptation are good enough: predicting spike times under random current injection. In Y. Weiss, B. Schölkopf and J. Platt (eds.), Advances in Neural Information Processing Systems 18, pp. 595-602. Cambridge, MA: MIT Press.
[12] Westwick, D.T. & Kearney, R.E. (2003) Identification of Nonlinear Physiological Systems. (IEEE Press Series in Biomedical Engineering) Piscataway: Wiley-IEEE Press.
[13] Erisir, A., Lau, D., Rudy, B. & Leonard, C.S. (1999) J. Neurophysiol. 82:2476-2489.
[14] Mainen, Z.F. & Sejnowski, T.J. (1995) Science 268:1503-1506.
[15] MacKay, D. (2003) Information Theory, Inference and Learning Algorithms. Cambridge: Cambridge Univ. Press.
[16] Paninski, L., Pillow, J.W. & Simoncelli, E.P. (2005) Neurocomputing 65-66:379-385.
[17] Azouz, R. & Gray, C.M. (2000) PNAS 97(14):8110-8115.
[18] Chichilnisky, E.J. (2001) Network 12(2):199-213.
[19] Pillow, J.W., Paninski, L., Uzzell, V.J., Simoncelli, E.P. & Chichilnisky, E.J. (2005) Journal of Neuroscience 25(47):11003-11013.
[20] Arcas, B.A., Fairhall, A.L. & Bialek, W. (2003) Neural Comp. 15:1715-1749.
Learning Time-Intensity Profiles of Human Activity using Non-Parametric Bayesian Models

Alexander T. Ihler and Padhraic Smyth
Donald Bren School of Information and Computer Science, U.C. Irvine
[email protected], [email protected]

Abstract

Data sets that characterize human activity over time through collections of timestamped events or counts are of increasing interest in application areas such as human-computer interaction, video surveillance, and Web data analysis. We propose a non-parametric Bayesian framework for modeling collections of such data. In particular, we use a Dirichlet process framework for learning a set of intensity functions corresponding to different categories, which form a basis set for representing individual time-periods (e.g., several days) depending on which categories the time-periods are assigned to. This allows the model to learn in a data-driven fashion what "factors" are generating the observations on a particular day, including (for example) weekday versus weekend effects or day-specific effects corresponding to unique (single-day) occurrences of unusual behavior, sharing information where appropriate to obtain improved estimates of the behavior associated with each category. Applications to real-world data sets of count data involving both vehicles and people are used to illustrate the technique.

1 Introduction

As sensor and storage technologies continue to improve in terms of both cost and performance, increasingly rich data sets are becoming available that characterize the rhythms of human activity over time. Examples include logs of radio frequency identification (RFID) tags, freeway traffic over time (loop-sensor data), crime statistics, email and Web access logs, and many more. Such data can be used to support a variety of different applications, such as classification of human or animal activities, detection of unusual events, or the broad understanding of behavior in a particular context such as the temporal patterns of Web usage.

To ground the discussion, consider data consisting of a collection of individual or aggregated events from a single sensor, e.g., a time-stamped log recording every entry and exit from a building, or the timing and number of highway traffic accidents. For example, Figure 1 shows several days' worth of data from a building log, smoothed so that the similarities in patterns are more readily visible. Of interest is the modeling of the underlying intensity of the process generating the data, where intensity here refers to the rate at which events occur. These processes are typically inhomogeneous in time (as in Figure 1), as they arise from the aggregated behavior of individuals, and thus exhibit a temporal dependence linked to the rhythms of the underlying human activity. The complexity of this temporal dependence is application-dependent and generally unknown before observing the data, suggesting that non- or semi-parametric methods (methods whose complexity is capable of growing as the number of observations increases) may be particularly appropriate.

Formulating the underlying event generation as an inhomogeneous Poisson process is a common first step (see, e.g., [1, 4]), as it allows the application of various classic density estimation techniques to estimate the time-dependent intensity function (a normalized version of the rate function; see Section 2). Techniques used in this context include kernel density estimation [2], wavelet analysis [3], discretization [1], and nonparametric Bayesian models [4, 5].
Among these, nonparametric Bayesian approaches have a number of appealing advantages. First, they allow us to represent and reason about uncertainty in the intensity function, providing not just a single estimate but a distribution over functions. Second, the Bayesian framework provides natural methods for model selection, allowing the data to be naturally explained by a parsimonious set of intensity functions, rather than using the most complex explanation (though similar effects may be achieved using penalized likelihood functions [3]). Finally, Bayesian methods generalize to multiple or hierarchical models, which allow information to be shared among several related but differing sets of observations (e.g., multiple days of data).

[Figure 1: smoothed count profiles over the hours of the day for ten Mondays of building entry data.]
Figure 1: Count data from a building entry log observed on ten Mondays, each smoothed using a kernel function [2, 6] to enable visual comparison.

This second point is crucial for many problems, as we rarely obtain many observations of exactly the same process under exactly the same conditions; instead, we observe multiple instances which are thought to be similar, but may in fact represent any number of slightly differing circumstances. For example, behavior may be dependent on not only time of day but also day of week, type of day (weekend or weekday), unobserved factors such as the weather, or other unusual circumstances. Sharing information allows us to improve our model, but we should only do so where appropriate (itself best indicated by similarity in the data). By being Bayesian, we can remain agnostic about what data should be shared and reason over our uncertainty in this structure.

In what follows we propose a non-parametric Bayesian framework for modeling intensity functions for event data over time. In particular, we describe a Dirichlet process framework for learning the unknown rate functions, and learn a set of such functions corresponding to different categories. Individual time-periods (e.g., individual days) are then represented as additive combinations of intensity functions, depending on which categories are assigned to each time-period. This allows the model to learn in a data-driven fashion what "factors" are generating the observations on a particular day, including (for example) weekday versus weekend effects as well as day-specific effects corresponding to unusual behavior present only on a single day. Applications to two real-world data sets, a building access log and accident statistics, are used to illustrate the technique.

We will discuss in more detail in the sections that follow how our proposed approach is related to prior work on similar topics. Broadly speaking, from the viewpoint of modeling of inhomogeneous time-series of counts, our work extends the work of [4] to allow sharing of information among multiple, related processes (e.g., different days). Our approach can also be viewed as an alternative to the hierarchical Dirichlet process (HDP, [7]) for problems where the patterns across different groups are much more constrained than would be expected under an HDP model.

2 Poisson processes

A common model for continuous-time event (counting) data is the Poisson process [8]. As the discrete Poisson distribution is characterized by a rate parameter λ, the Poisson process¹ is characterized by a rate function λ(t); it has the property that over any given time interval T, the number of events occurring within that time is Poisson with rate given by $\lambda = \int_T \lambda(t)$.
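As a concrete illustration of this defining property, event times from an inhomogeneous Poisson process can be simulated by thinning. A minimal sketch (ours, not the paper's), assuming rate_fn is vectorized and rate_max upper-bounds λ(t) on [0, t_max]:

```python
import numpy as np

def sample_inhomogeneous_poisson(rate_fn, t_max, rate_max, rng=np.random):
    """Thinning: propose homogeneous Poisson events at rate_max, then keep
    each proposal at time t with probability rate_fn(t) / rate_max."""
    n = rng.poisson(rate_max * t_max)
    proposals = np.sort(rng.uniform(0.0, t_max, size=n))
    keep = rng.uniform(size=n) < rate_fn(proposals) / rate_max
    return proposals[keep]
```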
We shall use a Bayesian semi-parametric model for λ(t), described next. Let us suppose that we have a single collection of event times $\{\tau_i\}$ arising from a Poisson process with rate function λ(t), i.e.,
$$\{\tau_i\} \sim \mathcal{P}[\,\cdot\,; \lambda(t)] \qquad (1)$$

¹Here, we shall use the term Poisson process interchangeably with inhomogeneous Poisson process, meaning that the rate is a non-constant function of time t.
Another, more brute-force way around the issue of having infinitely many mixture components is to perform approximate sampling using a ?truncated? Dirichlet process representation [12, 13]. As described in [12], for a given ?, data set size N , and tolerance ?, one can compute a maximum number of components M necessary to approximate the Dirichlet process with a Dirichlet distribution using the relation ? ? 4 N exp[?(M ? 1)/?] and in this manner, can work with finite numbers of mixture components. This representation will prove useful in Section 3. The truncated DP approximation is helpful primarily because it allows us to sample the (complete) function f (t) (as compared to only the ?occupied? part in the CRP formulation). Given a set of assignments {zi } occupying (arbitrarily numbered) clusters 1 . . . J, we can sample the weights wj in two steps.P First, we sample the occupied mixture weights, wj (j ? J), and the total unoccupied ? weight w ? = J+1 wj , by drawing independent, Gamma-distributed random variables according to ?(Nj , 1) and ?(?, 1), respectively, and normalizing them to sum to one. The values of weights wj in the unoccupied clusters (j > J) can then be sampled given w ? using the stick?breaking representation of Sethuraman [14]. Note that the truncated DP approximation highlights the importance of also sampling ? if we hope for our representation to act non-parametric in the sense that it may grow more complex as the data increase, since for a fixed ? and ? the number of components M is quite insensitive to N . For more details on sampling such hyper-parameters see e.g. [10]. 2.2 Finite Time Domains Our description of non-parametric Bayesian techniques for Poisson processes has so far made implicit use of the fact that the domain of f (t) is infinite. When the domain of f is finite, for example [0, 1], a few minor complications arise. For example, the kernel functions K(?) should properly be defined as positive only on this interval. One possible solution to this issue is to use an alternate kernel function, such as the Beta distribution [4]. However, this means that posterior sampling of the parameters ? is no longer possible in closed form. Although methods such as Metropolis-Hastings may be used [4], they can be highly dependent on the choice of proposal density. Here, we take a slightly different approach, drawing truncated Gaussian kernels with parameters sampled from a truncated Normal-Wishart distribution. Specifically, we define N (t; ?, ? 2 )?1 (?) K(t; ? = [?, ? 2 ]) = R 1 N (x; ?, ? 2 )dx 0 [?, ? 2 ] ? ?1 (?) ?1 (?) N W(?, ? 2 ) where ?1 (t) is one on [0, 1] and zero elsewhere and N W is the normal-Wishart distribution. Sampling in this model turns out to be relatively simple and efficient using rejection methods. Given the R1 restrictions imposed on ? and ?, one can show that the normalizing quantity Z = 0 N (x; ?, ? 2 ) is always greater than one-third. Thus, to sample from the posterior we simply draw from the original, closed form posterior distribution, discarding (and re-sampling) if ? 6? [0, 1], ? 6? [0, 1], or with probability 1 ? (3Z)?1 . 3 Categorical Models As mentioned in the introduction, we often have several collections d = 1 . . . D of observations, {?di } with i = 1 . . . Nd , arising from D instances of the same or similar processes. 
If these processes are known to be identical and independent, sharing information among them is relatively easy?we obtain D observations Nd with which to estimate ?, and the ?di are collectively used to estimate f (t). However, if these processes are not necessarily identical, sharing information becomes more difficult. Yet it is just this situation which is most typical. Again consider Figure 1, which shows event data from ten different Mondays. Clearly, there is a great deal of consistency in both size and shape, although not every day is exactly the same, and one or two stand out as different. Were we to also look at, for example, Sundays or Tuesdays (as we do in Section 4), we would see that although Sunday and Monday appear quite different and, one suspects, have little shared information, Monday and Tuesday appear relatively similar and this similarity can probably be used to improve our rate estimates for both days. In this example, we might reasonably assume that the category memberships are known (for example, whether a given day is a weekday or weekend, or a Monday or Tuesday), though we shall relax this assumption in later sections. Then, given a structure of potential relationships, what is a reasonable model for sharing information among categories? There are, of course, many possible choices; we use a simple additive model, described in the next section. 3.1 Additive Models The intuition behind an additive model is that the data arises from the superposition of several underlying causes present during the period of interest. Again, we initially assume that the category memberships are known; thus, if a category is associated with a particular day, the activity profile associated with that category will be observed, along with additional activity arising from each of the other categories present. Let us associate a rate function ?c (t) = ?c fc (t) with each category in our model. We define the rate function of a given day d to be the sum of the rate functions of each category to which d belongs. Denoting by sdc the (binary-valued) membership indicator, i.e., that category c is present during day P d, we have that ?d (t) = c:sdc =1 ?c (t). At first, this model might seem quite restrictive. However, it matches our intuition of how the data is generated, stemming from the presence or absence of a particular behavioral pattern associated with some underlying cause (such as it being a work day). In fact, we do not want a model which is too flexible, such as a linear combination of patterns, since it is not physically meaningful to say, for example, that a day is only ?part? Monday. To learn the profiles associated with a given cause (e.g., things that happen every day versus only on weekdays or only on Mondays), it makes sense to take an ?all or nothing? model where the pattern is either present, or not. This also suggests that other methods of coupling Dirichlet processes, such as the hierarchical Dirichlet process [7], may be too flexible. The HDP couples the parameters of components across levels, but only loosely relates the actual shape of the profile, since it allows components to be larger or smaller (or even disappear completely). In [7], this is a desirable quality, but in our application it is not. Using an additive model allows both a consistent size and shape to emerge for each category, while associating deviations from that profile to categories further down in the hierarchy. 
Inference in this system is not significantly more difficult than in the single rate function case (Section 2). We define the association as [ydi , zdi ], where ydi indicates which of the categories generated P event ?di . It is easy to sample ydi according to p(ydi = c|{?c (t)}) ? [?c (?di )] / [ c0 ?c0 (?di )]. 3.2 Sampling Membership Of course, it is frequently the case that the membership(s) of each collection of data are not known precisely. In an extreme case, we may have no idea which collections are similar and should be grouped together and wish to find profiles in an unsupervised manner. More commonly, however, we have some prior knowledge and interpretation of the profiles but do not wish to strictly enforce a known membership. For example, if we create categories with assigned meanings (weekdays, weekends, Sundays, Mondays, and so on), a day which is nominally a Monday but also happens to be a holiday, closure, or other unusual circumstances may be completely different from other Monday profiles. Similarly, a day with unusual extra activity (receptions, talks, etc.) may see behavior unique to its particular circumstances and warrant an additional category to represent it. We can accommodate both these possibilities by also sampling the values of the membership indicator variables sdc , i.e., the binary indicator that day d sees behavior from category c. To this end, let us assume we have some prior knowledge of these membership probabilities, pdc (sdc ); we may then re-sample from their posterior distributions at each iteration of MCMC. This sampling step is difficult to do outside the truncated representation. Although up until this point we could easily have elected to use, for example, the CRP formulation for sampling, the association variables {ydi , zdi } are tightly coupled with the memberships sdc since if any ydi = c we must have that sdc = 1. Instead, to sample the sdc we condition on the truncated rate functions ?c (t), with truncation depth M chosen to provide arbitrarily high precision. The likelihood of the data under these rate functions for any values of {sdc } can then be computed directly via (2) where X X sdc ?c (t). sdc ?c and f (t) = ? ?1 ?= c c In practice, we propose changing the value of each membership variable sdc individually given the others, though more complex moves could also be applied. This gives the following sequence of MCMC sampling: (1) given a truncated representation of the {?c (t)}, sample membership variables {sdc }; (2) given {?c (t)} and {sdc }, sample associations {zdi }; (3) given associations {zdi }, sample 3 50 50 40 40 30 30 20 20 2 1 10 0 0 6 12 18 24 (a) Sundays 0 0 10 6 12 18 (b) Mondays 24 0 0 6 12 18 24 (c) Tuesdays Figure 2: Posterior mean estimates of rate functions for building entry log data, estimated individually for each day (dotted) and learned by sharing information among multiple days (solid) for (a) Sundays, (b) Mondays, and (c) Tuesdays. Sharing information among similar days gives greatly improved estimates of the rate functions, resolving otherwise obscured features such as the decrease during and increase subsequent to lunchtime. category magnitudes {?c } and a truncated representation of each fc (t) consisting of weights {wj } and parameters {?j }. 
4 Experiments

In this section we consider the application of our model to two data sets, one (mentioned previously) from the entry log of people entering a large campus building (produced by optical sensors at the front door), and the other from a log of vehicular traffic accidents. By design, both data sets contain about ten weeks' worth of observations. In both cases, we have a plausible prior structure for and interpretation of the categories, i.e., that similar days will have similar profiles. To this end, we create categories for "all days", "weekends", "weekdays", and "Sundays" through "Saturdays". Each of these categories has a high probability ($p_{dc} = .99$) of membership for each eligible day. To account for the possibility of unusual increases in activity, we also add categories unique to each day, with lower prior probability ($p_{dc} = .20$) of membership. This allows but discourages each day to add a new category if there is evidence of unusual activity (a construction of this prior structure is sketched below).

4.1 Building Entry Data

To see the improvement in the estimated rate functions when information is shared among similar days, Figure 2 shows results from three different days of the week (Sunday, Monday, Tuesday). Each panel shows the estimated profiles of each of the ten days estimated individually (using only that day's observations) under a Dirichlet process mixture model (dotted lines). Superimposed in each panel is a single, black curve corresponding to the total profile for that day of week estimated using our categorical model; so, (a) shows the sum of the rate functions for "all days", "weekends", and "Sundays", while (b) shows the sum of "all days", "weekdays", and "Mondays". We use the same prior distributions for both the individual estimates and the shared estimate.

Several features are worth noting. First, by sharing several days' worth of observations, the model can produce a much more accurate estimate of the profiles. In this case, no single day contains enough observations to be confident about the details of the rate function, so each individually estimated rate function appears relatively smooth. However, when information from other days is included, the rate function begins to resolve into a clearly bi-modal shape for weekdays. This bi-modal rate behavior is quite real, and corresponds to the arrival of occupants in the morning (first mode), a lull during lunchtime, and a larger, narrower second peak as most occupants return from lunch. Second, although Monday and Tuesday profiles appear similar, they also have distinct behavior, such as increased activity late Tuesday morning. This behavior too has some basis in reality, corresponding to a regular weekly meeting held around lunchtime over most (though not quite all) of the weeks in question. The breakdown of a particular day (the first Tuesday) into its component categories is shown in Figure 3. As we might expect, there is little consistency between weekdays and weekends, quite a bit of similarity among weekdays and among just Tuesdays, and (for this particular day) very little to set it apart from other Tuesdays.
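For concreteness, the prior category structure described at the start of this section can be encoded as a membership-probability matrix; the sketch below is one possible construction, with indexing conventions and the function name chosen by us for illustration.

    import numpy as np

    def membership_priors(weekday_of_day):
        """Prior probabilities p_dc for the categories used in the experiments:
        'all days', 'weekend', 'weekday', one per day of week, one unique per day.

        weekday_of_day : length-D array with 0 = Monday, ..., 6 = Sunday
        """
        D = len(weekday_of_day)
        C = 3 + 7 + D                        # shared categories plus D uniques
        p = np.zeros((D, C))
        for d, w in enumerate(weekday_of_day):
            p[d, 0] = 0.99                   # 'all days'
            p[d, 1 if w >= 5 else 2] = 0.99  # weekend or weekday
            p[d, 3 + w] = 0.99               # day-of-week category
            p[d, 10 + d] = 0.20              # category unique to this day
        return p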
We can also check to see that the category memberships $s_{dc}$ are being used effectively. One of the Mondays in our data set fell on a holiday (the individual profile was very near zero). If we average the probabilities computed during MCMC to estimate the posterior probability of the $s_{dc}$ for that particular day, we find that it has near-zero probability of belonging to either the weekday or Monday categories, and uses only the all-day and unique categories. We can also examine days which have high probability of requiring their own category (indicating unusual activity). For this data set, we also have partial ground truth, consisting of a number of dates and times when activities were scheduled to take place in the building. Figure 4 shows three such days, and the corresponding rate profiles associated with their single-day categories. Again, all three days are estimated to have additional activity, and the period of time for that activity corresponds well with the actual start and end time shown in the schedule (dashed vertical lines).

Figure 3: Posterior mean estimates of the rate functions for each category to which the first Tuesday data might belong. For comparison, the total rate (sum of all categories) is shown as the dotted line. (a) The "all days" category is small, indicating little consistency in the data between weekdays and weekends; (b) the "weekdays" category is larger, and contains a component which appears to correspond to the occupants' return from lunch; (c) the "Tuesday" category has modes in the morning and afternoon, perhaps capturing regular meetings or classes; (d) the "unique" category (a category unique to this particular day) shows little or no activity.

Figure 4: Profiles associated with individual-day categories in the entry log data for several days with known events (periods between dashed vertical lines). The model successfully learns which days have significant unusual activity and associates reasonable profiles with that activity (note that increases in entrance count rate typically occur shortly before or at the beginning of the event time).

4.2 Vehicular Accident Data

Our second data set consists of a database of vehicular accident times recorded by North Carolina police departments. As we might expect of driving patterns, there is still less activity on weekends, but far more than was observed in the campus building log. As before, sharing information allows us to decrease our posterior uncertainty on the rate for any particular day. Figure 5 quantifies this idea by showing the posterior means and (point-wise) two-sigma confidence intervals for the rate function estimated for the same day (the first Monday in the data set) using that day's data only (red curves) and using the category-based additive model (black). The additive model leverages the additional data to produce much tighter estimates of the rate profile.

Figure 5: Posterior mean and uncertainty for a single day of accident data, estimated individually (red) and with data sharing (black). Sharing data considerably reduces the posterior uncertainty in the profile shape.

As with the previous example, the additional data also helps resolve detailed features of each day's profile, as seen in Figure 6.
For example, the weekday profiles show a tri-modal shape, with one mode corresponding to the morning commute, a small mode around noon, and another large mode around the evening commute. This also helps make the pattern of deviation on Friday clear, showing (as we would expect) increased activity at night.

Figure 6: Posterior mean estimates of rate functions for vehicular accidents, estimated individually for each day (dotted) and with sharing among multiple days (solid) for (a) Sundays, (b) Mondays, and (c) Fridays. As in Figure 2, sharing information helps resolve features which the individual days do not have enough data to reliably estimate.

5 Conclusions

The increasing availability of logs of "human activity" data provides interesting opportunities for the application of statistical learning techniques. In this paper we proposed a non-parametric Bayesian approach to learning time-intensity profiles for such activity data, based on an inhomogeneous Poisson process framework. The proposed approach allows collections of observations (e.g., days) to be grouped together by category (day of week, weekday/weekend, etc.), which in turn leverages data across different collections to yield higher quality profile estimates. When the categorization of days is not a priori certain (e.g., days that fall on a holiday or days with unusual non-recurring additional activity), the model can infer the appropriate categorization, allowing (for example) automated detection of unusual events. On two large real-world data sets the model was able to infer interpretable activity profiles that correspond to real-world phenomena. Directions for further work in this area include richer models that allow for incorporation of observed covariates such as weather and other exogenous phenomena, as well as modeling of multiple spatially-correlated sensors (e.g., loop sensor data for freeway traffic).

References

[1] S. Scott and P. Smyth. The Markov modulated Poisson process and Markov Poisson cascade with applications to web traffic data. Bayesian Statistics, 7:671-680, 2003.
[2] R. Helmers, I.W. Mangku, and R. Zitikis. Consistent estimation of the intensity function of a cyclic Poisson process. J. Multivar. Anal., 84(1):19-39, January 2003.
[3] R. Willett and R. Nowak. Multiscale Poisson intensity and density estimation. Submitted to IEEE Trans. IT, January 2005.
[4] A. Kottas. Bayesian nonparametric mixture modeling for the intensity function of non-homogeneous Poisson processes. Technical Report ams2005-02, Department of Applied Math and Statistics, U.C. Santa Cruz, Santa Cruz, CA, 2005.
[5] A. Kottas and B. Sanso. Bayesian mixture modeling for spatial Poisson process intensities, with applications to extreme value analysis. Technical Report ams2005-19, Dept. of Applied Math and Statistics, U.C. Santa Cruz, Santa Cruz, CA, 2005.
[6] B.W. Silverman. Density Estimation for Statistics and Data Analysis. Chapman & Hall, NY, 1986.
[7] Y.W. Teh, M.I. Jordan, M.J. Beal, and D.M. Blei. Hierarchical Dirichlet processes. In NIPS 17, 2004.
[8] D.R. Cox. Some statistical methods connected with series of events. J. R. Stat. Soc. B, 17:129-164, 1955.
[9] R.M. Neal. Markov chain sampling methods for Dirichlet process mixture models. J. of Comp. Graph. Stat., 9:283-297, 2000.
[10] M.D. Escobar and M. West. Bayesian density estimation and inference using mixtures. J. Amer. Stat. Assoc., 90:577-588, 1995.
[11] L.F. James. Functionals of Dirichlet processes, the Cifarelli-Regazzini identity and Beta-Gamma processes. Ann. Stat., 33(2):647-660, 2005.
[12] H. Ishwaran and L.F. James. Gibbs sampling methods for stick-breaking priors. J. Amer. Stat. Assoc., 96:161-173, 2001.
[13] H. Ishwaran and L.F. James. Approximate Dirichlet process computing in finite normal mixtures: smoothing and prior information. J. Comp. Graph. Statist., 11:508-532, 2002.
[14] J. Sethuraman. A constructive definition of Dirichlet priors. Statistica Sinica, 4:639-650, 1994.
Modeling Dyadic Data with Binary Latent Factors

Edward Meeds, Department of Computer Science, University of Toronto, [email protected]
Zoubin Ghahramani, Department of Engineering, Cambridge University, [email protected]
Radford Neal, Department of Computer Science, University of Toronto, [email protected]
Sam Roweis, Department of Computer Science, University of Toronto, [email protected]

Abstract

We introduce binary matrix factorization, a novel model for unsupervised matrix decomposition. The decomposition is learned by fitting a non-parametric Bayesian probabilistic model with binary latent variables to a matrix of dyadic data. Unlike bi-clustering models, which assign each row or column to a single cluster based on a categorical hidden feature, our binary feature model reflects the prior belief that items and attributes can be associated with more than one latent cluster at a time. We provide simple learning and inference rules for this new model and show how to extend it to an infinite model in which the number of features is not a priori fixed but is allowed to grow with the size of the data.

1 Distributed representations for dyadic data

One of the major goals of probabilistic unsupervised learning is to discover underlying or hidden structure in a dataset by using latent variables to describe a complex data generation process. In this paper we focus on dyadic data: our domains have two finite sets of objects/entities and observations are made on dyads (pairs with one element from each set). Examples include sparse matrices of movie-viewer ratings, word-document counts or product-customer purchases. A simple way to capture structure in this kind of data is to do "bi-clustering" (possibly using mixture models) by grouping the rows and (independently or simultaneously) the columns [6, 13, 9]. The modelling assumption in such a case is that movies come in $K$ types and viewers in $L$ types, and that knowing the type of movie and type of viewer is sufficient to predict the response. Clustering or mixture models are quite restrictive; their major disadvantage is that they do not admit a componential or distributed representation, because items cannot simultaneously belong to several classes. (A movie, for example, might be explained as coming from a cluster of "dramas" or "comedies"; a viewer as a "single male" or as a "young mother".) We might instead prefer a model (e.g. [10, 5]) in which objects can be assigned to multiple latent clusters: a movie might be a drama and have won an Oscar and have subtitles; a viewer might be single and female and a university graduate. Inference in such models falls under the broad area of factorial learning (e.g. [7, 1, 3, 12]), in which multiple interacting latent causes explain each observed datum. In this paper, we assume that both data items (rows) and attributes (columns) have this kind of componential structure: each item (row) has associated with it an unobserved vector of $K$ binary features; similarly each attribute (column) has a hidden vector of $L$ binary features. Knowing the features of the item and the features of the attribute are sufficient to generate (before noise) the response at that location in the matrix. In effect, we are factorizing a real-valued data (response) matrix $\mathbf{X}$ into (a distribution defined by) the product $\mathbf{U}\mathbf{W}\mathbf{V}^\top$, where $\mathbf{U}$ and $\mathbf{V}$ are binary feature matrices and $\mathbf{W}$ is a real-valued weight matrix.
Below, we develop this binary matrix factorization (BMF) model using Bayesian non-parametric priors over the number and values of the unobserved binary features and the unknown weights.

Figure 1: (A) The graphical model representation of the linear-Gaussian BMF model. The concentration parameter and Beta weights for the columns of $\mathbf{V}$ are represented by the symbols $\beta$ and $\pi_l$. (B) BMF shown pictorially.

2 BMF model description

Binary matrix factorization is a model of an $I \times J$ dyadic data matrix $\mathbf{X}$ with exchangeable rows and columns. The entries of $\mathbf{X}$ can be real-valued, binary, or categorical; BMF models suitable for each type are described below. Associated with each row is a latent binary feature vector $\mathbf{u}_i$; similarly each column has an unobserved binary vector $\mathbf{v}_j$. The primary parameters are represented by a matrix $\mathbf{W}$ of interaction weights. $\mathbf{X}$ is generated by a fixed observation process $f(\cdot)$ applied (elementwise) to the linear inner product of the features and weights, which is the "factorization" or approximation of the data:
$$\mathbf{X} \mid \mathbf{U}, \mathbf{V}, \mathbf{W} \sim f(\mathbf{U}\mathbf{W}\mathbf{V}^\top, \Theta) \qquad (1)$$
where $\Theta$ are extra parameters specific to the model variant. Three possible parametric forms for the noise (observation) distribution $f$ are: Gaussian, with mean $\mathbf{U}\mathbf{W}\mathbf{V}^\top$ and covariance $(1/\sigma_x)\mathbf{I}$; logistic, with mean $1/(1 + \exp(-\mathbf{U}\mathbf{W}\mathbf{V}^\top))$; and Poisson, with mean (and variance) $\mathbf{U}\mathbf{W}\mathbf{V}^\top$. Other parametric forms are also possible. For illustrative purposes, we will use the linear-Gaussian model throughout this paper; this can be thought of as a two-sided version of the linear-Gaussian model found in [5].

To complete the description of the model, we need to specify prior distributions over the feature matrices $\mathbf{U}, \mathbf{V}$ and the weights $\mathbf{W}$. We adopt the same priors over binary matrices as previously described in [5]. For finite sized matrices with $I$ rows and $K$ columns, we generate a bias $\pi_k$ independently for each column $k$ using a Beta prior (denoted $\mathcal{B}$) and then, conditioned on this bias, generate the entries in column $k$ independently from a Bernoulli with mean $\pi_k$:
$$\pi_k \mid \alpha, K \sim \mathcal{B}(\alpha/K, 1), \qquad P(\mathbf{U} \mid \boldsymbol{\pi}) = \prod_{i=1}^{I} \prod_{k=1}^{K} \pi_k^{u_{ik}} (1 - \pi_k)^{1 - u_{ik}} = \prod_{k=1}^{K} \pi_k^{n_k} (1 - \pi_k)^{I - n_k}, \qquad \alpha \sim \mathcal{G}(a_\alpha, b_\alpha),$$
where $n_k = \sum_i u_{ik}$. The hyperprior on the concentration $\alpha$ is a Gamma distribution (denoted $\mathcal{G}$), whose shape and scale hyperparameters control the expected fraction of zeros/ones in the matrix. The biases $\boldsymbol{\pi}$ are easily integrated out, which creates dependencies between the rows, although they remain exchangeable. The resulting prior depends only on the number $n_k$ of active features in each column. An identical prior is used on $\mathbf{V}$, with $J$ rows and $L$ columns, but with a different concentration prior $\beta$; this variable was set to 1 for all experiments.

The appropriate prior distribution over weights depends on the observation distribution $f(\cdot)$. For the linear-Gaussian variant, a convenient prior on $\mathbf{W}$ is a matrix normal with prior mean $\mathbf{W}_o$ and covariance $(1/\sigma_w)\mathbf{I}$. The scale $\sigma_w$ of the weights and output precision $\sigma_x$ (if needed) have Gamma hyperpriors:
$$\mathbf{W} \mid \mathbf{W}_o, \sigma_w \sim \mathcal{N}(\mathbf{W}_o, (1/\sigma_w)\mathbf{I}), \qquad \sigma_w \sim \mathcal{G}(a_w, b_w), \qquad \sigma_x \sim \mathcal{G}(a_x, b_x).$$
In certain cases, when the prior on the weights is conjugate to the output distribution model $f$, the weights may be analytically integrated out, expressing the marginal distribution of the data $\mathbf{X} \mid \mathbf{U}, \mathbf{V}$ only in terms of the binary features. This is true, for example, when we place a Gaussian prior on the weights and use a linear-Gaussian output process.
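As a sanity check on the model description, the following forward-sampler draws from the finite linear-Gaussian BMF model; it is a minimal sketch assuming $\mathbf{W}_o = 0$, with illustrative hyperparameter values.

    import numpy as np

    rng = np.random.default_rng(1)

    def sample_bmf(I, J, K, L, alpha=2.0, beta=2.0, sigma_w=1.0, sigma_x=10.0):
        """Forward-sample the finite linear-Gaussian BMF model of Section 2:
        pi_k ~ Beta(alpha/K, 1), u_ik ~ Bernoulli(pi_k) (likewise for V),
        W ~ N(0, (1/sigma_w) I), X ~ N(U W V^T, (1/sigma_x) I)."""
        pi_u = rng.beta(alpha / K, 1.0, size=K)
        U = (rng.random((I, K)) < pi_u).astype(float)
        pi_v = rng.beta(beta / L, 1.0, size=L)
        V = (rng.random((J, L)) < pi_v).astype(float)
        W = rng.normal(0.0, np.sqrt(1.0 / sigma_w), size=(K, L))
        X = U @ W @ V.T + rng.normal(0.0, np.sqrt(1.0 / sigma_x), size=(I, J))
        return X, U, V, W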
Remarkably, the Beta-Bernoulli prior distribution over $\mathbf{U}$ (and similarly $\mathbf{V}$) can easily be extended to the case where $K \to \infty$, creating a distribution over binary matrices with a fixed number $I$ of exchangeable rows and a potentially infinite number of columns (although the expected number of columns which are not entirely zero remains finite). Such a distribution, the Indian Buffet Process (IBP), was described by [5] and is analogous to the Dirichlet process and the associated Chinese restaurant process (CRP) [11]. Fortunately, as we will see, inference with this infinite prior is not only tractable, but is also nearly as efficient as the finite version.

3 Inference of features and parameters

As with many other complex hierarchical Bayesian models, exact inference of the latent variables $\mathbf{U}$ and $\mathbf{V}$ in the BMF model is intractable (i.e. there is no efficient way to sample exactly from the posterior nor to compute its exact marginals). However, as with many other non-parametric Bayesian models, we can employ Markov Chain Monte Carlo (MCMC) methods to create an iterative procedure which, if run for sufficiently long, will produce correct posterior samples.

3.1 Finite binary latent feature matrices

The posterior distribution of a single entry in $\mathbf{U}$ (or $\mathbf{V}$) given all other model parameters is proportional to the product of the conditional prior and the data likelihood. The conditional prior comes from integrating out the biases $\boldsymbol{\pi}$ in the Beta-Bernoulli model and is proportional to the number of active entries in other rows of the same column plus a term for new activations. Gibbs sampling for single entries of $\mathbf{U}$ (or $\mathbf{V}$) can be done using the following updates:
$$P(u_{ik} = 1 \mid \mathbf{U}_{-ik}, \mathbf{V}, \mathbf{W}, \mathbf{X}) = C\,(\alpha/K + n_{-i,k})\, P(\mathbf{X} \mid \mathbf{U}_{-ik}, u_{ik} = 1, \mathbf{V}, \mathbf{W}) \qquad (2)$$
$$P(u_{ik} = 0 \mid \mathbf{U}_{-ik}, \mathbf{V}, \mathbf{W}, \mathbf{X}) = C\,(1 + (I - 1) - n_{-i,k})\, P(\mathbf{X} \mid \mathbf{U}_{-ik}, u_{ik} = 0, \mathbf{V}, \mathbf{W}) \qquad (3)$$
where $n_{-i,k} = \sum_{h \neq i} u_{hk}$, $\mathbf{U}_{-ik}$ excludes entry $ik$, and $C$ is a normalizing constant. (Conditioning on $\alpha$, $K$ and $\sigma_x$ is implicit.) When conditioning on $\mathbf{W}$, we only need to calculate the ratio of likelihoods corresponding to row $i$. (Note that this is not the case when the weights are integrated out.) This ratio is a simple function of the model's predictions $\hat{x}^{+}_{ij} = \sum_{hl} u_{ih} v_{jl} w_{hl}$ (when $u_{ik} = 1$) and $\hat{x}^{-}_{ij} = \sum_{hl} u_{ih} v_{jl} w_{hl}$ (when $u_{ik} = 0$). In the linear-Gaussian case:
$$\log \frac{P(u_{ik} = 1 \mid \mathbf{U}_{-ik}, \mathbf{V}, \mathbf{W}, \mathbf{X})}{P(u_{ik} = 0 \mid \mathbf{U}_{-ik}, \mathbf{V}, \mathbf{W}, \mathbf{X})} = \log \frac{\alpha/K + n_{-i,k}}{1 + (I-1) - n_{-i,k}} - \frac{\sigma_x}{2} \sum_j \left[ (x_{ij} - \hat{x}^{+}_{ij})^2 - (x_{ij} - \hat{x}^{-}_{ij})^2 \right].$$

In the linear-Gaussian case, we can easily derive analogous Gibbs sampling updates for the weights $\mathbf{W}$ and hyperparameters. To simplify the presentation, we consider a "vectorized" representation of our variables. Let $\mathbf{x}$ be an $IJ$ column vector taken column-wise from $\mathbf{X}$, $\mathbf{w}$ be a $KL$ column vector taken column-wise from $\mathbf{W}$, and $\mathbf{A}$ be an $IJ \times KL$ binary matrix which is the Kronecker product $\mathbf{V} \otimes \mathbf{U}$. (In "Matlab notation", $\mathbf{x} = \mathbf{X}(:)$, $\mathbf{w} = \mathbf{W}(:)$ and $\mathbf{A} = \mathrm{kron}(\mathbf{V}, \mathbf{U})$.) In this notation, the data distribution is written as $\mathbf{x} \mid \mathbf{A}, \mathbf{w}, \sigma_x \sim \mathcal{N}(\mathbf{A}\mathbf{w}, (1/\sigma_x)\mathbf{I})$. Given values for $\mathbf{U}$ and $\mathbf{V}$, samples can be drawn for $\mathbf{w}$, $\sigma_x$, and $\sigma_w$ using the following posterior distributions (where conditioning on $\mathbf{W}_o, a_w, b_w, a_x, b_x$ is implicit):
$$\mathbf{w} \mid \mathbf{x}, \mathbf{A} \sim \mathcal{N}\!\left( \left(\mathbf{A}^\top \mathbf{A} + \tfrac{\sigma_w}{\sigma_x} \mathbf{I}\right)^{-1} \left(\mathbf{A}^\top \mathbf{x} + \tfrac{\sigma_w}{\sigma_x} \mathbf{w}_o\right),\; \tfrac{1}{\sigma_x}\left(\mathbf{A}^\top \mathbf{A} + \tfrac{\sigma_w}{\sigma_x} \mathbf{I}\right)^{-1} \right)$$
$$\sigma_w \sim \mathcal{G}\!\left(a_w + \tfrac{KL}{2},\; b_w + \tfrac{1}{2}(\mathbf{w} - \mathbf{w}_o)^\top (\mathbf{w} - \mathbf{w}_o)\right), \qquad \sigma_x \mid \mathbf{x}, \mathbf{A}, \mathbf{w} \sim \mathcal{G}\!\left(a_x + \tfrac{IJ}{2},\; b_x + \tfrac{1}{2}(\mathbf{x} - \mathbf{A}\mathbf{w})^\top (\mathbf{x} - \mathbf{A}\mathbf{w})\right).$$
Note that we do not have to explicitly compute the matrix $\mathbf{A}$. For computing the posterior of linear-Gaussian weights, the matrix $\mathbf{A}^\top\mathbf{A}$ can be computed as $\mathbf{A}^\top\mathbf{A} = \mathrm{kron}(\mathbf{V}^\top\mathbf{V}, \mathbf{U}^\top\mathbf{U})$; similarly, the expression $\mathbf{A}^\top\mathbf{x}$ is constructed by computing $\mathbf{U}^\top\mathbf{X}\mathbf{V}$ and taking the elements column-wise.
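The Kronecker identities above are the key computational trick; a sketch of the weight-sampling step that exploits them is given below. The function signature, the dense linear algebra, and the passed-in random generator are our simplifications for illustration.

    import numpy as np

    def sample_weights(X, U, V, sigma_w, sigma_x, w_o, rng):
        """Draw w | x, A ~ N(mu, (1/sigma_x) S^{-1}) without forming
        A = kron(V, U): A^T A = kron(V^T V, U^T U), A^T x = vec(U^T X V)."""
        K, L = U.shape[1], V.shape[1]
        AtA = np.kron(V.T @ V, U.T @ U)                  # KL x KL
        Atx = (U.T @ X @ V).flatten(order='F')           # column-wise vec
        S = sigma_x * AtA + sigma_w * np.eye(K * L)      # posterior precision
        mu = np.linalg.solve(S, sigma_x * Atx + sigma_w * w_o)
        chol = np.linalg.cholesky(np.linalg.inv(S))      # fine for a sketch
        return mu + chol @ rng.standard_normal(K * L)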
3.2 Infinite binary latent feature matrices

One of the most elegant aspects of non-parametric Bayesian modeling is the ability to use a prior which allows a countably infinite number of latent features. The number of instantiated features is automatically adjusted during inference and depends on the amount of data and how many features it supports. Remarkably, we can do MCMC sampling using such infinite priors with essentially no computational penalty over the finite case. To derive these updates (e.g. for row $i$ of the matrix $\mathbf{U}$), it is useful to consider partitioning the columns of $\mathbf{U}$ into two sets, as in the following illustration:

    set A      |  set B
    0 1 0 0 1  |  0 0 0 0 0 ...
    0 0 1 0 0  |  0 0 0 0 0 ...
    1 1 0 0 1  |  0 0 0 0 0 ...
    1 0 0 1 1  |  0 0 0 0 0 ...
    1 1 0 0 1  |  0 1 0 1 0 ...   <- row i
    0 1 0 0 0  |  0 0 0 0 0 ...
    0 0 0 1 0  |  0 0 0 0 0 ...
    1 0 0 0 1  |  0 0 0 0 0 ...

Let set A have at least one non-zero entry in rows other than $i$. Let set B be all other columns, including the set of columns where the only non-zero entries are found in row $i$, and the countably infinite number of all-zero columns. Sampling values for elements in row $i$ of set A given everything else is straightforward, and involves Gibbs updates almost identical to those in the finite case handled by equations (2) and (3); as $K \to \infty$, for $k$ in set A we get:
$$P(u_{ik} = 1 \mid \mathbf{U}_{-ik}, \mathbf{V}, \mathbf{W}, \mathbf{X}) = C \cdot n_{-i,k} \cdot P(\mathbf{X} \mid \mathbf{U}_{-ik}, u_{ik} = 1, \mathbf{V}, \mathbf{W}) \qquad (4)$$
$$P(u_{ik} = 0 \mid \mathbf{U}_{-ik}, \mathbf{V}, \mathbf{W}, \mathbf{X}) = C \cdot (1 + (I - 1) - n_{-i,k}) \cdot P(\mathbf{X} \mid \mathbf{U}_{-ik}, u_{ik} = 0, \mathbf{V}, \mathbf{W}) \qquad (5)$$

When sampling new values for set B, the columns are exchangeable, and so we are really only interested in the number of entries $n_B$ in set B which will be turned on in row $i$. Sampling the number of entries set to 1 can be done with Metropolis-Hastings updates. Let $J(n_B^{*} \mid n_B) = \mathrm{Poisson}\!\left(n_B^{*} \mid \alpha/(\alpha + I - 1)\right)$ be the proposal distribution for a move which replaces the current $n_B$ active entries with $n_B^{*}$ active entries in set B. The reverse proposal is $J(n_B \mid n_B^{*})$. The acceptance probability is $\min(1, r_{n_B \to n_B^{*}})$, where
$$r_{n_B \to n_B^{*}} = \frac{P(n_B^{*} \mid \alpha)\, J(n_B \mid n_B^{*})\, P(\mathbf{x} \mid n_B^{*})}{P(n_B \mid \alpha)\, J(n_B^{*} \mid n_B)\, P(\mathbf{x} \mid n_B)} = \frac{\mathrm{Poisson}\!\left(n_B^{*} \mid \alpha/(\alpha + I - 1)\right) J(n_B \mid n_B^{*})\, P(\mathbf{x} \mid n_B^{*})}{\mathrm{Poisson}\!\left(n_B \mid \alpha/(\alpha + I - 1)\right) J(n_B^{*} \mid n_B)\, P(\mathbf{x} \mid n_B)}. \qquad (6)$$
This assumes a conjugate situation in which the weights $\mathbf{W}$ are explicitly integrated out of the model to compute the marginal likelihood $P(\mathbf{x} \mid n_B^{*})$. In the non-conjugate case, a more complicated proposal is required. Instead of proposing $n_B^{*}$ alone, we jointly propose $n_B^{*}$ and associated feature weights $\mathbf{w}_B^{*}$ from their prior distributions. In the linear-Gaussian model, where $\mathbf{w}_B^{*}$ is a set of weights for features in set B, the proposal distribution is:
$$J(n_B^{*}, \mathbf{w}_B^{*} \mid n_B, \mathbf{w}_B) = \mathrm{Poisson}\!\left(n_B^{*} \mid \alpha/(\alpha + I - 1)\right)\, \mathrm{Normal}(\mathbf{w}_B^{*} \mid n_B^{*}), \qquad (7)$$
where $\mathrm{Normal}(\mathbf{w}_B^{*} \mid n_B^{*})$ denotes the Gaussian prior over the weights of the $n_B^{*}$ proposed features. We need actually sample only the finite portion of $\mathbf{w}_B^{*}$ where $u_{ik} = 1$. As in the conjugate case, the acceptance ratio reduces to the ratio of data likelihoods:
$$r_{n_B, \mathbf{w}_B \to n_B^{*}, \mathbf{w}_B^{*}} = \frac{P(\mathbf{x} \mid n_B^{*}, \mathbf{w}_B^{*})}{P(\mathbf{x} \mid n_B, \mathbf{w}_B)}. \qquad (8)$$
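A compact sketch of the Metropolis-Hastings update (6) for the conjugate case follows; `loglik` stands in for the marginal likelihood $\log P(\mathbf{x} \mid n)$ with the weights integrated out, which we assume is available as a helper.

    import numpy as np

    def resample_new_features(n_B, loglik, alpha, I, rng):
        """MH step (6) for the number of active set-B columns in row i.
        The Poisson proposal matches the prior, so those terms cancel
        and only the likelihood ratio remains."""
        lam = alpha / (alpha + I - 1)
        n_star = rng.poisson(lam)
        log_r = loglik(n_star) - loglik(n_B)
        if np.log(rng.random()) < min(0.0, log_r):
            return n_star
        return n_B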
3.3 Faster mixing transition proposals

The Gibbs updates described above for the entries of $\mathbf{U}$, $\mathbf{V}$ and $\mathbf{W}$ are the simplest moves we could make in a Markov Chain Monte Carlo inference procedure for the BMF model. However, these limited local updates may result in extremely slow mixing. In practice, we often implement larger moves in indicator space using, for example, Metropolis-Hastings proposals on multiple features for row $i$ simultaneously. For example, we can propose new values for several columns in row $i$ of matrix $\mathbf{U}$ by sampling feature values independently from their conditional priors. To compute the reverse proposal, we imagine forgetting the current configuration of those features for row $i$ and compute the probability under the conditional prior of proposing the current configuration. The acceptance probability of such a proposal is (the minimum of unity and) the ratio of likelihoods between the new proposed configuration and the current configuration.

Split-merge moves may also be useful for efficiently sampling from the posterior distribution of the binary feature matrices. Jain and Neal [8] describe split-merge algorithms for Dirichlet process mixture models with non-conjugate component distributions. We have developed and implemented similar split-merge proposals for binary matrices with IBP priors. Due to space limitations, we present here only a sketch of the procedure. Two nonzero entries in $\mathbf{U}$ are selected uniformly at random. If they are in the same column, we propose splitting that column; if they are in different columns, we propose merging their columns. The key difference between this algorithm and the Jain and Neal algorithm is that the binary features are not constrained to sum to unity in each row. Our split-merge algorithm also performs restricted Gibbs scans on columns of $\mathbf{U}$ to increase acceptance probability.

3.4 Predictions

A major reason for building generative models of data is to be able to impute missing data values given some observations. In the linear-Gaussian model, the predictive distribution at each iteration of the Markov chain is a Gaussian distribution. The interaction weights can be analytically integrated out at each iteration, also resulting in a Gaussian posterior, removing sampling noise contributed by having the weights explicitly represented. Computing the exact predictive distribution, however, conditional only on the model hyperparameters, is analytically intractable: it requires integrating over all binary matrices $\mathbf{U}$ and $\mathbf{V}$, and all other nuisance parameters (e.g., the weights and precisions). Instead we integrate over these parameters implicitly by averaging predictive distributions from many MCMC iterations. This posterior, which is conditional only on the observed data and hyperparameters, is a highly complex, potentially multimodal, non-linear function of the observed variables.

By averaging predictive distributions, our algorithm implicitly integrates over $\mathbf{U}$ and $\mathbf{V}$. In our experiments, we show samples from the posteriors of $\mathbf{U}$ and $\mathbf{V}$ to help explain what the model is doing, but we stress that the posterior may have significant mass on many possible binary matrices. The number of features and their degree of overlap will vary over MCMC iterations. Such variation will depend, for example, on the current value of $\alpha$ and $\beta$ (higher values will result in more features) and precision values (higher weight precision results in less variation in weights).

4 Experiments

4.1 Modified "bars" problem

A toy problem commonly used to illustrate additive feature or multiple cause models is the bars problem ([2, 12, 1]). Vertical and horizontal bars are combined in some way to generate data samples. The goal of the illustration is to show recovery of the latent structure in the form of bars. We have modified the typical usage of bars to accommodate the linear-Gaussian BMF with infinite features. Data consists of $I$ vectors of size $8^2$, where each vector can be reshaped into a square image.
The generation process is as follows: since $\mathbf{V}$ has the same number of rows as the dimension of the images, $\mathbf{V}$ is fixed to be a set of vertical and horizontal bars (when reshaped into an image). $\mathbf{U}$ is sampled from the IBP, and the global precisions $\sigma_x$ and $\sigma_w$ are set to $1/2$. The weights $\mathbf{W}$ are sampled from zero mean Gaussians. Model estimates of $\mathbf{U}$ and $\mathbf{V}$ were initialized from an IBP prior.

In Figure 2 we demonstrate the performance of the linear-Gaussian BMF on the bars data. We train the BMF with 200 training examples of the type shown in the top row in Figure 2. Some examples have their bottom halves labeled missing and are shown in the figure with constant grey values. To handle this, we resample their values at each iteration of the Markov chain. The bottom row shows the expected reconstruction using MCMC samples of $\mathbf{U}$, $\mathbf{V}$, and $\mathbf{W}$. Despite the relatively high noise levels in the data, the model is able to capture the complex relationships between bars and weights. The reconstruction of vertical bars is very good. The reconstruction of horizontal bars is good as well, considering that the model has no information regarding the existence of horizontal bars on the bottom half.

Figure 2: Bars reconstruction. (A) Bars randomly sampled from the complete dataset. The bottom half of these bars were removed and labeled missing during learning. (B) Noise-free versions of the same data. (C) The initial reconstruction. The missing values have been set to their expected value, 0, to highlight the missing region. (D) The average MCMC reconstruction of the entire image. (E) Based solely on the information in the top half of the original data, these are the noise-free nearest neighbours in pixel space.

Figure 3: Bars features. The top row shows values of $\mathbf{V}$ and $\mathbf{W}\mathbf{V}^\top$ used to generate the data. The second row shows a sample of $\mathbf{V}$ and $\mathbf{W}\mathbf{V}^\top$ from the Markov chain. $\mathbf{W}\mathbf{V}^\top$ can be thought of as a set of basis images which can be added together with binary coefficients ($\mathbf{U}$) to create images.

By examining the features captured by the model, we can understand the performance just described. In Figure 3 we show the generating, or true, values of $\mathbf{V}$ and $\mathbf{W}\mathbf{V}^\top$ along with one sample of those basis features from the Markov chain. Because the data are generated by adding together the multiple basis images shown on the right of Figure 3, multiple bars are used in each image. This is reflected in the captured features. The learned $\mathbf{W}\mathbf{V}^\top$ are fairly similar to the generating $\mathbf{W}\mathbf{V}^\top$, but the former are composed of overlapping bar structure (learned $\mathbf{V}$).

4.2 Digits

In Section 2 we briefly stated that BMF can be applied to data models other than the linear-Gaussian model. We demonstrate this with a logistic BMF applied to binarized images of handwritten digits. We train logistic BMF with 100 examples each of digits 1, 2, and 3 from the USPS dataset. In the first five rows of Figure 4 we again illustrate the ability of BMF to impute missing data values. The top row shows all 16 samples from the dataset which had their bottom halves labeled missing. Missing values are filled in at each iteration of the Markov chain. In the third and fourth rows we show the mean and mode ($P(x_{ij} = 1) > 0.5$) of the BMF reconstruction. In the bottom row we have shown the nearest neighbors, in pixel space, to the training examples based only on the top halves of the original digits. In the last three rows of Figure 4 we show the features captured by the model.
In row F, we show the average image of the data which have each feature of $\mathbf{U}$ on. It is clear that some features have distinct digit forms and others are overlapping. In row G, the basis images $\mathbf{W}\mathbf{V}^\top$ are shown. By adjusting the features that are non-zero in each row of $\mathbf{U}$, images are composed by adding basis images together. Finally, in row H we show $\mathbf{V}$. These pixel features mask out different regions in pixel space, which are weighted together to create the basis images. Note that there are $K$ features in rows F and G, and $L$ features in row H.

Figure 4: Digits reconstruction. (A) Digits randomly sampled from the complete dataset. The bottom half of these digits were removed and labeled missing during learning. (B) The data shown to the algorithm. The top half is the original data value. (C) The mean of the reconstruction for the bottom halves. (D) The mode reconstruction of the bottom halves. (E) The nearest neighbours of the original data are shown in the bottom half, and were found based solely on the information from the top halves of the images. (F) The average of all digits for each feature. (G) The features $\mathbf{W}\mathbf{V}^\top$ reshaped in the form of digits. By adding these features together, as selected by the rows of $\mathbf{U}$, reconstructions of the digits are possible. (H) $\mathbf{V}$ reshaped into the form of digits. The first image represents a bias feature.

4.3 Gene expression data

Gene expression data is able to exhibit multiple and overlapping clusters simultaneously; finding models for such complex data is an interesting and active research area ([10], [13]). The plaid model [10], originally introduced for analysis of gene expression data, can be thought of as a non-Bayesian special case of our model in which the matrix $\mathbf{W}$ is diagonal and the number of binary features is fixed. Our goal in this experiment is merely to illustrate qualitatively the ability of BMF to find multiple clusters in gene expression data, some of which are overlapping, others non-overlapping. The data in this experiment consists of rows corresponding to genes and columns corresponding to patients; the patients suffer from one of two types of acute leukemia [4]. In Figure 5 we show the factorization produced by the final state in the Markov chain. The rows and columns of the data and its expected reconstruction are ordered such that contiguous regions in $\mathbf{X}$ were observable. Some of the many feature pairings are highlighted. The BMF clusters consist of broad, overlapping clusters, and small, non-overlapping clusters. One of the interesting possibilities of using BMF to model gene expression data would be to fix certain columns of $\mathbf{U}$ or $\mathbf{V}$ with knowledge gained from experiments or literature, and to allow the model to add new features that help explain the data in more detail.

5 Conclusion

We have introduced a new model, binary matrix factorization, for unsupervised decomposition of dyadic data matrices. BMF makes use of non-parametric Bayesian methods to simultaneously discover binary distributed representations of both rows and columns of dyadic data. The model explains each row and column entity using a componential code composed of multiple binary latent features, along with a set of parameters describing how the features interact to produce the observed responses at each position in the matrix. BMF is based on a hierarchical Bayesian model and can be naturally extended to make use of a prior distribution which permits an infinite number of features, at very little extra computational cost.
We have given MCMC algorithms for posterior inference of both the binary factors and the interaction parameters conditioned on some observed data, and demonstrated the model's ability to capture overlapping structure and model complex joint distributions on a variety of data. BMF is fundamentally different from bi-clustering algorithms because of its distributed latent representation, and from factorial models with continuous latent variables which interact linearly to produce the observations. This allows a much richer latent structure, which we believe makes BMF useful for many applications beyond the ones we outlined in this paper.

Figure 5: Gene expression results. (A) The top-left is $\mathbf{X}$, sorted according to contiguous features in the final $\mathbf{U}$ and $\mathbf{V}$ in the Markov chain. The bottom-left is $\mathbf{V}^\top$ and the top-right is $\mathbf{U}$. The bottom-right is $\mathbf{W}$. (B) The same as (A), but with $\mathbf{X}$ replaced by its expected value $\hat{\mathbf{X}} = \mathbf{U}\mathbf{W}\mathbf{V}^\top$. We have highlighted regions that have both $u_{ik}$ and $v_{jl}$ on. For clarity, we have only shown the (at most) two largest contiguous regions for each feature pair.

References

[1] P. Dayan and R. S. Zemel. Competition and multiple cause models. Neural Computation, 7(3), 1995.
[2] P. Foldiak. Forming sparse representations by local anti-Hebbian learning. Biological Cybernetics, 64, 1990.
[3] Z. Ghahramani. Factorial learning and the EM algorithm. In NIPS, volume 7. MIT Press, 1995.
[4] T. R. Golub, D. K. Slonim, P. Tamayo, C. Huard, M. Gaasenbeek, J. P. Mesirov, H. Coller, M. L. Loh, J. R. Downing, M. A. Caligiuri, C. D. Bloomfield, and E. S. Lander. Molecular classification of cancer: Class discovery and class prediction by gene expression monitoring. Science, 286(5439), 1999.
[5] T. Griffiths and Z. Ghahramani. Infinite latent feature models and the Indian buffet process. In NIPS, volume 18. MIT Press, 2005.
[6] J. A. Hartigan. Direct clustering of a data matrix. Journal of the American Statistical Association, 67, 1972.
[7] G. Hinton and R. S. Zemel. Autoencoders, minimum description length, and Helmholtz free energy. In NIPS, volume 6. Morgan Kaufmann, 1994.
[8] S. Jain and R. M. Neal. Splitting and merging for a nonconjugate Dirichlet process mixture model. To appear in Bayesian Analysis.
[9] C. Kemp, J. B. Tenenbaum, T. L. Griffiths, T. Yamada, and N. Ueda. Learning systems of concepts with an infinite relational model. Proceedings of the Twenty-First National Conference on Artificial Intelligence, 2006.
[10] L. Lazzeroni and A. Owen. Plaid models for gene expression data. Statistica Sinica, 12, 2002.
[11] J. Pitman. Combinatorial stochastic processes. Lecture Notes for St. Flour Course, 2002.
[12] E. Saund. A multiple cause mixture model for unsupervised learning. Neural Computation, 7(1), 1994.
[13] R. Tibshirani, T. Hastie, M. Eisen, D. Ross, D. Botstein, and P. Brown. Clustering methods for the analysis of DNA microarray data. Technical report, Department of Statistics, Stanford University, 1999.
Implicit Surfaces with Globally Regularised and Compactly Supported Basis Functions

Christian Walder*†, Bernhard Schölkopf* & Olivier Chapelle*
* Max Planck Institute for Biological Cybernetics, 72076 Tübingen, Germany
† The University of Queensland, Brisbane, Queensland 4072, Australia
[email protected]

Abstract

We consider the problem of constructing a function whose zero set is to represent a surface, given sample points with surface normal vectors. The contributions include a novel means of regularising multi-scale compactly supported basis functions that leads to the desirable properties previously only associated with fully supported bases, and show equivalence to a Gaussian process with modified covariance function. We also provide a regularisation framework for simpler and more direct treatment of surface normals, along with a corresponding generalisation of the representer theorem. We demonstrate the techniques on 3D problems of up to 14 million data points, as well as 4D time series data.

1 Introduction

The problem of reconstructing a surface from a set of points frequently arises in computer graphics. Numerous methods of sampling physical surfaces are now available, including laser scanners, optical triangulation systems and mechanical probing methods. Inferring a surface from millions of points sampled with noise is a non-trivial task however, for which a variety of methods have been proposed. The class of implicit or level set surface representations is a rather large one, however other methods have also been suggested; for a review see [1]. The implicit surface methods closest to the present work are those that construct the implicit using regularised function approximation [2], such as the "Variational Implicits" of Turk and O'Brien [3], which produce excellent results, but at a cubic computational fitting cost in the number of points. The effectiveness of this type of approach is undisputed however, and has led researchers to look for ways to overcome the computational problems. Two main options have emerged. The first approach uses compactly supported kernel functions (we define and discuss kernel functions in Section 2), leading to fast algorithms that are easy to implement [4]. Unfortunately however, these methods are suitable for benign data sets only. As noted in [5], compactly supported basis functions "yield surfaces with many undesirable artifacts in addition to the lack of extrapolation across holes". A similar conclusion was reached in [6], which states that local processing methods are "more sensitive to the quality of input data [than] approximation and interpolation techniques based on globally-supported radial basis functions", a conclusion corroborated by the results within a different paper from the same group [7]. The second means of overcoming the aforementioned computational problem does not suffer from these problems however, as demonstrated by the FastRBF™ algorithm [5], which uses the Fast Multipole Method (FMM) [8] to overcome the computational problems of non-compactly supported kernels. The resulting method is non-trivial to implement however, and to date exists only in the proprietary FastRBF™ package. We believe that by applying them in a different manner, compactly supported basis functions can lead to high quality results, and the present work is an attempt to bring the reader to the same conclusion.
In Section 3 we introduce a new technique for regularising such basis functions which allows high quality, highly scalable algorithms that are relatively easy to implement. We also show that the approximation can be interpreted as a Gaussian process with modified covariance function. Before doing so however, we present in Section 2 the other main contribution of the present work, which is to show how surface normal vectors can be incorporated directly into the regularised regression framework that is typically used for fitting implicit surfaces, thereby avoiding the problematic approach of constructing "off-surface" points for the regression problem. To demonstrate the effectiveness of the method we apply it to various problems in Section 4 before summarising in the final Section 5.

Figure 1: (a) Rendered implicit surface model of "Lucy", constructed from 14 million points with normals. (b) A planar slice that cuts the nose; the colour represents the value of the embedding function and the black line its zero level. (c) A black dot at each of the 364,982 compactly supported basis function centres which, along with the corresponding dilations and magnitudes, define the implicit.

2 Implicit Surface Fitting by Regularised Regression

Here we discuss the use of regularised regression [2] for the problem of implicit surface fitting. In Section 2.1 we motivate and introduce a clean and direct means of making use of normal vectors. The following Section 2.2 extends the ideas in Section 2.1 by formally generalising the important representer theorem. The final Section 2.3 discusses the choice of regulariser (and associated kernel function), as well as the associated computational problems that we overcome in Section 3.

2.1 Regression Based Approaches and the Use of Normal Vectors

Typically, implicit surface fitting has been done by solving a regularised regression problem [5, 4]:
$$\arg\min_f \; \|f\|_{\mathcal{H}}^2 + C \sum_{i=1}^{m} \left( f(x_i) - y_i \right)^2, \qquad (1)$$
where the $y_i$ are some estimate of the signed distance function at the $x_i$, and $f$ is the embedding function which takes on the value zero on the implicit surface. The norm $\|f\|_{\mathcal{H}}$ is a regulariser which takes on larger values for less "smooth" functions. We take $\mathcal{H}$ to be a reproducing kernel Hilbert space (RKHS) with representer of evaluation (kernel function) $k(\cdot, \cdot)$, so that we have the reproducing property, $f(x) = \langle f, k(x, \cdot) \rangle_{\mathcal{H}}$. The solution to this problem has the form
$$f(x) = \sum_{i=1}^{m} \alpha_i k(x_i, x). \qquad (2)$$
Note as a technical aside that the thin-plate kernel case, which we will adopt, requires a somewhat more technical interpretation, as the kernel is only conditionally positive definite. We discuss the positive definite case for clarity only, as it is simpler and yet sufficient to demonstrate the ideas involved. Choosing the $(x_i, y_i)$ pairs for (2) is itself a non-trivial problem, and heuristics are typically used to prevent contradictory target values (see e.g. [5]). We now propose a more direct method, novel in the context of implicit fitting, which avoids these problems. The approach is suggested by the fact that the normal direction of the implicit surface is given by the gradient of the embedding function; thus normal vectors can be incorporated by regression with gradient targets. The function that we seek is the minimiser of
$$\|f\|_{\mathcal{H}}^2 + C_1 \sum_{i=1}^{m} \left( f(x_i) \right)^2 + C_2 \sum_{i=1}^{m} \left\| (\nabla f)(x_i) - \mathbf{n}_i \right\|_{\mathbb{R}^d}^2, \qquad (3)$$
which uses the given surface point/normal pairs $(x_i, \mathbf{n}_i)$ directly. By imposing stationarity and using the reproducing property we can solve for the optimal $f$.
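Before giving the result for (3), it may help to see the baseline objective (1) in code: with a strictly positive definite kernel, the coefficients of (2) solve $(K + \mathbf{I}/C)\alpha = \mathbf{y}$. A minimal sketch follows; the thin-plate case adopted later requires the conditionally positive definite treatment and is not handled here.

    import numpy as np

    def fit_implicit(X, y, C, kernel):
        """Minimise ||f||_H^2 + C * sum_i (f(x_i) - y_i)^2; by (2) the
        solution is f(x) = sum_i a_i k(x_i, x) with (K + I/C) a = y."""
        m = len(X)
        K = np.array([[kernel(xi, xj) for xj in X] for xi in X])
        a = np.linalg.solve(K + np.eye(m) / C, y)
        return lambda x: sum(ai * kernel(xi, x) for ai, xi in zip(a, X))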
A detailed derivation of this procedure is given in [1]. Here we provide only the result, which is that we have to solve for $m$ coefficients $\alpha_i$ as well as a further $md$ coefficients $\beta_{li}$ to obtain the optimal solution

$f(x) = \sum_{i=1}^m \alpha_i k(x_i, x) + \sum_{i=1}^m \sum_{l=1}^d \beta_{li} \, k_l(x_i, x)$,   (4)

where we define $k_l(x_i, x) \doteq [(\nabla k)(x_i, x)]_l$, the partial derivative of $k$ in the $l$-th component of its first argument.[1] The coefficients $\alpha$ and $\beta_l$ of the solution are found by solving the system given by

$0 = (K + I/C_1)\alpha + \sum_l K_l \beta_l$   (5)

$N_m = K_m \alpha + (K_{mm} + I/C_2)\beta_m + \sum_{l \neq m} K_{lm} \beta_l, \quad m = 1 \ldots d$   (6)

where, writing $k_{lm}$ for the second derivatives of $k(\cdot,\cdot)$ (defined similarly to the first), we've defined

$[N_l]_i = [n_i]_l$; $[\beta_l]_i = \beta_{li}$; $[K_l]_{i,j} = k_l(x_i, x_j)$; $[\alpha]_i = \alpha_i$; $[K]_{i,j} = k(x_i, x_j)$; $[K_{lm}]_{i,j} = k_{lm}(x_i, x_j)$.

In summary, minimum norm approximation in an RKHS with gradient target values is optimally solved by a function in the span of the kernels and derivatives thereof as per Equation 4 (cf. Equation 2), and the coefficients of the solution are given by Equations (5) and (6). It turns out, however, that we can make a more general statement, which we do briefly in the next sub-section.

2.2 The Representer Theorem with Linear Operators

The representer theorem, much celebrated in the machine learning community, says that the function minimising an RKHS norm along with some penalties associated with the function value at various points (as in Equation 1 for example) is a sum of kernel functions at those points (as in Equation 2). As we saw in the previous section however, if gradients also appear in the risk function to be minimised, then gradients of the kernel function appear in the optimal solution. We now make a more general statement; the case in the previous section corresponds to the following if we choose the linear operators $L_i$ (which we define shortly) as either identities or partial derivatives. The theorem is a generalisation of [9] (using the same proof idea) with equivalence if we choose all $L_i$ to be identity operators. The case of general linear operators was in fact dealt with already in [2] (which merely states the earlier result in [10]), but only for the case of a specific loss function $c$. The following theorem therefore combines the two frameworks:

Theorem 1. Denote by $X$ a non-empty set, by $k$ a reproducing kernel with reproducing kernel Hilbert space $H$, by $\Omega$ a strictly monotonic increasing real-valued function on $[0, \infty)$, by $c : \mathbb{R}^m \to \mathbb{R} \cup \{\infty\}$ an arbitrary cost function, and by $L_1, \ldots, L_m$ a set of linear operators $H \to H$. Each minimiser $f \in H$ of the regularised risk functional

$c\left( (L_1 f)(x_1), \ldots, (L_m f)(x_m) \right) + \Omega\left( \|f\|_H^2 \right)$   (7)

admits the form

$f = \sum_{i=1}^m \alpha_i L_i^* k_{x_i}$,   (8)

where $k_x \doteq k(\cdot, x)$ and $L_i^*$ denotes the adjoint of $L_i$.

[1] Square brackets with subscripts indicate matrix elements: $[a]_i$ is the $i$-th element of the vector $a$.

Proof. Decompose $f$ into $f = \sum_{i=1}^m \alpha_i L_i^* k_{x_i} + f_\perp$, with $\alpha_i \in \mathbb{R}$ and $\langle f_\perp, L_i^* k_{x_i} \rangle_H = 0$ for each $i = 1 \ldots m$. Due to the reproducing property we can write, for $j = 1 \ldots m$,

$(L_j f)(x_j) = \langle (L_j f), k(\cdot, x_j) \rangle_H = \sum_{i=1}^m \alpha_i \langle L_j L_i^* k_{x_i}, k(\cdot, x_j) \rangle_H + \langle (L_j f_\perp), k(\cdot, x_j) \rangle_H = \sum_{i=1}^m \alpha_i \langle L_j L_i^* k_{x_i}, k(\cdot, x_j) \rangle_H$.

Thus, the first term in Equation 7 is independent of $f_\perp$. Moreover, it is clear due to orthogonality that if $f_\perp \neq 0$ then

$\left\| \sum_{i=1}^m \alpha_i L_i^* k_{x_i} + f_\perp \right\|_H^2 > \left\| \sum_{i=1}^m \alpha_i L_i^* k_{x_i} \right\|_H^2$,

so that for any fixed $\alpha_i \in \mathbb{R}$, Equation 7 is minimised when $f_\perp = 0$.
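To make Equations (5) and (6) concrete, the following is a minimal numerical sketch (our own illustration, not the paper's implementation) that assembles and solves the block system for a toy 2D curve. It assumes a Gaussian kernel, which is positive definite and so avoids the conditionally-positive-definite treatment the thin-plate kernel requires; the kernel width, loss weights, toy data, and derivative sign conventions are illustrative assumptions following the reconstructed equations above.

```python
import numpy as np

# Toy data (assumed): m points on the unit circle with outward unit normals.
m, d, s2, C1, C2 = 40, 2, 0.5, 1e4, 1e4
t = np.linspace(0.0, 2.0 * np.pi, m, endpoint=False)
X = np.c_[np.cos(t), np.sin(t)]
N = X.copy()

diff = X[:, None, :] - X[None, :, :]                   # x_i - x_j
K = np.exp(-(diff ** 2).sum(-1) / (2.0 * s2))          # Gaussian k(x_i, x_j)
Kl = [-(diff[..., l] / s2) * K for l in range(d)]      # k_l: derivative in 1st arg
Klm = [[((1.0 if l == q else 0.0) / s2
         - diff[..., l] * diff[..., q] / s2 ** 2) * K
        for q in range(d)] for l in range(d)]          # mixed second derivatives

# Block system of Equations (5) and (6); unknowns are alpha and beta_1..beta_d.
A = np.zeros(((d + 1) * m, (d + 1) * m))
b = np.zeros((d + 1) * m)
A[:m, :m] = K + np.eye(m) / C1
for l in range(d):
    A[:m, (l + 1) * m:(l + 2) * m] = Kl[l]
for q in range(d):
    A[(q + 1) * m:(q + 2) * m, :m] = Kl[q]
    for l in range(d):
        blk = Klm[q][l] + (np.eye(m) / C2 if l == q else 0.0)
        A[(q + 1) * m:(q + 2) * m, (l + 1) * m:(l + 2) * m] = blk
    b[(q + 1) * m:(q + 2) * m] = N[:, q]

sol = np.linalg.solve(A, b)
alpha, beta = sol[:m], sol[m:].reshape(d, m)

def f(x):
    """Embedding function of Equation (4); approximately zero on the circle."""
    dx = x - X                                         # x - x_i, shape (m, d)
    kx = np.exp(-(dx ** 2).sum(-1) / (2.0 * s2))
    grad = (dx / s2) * kx[:, None]                     # k_l(x_i, x) for each l
    return alpha @ kx + np.sum(beta.T * grad)
```

The dense solve here is the cubic bottleneck the paper sets out to remove; the compactly supported scheme of Section 3 replaces it with a sparse system.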
2.3 Thin Plate Regulariser and Associated Kernel

As is well known (see e.g. [2]), the choice of regulariser (the function norm in Equation 3) leads to a particular kernel function $k(\cdot,\cdot)$ to be used in Equation 4. For geometrical problems, an excellent regulariser is the thin-plate energy, which for arbitrary order $m$ and dimension $d$ is given by [2]:

$\|f\|_H^2 = \langle \Upsilon f, \Upsilon f \rangle_{L_2}$   (9)

$= \sum_{i_1=1}^d \cdots \sum_{i_m=1}^d \int_{x_1=-\infty}^{\infty} \cdots \int_{x_d=-\infty}^{\infty} \left( \frac{\partial^m f}{\partial x_{i_1} \cdots \partial x_{i_m}} \right)^2 dx_1 \ldots dx_d$,   (10)

where $\Upsilon$ is a regularisation operator taking all partial derivatives of order $m$, which corresponds to a "radial" kernel function of the form $k(x, y) = t(\|x - y\|)$, where [11]

$t(r) = \begin{cases} r^{2m-d} \ln(r) & \text{if } 2m > d \text{ and } d \text{ is even,} \\ r^{2m-d} & \text{otherwise.} \end{cases}$

There are a number of good reasons to use this regulariser rather than those leading to compactly supported kernels, as we touched on in the introduction. The main problem with compactly supported kernels is that the corresponding regularisers are somewhat poor for geometrical problems: they always draw the function towards some nominal constant as one moves away from the data, thereby implementing the non-intuitive behaviour of regularising the constant function and making interpolation impossible; for further discussion see [1] as well as [5, 6, 7]. The scheme we propose in Section 3 solves these problems, previously associated with compactly supported basis functions, by defining and computing the regulariser separately from the function basis.

3 A Fast Scheme using Compactly Supported Basis Functions

Here we present a fast approximate scheme for solving the problem of the previous section, in which we restrict the class of functions to the span of a compactly supported, multi-scale basis, as described in Section 3.1, and minimise the thin-plate regulariser within this span as per Section 3.2.

3.1 Restricting the Set of Available Functions

Computationally, using the thin-plate spline leads to the problem that the linear system we need to solve (Equations 5 and 6), which is of size $m(d+1)$, is dense in the sense of having almost all non-zero entries. Since solving such a system naïvely has a cubic time complexity in $m$, we propose forcing $f(\cdot)$ to take the form

$f(\cdot) = \sum_{k=1}^p \alpha_k f_k(\cdot)$,   (11)

where the individual basis functions are $f_k(\cdot) = \phi(\|v_k - \cdot\| / s_k)$ for some function $\phi : \mathbb{R}^+ \to \mathbb{R}$ with support $[0, 1)$. The $v_k$ and $s_k$ are the basis function centres and dilations (or scales), respectively. For $\phi$ we choose the $B_3$-spline function

$\phi(r) = \frac{1}{d!} \sum_{n=0}^{d+1} \binom{d+1}{n} (-1)^n \left( r + \left( \frac{d+1}{2} - n \right) \right)_+^d$,   (12)

although this choice is rather inconsequential since, as we shall ensure, the regulariser is unrelated to the function basis; any smooth compactly supported basis function could be used. In order to achieve the same interpolating properties as the thin-plate spline, we wish to minimise our regularised risk function given by Equation 3 within the span of Equation 11. The key to doing this is to note that, as given before in Equation 9, the regulariser (function norm) can be written as $\|f\|_H^2 = \langle \Upsilon f, \Upsilon f \rangle_{L_2}$. Given this fact, a straightforward calculation leads to the following system for the optimal $\alpha_k$ (in the sense of minimising Equation 3):

$\left( K_{reg} + C_1 K_{xv}^T K_{xv} + C_2 \sum_{l=1}^d K_{xv_l}^T K_{xv_l} \right) \alpha = C_2 \sum_{l=1}^d K_{xv_l}^T N_l$,   (13)

where we have defined the following matrices: $[K_{reg}]_{k,k'} = \langle \Upsilon f_k, \Upsilon f_{k'} \rangle_{L_2}$; $[K_{xv}]_{i,k} = f_k(x_i)$; $[K_{xv_l}]_{i,k} = [(\nabla f_k)(x_i)]_l$; $[\alpha]_k = \alpha_k$; $[N_l]_i = [n_i]_l$.
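As a concrete illustration of the restricted function class of Equation 11, here is a small sketch (our own, not the paper's code) that evaluates a compactly supported multi-scale basis and assembles the sparse matrix $K_{xv}$ with a fast range search. The Wendland-type bump below is a stand-in for the $B_3$-spline of Equation 12; as the text notes, any smooth compactly supported $\phi$ will do.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.spatial import cKDTree

def phi(r):
    """Wendland-type C^2 bump with support [0, 1): a stand-in for Eq. (12)."""
    r = np.asarray(r)
    return np.where(r < 1.0, (1.0 - r) ** 4 * (4.0 * r + 1.0), 0.0)

def design_matrix(X, V, S):
    """Sparse [K_xv]_{i,k} = f_k(x_i) = phi(||v_k - x_i|| / s_k).
    X: (m, d) data points; V: (p, d) centres v_k; S: (p,) dilations s_k."""
    tree = cKDTree(X)                  # fast range searches over the data
    rows, cols, vals = [], [], []
    for k in range(len(V)):
        idx = tree.query_ball_point(V[k], S[k])   # points inside the support
        if not idx:
            continue
        r = np.linalg.norm(X[idx] - V[k], axis=1) / S[k]
        rows.extend(idx)
        cols.extend([k] * len(idx))
        vals.extend(phi(r))
    return csr_matrix((vals, (rows, cols)), shape=(len(X), len(V)))
```

Each data point touches only the few basis functions whose supports contain it, which is what makes the system of Equation 13 sparse.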
The computational advantage is that the coefficients that we need are now given by a sparse $p$-dimensional positive semi-definite linear system, which can be constructed efficiently by simple code that takes advantage of software libraries for fast nearest neighbour type searches (see e.g. [12]). The system can then be solved efficiently using conjugate gradient type methods. In [1] we describe how we construct a basis with $p \ll m$ that results in a highly sparse linear system, but that still contains good solutions. The critical matter of computing $K_{reg}$ is dealt with next.

3.2 Computing the Regularisation Matrix

We now come to the crucial point of calculating $K_{reg}$, which can be thought of as the regularisation matrix. The present section is highly related to [13], however there numerical methods were resorted to for the calculation of $K_{reg}$; presently we shall derive closed form solutions. Also worth comparing to the present section is [14], where a prior over the expansion coefficients (here the $\alpha$) is used to mimic a given regulariser within an arbitrary basis, achieving a similar result but without the computational advantages we are aiming for. As we have already noted we can write $\|f\|_H^2 = \langle \Upsilon f, \Upsilon f \rangle_{L_2}$ [2], so that for the function given by Equation 11 we have:

$\left\| \Upsilon \sum_{j=1}^p \alpha_j f_j(\cdot) \right\|_{L_2}^2 = \left\langle \Upsilon \sum_{j=1}^p \alpha_j f_j(\cdot), \, \Upsilon \sum_{k=1}^p \alpha_k f_k(\cdot) \right\rangle_{L_2} = \sum_{j,k=1}^p \alpha_j \alpha_k \langle \Upsilon f_j(\cdot), \Upsilon f_k(\cdot) \rangle_{L_2} = \alpha^T K_{reg} \alpha$.

To build the sparse matrix $K_{reg}$, a fast range search library (e.g. [12]) can be used to identify the non-zero entries, that is, all those $[K_{reg}]_{i,j}$ for which $i$ and $j$ satisfy $\|v_i - v_j\| \leq s_i + s_j$. In order to evaluate $\langle \Upsilon f_j(\cdot), \Upsilon f_k(\cdot) \rangle_{L_2}$, it is necessary to solve the integral of Equation 10, the full derivation of which we relegate to [1]; here we just provide the main results. It turns out that since the $f_i$ are all dilations and translations of the same function $\phi(\|\cdot\|)$, it is sufficient to solve for the following function of $s_i$, $s_j$ and $d \doteq v_i - v_j$:

$\langle \Upsilon \phi((\cdot) s_i - d), \, \Upsilon \phi((\cdot) s_j) \rangle_{L_2} = F_\omega^{-1} \left[ \frac{(2\pi j \|\omega\|)^{2m}}{|s_1 s_2|} \, \bar{\Phi}\!\left( \frac{\omega}{s_1} \right) \Phi\!\left( \frac{\omega}{s_2} \right) \right](d)$,   (14)

where $j^2 = -1$, $\Phi = F_x[\phi(x)]$, and by $F$ (and $F^{-1}$) we mean the Fourier (inverse Fourier) transform operators in the subscripted variable. Computing Fourier transforms in $d$ dimensions is difficult in general, but for radial functions $g(x) = g_r(\|x\|)$ it may be made easier by the fact that the Fourier transform in $d$ dimensions (as well as its inverse) can be computed by the single integral

$F_x[g_r(\|x\|)](\|\omega\|) = \frac{(2\pi)^{d/2}}{\|\omega\|^{(d-2)/2}} \int_0^\infty r^{d/2} g_r(r) \, J_{(d-2)/2}(\|\omega\| r) \, dr$,

where $J_\nu(r)$ is the $\nu$-th order Bessel function of the first kind. Unfortunately the integrals required to attain Equation 14 in closed form cannot be solved for general dimensionality $d$, regularisation operator $\Upsilon$ and basis function form $\phi$, however we did manage to solve them for arguably the most useful case: $d = 3$ with the $m = 2$ thin plate energy and the $B_3$ spline basis function of Equation 12.

Figure 2: Various values of the regularisation parameters lead to various amounts of "smoothing"; here we set C1 = C2 in Equation 3 to an increasing value from top-left to bottom-right of the figure.

Figure 3: Ray traced three dimensional implicits, "Happy Buddha" (543K points with normals) and the "Thai Statue" (5 million points with normals).
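To make the range-search step of Section 3.2 concrete, here is an illustrative sketch of assembling $K_{reg}$ (our own code, not the paper's). The `tp_inner` argument is a hypothetical callback standing in for the closed-form inner product derived in [1], which we do not reproduce here.

```python
import numpy as np
from scipy.sparse import coo_matrix
from scipy.spatial import cKDTree

def build_kreg(V, S, tp_inner):
    """Assemble sparse K_reg.  Entry (j, k) can be non-zero only when the
    supports overlap, i.e. ||v_j - v_k|| <= s_j + s_k.  `tp_inner(vj, sj, vk, sk)`
    is a placeholder for the closed-form <Y f_j, Y f_k>_{L2} of [1]."""
    tree = cKDTree(V)
    s_max = float(np.max(S))
    rows, cols, vals = [], [], []
    for j in range(len(V)):
        # Conservative radius s_j + s_max, then an exact overlap test per pair.
        for k in tree.query_ball_point(V[j], S[j] + s_max):
            if k < j or np.linalg.norm(V[j] - V[k]) > S[j] + S[k]:
                continue
            val = tp_inner(V[j], S[j], V[k], S[k])
            rows.append(j); cols.append(k); vals.append(val)
            if k != j:                      # K_reg is symmetric
                rows.append(k); cols.append(j); vals.append(val)
    return coo_matrix((vals, (rows, cols)), shape=(len(V), len(V))).tocsr()
```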
The resulting expressions are rather unwieldy however, so we give only an implementation in the C language in the Appendix of [1], where we also show that for the cases that cannot be solved analytically the required integral can at worst always be transformed to a two dimensional integral for which one can use numerical methods.

3.3 Interpretation as a Gaussian Process

Presently we use ideas from [15] to demonstrate that the approximation described in this Section 3 is equivalent to inference in an exact Gaussian process with covariance function depending on the choice of function basis. Placing a multivariate Gaussian prior over the coefficients in (11), namely $\alpha \sim N(0, K_{reg}^{-1})$, we see that $f$ obeys a zero mean Gaussian process prior; writing $[f_x]_i = f(x_i)$ and denoting expectations by $E[\cdot]$ we have for the covariance

$E[f_x f_x^T] = K_{xz} E[\alpha \alpha^T] K_{xz}^T = K_{xz} K_{reg}^{-1} K_{xz}^T$.

Now, assuming an i.i.d. Gaussian noise model with variance $\sigma^2$ and defining $K_{xt}$ etc. similarly to $K_{xz}$, we can immediately write the joint distribution between the observation at a test point $t$, that is $y_t \sim N(f(t), \sigma^2)$, and the vector of observations at the $x_i$, namely $y_x \sim N(f_x, \sigma^2 I)$, which is

$p(y_x, y_t) = N\left( 0, \begin{pmatrix} K_{xz} K_{reg}^{-1} K_{zx} + \sigma^2 I & K_{xz} K_{reg}^{-1} K_{zt} \\ K_{tz} K_{reg}^{-1} K_{zx} & K_{tz} K_{reg}^{-1} K_{zt} + \sigma^2 I \end{pmatrix} \right)$.

The posterior distribution is therefore itself Gaussian, $p(y_t | y_x) \sim N(\mu_{y_t|y_x}, \Sigma_{y_t|y_x})$, and we can employ a well known expression[2] for the marginals of a multivariate Gaussian followed by the matrix inversion lemma to derive an expression for the mean of the posterior,

$\mu_{t|y} = K_{tz} K_{reg}^{-1} K_{zx} \left( K_{xz} K_{reg}^{-1} K_{zx} + \sigma^2 I \right)^{-1} y = K_{tz} \left( \sigma^2 K_{reg} + K_{xz}^T K_{xz} \right)^{-1} K_{xz}^T y$.

[2] $\begin{pmatrix} x \\ y \end{pmatrix} \sim N\left( \begin{pmatrix} a \\ b \end{pmatrix}, \begin{pmatrix} A & C \\ C^T & B \end{pmatrix} \right) \;\Rightarrow\; x|y \sim N\left( a + C B^{-1}(y - b), \, A - C B^{-1} C^T \right)$.

By comparison with (11) and (13) (but with $C_1 = 1/\sigma^2$, $C_2 = 0$ and value targets $y \neq 0$) we can see that the mean of the posterior distribution is identical to our approximate regularised solution based on compactly supported basis functions. For the corresponding posterior variance we have

$\Sigma_{y_t|y_x} = K_{tz} K_{reg}^{-1} K_{zt} + \sigma^2 - K_{tz} K_{reg}^{-1} K_{zx} \left( K_{xz} K_{reg}^{-1} K_{zx} + \sigma^2 I \right)^{-1} K_{xz} K_{reg}^{-1} K_{zt} = \sigma^2 K_{tz} \left( \sigma^2 K_{reg} + K_{xz}^T K_{xz} \right)^{-1} K_{zt} + \sigma^2$.
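The algebraic identity behind the posterior mean (the matrix inversion lemma step) is easy to verify numerically. The following is a small self-contained check, our own illustration with random stand-in matrices rather than actual basis evaluations:

```python
import numpy as np

# Check: the GP posterior mean of Section 3.3 equals the regularised solution
# K_tz (sigma^2 Kreg + Kxz^T Kxz)^-1 Kxz^T y.  Matrices here are random stand-ins.
rng = np.random.default_rng(0)
n, p, t, s2 = 40, 10, 5, 0.01
Kxz = rng.standard_normal((n, p))      # basis functions evaluated at the data
Ktz = rng.standard_normal((t, p))      # basis functions evaluated at test points
A = rng.standard_normal((p, p))
Kreg = A @ A.T + 1e-6 * np.eye(p)      # any symmetric positive definite K_reg
y = rng.standard_normal(n)

P = Kxz @ np.linalg.solve(Kreg, Kxz.T)                        # K_xz Kreg^-1 K_zx
mu_gp = Ktz @ np.linalg.solve(Kreg, Kxz.T) @ np.linalg.solve(
    P + s2 * np.eye(n), y)
mu_reg = Ktz @ np.linalg.solve(s2 * Kreg + Kxz.T @ Kxz, Kxz.T @ y)
assert np.allclose(mu_gp, mu_reg)      # matrix inversion lemma in action
```

Note that the second form only requires solving a p-dimensional (here sparse, in the actual method) system rather than an n-dimensional one.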
4 Experiments

We fit models to 3D data sets of up to 14 million data points; timings are given in Table 1, where we also see that good compression ratios are attained, in that relatively few basis functions represent the shapes. Also note that the fitting time scales rather well, from 38 seconds for the Stanford Bunny (35 thousand points with normals) to 4 hours 23 minutes for the Lucy statue (14 million points with normals: 14x10^6 points x (1 value term + 3 gradient terms) = 56 million "regression targets"). Taking account of the different hardware, the times seem to be similar to those of the FMM approach [5].

Table 1: Timing results with a 2.4GHz AMD Opteron 850 processor, for various 3D data sets. Column one is the number of points, each of which has an associated normal vector, and column two is the number of basis vectors (the p of Section 3.1). The remaining columns are all in units of seconds: column three is the time taken to construct the function basis, columns four and five are the times required to construct the indicated matrices, column six is the time required to multiply the matrices as per Equation 13, column seven is the time required to solve that same equation for alpha, and the final column is the total fitting time.

Name          # Points   # Bases    Basis    Kreg   Kxv,Kxvl  Multiply    Solve    Total
Bunny            34834      9283      0.4     2.4       3.7      11.7     20.4     38.7
Face             75970      7593      0.7     1.9       7.0      20.3     16.0     46.0
Armadillo       172974     45704      6.6     8.5      37.0     123.4     72.3    247.9
Dragon          437645     65288     14.4    16.3      70.9     322.8   1381.4   1805.7
Buddha          543197    105993    117.4    27.4      99.4     423.7   2909.3   3577.2
Asian Dragon   3609455    232197    441.6    60.9     608.3    1885.0   1009.5   4005.2
Thai Statue    4999996    530966   3742.0   197.5    1575.6    3121.2   2569.5  11205.7
Lucy          14027872    364982   1425.8   170.5    3484.1    9367.7   1340.5  15788.5

Some rendered examples are given in Figures 1 and 3, and the well-behaved nature of the implicit over the entire 3D volume of interest is shown for the Lucy data set in the accompanying video. In practice the system is extremely robust and produces excellent results without any parameter adjustment; smaller values of C1 and C2 in Equation 3 simply lead to the smoothing effect shown in Figure 2. The system also handles missing and noisy data gracefully, as demonstrated in [1]. Higher dimensional implicit surfaces are also possible, a particularly interesting case being a 4D representation (3D + "time") of a moving 3D shape; one use for this is the construction of animation sequences from a time series of 3D point cloud data, in which case both spatial and temporal information can help to resolve noise or missing data problems within individual scans. We demonstrate this in the accompanying video, which shows that 4D surfaces yield superior 3D animation results in comparison to a sequence of 3D models. Also interesting are interpolations in 4D; in the accompanying video we effectively interpolate between two three dimensional shapes.

5 Summary

We have presented ideas both theoretically and practically useful for the computer graphics and machine learning communities, demonstrating them within the framework of implicit surface fitting. Many authors have demonstrated the fast but limited quality results that occur with compactly supported function bases. The present work differs by precisely minimising a well justified regulariser within the span of such a basis, achieving fast and high quality results. We also showed how normal vectors can be incorporated directly into the usual regression based implicit surface fitting framework, giving a generalisation of the representer theorem. We demonstrated the algorithm on 3D problems of up to 14 million data points, and in the accompanying video we showed the advantage of constructing a 4D surface (3D + time) for 3D animation, rather than a sequence of 3D surfaces.

Figure 4: Reconstruction of the Stanford bunny after adding Gaussian noise with standard deviation, from left to right, 0, 0.6, 1.5 and 3.6 percent of the radius of the smallest enclosing sphere; the normal vectors were similarly corrupted assuming they had length equal to this radius. The parameters C1 and C2 were chosen automatically using five-fold cross validation.

References

[1] C. Walder, B. Schölkopf, and O. Chapelle. Implicit surface modelling with a globally regularised basis of compact support. Technical report, Max Planck Institute for Biological Cybernetics, Department of Empirical Inference, Tübingen, Germany, April 2006.
[2] G. Wahba. Spline Models for Observational Data. Series in Applied Mathematics, Vol. 59, SIAM, Philadelphia, 1990.
[3] Greg Turk and James F. O'Brien. Shape transformation using variational implicit functions. In Proceedings of ACM SIGGRAPH 1999, pages 335-342, August 1999.
[4] Bryan S. Morse, Terry S. Yoo, David T. Chen, Penny Rheingans, and K. R. Subramanian. Interpolating implicit surfaces from scattered surface data using compactly supported radial basis functions. In SMI '01: Proc. Intl.
Conf. on Shape Modeling & Applications, Washington, 2001. IEEE Computer Society.
[5] J. C. Carr, R. K. Beatson, J. B. Cherrie, T. J. Mitchell, W. R. Fright, B. C. McCallum, and T. R. Evans. Reconstruction and representation of 3D objects with radial basis functions. In ACM SIGGRAPH 2001, pages 67-76. ACM Press, 2001.
[6] Yutaka Ohtake, Alexander Belyaev, Marc Alexa, Greg Turk, and Hans-Peter Seidel. Multi-level partition of unity implicits. ACM Transactions on Graphics, 22(3):463-470, July 2003.
[7] Y. Ohtake, A. Belyaev, and Hans-Peter Seidel. A multi-scale approach to 3D scattered data interpolation with compactly supported basis functions. In Proc. Intl. Conf. Shape Modeling, Washington, 2003. IEEE Computer Society.
[8] L. Greengard and V. Rokhlin. A fast algorithm for particle simulations. J. Comp. Phys., pages 280-292, 1997.
[9] Bernhard Schölkopf, Ralf Herbrich, and Alex J. Smola. A generalized representer theorem. In COLT '01/EuroCOLT '01: Proceedings of the 14th Annual Conference on Computational Learning Theory, pages 416-426, London, UK, 2001. Springer-Verlag.
[10] G. Kimeldorf and G. Wahba. Some results on Tchebycheffian spline functions. Journal of Mathematical Analysis and Applications, 33:82-95, 1971.
[11] J. Duchon. Splines minimizing rotation-invariant semi-norms in Sobolev spaces. Constructive Theory of Functions of Several Variables, pages 85-100, 1977.
[12] C. Merkwirth, U. Parlitz, and W. Lauterborn. Fast nearest neighbor searching for nonlinear signal processing. Phys. Rev. E, 62(2):2089-2097, 2000.
[13] Christian Walder, Olivier Chapelle, and Bernhard Schölkopf. Implicit surface modelling as an eigenvalue problem. In Proceedings of the 22nd International Conference on Machine Learning, 2005.
[14] M. O. Franz and P. V. Gehler. How to choose the covariance for Gaussian process regression independently of the basis. In Proc. Gaussian Processes in Practice Workshop, 2006.
[15] J. Quiñonero-Candela and C. E. Rasmussen. A unifying view of sparse approximate Gaussian process regression. Journal of Machine Learning Research, 6:1935-1959, 2005.
Learning to Traverse Image Manifolds

Piotr Dollár, Vincent Rabaud and Serge Belongie
University of California, San Diego
{pdollar,vrabaud,sjb}@cs.ucsd.edu

Abstract

We present a new algorithm, Locally Smooth Manifold Learning (LSML), that learns a warping function from a point on a manifold to its neighbors. Important characteristics of LSML include the ability to recover the structure of the manifold in sparsely populated regions and beyond the support of the provided data. Applications of our proposed technique include embedding with a natural out-of-sample extension and tasks such as tangent distance estimation, frame rate up-conversion, video compression and motion transfer.

1 Introduction

A number of techniques have been developed for dealing with high dimensional data sets that fall on or near a smooth low dimensional nonlinear manifold. Such data sets arise whenever the number of modes of variability of the data are much fewer than the dimension of the input space, as is the case for image sequences. Unsupervised manifold learning refers to the problem of recovering the structure of a manifold from a set of unordered sample points. Manifold learning is often equated with dimensionality reduction, where the goal is to find an embedding or "unrolling" of the manifold into a lower dimensional space such that certain relationships between points are preserved. Such embeddings are typically used for visualization, with the projected dimension being 2 or 3. Image manifolds have also been studied in the context of measuring distance between images undergoing known transformations. For example, the tangent distance [20, 21] between two images is computed by generating local approximations of a manifold from known transformations and then computing the distance between these approximated manifolds.

In this work, we seek to frame the problem of recovering the structure of a manifold as that of directly learning the transformations a point on a manifold may undergo. Our approach, Locally Smooth Manifold Learning (LSML), attempts to learn a warping function W with d degrees of freedom that can take any point on the manifold and generate its neighbors. LSML recovers a first order approximation of W, and by making smoothness assumptions on W can generalize to unseen points. We show that LSML can recover the structure of the manifold where data is given, and also in regions where it is not, including regions beyond the support of the original data. We propose a number of uses for the recovered warping function W, including embedding with a natural out-of-sample extension, and in the image domain discuss how it can be used for tasks such as computation of tangent distance, image sequence interpolation, compression, and motion transfer. We also show examples where LSML is used to simultaneously learn the structure of multiple "parallel" manifolds, and even generalize to data on new manifolds. Finally, we show that by exploiting the manifold smoothness, LSML is robust under conditions where many embedding methods have difficulty. Related work is presented in Section 2 and the algorithm in Section 3. Experiments on point sets and results on images are shown in Sections 4 and 5, respectively. We conclude in Section 6.

2 Related Work

Related work can be divided into two categories. The first is the literature on manifold learning, which serves as the foundation for this work. The second is work in computer vision and computer graphics addressing image warping and generative models for image formation.
A number of classic methods exist for recovering the structure of a manifold. Principal component analysis (PCA) tries to find a linear subspace that best captures the variance of the original data. Traditional methods for nonlinear manifolds include self organizing maps, principal curves, and variants of multi-dimensional scaling (MDS), among others; see [11] for a brief introduction to these techniques. Recently the field has seen a number of interesting developments in nonlinear manifold learning. [19] introduced a kernelized version of PCA. A number of related embedding methods have also been introduced, representatives including LLE [17], ISOMAP [22], and more recently SDE [24]. Broadly, such methods can be classified as spectral embedding techniques [24]; the embeddings they compute are based on an eigenvector decomposition of an n x n matrix that represents geometrical relationships of some form between the original n points. Out-of-sample extensions have been proposed [3]. The goal of embedding methods (to find structure preserving embeddings) differs from the goals of LSML (learn to traverse the manifold).

Four methods that we share inspiration with are [6, 13, 2, 16]. [6] employs a novel charting based technique to achieve increased robustness to noise and decreased probability of pathological behavior vs. LLE and ISOMAP; we exploit similar ideas in the construction of LSML but differ in motivation and potential applicability. [2] proposed a method to learn the tangent space of a manifold and demonstrated a preliminary illustration of rotating a small bitmap image by about 1 degree. Work by [13] is based on the notion of learning a model for class specific variation; the method reduces to computing a linear tangent subspace that models the variability of each class. [16] shares one of our goals as it addresses the problem of learning Lie groups, the infinitesimal generators of certain geometric transformations.

In image analysis, the number of dimensions is usually reduced via approaches like PCA [15], epitomic representation [12], or generative models like in the realMOVES system developed by Di Bernardo et al. [1]. Sometimes, a precise model of the data, like for faces [4] or eyes [14], is even used to reduce the complexity of the data. Another common approach is simply to have instances of an object in different conditions: [5] start by estimating feature correspondences between a novel input with unknown pose and lighting and a stored labeled example in order to apply an arbitrary warp between pictures. The applications range from video texture synthesis [18] and facial expression extrapolation [8, 23] to face recognition [10] and video rewrite [7].

3 Algorithm

Let D be the dimension of the input space, and assume the data lies on a smooth d-dimensional manifold ($d \ll D$). For simplicity assume that the manifold is diffeomorphic with a subset of $\mathbb{R}^d$, meaning that it can be endowed with a global coordinate system (this requirement can easily be relaxed) and that there exists a continuous bijective mapping $M$ that converts coordinates $y \in \mathbb{R}^d$ to points $x \in \mathbb{R}^D$ on the manifold. The goal of most dimensionality reduction techniques, given a set of data points $x^i$, is to find an embedding $y^i = M^{-1}(x^i)$ that preserves certain properties of the original data, like the distances between all points (classical MDS) or the distances or angles between nearby points (e.g. spectral embedding methods).
Instead, we seek to learn a warping function W that can take a point on the manifold and return any neighboring point on the manifold, capturing all the modes of variation of the data. Let us use $W(x, \epsilon)$ to denote the warping of x, with $\epsilon \in \mathbb{R}^d$ acting on the degrees of freedom of the warp according to the formula M: $W(x, \epsilon) = M(y + \epsilon)$, where $y = M^{-1}(x)$. Taking the first order approximation of the above gives: $W(x, \epsilon) \approx x + H(x)\epsilon$, where each column $H_{\cdot k}(x)$ of the matrix $H(x)$ is the partial derivative of M w.r.t. $y_k$: $H_{\cdot k}(x) = \partial/\partial y_k \, M(y)$. This approximation is valid given $\epsilon$ small enough, hence we speak of W being an infinitesimal warping function.

Figure 1: Overview. Twenty points (n=20) that lie on a 1D curve (d=1) in a 2D space (D=2) are shown in (a). Black lines denote neighbors; in this case the neighborhood graph is not connected. We apply LSML to train H (with f = 4 RBFs). H maps points in R^2 to tangent vectors; in (b) tangent vectors computed over a regularly spaced grid are displayed, with original points (blue) and curve (gray) overlayed. Tangent vectors near original points align with the curve, but note the seam through the middle. Regularization fixes this problem (c); the resulting tangents roughly align to the curve along its entirety. We can traverse the manifold by taking small steps in the direction of the tangent; (d) shows two such paths, generated starting at the red plus and traversing outward in large steps (outer curve) and finer steps (inner curve). This generates a coordinate system for the curve, resulting in a 1D embedding shown in (e). In (f) two parallel curves are shown, with n=8 samples each. Training a common H results in a vector field that more accurately fits each curve than training a separate H for each (if the structure of the two manifolds was very different this need not be the case).

We can restate our goal of learning to warp in terms of learning a function $H_\theta : \mathbb{R}^D \to \mathbb{R}^{D \times d}$ parameterized by a variable $\theta$. Only data points $x^i$ sampled from one or several manifolds are given. For each $x^i$, the set $N^i$ of neighbors is then computed (e.g. using variants of nearest neighbor such as kNN or epsilon-NN), with the constraint that two points can be neighbors only if they come from the same manifold. To proceed, we assume that if $x^j$ is a neighbor of $x^i$, there then exists an unknown $\epsilon^{ij}$ such that $W(x^i, \epsilon^{ij}) = x^j$ to within a good approximation. Equivalently: $H_\theta(x^i)\epsilon^{ij} \approx x^j - x^i$. We wish to find the best $\theta$ in the squared error sense (the $\epsilon^{ij}$ being additional free parameters that must be optimized over). The expression of the error we need to minimize is therefore:

$\mathrm{error}_1(\theta) = \min_{\{\epsilon^{ij}\}} \sum_{i=1}^n \sum_{j \in N^i} \left\| H_\theta(x^i)\epsilon^{ij} - (x^j - x^i) \right\|_2^2$   (1)

Minimizing the above error function can be interpreted as trying to find a warping function that can transform a point into its neighbors. Note, however, that the warping function has only d degrees of freedom while a point may have many more neighbors. This intuition allows us to rewrite the error in an alternate form. Let $\Delta^i$ be the matrix where each column is of the form $(x^j - x^i)$ for each neighbor of $x^i$. Let $\Delta^i = U^i \Sigma^i V^{i\top}$ be the thin singular value decomposition of $\Delta^i$. Then, one can show [9] that $\mathrm{error}_1$ is equivalent to the following:

$\mathrm{error}_2(\theta) = \min_{\{E^i\}} \sum_{i=1}^n \left\| H_\theta(x^i) E^i - U^i \Sigma^i \right\|_F^2$   (2)

Here, the matrices $E^i$ are the additional free parameters. Minimizing the above can be interpreted as searching for a warping function that directly explains the modes of variation at each point.
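As an aside on how the quantities in Equation (2) are obtained in practice, here is a minimal sketch (our own, in numpy; the function name and interface are illustrative, not from the paper) of computing the truncated $U^i \Sigma^i$ from the neighbor differences:

```python
import numpy as np

def local_variation(X, neighbors, d):
    """For each point x^i, form Delta^i with columns x^j - x^i over j in N^i and
    return the truncated U^i Sigma^i of Eq. (2), keeping at most 2d singular
    values as described in the text.  X: (n, D); neighbors: list of index lists."""
    out = []
    for i, nbrs in enumerate(neighbors):
        Delta = (X[nbrs] - X[i]).T                   # D x |N^i|
        U, s, _ = np.linalg.svd(Delta, full_matrices=False)
        k = min(2 * d, len(s))
        out.append(U[:, :k] * s[:k])                 # D x k matrix U^i Sigma^i
    return out
```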
This form is convenient since we no longer have to keep track of neighbors. Furthermore, if there is no noise and the linearity assumption holds, there are at most d non-zero singular values. In practice we use the truncated SVD, keeping at most 2d singular values, allowing for significant computational savings. We now give the remaining details of LSML for the general case [9]. For the case of images, we present an efficient version in Section 5 which uses some basic domain knowledge to avoid solving a large regression. Although potentially any regression technique is applicable, a linear model is particularly easy to work with. Let $f^i$ be f features computed over $x^i$. We can then define $H_\theta(x^i) = [\theta^1 f^i \cdots \theta^D f^i]^\top$, where each $\theta^k$ is a $d \times f$ matrix. Re-arranging $\mathrm{error}_2$ gives:

$\mathrm{error}_{lin}(\theta) = \min_{\{E^i\}} \sum_{i=1}^n \sum_{k=1}^D \left\| f^{i\top} \theta^{k\top} - E^{i\top} \Sigma^i U^i_{k\cdot} \right\|_2^2$   (3)

Figure 2: Robustness. LSML used to recover the embedding of the S-curve under a number of sampling conditions. In each plot we show the original points along with the computed embedding (rotated to align vertically); correspondence is indicated by coloring/shading (color was determined by the y-coordinate of the embedding). In each case LSML was run with f = 8, d = 2, and neighbors computed by epsilon-NN with epsilon = 1 (the height of the curve is 4). The embeddings shown were recovered from data that was: (a) densely sampled (n=500), (b) sparsely sampled (n=100), (c) highly structured (n=190), and (d) noisy (n=500, random Gaussian noise with sigma = .1). In each case LSML recovered the correct embedding. For comparison, LLE recovered good embeddings for (a) and (c) and ISOMAP for (a), (b), and (c). The experiments were repeated a number of times yielding similar results. For a discussion see the text.

Solving simultaneously for E and $\theta$ is complex, but if either E or $\theta$ is fixed, solving for the remaining variable becomes a least squares problem (an equation of the form $AXB = C$ can be rewritten as $(B^\top \otimes A)\,\mathrm{vec}(X) = \mathrm{vec}(C)$, where $\otimes$ denotes the Kronecker product and vec the matrix vectorization function).
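The vec/Kronecker identity invoked above is easy to verify numerically; the following is a quick illustrative check (our own, not from the paper), using column-major vec to match the usual convention:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))
X = rng.standard_normal((4, 5))
B = rng.standard_normal((5, 2))

vec = lambda M: M.flatten(order="F")       # column-major vectorization
lhs = vec(A @ X @ B)                       # vec(AXB)
rhs = np.kron(B.T, A) @ vec(X)             # (B^T kron A) vec(X)
assert np.allclose(lhs, rhs)               # AXB = C  <=>  (B^T kron A) vec(X) = vec(C)
```

This is what reduces each half of the alternating minimization below to an ordinary least squares solve.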
4 Experiments on Point Sets We begin with a discussion on the intuition behind various aspects of LSML. We then show experiments demonstrating the robustness of the method, followed by a number of applications. In the figures that follow we make use of color/shading to indicate point correspondences, for example when we show the original point set and its embedding. LSML learns a function H from points in RD to tangent directions that agree, up to a linear combination, with estimated tangent directions at the original training points of the manifold. By constraining H to be smooth (through use of a limited number of RBFs), we can compute tangents at points not seen during training, including points that may not lie on the underlying manifold. This generalization ability of H will be central to the types of applications considered. Finally, given multiple non- overlapping manifolds with similar structure, we can train a single H to correctly predict the tangents of each, allowing information to be shared. Fig. 1 gives a visual tutorial of these different concepts. LSML appears quite robust. Fig. 2 shows LSML successfully applied for recovering the embedding of the ?S- curve? under a number of sampling conditions (similar results were obtained on the ?Swissroll?). After H is learned, the embedding is computed by choosing a random point on the manifold (a) (b) (c) (d) Figure 3: Reconstruction. Reconstruction examples are used to demonstrate quality and generalization of H. (a) Points sampled from the Swiss- roll manifold (middle), some recovered tangent vectors in a zoomedin region (left) and embedding found by LSML (right). Here n = 500 f = 20, d = 2, and neighbors were computed by NN with  = 4 (height of roll is 20). Reconstruction of Swiss- roll (b), created by a backprojection from regularly spaced grid points in the embedding (traversal was done from a single original point located at the base of the roll, see text for details). Another reconstruction (c), this time using all points and extending the grid well beyond the support of the original data. The Swiss- roll is extended in a reasonable manner both inward (occluded) and outward. (d) Reconstruction of unit hemisphere (LSML trained with n = 100 f = 6, d = 2, NN with  = .3) by traversing outward from topmost point, note reconstruction in regions with no points. and establishing a coordinate system by traversing outward (the same procedure can be used to embed novel points, providing a natural out- of- sample extension). Here we compare only to LLE and I SOMAP using published code. The densely sampled case, Fig. 2(a), is comparatively easy and a number of methods have been shown to successfully recover an embedding. On sparsely sampled data, Fig. 2(b), the problem is more challenging; LLE had problems for n < 250 (lowering LLE?s regularization parameter helped somewhat). Real data need not be uniformly sampled, see Fig. 2(c). In the presence of noise Fig. 2(d), I SOMAP and LLE performed poorly. A single outlier can distort the shortest path computed by I SOMAP, and LLE does not directly use global information necessary to disambiguate noise. Other methods are known to be robust [6], and in [25] the authors propose a method to ?smooth? a manifold as a preprocessing step for manifold learning algorithms; however a full comparison is outside the scope of this work. Having learned H and computed an embedding, we can also backproject from a point y ? 
Having learned H and computed an embedding, we can also backproject from a point $y \in \mathbb{R}^d$ to a point x on the manifold by first finding the coordinate of the closest point $y^i$ in the original data, then traversing from $x^i$ by $\epsilon_j = y_j - y^i_j$ along each tangent direction j (see Fig. 1(d)).

Figure 3: Reconstruction. Reconstruction examples are used to demonstrate the quality and generalization of H. (a) Points sampled from the Swiss-roll manifold (middle), some recovered tangent vectors in a zoomed-in region (left) and embedding found by LSML (right). Here n = 500, f = 20, d = 2, and neighbors were computed by epsilon-NN with epsilon = 4 (height of roll is 20). (b) Reconstruction of the Swiss-roll, created by a backprojection from regularly spaced grid points in the embedding (traversal was done from a single original point located at the base of the roll, see text for details). (c) Another reconstruction, this time using all points and extending the grid well beyond the support of the original data. The Swiss-roll is extended in a reasonable manner both inward (occluded) and outward. (d) Reconstruction of the unit hemisphere (LSML trained with n = 100, f = 6, d = 2, epsilon-NN with epsilon = .3) by traversing outward from the topmost point; note the reconstruction in regions with no points.

Fig. 3(a) shows tangents and an embedding recovered by LSML on the Swiss-roll. In Fig. 3(b) we backproject from a grid of points in $\mathbb{R}^2$; by linking adjacent sets of points to form quadrilaterals we can display the resulting backprojected points as a surface. In Fig. 3(c), we likewise do a backprojection (this time keeping all the original points), however we backproject grid points well below and above the support of the original data. Although there is no ground truth here, the resulting extension of the surface seems "natural". Fig. 3(d) shows the reconstruction of a unit hemisphere by traversing outward from the topmost point. There is no isometric mapping (preserving distance) between a hemisphere and a plane, and given a sphere there is actually not even a conformal mapping (preserving angles). In the latter case an embedding is not possible; however, we can still easily recover H for both (only hemisphere results are shown).

5 Results on Images

Before continuing, we consider potential applications of H in the image domain, including tangent distance estimation, nonlinear interpolation, extrapolation, compression, and motion transfer. We refer to results on point sets to aid visualization. Tangent distance estimation: H computes the tangent and can be used directly in invariant recognition schemes such as [21]. Compression: Fig. 3(b,d) suggest how, given a reference point and H, nearby points can be reconstructed using d numbers (with distortion increasing with distance). Nonlinear interpolation and extrapolation: points can be generated within and beyond the support of given data (cf. Fig. 3); of potential use in tasks such as frame rate up-conversion, reconstructing dropped frames and view synthesis. Motion transfer: for certain classes of manifolds with "parallel" structure (cf. Fig. 1(f)), a recovered warp may be used on an entirely novel image. These applications will depend not only on the accuracy of the learned H but also on how close a set of images is to a smooth manifold.

Figure 4: The translation manifold. Here $F^i = X^i$; s = 17, d = 2, and 9 sets of 6 translated images each were used (not including the cameraman). (a) Zero padded, smoothed test image x. (b) Visualization of learned theta, see text for details. (c) $H_\theta(x)$ computed via convolution. (d) Several transformations obtained after multiple steps along the manifold for different linear combinations of $H_\theta(x)$. Some artifacts due to error propagation start to appear in the top figures.

The key insight to working with images is that although images can live in very high dimensional spaces (with $D \approx 10^6$ quite common), we do not have to learn a transformation with that many parameters. Let x be an image and $H_{\cdot k}(x)$, $k \in [1, d]$, be the d tangent images. Here we assume that each pixel in $H_{\cdot k}(x)$ can be computed based only on the information in an s x s patch centered on the corresponding pixel in x. Thus, instead of learning a function $\mathbb{R}^D \to \mathbb{R}^{D \times d}$ we learn a function $\mathbb{R}^{s^2} \to \mathbb{R}^d$, and to compute H we apply the per patch function at each of the D locations in the image. The resulting technique scales independently of D; in fact, different sized images can be used. The per patch assumption is not always suitable, most notably for transformations that are based only on image coordinate and are independent of appearance.
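Under the per-patch linear model, computing the d tangent images for the raw-pixel-feature case of Fig. 4 amounts to sliding the learned s x s weight masks over the image. A minimal sketch (our own; function names are illustrative):

```python
import numpy as np
from scipy.ndimage import convolve

def tangent_images(x, theta, s):
    """Tangent images H_{.k}(x) for the F^i = X^i (raw pixel) case: each row of
    theta (shape (d, s*s)) holds an s x s mask whose inner product with every
    patch of x is taken, i.e. a cross-correlation of x with the mask."""
    masks = theta.reshape(-1, s, s)
    # convolve() flips the kernel, so flip it back to get cross-correlation.
    return np.stack([convolve(x, m[::-1, ::-1], mode="nearest") for m in masks])
```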
The approach of Section 3 needs to be slightly modified to accommodate patches. We rewrite each image $x^i \in \mathbb{R}^D$ as an $s^2 \times D$ matrix $X^i$ where each column contains the pixels from one patch in $x^i$ (in training we sub-sample patches). Patches from all the images are clustered to obtain the f RBFs; each $X^i$ is then transformed to an $f \times D$ matrix $F^i$ that contains the features computed for each patch. The per patch linear model can now be written as $H_\theta(x^i) = (\theta F^i)^\top$, where $\theta$ is a $d \times f$ matrix (compare with the D $\theta$'s needed without the patch assumption). The error function, which is minimized in a similar way [9], becomes:

$\mathrm{error}_{img}(\theta) = \min_{\{E^i\}} \sum_{i=1}^n \left\| F^{i\top} \theta^\top E^i - U^i \Sigma^i \right\|_F^2$   (5)

We begin with the illustrative example of translation (Fig. 4). Here, RBFs were not used; instead $F^i = X^i$. The learned $\theta$ is a $2 \times s^2$ matrix, which can be visualized as two s x s images as in Fig. 4(b). These resemble derivative of Gaussian filters, which are in fact the infinitesimal generators for translation [16]. Computing the dot product of each row of $\theta$ with each patch can be done using a convolution. Fig. 4 shows applications of the learned transformations, which resemble translations with some artifacts.

Fig. 5 shows the application of LSML for learning out-of-plane rotation of a teapot. On this size problem training LSML (in MATLAB) takes a few minutes; convergence occurs within about 10 iterations of the minimization procedure. $H_\theta(x)$ for novel x can be computed with f convolutions (to compute cross correlation) and is also fast. The outer frames in Fig. 5 highlight a limitation of the approach: with every successive step error is introduced; eventually significant error can accumulate. Here, we used a step size which gives roughly 10 interpolated frames between each pair of original frames. With out-of-plane rotation, information must be created and the problem becomes ambiguous (multiple manifolds can intersect at a single point), hence generalization across images is not expected to be good.

In Fig. 6, results are shown on an eye manifold with 2 degrees of freedom. LSML was trained on sparse data from video of a single eye; $H_\theta$ was used to synthesize views within and also well outside the support of the original data (cf. Fig. 6(c)). In Fig. 6(d), we applied the transformation learned from one person's eye to a single image of another person's eye (taken under the same imaging conditions). LSML was able to start from the novel test image and generate a convincing series of transformations. Thus, motion transfer was possible: $H_\theta$ trained on one series of images generalized to a different set of images.
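To illustrate how interpolated frames such as those of Fig. 5 can be produced once theta is learned, here is a sketch of manifold traversal by repeated infinitesimal warps (building on the `tangent_images` sketch above; step sizes and names are our own assumptions):

```python
import numpy as np

def traverse(x, theta, s, eps, steps):
    """Repeatedly apply W(x, eps) ~ x + H(x) eps to walk along the manifold.
    eps: length-d vector of per-step coefficients.  As noted in the text,
    error accumulates with successive steps (cf. the outer frames of Fig. 5)."""
    frames = [x]
    for _ in range(steps):
        H = tangent_images(frames[-1], theta, s)    # d tangent images, (d, h, w)
        frames.append(frames[-1] + np.tensordot(eps, H, axes=1))
    return frames
```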
Figure 5: Manifold generated by out-of-plane rotation of a teapot (data from [24], sub-sampled and smoothed). Here, d = 1, f = 400 and roughly 3000 patches of width s = 13 were sampled from 30 frames. The bottom row shows the ground truth images; the dashed box contains 3 of the 30 training images, representing approximately 8 degrees of physical rotation. The top row shows the learned transformation applied to the central image. By observing the tip, the handle and the two white blobs on the teapot, and comparing to ground truth data, we can observe the quality of the learned transformation on seen data (b) and unseen data (d), both starting from a single frame (c). The outmost figures (a),(e) show failure for large rotations.

Figure 6: Traversing the eye manifold. LSML trained on one eye moving along five different lines (3 vertical and 2 horizontal). Here d = 2, f = 600, s = 19 and around 5000 patches were sampled; 2 frames were considered neighbors if they were adjacent in time. Figure (a) shows images generated from the central image. The inner 8 frames lie just outside the support of the training data (not shown); the outer 8 are extrapolated far beyond its support. Figure (b) details $H_\theta(x)$ for two images in a warping sequence: a linear combination can lead the iris/eyelid to move in different directions (e.g. the sum would make the iris go up). Figure (c) shows extrapolation far beyond the training data, i.e. an eye wide open and fully closed. Finally, Figure (d) shows how the eye manifold we learned on one eye can be applied on a novel eye not seen during training.
6 Conclusion

In this work we presented an algorithm, Locally Smooth Manifold Learning, for learning the structure of a manifold. Rather than pose manifold learning as the problem of recovering an embedding, we posed the problem in terms of learning a warping function for traversing the manifold. Smoothness assumptions on W allowed us to generalize to unseen data. Proposed uses of LSML include tangent distance estimation, frame rate up-conversion, video compression and motion transfer. We are currently engaged in scaling the implementation to handle large datasets; the goal is to integrate LSML into recognition systems to provide increased invariance to transformations.

Acknowledgements

This work was funded by the following grants and organizations: NSF Career Grant #0448615, Alfred P. Sloan Research Fellowship, NSF IGERT Grant DGE-0333451, and UCSD Division of Calit2. We would like to thank Sameer Agarwal, Kristin Branson, Matt Tong, and Neel Joshi for valuable input and Anna Shemorry for helping us make it through the deadline.

References

[1] E. Di Bernardo, L. Goncalves and P. Perona. US Patent 6,552,729: Automatic generation of animation of synthetic characters. 2003.
[2] Y. Bengio and M. Monperrus. Non-local manifold tangent learning. In NIPS, 2005.
[3] Y. Bengio, J. F. Paiement, P. Vincent, O. Delalleau, N. Le Roux, and M. Ouimet. Out-of-sample extensions for LLE, Isomap, MDS, eigenmaps, and spectral clustering. In NIPS, 2004.
[4] D. Beymer and T. Poggio. Face recognition from one example view. In ICCV, page 500, Washington, DC, USA, 1995. IEEE Computer Society.
[5] Volker Blanz and Thomas Vetter. Face recognition based on fitting a 3D morphable model. PAMI, 25(9):1063-1074, 2003.
[6] M. Brand. Charting a manifold. In NIPS, 2003.
[7] Christoph Bregler, Michele Covell, and Malcolm Slaney. Video rewrite: driving visual speech with audio. In SIGGRAPH, pages 353-360, 1997.
[8] E. Chuang, H. Deshpande, and C. Bregler. Facial expression space learning. In Pacific Graphics, 2002.
[9] P. Dollár, V. Rabaud, and S. Belongie. Learning to traverse image manifolds. Technical Report CS2007-0876, UCSD CSE, Jan. 2007.
[10] G. J. Edwards, T. F. Cootes, and C. J. Taylor. Face recognition using active appearance models. In ECCV, 1998.
[11] T. Hastie, R. Tibshirani, and J. Friedman. The Elements of Statistical Learning. Springer, 2001.
[12] N. Jojic, B. Frey, and A. Kannan. Epitomic analysis of appearance and shape. In ICCV, 2003.
[13] D. Keysers, W. Macherey, J. Dahmen, and H. Ney. Learning of variability for invariant statistical pattern recognition. In ECML, 2001.
[14] T. Moriyama, T. Kanade, J. Xiao, and J. F. Cohn. Meticulously detailed eye region model. PAMI, 2006.
[15] H. Murase and S. K. Nayar. Visual learning and recognition of 3D objects from appearance. IJCV, 1995.
[16] R. Rao and D. Ruderman. Learning Lie groups for invariant visual perception. In NIPS, 1999.
[17] L. K. Saul and S. T. Roweis. Think globally, fit locally: unsupervised learning of low dimensional manifolds. JMLR, 2003.
[18] A.
A Bayesian Approach to Diffusion Models of Decision-Making and Response Time

Michael D. Lee*
Department of Cognitive Sciences, University of California, Irvine, Irvine, CA, 92697-5100. [email protected]

Ian G. Fuss
Defence Science and Technology Organisation, PO Box 1500, Edinburgh, SA 5111, Australia. [email protected]

Daniel J. Navarro
School of Psychology, University of Adelaide, SA 5005, Australia. [email protected]

Abstract
We present a computational Bayesian approach for Wiener diffusion models, which are prominent accounts of response time distributions in decision-making. We first develop a general closed-form analytic approximation to the response time distributions for one-dimensional diffusion processes, and derive the required Wiener diffusion as a special case. We use this result to undertake Bayesian modeling of benchmark data, using posterior sampling to draw inferences about the interesting psychological parameters. With the aid of the benchmark data, we show the Bayesian account has several advantages, including dealing naturally with the parameter variation needed to account for some key features of the data, and providing quantitative measures to guide decisions about model construction.

1 Introduction
In the past decade, modern computational Bayesian methods have been productively applied to the modeling of many core psychological phenomena. These areas include similarity modeling and structure learning [1], concept and category learning [2, 3], inductive inference and decision-making [4], language processes [5], and individual differences [6]. One central area that has been less affected is the modeling of response times in decision-making. Nevertheless, the time people take to produce behavior is a basic and ubiquitous measure that can constrain models and theories of human cognitive processes [7]. There is a large and well-developed set of competing models that aim to account for accuracy, response time distributions and (sometimes) confidence in decision-making. However, besides the effective application of hierarchical Bayesian methods to models that assume response times follow a Weibull distribution [8], most of the inference remains frequentist. In particular, sequential sampling models of response time, which are the dominant class in the field, have not adopted modern Bayesian methods for inference. The prominent recent review paper by Ratcliff and Smith, for example, relies entirely on frequentist methods for parameter estimation, and does not go beyond the application of the Bayesian Information Criterion for model selection [9].

*Address correspondence to: Michael D. Lee, Department of Cognitive Sciences, 3151 Social Sciences Plaza, University of California, Irvine, CA 92697-5100. Telephone: (949) 824 5074. Facsimile: (949) 824 2307. URL: www.socsci.uci.edu/~mdlee.

[Figure: evidence x accrues over time t from a starting point between an upper boundary at distance α and a lower boundary at distance β, after a non-decision offset θ; absorption at the boundaries produces the densities fα(t | α, β, μ, θ) and fβ(t | α, β, μ, θ).] Figure 1: A diffusion model for response time distributions for both decisions in a two-choice decision-making task. See text for details.

Much of the utility, however, in using sequential sampling models to understand decision-making, and their application to practical problems [10], requires making inferences about variations in parameters across subjects, stimuli, or experimental conditions. These inferences would benefit from the principled representation of uncertainty inherent in the Bayesian approach. In addition, many of the competing models have many parameters, are non-linear, and are non-nested.
This means their comparison would benefit from Bayesian methods for model selection that do not approximate model complexity by counting the number of free parameters, as the Bayesian Information Criterion does. In this paper, we present a computational Bayesian approach for Wiener diffusion models [11], which are the most widely used special case of the sequential sampling approach. We apply our Bayesian method to the benchmark data of Ratcliff and Rouder [12], using posterior sampling to draw inferences about the interesting psychological parameters. With the aid of this application, we show that adopting the Bayesian perspective has several advantages, including dealing naturally with the parameter variation needed to account for some key features of the data, and providing quantitative measures to guide decisions about model construction.

2 The Diffusion Model and its Application to Benchmark Data

2.1 The Basic Model
The basic one-dimensional diffusion model for accuracy and response time distributions in a two-choice decision-making task is shown in Figure 1. Time, t, progresses from left to right, and includes a fixed offset θ that parameterizes the time taken for the non-decision component of response time, such as the time taken to encode the stimulus and complete a motor response. The decision-making component itself is driven by independent samples from a stationary distribution that represents the evidence the stimulus provides in favor of the two alternative decisions. In the Wiener diffusion, this distribution is assumed to be Gaussian, with mean μ. Evidence sampled from this distribution is accrued over time, leading to a diffusion process that is finally absorbed by boundaries above and below at distances α and β from the origin. The response time distribution is then given by the first-passage distribution p(t | α, β, μ, θ) = fα(t | α, β, μ, θ) + fβ(t | α, β, μ, θ), with the areas under fα and fβ giving the proportion of decisions at each boundary. A natural reparameterization is to consider the starting point of evidence accrual z = (β − α)/(α + β), which is considered a measure of bias, and the boundary separation a = α + β, which is considered a measure of caution. In either case, this basic form of the model has four free parameters: μ, θ and either α and β or z and a.

2.2 Previous Application to Benchmark Data
The evolution of Wiener diffusion models of decision-making has involved a series of additional assumptions to address shortcomings in its ability to capture basic empirical regularities. This evolution is well described by Ratcliff and Rouder [12], who, in their Experiment 1, present a diffusion model analysis of a benchmark data set [8, 13]. In this experiment, three observers completed ten 35-minute sessions, each consisting of ten blocks with 102 trials per block. The task of observers was to decide between "bright" and "dark" responses for simple visual stimuli with different proportions of white dots, given noisy feedback about the accuracy of the responses. There were 33 different types of stimuli, ranging from 0% to 100% white in equal increments. In addition, the subjects were required to switch between adherence to "speed" instructions and "accuracy" instructions every two blocks. In accord with the experimental design, Ratcliff and Rouder fitted separate drift rates μi for each of the i = 1, ..., 33 stimuli; fitted separate boundaries for speed and accuracy instructions, but assumed the boundaries were symmetric (i.e., there was no bias), so that αj = βj for j = 1, 2; and fitted one offset θ for all stimuli and instructions.
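Before turning to the benchmark results, the basic model of Section 2.1 is easy to simulate directly. The following sketch does so by Euler steps of the diffusion until absorption; it is a hedged illustration only, in which the function name, step size and trial count are our own choices, and unit evidence variance per unit time is assumed.

```python
# Minimal simulation sketch of the Wiener diffusion of Figure 1 (assumed
# unit evidence variance per unit time; names and step sizes are ours).
import numpy as np

def simulate_wiener(mu, theta, alpha, beta, n_trials=2000, dt=0.1, seed=0):
    """Return response times and decisions (+1 upper/alpha, -1 lower/beta)."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n_trials)             # accumulated evidence per trial
    t = np.zeros(n_trials)             # decision time per trial
    rts = np.empty(n_trials)
    choices = np.zeros(n_trials, dtype=int)
    alive = np.ones(n_trials, dtype=bool)
    while alive.any():
        n = int(alive.sum())
        x[alive] += mu * dt + np.sqrt(dt) * rng.standard_normal(n)
        t[alive] += dt
        up = alive & (x >= alpha)      # absorbed at the upper boundary
        lo = alive & (x <= -beta)      # absorbed at the lower boundary
        done = up | lo
        rts[done] = theta + t[done]    # add the non-decision offset theta
        choices[up], choices[lo] = 1, -1
        alive &= ~done
    return rts, choices

rts, choices = simulate_wiener(mu=0.05, theta=0.3, alpha=10.0, beta=10.0)
```

The vectorized while-loop keeps all unabsorbed trials advancing in lock-step, which is far faster in practice than simulating one trial at a time.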
The data from this experiment are considered benchmark because they show a cross-over effect, whereby errors are faster than correct decisions for easy stimulus conditions under speed instructions, but errors are as slow or slower than correct decisions for hard stimulus conditions under accuracy instructions. As Ratcliff and Rouder point out, these trends are not accommodated by the basic model without allowing for variation in the parameters. Accordingly, to predict fast errors, the basic model is extended by assuming that the starting point is subject to between-trial variation, and so is convolved with a Gaussian or uniform distribution. Similarly, to predict slow errors, it is assumed that the mean drift rate is also subject to between-trial variation, and so is convolved with a Gaussian distribution. Both of these noise processes are parameterized with the standard sufficient statistics, which become additional parameters of the model.

3 Closed-form Response Time Distributions for Diffusion Models
One practical reason diffusion models have resisted a Bayesian treatment is that the evaluation of their likelihood function through standard methods is computationally intensive, typically requiring the estimation of an oscillating but convergent infinite sum for each datum [14]. Instead, we use a new closed-form approximation to the required response time distribution. We give a very brief presentation of the approximation here; a more detailed technical note is available from the first author's web page.

The key assumption in our approximation is that the evolving diffusion distributions always assume a limiting form f. Given this form, we define the required limit for a sampling distribution with respect to an arbitrary time-dependent mean μ̄(t, λ) and variance σ̄²(t, λ), both of which depend on parameters λ of the sampling distribution, in terms of the evidence accumulated x, as

    f(x; μ̄(t, λ), σ̄²(t, λ)),    (1)

from which the cumulative function at an upper boundary of one unit is obtained as

    F₁(μ̄(t, λ), σ̄²(t, λ)) = ∫₁^∞ f(x; μ̄(t, λ), σ̄²(t, λ)) dx.    (2)

Differentiation, followed by algebraic manipulation, gives the general result

    f₁(μ̄(t, λ), σ̄²(t, λ)) = (d/dt) F₁(μ̄(t, λ), σ̄²(t, λ))
                          = { (∂μ̄(t, λ)/∂t) σ̄(t, λ) + [1 − μ̄(t, λ)] (∂σ̄(t, λ)/∂t) } / σ̄²(t, λ) · f( (1 − μ̄(t, λ)) / σ̄(t, λ) ).    (3)

Wiener diffusion from the origin to boundaries α and β with mean drift rate μ and variance σ²(t) = t can be represented in this model by defining f(y) = exp(−y²/2), rescaling to a variance σ̄²(t) = t/a², and setting μ̄(t, λ) = (μt)/a + z. Thus the response time distributions for Wiener diffusion in this approximation are

    fα(t | α, β, μ, θ) = [ (2α + μ(t − θ)) / (2(t − θ)^(3/2)) ] exp( −(2α − μ(t − θ))² / (2(t − θ)) ),
    fβ(t | α, β, μ, θ) = [ (2β − μ(t − θ)) / (2(t − θ)^(3/2)) ] exp( −(2β + μ(t − θ))² / (2(t − θ)) ).    (4)

[Figure: nine panels of response time distributions, one for each pairing of the boundary settings α = β = 10, α = β = 15 and α = 15, β = 10 with the drift rates μ = 0.1, 0.05 and 0.01.] Figure 2: Comparison between the closed-form approximation (dark broken lines) and the infinite-sum distributions (light solid lines) for nine realistic combinations of drift rate and boundaries.
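The sketch below evaluates the two densities of Equation (4), together with the summed log-likelihood used later in Section 4.1. Because the extracted layout of (4) is ambiguous, the exact constants should be treated as a reconstruction rather than a definitive transcription, and the 10⁻³⁰ floor described in Section 4.1 is applied to guard against the approximation going negative.

```python
# Sketch of the closed-form first-passage densities of Equation (4), as
# reconstructed from the garbled layout (constants are indicative only),
# plus an Equation (5)-style log-likelihood with the paper's 1e-30 floor.
import numpy as np

def f_alpha(t, alpha, beta, mu, theta):
    """Approximate density of response times absorbed at the upper boundary."""
    s = np.maximum(np.asarray(t, dtype=float) - theta, 1e-12)  # decision time
    return (2 * alpha + mu * s) / (2 * s**1.5) \
        * np.exp(-(2 * alpha - mu * s)**2 / (2 * s))

def f_beta(t, alpha, beta, mu, theta):
    """Approximate density of response times absorbed at the lower boundary."""
    s = np.maximum(np.asarray(t, dtype=float) - theta, 1e-12)
    return (2 * beta - mu * s) / (2 * s**1.5) \
        * np.exp(-(2 * beta + mu * s)**2 / (2 * s))

def log_likelihood(t_upper, t_lower, alpha, beta, mu, theta):
    """Joint log-likelihood over upper- and lower-boundary response times."""
    fa = np.clip(f_alpha(t_upper, alpha, beta, mu, theta), 1e-30, None)
    fb = np.clip(f_beta(t_lower, alpha, beta, mu, theta), 1e-30, None)
    return np.log(fa).sum() + np.log(fb).sum()
```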
3.1 Adequacy of the Wiener Approximation
Figure 2 shows the relationship between the response time distributions found by the previous infinite-sum method, and those generated by our closed-form approximation. For every combination of drift rates μ = 0.01, 0.05 and 0.10, and boundary combinations α = β = 10, α = β = 15 and α = 15, β = 10, we found the best (least-squares) match between the infinite-sum distribution and those distributions indexed by our approximation. These generating parameter combinations were chosen because they cover the range of the posterior distributions we infer from data later. Figure 2 shows that the approximation provides close matches across these parameterizations, although we note the approximation distributions do seem to use slightly (and apparently systematically) different parameter combinations to generate the best-matching distribution. While additional work is required to understand the exact relationship between the infinite-sum method and our approximation, the approximation is sufficiently accurate over the range of parameterizations of interest to be used as the basis for beginning to apply Bayesian methods to diffusion models.

4 Bayesian Modeling of Benchmark Data

4.1 General Model
Our log-likelihood function evaluates the density of each response time at the boundary corresponding to its associated decision, and assumes independence, so that

    ln L(T | α, β, μ, θ) = Σ_{t ∈ Dα} ln fα(t | α, β, μ, θ) + Σ_{t ∈ Dβ} ln fβ(t | α, β, μ, θ),    (5)

where Dα and Dβ are the sets of all response times at the upper and lower boundaries respectively. We threshold at 10⁻³⁰ to guard against degeneracy to negative values inherent in the approximation.

[Figure: plate diagram with drift rates μi (i = 1, ..., 33), boundaries αj, βj (j = 1, 2), offset θ, and observed response times tijk (k = 1, ..., n).] Figure 3: Graphical model for the benchmark data analysis.

The graphical model representation of the benchmark data is shown in Figure 3, where the observed response time data are now denoted T = {tijk}, with i = 1, ..., 33 indexing the presented stimulus, j = 1, 2 indexing speed or accuracy instructions, and k = 1, ..., n indexing all of the trials with this stimulus and instruction combination. We place proper approximations to non-informative distributions on all the parameters, so that they are all essentially flat over the values of interest. Specifically, we assume the 33 drift rates are independent and each have a zero-mean Gaussian prior with very small precision: μi ∼ Gaussian(0, ε), with ε = 10⁻⁶. The boundary parameters α and β are given the same priors, but because they are constrained to be positive, their sampling is censored accordingly: αj, βj ∼ Gaussian(0, ε); αj, βj > 0. Since θ is bounded by the minimum time observed, we use the uniform prior distribution θ ∼ Uniform(0, min T). This is a data-dependent prior, but the same results could be achieved with a fixed prior and scaling of the time data, which are arbitrary up to scalar multiplication.

4.2 Formalizing Model Construction
The Bayesian approach allows us to test the intuitively plausible model construction decisions made previously by Ratcliff and Rouder. Using the data from one of the observers (N.H.), we considered the marginal likelihoods, denoted simply L, based on the harmonic mean approximation [15], calculated from three chains of 10⁵ samples from the posterior obtained using WinBUGS.

• The full model described by Figure 3, with asymmetric boundaries, varying across speed and accuracy instructions. This model had ln L = −48,416.
• The restricted model with symmetric boundaries, still varying across instructions, as assumed by Ratcliff and Rouder. This model had ln L = −48,264.
• The restricted model with asymmetric boundaries not varying across instructions. This model had ln L = −48,964.
• The restricted model with symmetric boundaries not varying across instructions. This model had ln L = −48,907.

These marginal log-likelihoods make it clear that different boundaries are needed for the speed and accuracy instructions, but it is overly complicated to allow them to be asymmetric (i.e., there is no need to parameterize bias). We tested the robustness of these values by halving and doubling the prior variances, and using an adapted form of the "informative" priors collated in [16], all of which lead to similar quantitative and identical qualitative conclusions. These results formally justify the model construction decisions made by Ratcliff and Rouder, and the remainder of our analysis applies to this restricted model.
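Since the comparisons above rest on the harmonic-mean approximation to the marginal likelihood, a small numerically stable sketch may be useful. The input is assumed to be the data log-likelihood evaluated at each posterior sample; that container is our naming, not the paper's.

```python
# Sketch of the harmonic-mean marginal likelihood estimate of Section 4.2,
# assuming `logliks` holds the data log-likelihood at each posterior sample.
import numpy as np

def harmonic_mean_log_ml(logliks):
    """log p(T) approximated as -log mean(exp(-log L)) over posterior samples."""
    neg = -np.asarray(logliks, dtype=float)
    m = neg.max()
    return -(m + np.log(np.mean(np.exp(neg - m))))  # stable log-mean-exp
```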
4.3 Posterior Distributions
Figure 4 shows the posterior distributions for the symmetric boundaries under both speed and accuracy instructions. These distributions are consistent with traditional analyses, and the speed boundary is clearly significantly smaller than the accuracy boundary. We note that, for historical reasons only, Wiener diffusion models have assumed σ²(t) = (0.1)²t, and so our scale is 100 times larger.

[Figure: posterior densities p(α | T) over boundary values 0 to 20 for the speed and accuracy conditions.] Figure 4: Posterior distributions for the boundaries under speed and accuracy instructions.

The main panel of Figure 5 shows the posterior distributions for all 33 drift rate parameters. The posteriors are shown against the vertical axes, with wider bars corresponding to greater density, and are located according to their proportion of white dots on the horizontal axis. The approximately monotonic relationship between drift rate and proportion shows that the model allows stimulus properties to be inferred from the behavioral decision time data, as found by previous analyses. The right-hand panel of Figure 5 shows the projection of three of the posterior distributions, labelled 4, 17 and 22. It is interesting to note that the uncertainty about the drift rates of stimuli 4 and 17 both take a Gaussian form, but with very different variances. More dramatically, the uncertainty about the drift rate for stimulus 22 is clearly bi-modal.

[Figure: drift rates μi between −0.15 and 0.15 plotted against the proportion of white dots in the stimulus (0 to 1), with the projected posteriors p(μi | T) for stimuli 4, 17 and 22 shown at the right.] Figure 5: Posterior distributions for the 33 drift rates, in terms of the proportion of white dots in their associated stimuli.

4.4 Accuracy and Fast and Slow Errors
Figure 6 follows previous analyses of these data, and shows the relationship between the empirical proportions and the model prediction of decision proportions for each stimulus type. For both the speed and accuracy instructions, there is close agreement between the model and data. Figure 7 shows the posterior predictive distribution of the model for two cases, analogous to those highlighted previously [13, Figure 6]. The left panel involves a relatively easy decision, corresponding to stimulus number 22 under speed instructions, and shows the model's predictions for the response time for both correct (upper) and error (lower) decisions, together with the data, indicated by short vertical lines. For this easy decision, it can be seen the model predicts relatively fast errors.
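The posterior predictive distributions in Figure 7, discussed here and below, arise from simulating response times at many posterior parameter samples. The sketch that follows shows the idea using the simulator introduced after Section 2.2; the container layout of the posterior draws is an illustrative assumption.

```python
# Hypothetical posterior-predictive sketch for Figure 7: simulate response
# times at each stored posterior draw (symmetric boundaries, as in the
# restricted model); relies on simulate_wiener from the earlier sketch.
import numpy as np

def posterior_predictive(samples, n_per_sample=50):
    """samples: iterable of (mu, theta, boundary) posterior draws."""
    all_rts, all_choices = [], []
    for s, (mu, theta, bound) in enumerate(samples):
        rts, ch = simulate_wiener(mu, theta, bound, bound,
                                  n_trials=n_per_sample, seed=s)
        all_rts.append(rts)
        all_choices.append(ch)
    return np.concatenate(all_rts), np.concatenate(all_choices)
```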
The right panel of Figure 7 involves a harder decision, corresponding to stimulus number 18 under accuracy instructions. Here the model predicts much slower errors, with a heavier tail than for the easy decision.

[Figure: scatter plots of data against model accuracy on the unit interval, one panel per instruction condition.] Figure 6: Relationship between modeled and empirical accuracy, for the speed instructions (left panel) and accuracy instructions (right panel). Each marker corresponds to one of the 33 stimuli.

These are the basic qualitative properties of prediction that motivated the introduction of between-trial variability through noise processes in the traditional account. In the present Bayesian treatment, the required predictions are achieved because the posterior predictive automatically samples from a range of values for the drift and boundary parameters. By representing this variation in parameters as uncertainty about fixed values, we are making different basic assumptions from the traditional Wiener diffusion model. It is interesting to speculate that, if Bayesian results like those in Figure 7 had always been available, the introduction of additional variability processes described in [12] might never have eventuated. These processes seem solely designed to account for empirical effects like the cross-over effect; in particular, we are not aware of the parameters of the additional variability processes being used to draw substantive psychological inferences from data.

[Figure: response time distributions over 0 to 1500 ms (left) and 0 to 6000 ms (right).] Figure 7: Posterior predictive distributions for both correct (solid line) and error (broken line) responses, for two stimuli corresponding to easy (left panel) and hard (right panel) decisions. The density for error decisions in the easy responses has been scaled to allow its shape to be visible.

5 Conclusions
Our analyses of the benchmark data confirm many of the central conclusions of previous analyses, but also make several new contributions. The posterior distributions shown in Figure 5 suggest that current parametric assumptions about drift rate variability may not be entirely appropriate. In particular, there is the intriguing possibility of multi-modalities, evident in the drift rate of stimulus 22 and the associated raw data in Figure 7. Figure 5 also suggests a hierarchical account of the benchmark data, modeling the 33 drift rates μi in terms of, for example, a low-dimensional psychometric function. This would be easily achieved in the current Bayesian framework. It should also be possible to introduce contaminant distributions in a mixture model, following previous suggestions [8, 14], using latent variable assignments for each response time. If it was desirable to replicate the current assumptions of starting-point and drift-rate variability, that would also easily be done in an extended hierarchical account. Finally, the availability of marginal likelihood measures, accounting for both model fit and complexity, offers the possibility of rigorous quantitative comparisons of alternative sequential sampling accounts, such as the Ornstein-Uhlenbeck and accumulator models [9], of response times in decision-making.

Acknowledgments
We thank Jeff Rouder for supplying the benchmark data, and Scott Brown, E.-J. Wagenmakers, and the reviewers for helpful comments.

References
[1] C. Kemp, A. Bernstein, and J. B. Tenenbaum. A generative theory of similarity. In B. G. Bara, L. W. Barsalou, and M. Bucciarelli, editors, Proceedings of the 27th Annual Conference of the Cognitive Science Society. Erlbaum, Mahwah, NJ, 2005.
[2] J. R. Anderson. The adaptive nature of human categorization. Psychological Review, 98(3):409–429, 1991.
[3] J. B. Tenenbaum and T. L. Griffiths. Generalization, similarity, and Bayesian inference. Behavioral and Brain Sciences, 24(4):629–640, 2001.
[4] T. L. Griffiths and J. B. Tenenbaum. Structure and strength in causal induction. Cognitive Psychology, 51:354–384, 2005.
[5] T. L. Griffiths, M. Steyvers, D. Blei, and J. B. Tenenbaum. Integrating topics and syntax. Advances in Neural Information Processing Systems, 17, 2005.
[6] D. J. Navarro, T. L. Griffiths, M. Steyvers, and M. D. Lee. Modeling individual differences using Dirichlet processes. Journal of Mathematical Psychology, 50:101–122, 2006.
[7] R. D. Luce. Response Times: Their Role in Inferring Elementary Mental Organization. Oxford University Press, New York, 1986.
[8] J. N. Rouder, J. Lu, P. L. Speckman, D. Sun, and Y. Jiang. A hierarchical model for estimating response time distributions. Psychonomic Bulletin & Review, 12:195–223, 2005.
[9] R. Ratcliff and P. L. Smith. A comparison of sequential sampling models for two-choice reaction time. Psychological Review, 111:333–367, 2004.
[10] R. Ratcliff, A. Thapar, and G. McKoon. The effects of aging on reaction time in a signal detection task. Psychology and Aging, 16:323–341, 2001.
[11] R. Ratcliff. A theory of memory retrieval. Psychological Review, 85:59–108, 1978.
[12] R. Ratcliff and J. N. Rouder. Modeling response times for two-choice decisions. Psychological Science, 9:347–356, 1998.
[13] S. Brown and A. Heathcote. A ballistic model of choice response time. Psychological Review, 112:117–128, 2005.
[14] R. Ratcliff and F. Tuerlinckx. Estimating parameters of the diffusion model: Approaches to dealing with contaminant reaction times and parameter variability. Psychonomic Bulletin & Review, 9:438–481, 2002.
[15] A. E. Raftery. Hypothesis testing and model selection. In W. R. Gilks, S. Richardson, and D. J. Spiegelhalter, editors, Markov Chain Monte Carlo in Practice, pages 163–187. Chapman & Hall/CRC, Boca Raton (FL), 1996.
[16] E.-J. Wagenmakers, H. J. L. van der Maas, and R. P. P. P. Grasman. An EZ-diffusion model for response time and accuracy. Psychonomic Bulletin & Review, in press.
Bayesian Image Super-resolution, Continued

Lyndsey C. Pickup, David P. Capel*, Stephen J. Roberts, Andrew Zisserman
Information Engineering Building, Dept. of Eng. Science, Parks Road, Oxford, OX1 3PJ, UK
{elle,sjrob,az}@robots.ox.ac.uk
*2D3, [email protected]

Abstract
This paper develops a multi-frame image super-resolution approach from a Bayesian view-point by marginalizing over the unknown registration parameters relating the set of input low-resolution views. In Tipping and Bishop's Bayesian image super-resolution approach [16], the marginalization was over the super-resolution image, necessitating the use of an unfavorable image prior. By integrating over the registration parameters rather than the high-resolution image, our method allows for more realistic prior distributions, and also reduces the dimension of the integral considerably, removing the main computational bottleneck of the other algorithm. In addition to the motion model used by Tipping and Bishop, illumination components are introduced into the generative model, allowing us to handle changes in lighting as well as motion. We show results on real and synthetic datasets to illustrate the efficacy of this approach.

1 Introduction
Multi-frame image super-resolution refers to the process by which a group of images of the same scene are fused to produce an image or images with a higher spatial resolution, or with more visible detail in the high spatial frequency features [7]. Such problems are common, with everything from holiday snaps and DVD frames to satellite terrain imagery providing collections of low-resolution images to be enhanced, for instance to produce a more aesthetic image for media publication [15], or for higher-level vision tasks such as object recognition or localization [5]. Limits on the resolution of the original imaging device can be improved by exploiting the relative sub-pixel motion between the scene and the imaging plane. No matter how accurate the registration estimate, there will be some residual uncertainty associated with the parameters [13]. We propose a scheme to deal with this uncertainty by integrating over the registration parameters, and demonstrate improved results on synthetic and real digital image data.

Image registration and super-resolution are often treated as distinct processes, to be considered sequentially [1, 3, 7]. Hardie et al. demonstrated that the low-resolution image registration can be updated using the super-resolution image estimate, and that this improves a Maximum a Posteriori (MAP) super-resolution image estimate [5]. More recently, Pickup et al. used a similar joint MAP approach to learn more general geometric and photometric registrations, the super-resolution image, and values for the prior's parameters simultaneously [12]. Tipping and Bishop's Bayesian image super-resolution work [16] uses a Maximum Likelihood (ML) point estimate of the registration parameters and the camera imaging blur, found by integrating the high-resolution image out of the registration problem and optimizing the marginal probability of the observed low-resolution images directly. This gives an improvement in the accuracy of the recovered registration (measured against known truth on synthetic data) compared to the MAP approach.
The image-integrating Bayesian super-resolution method [16] is extremely costly in terms of computation time, requiring operations that scale with the cube of the total number of high-resolution pixels, severely limiting the size of the image patches over which they perform the registration (they use 9 × 9 pixel patches). The marginalization also requires a form of prior on the super-resolution image that renders the integral tractable, though priors such as Tipping and Bishop's chosen Gaussian form are known to be poor for tasks such as edge preservation, and much super-resolution work has employed other more favorable priors [2, 3, 4, 11, 14].

It is generally more desirable to integrate over the registration parameters rather than the super-resolution image, because it is the registration that constitutes the "nuisance parameters", and the super-resolution image that we wish to estimate. We derive a new view of Bayesian image super-resolution in which a MAP high-resolution image estimate is found by marginalizing over the uncertain registration parameters. Memory requirements are considerably lower than in the image-integrating case; while the algorithm is more costly than a simple MAP super-resolution estimate, it is not infeasible to run on images of several hundred pixels in size.

Sections 2 and 3 develop the model and the proposed objective function. Section 4 evaluates results on synthetically-generated sequences (with ground truth for comparison), and on a real data example. A discussion of this approach and concluding remarks can be found in section 5.

2 Generative model
The generative model for multi-frame super-resolution assumes a known scene x (vectorized, size N × 1), and a given registration vector θ(k). These are used to generate a vectorized low-resolution image y(k) with M pixels through a system matrix W(k), after which Gaussian i.i.d. noise with precision β is added:

    y(k) = λα(k) W(k)(θ(k)) x + λβ(k) + ε(k),    (1)
    ε(k) ∼ N(0, β⁻¹ I).    (2)

Photometric parameters λα and λβ provide a global affine correction for the scene illumination, and λβ(k) is simply an M × 1 vector filled out with the value of λβ. Each row of W(k) constructs a single pixel in y(k), and the row's entries are the vectorized point-spread function (PSF) response for that low-resolution pixel, in the frame of the super-resolution image [2, 3, 16]. The PSF is usually assumed to be an isotropic Gaussian on the imaging plane, though for some motion models (e.g. planar projective) this does not necessarily lead to a Gaussian distribution on the frame of x. For an individual low-resolution image, given registrations and x, the data likelihood is

    p(y(k) | x, θ(k), λ(k)) = (β/2π)^(M/2) exp{ −(β/2) ‖ y(k) − λα(k) W(k)(θ(k)) x − λβ(k) ‖² }.    (3)

When the registration is known approximately, for instance by pre-registering inputs, the uncertainty in each image's parameters can be modeled as a Gaussian perturbation δ(k) about the mean estimate set, with covariance C, which we restrict to be a diagonal matrix:

    [ θ(k); λα(k); λβ(k) ] = [ θ̄(k); λ̄α(k); λ̄β(k) ] + δ(k),    (4)
    δ(k) ∼ N(0, C),    (5)
    p(δ(k)) = |C⁻¹|^(1/2) (2π)^(−n/2) exp{ −(1/2) δ(k)ᵀ C⁻¹ δ(k) }.    (6)

A Huber prior is assumed for the directional image gradients Dx in the super-resolution image x (in the horizontal, vertical, and two diagonal directions),

    p(x) = (1/Zx) exp{ −(ν/2) ρ(Dx, α) },    (7)
    ρ(z, α) = z² if |z| < α, and 2α|z| − α² otherwise,    (8)

where α is a parameter of the Huber potential function, and ν is the prior strength parameter.
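A toy one-dimensional rendering of the generative model (1)-(8) may make the pieces concrete: a system matrix built from a Gaussian PSF, the affine photometric correction, additive noise with precision β, and the Huber potential. All sizes, the PSF width, and the optional sub-pixel `shift` argument are illustrative assumptions rather than values from the paper.

```python
# Toy 1D sketch of the generative model of Equations (1)-(8); sizes, the
# Gaussian PSF width and the sub-pixel `shift` argument are assumptions.
import numpy as np

def huber(z, alpha):
    """Huber potential of Equation (8), applied elementwise."""
    z = np.abs(z)
    return np.where(z < alpha, z**2, 2 * alpha * z - alpha**2)

def make_W(n_hr, m_lr, zoom, psf_std, shift=0.0):
    """Each row holds a Gaussian PSF centred on one low-resolution sample."""
    hr = np.arange(n_hr, dtype=float)
    centres = (np.arange(m_lr) + 0.5) * zoom - 0.5 + shift
    W = np.exp(-(hr[None, :] - centres[:, None])**2 / (2 * psf_std**2))
    return W / W.sum(axis=1, keepdims=True)   # rows normalised to sum to one

def generate_lr(x, W, lam_alpha, lam_beta, beta_prec, seed=0):
    """Equation (1): photometrically corrected projection plus noise."""
    rng = np.random.default_rng(seed)
    noise = rng.normal(0.0, beta_prec**-0.5, size=W.shape[0])
    return lam_alpha * (W @ x) + lam_beta + noise
```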
This belongs to a family of functions often favored over Gaussians for super-resolution image priors [2, 3, 14], because the Huber distribution's heavy tails mean image edges are penalized less severely. The difficulty in computing the partition function Zx is a consideration when marginalizing over x as in [16], though for the MAP image estimate, a value for this scale factor is not required.

Regardless of the exact forms of these probability distributions, probabilistic super-resolution algorithms can usually be interpreted in one of the following ways. The most popular approach to super-resolution is to obtain a MAP estimate, typically using an iterative scheme to maximize p(x | {y(k)}, θ(k), λ(k)) with respect to x, where

    p(x | {y(k)}, θ(k), λ(k)) = p(x) ∏_{k=1}^{K} p(y(k) | x, θ(k), λ(k)) / p({y(k)} | θ(k), λ(k)),    (9)

and the denominator is an unknown scaling factor. Tipping and Bishop's approach takes an ML estimate of the registration by marginalizing over x, then calculates the super-resolution estimate as in (9). While Tipping and Bishop did not include a photometric model, the equivalent expression to be maximized with respect to θ and λ is

    p({y(k)} | θ(k), λ(k)) = ∫ p(x) ∏_{k=1}^{K} p(y(k) | x, θ(k), λ(k)) dx.    (10)

Note that Tipping and Bishop's work does employ the same data likelihood expression as in (3), which forced them to select a Gaussian form for p(x), rather than a more suitable image prior, in order to keep the integral tractable. Finally, in this paper we find x through marginalizing over θ and λ, so that a MAP estimate of x can be obtained by maximizing p(x | {y(k)}) directly with respect to x. This is achieved by finding

    p(x | {y(k)}) = [ p(x) / p({y(k)}) ] ∫ ∏_{k=1}^{K} p(θ(k), λ(k)) p(y(k) | x, θ(k), λ(k)) d{θ, λ},    (11)

which is developed further in the next section. Note that the integral does not involve the prior, p(x).

3 Marginalizing over registration parameters
In order to obtain an expression for p(x | {y(k)}) from expressions (3), (6) and (7) above, the parameter variations δ(k) must be integrated out of the problem. Registration estimates θ̄(k), λ̄α(k) and λ̄β(k) can be obtained using classical registration methods, either intensity-based [8] or estimation from image points [6], and the diagonal matrix C is constructed to reflect the confidence in each parameter estimate. This might mean a standard deviation of a tenth of a low-resolution pixel on image translation parameters, or a few gray levels' shift on the illumination model, for instance. The integral performed is

    p(x | {y(k)}) = (β/2π)^(KM/2) (1/2π)^(Kn/2) |C⁻¹|^(K/2) [ 1 / (Zx p({y(k)})) ] exp{ −(ν/2) ρ(Dx, α) }
                    × ∫ exp{ −(β/2) Σ_{k=1}^{K} ‖ y(k) − λα(k) W(k)(θ(k)) x − λβ(k) ‖² − (1/2) Σ_{k=1}^{K} δ(k)ᵀ C⁻¹ δ(k) } dδ,    (12)

where δᵀ = (δ(1)ᵀ, δ(2)ᵀ, ..., δ(K)ᵀ) and all the θ and λ parameters are functions of δ as in (4). Expanding the data error term in the exponent for each low-resolution image as a second-order Taylor series about the estimated geometric registration parameters yields

    e(k)(δ) = ‖ y(k) − λα(k) W(k)(θ(k)) x − λβ(k) ‖²    (13)
            = F(k) + G(k)ᵀ δ(k) + (1/2) δ(k)ᵀ H(k) δ(k).    (14)

Values for F, G and H can be found numerically (for geometric registrations) or analytically (for the photometric parameters) from x and {y(k), θ̄(k), λ̄α(k), λ̄β(k)}.
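Since F, G and H are obtained numerically for the geometric parameters, a central-difference sketch may clarify the construction. Here `err` is a stand-in for the squared reprojection error e(k) of Equation (13) evaluated at a perturbation δ(k); the callable and the step size are assumptions of this sketch.

```python
# Central-difference sketch of the expansion in Equations (13)-(14):
# `err(delta)` is an assumed callable returning e(k) at perturbation delta.
import numpy as np

def taylor_fgh(err, n_params, h=1e-4):
    F = err(np.zeros(n_params))
    G = np.zeros(n_params)
    H = np.zeros((n_params, n_params))
    I = np.eye(n_params)
    for i in range(n_params):
        ep, em = err(h * I[i]), err(-h * I[i])
        G[i] = (ep - em) / (2 * h)               # first derivative
        H[i, i] = (ep - 2 * F + em) / h**2       # pure second derivative
    for i in range(n_params):
        for j in range(i + 1, n_params):         # mixed second derivatives
            pp = err(h * (I[i] + I[j]))
            pm = err(h * (I[i] - I[j]))
            mp = err(h * (I[j] - I[i]))
            mm = err(-h * (I[i] + I[j]))
            H[i, j] = H[j, i] = (pp - pm - mp + mm) / (4 * h**2)
    return F, G, H
```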
Thus the whole exponent of (12), f, becomes

    f = −Σ_{k=1}^{K} [ (β/2) F(k) + (β/2) G(k)ᵀ δ(k) + (1/2) δ(k)ᵀ ( (β/2) H(k) + C⁻¹ ) δ(k) ]    (15)
      = −(β/2) F − (β/2) Gᵀ δ − (1/2) δᵀ ( (β/2) H + V⁻¹ ) δ,    (16)

where the omission of image superscripts indicates stacked matrices, and H is therefore a block-diagonal nK × nK sparse matrix, and V is comprised of the repeated diagonal of C. Finally, letting S = (β/2) H + V⁻¹,

    ∫ exp{f} dδ = exp{ −(β/2) F } ∫ exp{ −(β/2) Gᵀ δ − (1/2) δᵀ S δ } dδ    (17)
                = exp{ −(β/2) F } (2π)^(nK/2) |S|^(−1/2) exp{ (β²/8) Gᵀ S⁻¹ G }.    (18)

The objective function, L, to be minimized with respect to x is obtained by taking the negative log of (12), using the result from (18), and neglecting the constant terms:

    L = (ν/2) ρ(Dx, α) + (β/2) F + (1/2) log |S| − (β²/8) Gᵀ S⁻¹ G.    (19)

This can be optimized using Scaled Conjugate Gradients (SCG) [9], noting that the gradient can be expressed as

    dL/dx = (ν/2) Dᵀ ρ′(Dx, α) + (β/2) dF/dx − (β²/4) (dG/dx)ᵀ S⁻¹ G + [ (β/4) vec(S⁻¹) − (β³/16) vec(S⁻¹ G Gᵀ S⁻¹) ]ᵀ dvec(H)/dx,    (20)

where derivatives of F, G and H with respect to x can be found analytically for photometric parameters, and numerically (using the analytic gradient of e(k)(δ(k)) with respect to x) with respect to the geometric parameters.

3.1 Implementation notes
Notice that the value F from (16) is simply the reprojection error of the current estimate of x at the mean registration parameter values, and that gradients of this expression with respect to the δ parameters, and with respect to x, can both be found analytically. To find the gradient with respect to a geometric registration parameter δi(k), and elements of the Hessian involving it, a central difference scheme involving only the k-th image is used. Mean values for the registration are computed by standard registration techniques, and x is initialized using around 10 iterations of SCG to find the maximum likelihood solution evaluated at these mean parameters. Additionally, pixel values are scaled to lie between −1/2 and 1/2, and the ML solution is bounded to lie within these values in order to curb the severe overfitting usually observed in ML super-resolution results. In our implementation, the parameters representing the λ values are scaled so that they share the same standard deviations as the θ parameters, which represent the sub-pixel geometric registration shifts; this makes the matrix V a multiple of the identity. The scale factors are chosen so that one standard deviation in λβ gives a 10-gray-level shift, and one standard deviation in λα varies pixel values by around 10 gray levels at mean image intensity.
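For a fixed image x, the objective of Equation (19) is cheap to evaluate once the stacked F, G and block-diagonal H are in hand. The sketch below assumes the `huber` helper from the earlier generative-model sketch and a diagonal V, and omits the gradient of Equation (20) for brevity; argument names are ours.

```python
# Sketch of the marginal objective of Equation (19) at a fixed image x,
# assuming stacked F (scalar), G (vector), block-diagonal H (matrix), a
# diagonal prior covariance V over perturbations, and `huber` from above.
import numpy as np

def marginal_objective(F, G, H, v_diag, dx_grads, nu, beta_prec, alpha_h):
    S = 0.5 * beta_prec * H + np.diag(1.0 / v_diag)   # S = (beta/2)H + V^-1
    _, logdet = np.linalg.slogdet(S)
    prior_term = 0.5 * nu * huber(dx_grads, alpha_h).sum()
    data_term = 0.5 * beta_prec * F
    coupling = (beta_prec**2 / 8.0) * G @ np.linalg.solve(S, G)
    return prior_term + data_term + 0.5 * logdet - coupling
```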
4 Results
The first experiment takes a sixteen-image synthetic dataset created from an eyechart image. Data is generated at a zoom factor of 4, using a 2D translation-only motion model, and the two-parameter global affine illumination model described above, giving a total of four registration parameters per low-resolution image. Gaussian noise with standard deviation equivalent to 5 gray levels is added to each low-resolution pixel independently. The sub-pixel perturbations are evenly spaced over a grid up to plus or minus one half of a low-resolution pixel, giving a similar setup to that described in [10], but with additional lighting variation. The ground truth image and two of the low-resolution images appear in the first row of Figure 1. Geometric and photometric registration parameters were initialized to the identity, and the images were registered using an iterative intensity-based scheme.

The resulting parameter values were used to recover two sets of super-resolution images: one using the standard Huber MAP algorithm, and the second using our extension integrating over the registration uncertainty. The Huber parameter α was fixed at 0.01 for all runs, and ν was varied over a range of possible values representing ratios between ν and the image noise precision β. The images giving lowest RMS error from each set are displayed in the second row of Figure 1. Visually, the differences between the images are subtle, though the bottom row of letters is better defined in the output from the new algorithm. Plotting the RMSE as a function of ν in Figure 2, we see that the proposed registration-integrating approach achieves a lower error, compared to the ground truth high-resolution image, than the standard Huber MAP algorithm for any choice of prior strength ν in the optimal region.

[Figure: panels (a) ground truth high-res, (b) input 1/16, (c) input 16/16, (d) best Huber (err = 15.6), (e) best integrating θ, λ (err = 14.8).] Figure 1: (a) Ground truth image. Only the central recoverable part is shown; (b,c) low-resolution images. The variation in intensity is clearly visible, and the sub-pixel displacements necessary for multi-frame image super-resolution are most apparent on the "D" characters to the right of each image; (d) the best (i.e. minimum MSE; see Figure 2) image from the regular Huber MAP algorithm, having super-resolved the dataset multiple times with different prior strength settings; (e) the best result using our approach of integrating over θ and λ. As well as having a lower RMSE, note the improvement in black-white edge detail on some of the letters on the bottom line.

[Figure: RMSE in gray levels (from about 14 to 23) plotted against the ratio of the prior strength parameter ν to the noise precision β (0 to 0.1), for the standard Huber MAP method and for the method integrating over registrations and illumination.] Figure 2: Plot showing the variation of RMSE with prior strength for the standard Huber-prior MAP super-resolution method and our approach integrating over θ and λ. The images corresponding to the minima of the two curves are shown in Figure 1.

The second experiment uses real data with a 2D translation motion model and an affine lighting model exactly as above. The first and last images appear on the top row of Figure 3. Image registration was carried out in the same manner as before, and the geometric parameters agree with the provided homographies to within a few hundredths of a pixel. Super-resolution images were created for a number of ν values; the equivalent values to those quoted in [3] were found subjectively to be the most suitable. The covariance of the registration values was chosen to be similar to that used in the synthetic experiments. Finally, Tipping and Bishop's method was extended to cover the illumination model and used to register and super-resolve the dataset, using the same PSF standard deviation (0.4 low-resolution pixels) as the other methods. The three sets of results on the real data sequence are shown in the middle and bottom rows of Figure 3. To facilitate a better comparison, a sub-region of each is expanded to make the letter details clearer. The Huber prior tends to make the edges unnaturally sharp, though it is very successful at regularizing the solution elsewhere.
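The sweep behind Figure 2 amounts to re-running each estimator over a grid of prior-strength ratios and scoring against ground truth. In the skeleton below, `super_resolve` is a hypothetical stand-in for either algorithm, so the scoring line is left as a comment; only the grid and the RMSE metric are concrete.

```python
# Skeleton of the prior-strength sweep behind Figure 2; `super_resolve`
# is hypothetical, so only the grid and the RMSE metric are executable.
import numpy as np

def rmse(a, b):
    return float(np.sqrt(np.mean((np.asarray(a) - np.asarray(b))**2)))

ratios = np.linspace(0.005, 0.1, 20)   # candidate nu/beta ratios
# errors = [rmse(super_resolve(lr_images, nu=r * beta_prec), x_true)
#           for r in ratios]
```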
Between the Tipping and Bishop image and the registration-integrating approach, the text appears more clear in our method, and the regularization in the constant background regions is slightly more successful.

5 Discussion
It is possible to interpret the extra terms introduced into the objective function in the derivation of this method as an extra regularizer term or image prior. Considering (19), the first two terms are identical to the standard MAP super-resolution problem using a Huber image prior. The two additional terms constitute an additional distribution over x in the cases where S is not dominated by V; as the distribution over θ and λ tightens to a single point, the terms tend to constant values.

The intuition behind the method's success is that this extra prior resulting from the final two terms of (19) will favor image solutions which are not acutely sensitive to minor adjustments in the image registration. The images of Figure 4 illustrate the type of solution which would score poorly. To create the figure, one dataset was used to produce two super-resolved images, using two independent sets of registration parameters which were randomly perturbed by an i.i.d. Gaussian vector with a standard deviation of only 0.04 low-resolution pixels. The checker-board pattern typical of ML super-resolution images can be observed, and the difference image on the right shows the drastic contrast between the two image estimates.

[Figure: panels (a) input 1/10, (b) input 10/10, (c) full output integrating θ, λ, (d) integrating θ, λ (detailed region), (e) regular Huber (detailed region), (f) Tipping & Bishop (detailed region).] Figure 3: (a,b) First and last images from a real data sequence containing 10 images acquired on a rig which constrained the motion to be pure translation in 2D. (c) The full super-resolution output from our algorithm. (d) Detailed region of the central letters, again with our algorithm. (e) Detailed region of the regular Huber MAP super-resolution image, using parameter values suggested in [3], which are also found to be subjectively good choices. The edges are slightly artificially crisp, but the large smooth regions are well regularized. (f) Close-up of letter detail for comparison with Tipping and Bishop's method of marginalization. The Gaussian form of their prior leads to a more blurred output, or one that over-fits to the image noise on the input data if the prior's influence is decreased.

5.1 Conclusion
This work has developed an alternative approach for Bayesian image super-resolution with several advantages over Tipping and Bishop's original algorithm. These are namely a formal treatment of registration uncertainty, the use of a much more realistic image prior, and the computational speed and memory efficiency relating to the smaller dimension of the space over which we integrate. The results on real and synthetic images with this method show an advantage over the popular MAP approach, and over the result from Tipping and Bishop's method, largely owing to our more favorable prior over the super-resolution image.

It will be a straightforward extension of the current approach to incorporate learning for the point-spread function covariance, though it will result in a less sparse Hessian matrix H, because each row and column associated with the PSF parameter(s) has the potential to be full-rank, assuming a common camera configuration is shared across all the frames. Finally, the best way of learning the appropriate covariance values for the distribution over δ given the observed data, and how to assess the trade-off between its "prior-like" effects and the need for a standard Huber-style image prior, are still open questions.
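A small runnable analogue of the Figure 4 experiment can be built from the 1D generative sketch given after Section 2: two unregularised least-squares ("ML-like") reconstructions from the same data, under registrations jittered by an i.i.d. perturbation of 0.04 low-resolution pixels, differ drastically. Everything in the sketch is a toy assumption except the 0.04-pixel figure, and it reuses `make_W` and `generate_lr` from the earlier block.

```python
# Toy analogue of the Figure 4 sensitivity experiment, reusing make_W and
# generate_lr from the earlier sketch; all sizes here are assumptions.
import numpy as np

rng = np.random.default_rng(1)
zoom, n_hr, m_lr = 4, 64, 16
x_true = rng.random(n_hr)
shifts = np.linspace(0.0, 3.0, 8)        # sub-pixel offsets of 8 inputs
Y = [generate_lr(x_true, make_W(n_hr, m_lr, zoom, 1.5, s),
                 1.0, 0.0, 1e4, seed=k) for k, s in enumerate(shifts)]

def ml_estimate(assumed_shifts):
    """Unregularised least-squares ('ML-like') reconstruction."""
    W = np.vstack([make_W(n_hr, m_lr, zoom, 1.5, s) for s in assumed_shifts])
    return np.linalg.lstsq(W, np.concatenate(Y), rcond=None)[0]

jitter = 0.04 * zoom                     # 0.04 low-res pixels in HR units
x1 = ml_estimate(shifts + rng.normal(0, jitter, shifts.shape))
x2 = ml_estimate(shifts + rng.normal(0, jitter, shifts.shape))
print(np.abs(x1 - x2).max())             # large despite the tiny jitter
```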
Finally, the best way of learning the appropriate covariance values for the distribution over the registration parameters given the observed data, and how to assess the trade-off between its "prior-like" effects and the need for a standard Huber-style image prior, are still open questions.

Acknowledgements

The real dataset used in the results section is due to Tomas Pajdla and Daniel Martinec, CMP, Prague, and is available at http://www.robots.ox.ac.uk/~vgg/data4.html. This work was funded in part by the EC Network of Excellence PASCAL.

[Figure 4 panels: (a) truth; (b) ML image 1; (c) ML image 2; (d) difference]

Figure 4: An example of the effect of tiny changes in the registration parameters. (a) Ground truth image from which a 16-image low-resolution dataset was generated. (b,c) Two ML super-resolution estimates. In both cases, the same dataset was used, but the registration parameters were perturbed by an i.i.d. vector with a standard deviation of just 0.04 low-resolution pixels. (d) The difference between the two solutions. In all these images, values outside the valid image intensity range have been rounded to white or black values.

References

[1] S. Baker and T. Kanade. Limits on super-resolution and how to break them. IEEE Transactions on Pattern Analysis and Machine Intelligence, 24(9):1167–1183, 2002.
[2] S. Borman. Topics in Multiframe Superresolution Restoration. PhD thesis, University of Notre Dame, Notre Dame, Indiana, May 2004.
[3] D. Capel. Image Mosaicing and Super-resolution (Distinguished Dissertations). Springer, ISBN: 1852337710, 2004.
[4] S. Farsiu, M. Elad, and P. Milanfar. A practical approach to super-resolution. In Proc. of the SPIE: Visual Communications and Image Processing, San Jose, 2006.
[5] R. C. Hardie, K. J. Barnard, and E. A. Armstrong. Joint MAP registration and high-resolution image estimation using a sequence of undersampled images. IEEE Transactions on Image Processing, 6(12):1621–1633, 1997.
[6] R. I. Hartley and A. Zisserman. Multiple View Geometry in Computer Vision. Cambridge University Press, ISBN: 0521540518, second edition, 2004.
[7] M. Irani and S. Peleg. Super resolution from image sequences. ICPR, 2:115–120, June 1990.
[8] M. Irani and S. Peleg. Improving resolution by image registration. Graphical Models and Image Processing, 53:231–239, 1991.
[9] I. Nabney. Netlab: Algorithms for Pattern Recognition. Springer, 2002.
[10] N. Nguyen, P. Milanfar, and G. Golub. Efficient generalized cross-validation with applications to parametric image restoration and resolution enhancement. IEEE Transactions on Image Processing, 10(9):1299–1308, September 2001.
[11] L. C. Pickup, S. J. Roberts, and A. Zisserman. A sampled texture prior for image super-resolution. In Advances in Neural Information Processing Systems, pages 1587–1594, 2003.
[12] L. C. Pickup, S. J. Roberts, and A. Zisserman. Optimizing and learning for super-resolution. In Proceedings of the British Machine Vision Conference, 2006. To appear.
[13] D. Robinson and P. Milanfar. Fundamental performance limits in image registration. IEEE Transactions on Image Processing, 13(9):1185–1199, September 2004.
[14] R. R. Schultz and R. L. Stevenson. A Bayesian approach to image expansion for improved definition. IEEE Transactions on Image Processing, 3(3):233–242, 1994.
[15] Salient Stills. http://www.salientstills.com/.
[16] M. E. Tipping and C. M. Bishop. Bayesian image super-resolution. In S. Thrun, S. Becker, and K. Obermayer, editors, Advances in Neural Information Processing Systems, volume 15, pages 1279–1286, Cambridge, MA, 2003. MIT Press.
Implicit Online Learning with Kernels

Li Cheng, S. V. N. Vishwanathan, National ICT Australia ([email protected], [email protected])
Shaojun Wang, Department of Computer Science and Engineering, Wright State University ([email protected])
Dale Schuurmans, Department of Computing Science, University of Alberta, Canada ([email protected])
Terry Caelli, National ICT Australia ([email protected])

Abstract

We present two new algorithms for online learning in reproducing kernel Hilbert spaces. Our first algorithm, ILK (implicit online learning with kernels), employs a new, implicit update technique that can be applied to a wide variety of convex loss functions. We then introduce a bounded memory version, SILK (sparse ILK), that maintains a compact representation of the predictor without compromising solution quality, even in non-stationary environments. We prove loss bounds and analyze the convergence rate of both. Experimental evidence shows that our proposed algorithms outperform current methods on synthetic and real data.

1 Introduction

Online learning refers to a paradigm where, at each time t, an instance x_t ∈ X is presented to a learner, which uses its parameter vector f_t to predict a label. This predicted label is then compared to the true label y_t, via a non-negative, piecewise differentiable, convex loss function L(x_t, y_t, f_t). The learner then updates its parameter vector to minimize a risk functional, and the process repeats. Kivinen and Warmuth [1] proposed a generic framework for online learning where the risk functional, J_t(f), to be minimized consists of two terms: a Bregman divergence between parameters, Δ_G(f, f_t) := G(f) − G(f_t) − ⟨f − f_t, ∂_f G(f_t)⟩, defined via a convex function G, and the instantaneous risk R(x_t, y_t, f), which is usually given by a function of the instantaneous loss L(x_t, y_t, f). The parameter updates are then derived via the principle

f_{t+1} = argmin_f J_t(f) := argmin_f { Δ_G(f, f_t) + η_t R(x_t, y_t, f) },   (1)

where η_t is the learning rate. Since J_t(f) is convex, (1) is solved by setting the gradient (or, if necessary, a subgradient) to 0. Using the fact that ∂_f Δ_G(f, f_t) = ∂_f G(f) − ∂_f G(f_t), one obtains

∂_f G(f_{t+1}) = ∂_f G(f_t) − η_t ∂_f R(x_t, y_t, f_{t+1}).   (2)

Since it is difficult to determine ∂_f R(x_t, y_t, f_{t+1}) in closed form, an explicit update, as opposed to the above implicit update, uses the approximation ∂_f R(x_t, y_t, f_{t+1}) ≈ ∂_f R(x_t, y_t, f_t) to arrive at the more easily computable expression [1]

∂_f G(f_{t+1}) = ∂_f G(f_t) − η_t ∂_f R(x_t, y_t, f_t).   (3)

In particular, if we set G(f) = (1/2)||f||², then Δ_G(f, f_t) = (1/2)||f − f_t||² and ∂_f G(f) = f, and we obtain the familiar stochastic gradient descent update

f_{t+1} = f_t − η_t ∂_f R(x_t, y_t, f_t).   (4)

We are interested in applying online learning updates in a reproducing kernel Hilbert space (RKHS). To lift the above update into an RKHS, H, one typically restricts attention to f ∈ H and defines [2]

R(x_t, y_t, f) := (λ/2)||f||²_H + C · L(x_t, y_t, f),   (5)

where ||·||_H denotes the RKHS norm, λ > 0 is a regularization constant, and C > 0 determines the penalty imposed on point prediction violations. Recall that if H is a RKHS of functions on X × Y, then its defining kernel k : (X × Y)² → R satisfies the reproducing property, namely that ⟨f, k((x, y), ·)⟩_H = f(x, y) for all f ∈ H. Therefore, by making the standard assumption that L only depends on f via its evaluations at f(x, y), one reaches the conclusion that ∂_f L(x, y, f) ∈ H, and in particular

∂_f L(x, y, f) = Σ_{ỹ∈Y} β_ỹ k((x, ỹ), ·),   (6)

for some β_ỹ ∈ R. Since ∂_f R(x_t, y_t, f_t) = λ f_t + C · ∂_f L(x_t, y_t, f_t), one can use (4) to obtain an explicit update f_{t+1} = (1 − η_t λ) f_t − η_t C · ∂_f L(x_t, y_t, f_t), which combined with (6) shows that there must exist coefficients α_{i,ỹ} fully specifying f_{t+1} via

f_{t+1} = Σ_{i=1}^{t} Σ_{ỹ∈Y} α_{i,ỹ} k((x_i, ỹ), ·).   (7)
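For concreteness, the following is a minimal Python sketch (our own illustration, not the authors' code; the kernel choice, parameter values, and class name are assumptions) of this explicit update (4), stored in the expansion form (7), with the binary hinge loss plugged in as one concrete choice of L.

```python
import numpy as np

def rbf(a, b, gamma=1.0):
    """Gaussian RBF kernel, one assumed choice of k."""
    return np.exp(-gamma * np.sum((np.asarray(a) - np.asarray(b)) ** 2))

class ExplicitKernelLearner:
    """Explicit update f_{t+1} = (1 - eta*lam) f_t - eta*C*grad L(x_t, y_t, f_t),
    maintained as the kernel expansion (7)."""

    def __init__(self, eta=0.1, lam=0.01, C=1.0, margin=1.0):
        self.eta, self.lam, self.C, self.margin = eta, lam, C, margin
        self.xs, self.coeffs = [], []

    def predict(self, x):
        return sum(c * rbf(xi, x) for xi, c in zip(self.xs, self.coeffs))

    def update(self, x, y):
        # hinge gradient is evaluated at the *current* hypothesis f_t
        active = y * self.predict(x) < self.margin
        # regularizer gradient shrinks all past coefficients
        self.coeffs = [(1.0 - self.eta * self.lam) * c for c in self.coeffs]
        if active:
            self.xs.append(x)
            self.coeffs.append(self.eta * self.C * y)
```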
In this paper we propose an algorithm, ILK (implicit online learning with kernels), that solves (2) directly, while still expressing updates in the form (7). That is, we derive a technique for computing the implicit update that can be applied to many popular loss functions, including quadratic, hinge, and logistic losses, as well as their extensions to structured domains (see e.g. [3]), in an RKHS. We also provide a general recipe to check if a new convex loss function is amenable to these implicit updates. Furthermore, to reduce the memory requirement of ILK, which grows linearly with the number of observations (instance-label pairs), we propose a sparse variant, SILK (sparse ILK), that approximates the decision function f by truncating past observations with insignificant weights.

2 Implicit Updates in an RKHS

As shown in (1), to perform an implicit update one needs to minimize Δ_G(f, f_t) + R(x_t, y_t, f). By replacing R(x_t, y_t, f) with (5), and setting G(f) = (1/2)||f||²_H, one obtains

f_{t+1} = argmin_f J(f) = argmin_f { (1/2)||f − f_t||²_H + η_t ( (λ/2)||f||²_H + C · L(x_t, y_t, f) ) }.   (8)

Since L is assumed convex with respect to f, setting ∂_f J = 0 and using an auxiliary variable τ_t = η_t λ / (1 + η_t λ) yields

f_{t+1} = (1 − τ_t) f_t − (1 − τ_t) η_t C ∂_f L(x_t, y_t, f_{t+1}).   (9)

On the other hand, from the form (7) it follows that f_{t+1} can also be written as

f_{t+1} = Σ_{i=1}^{t−1} Σ_{ỹ∈Y} α_{i,ỹ} k((x_i, ỹ), ·) + Σ_{ỹ∈Y} α_{t,ỹ} k((x_t, ỹ), ·),   (10)

for some α_{j,ỹ} ∈ R and j = 1, ..., t. Since ∂_f L(x_t, y_t, f_{t+1}) = Σ_{ỹ∈Y} β_{t,ỹ} k((x_t, ỹ), ·), and, for ease of exposition, we assume a fixed step size (learning rate) η_t = 1 and consequently τ_t = τ, it follows from (9) and (10) that

α_{i,ỹ} ← (1 − τ) α_{i,ỹ}   for i = 1, ..., t − 1, and ỹ ∈ Y,   (11)
α_{t,ỹ} = −(1 − τ) C β_{t,ỹ}   for all ỹ ∈ Y.   (12)

Note that sophisticated step size adaptation algorithms (e.g. [3]) can be modified in a straightforward manner to work in our setting. The main difficulty in performing the above update arises from the fact that β_{t,ỹ} depends on f_{t+1} (see e.g. (13)), which in turn depends on β_{t,ỹ} via α_{t,ỹ}. The general recipe to overcome this problem is to first use (9) to write β_{t,ỹ} as a function of α_{t,ỹ}. Plugging this back into (12) yields an equation in α_{t,ỹ} alone, which sometimes can be solved efficiently. We now elucidate the details for some well-known loss functions.

Square Loss. In this case, k((x_t, y_t), ·) = k(x_t, ·). That is, the kernel does not depend on the value of y. Furthermore, we assume that Y = R, and write

L(x_t, y_t, f) = (1/2)(f(x_t) − y_t)² = (1/2)(⟨f(·), k(x_t, ·)⟩_H − y_t)²,

which yields

∂_f L(x_t, y_t, f) = (f(x_t) − y_t) k(x_t, ·).   (13)

Substituting into (12) and using (9) we have α_t = −(1 − τ) C ((1 − τ) f_t(x_t) + α_t k(x_t, x_t) − y_t). After some straightforward algebraic manipulation we obtain the solution

α_t = C(1 − τ)(y_t − (1 − τ) f_t(x_t)) / (1 + C(1 − τ) k(x_t, x_t)).
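As a hedged illustration of how compact this closed form is in practice, the following sketch (names and the list-based representation are our own) performs one implicit square-loss update, decaying the past coefficients per (11) and appending the new point with the weight α_t derived above.

```python
def ilk_square_step(xs, coeffs, x_t, y_t, k, tau, C):
    """One implicit ILK update for the square loss.

    xs, coeffs: expansion points and weights of f_t; k: kernel function.
    Past coefficients decay by (1 - tau), cf. (11); the new point
    enters with the closed-form weight alpha_t.
    """
    f_t = sum(c * k(xi, x_t) for xi, c in zip(xs, coeffs))
    alpha_t = (C * (1 - tau) * (y_t - (1 - tau) * f_t)
               / (1 + C * (1 - tau) * k(x_t, x_t)))
    return xs + [x_t], [(1 - tau) * c for c in coeffs] + [alpha_t]
```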
Binary Hinge Loss. As before, we assume k((x_t, y_t), ·) = k(x_t, ·), and set Y = {±1}. The hinge loss for binary classification can be written as

L(x_t, y_t, f) = (γ − y_t f(x_t))₊ = (γ − y_t ⟨f, k(x_t, ·)⟩_H)₊,   (14)

where γ > 0 is the margin parameter, and (·)₊ := max(0, ·). Recall that the subgradient is a set, and the function is said to be differentiable at a point if this set is a singleton [4]. The binary hinge loss is not differentiable at the hinge point, but its subgradient exists everywhere. Writing ∂_f L(x_t, y_t, f) = β_t k(x_t, ·), we have:

y_t f(x_t) > γ ⟹ β_t = 0;   (15a)
y_t f(x_t) = γ ⟹ β_t ∈ [0, −y_t];   (15b)
y_t f(x_t) < γ ⟹ β_t = −y_t.   (15c)

We need to balance between two conflicting requirements while computing α_t. On one hand we want the loss to be zero, which can be achieved by setting γ − y_t f_{t+1}(x_t) = 0. On the other hand, the gradient of the loss at the new point, ∂_f L(x_t, y_t, f_{t+1}), must satisfy (15). We satisfy both constraints by appropriately clipping the optimal estimate of α_t. Let α̃_t denote the optimal estimate of α_t which leads to γ − y_t f_{t+1}(x_t) = 0. Using (9) we have γ − y_t ((1 − τ) f_t(x_t) + α̃_t k(x_t, x_t)) = 0, which yields

α̃_t = (γ − (1 − τ) y_t f_t(x_t)) / (y_t k(x_t, x_t)) = y_t (γ − (1 − τ) y_t f_t(x_t)) / k(x_t, x_t).

On the other hand, by using (15) and (12) we have α_t y_t ∈ [0, (1 − τ)C]. By combining the two scenarios, we arrive at the final update

α_t = α̃_t if y_t α̃_t ∈ [0, (1 − τ)C];   α_t = 0 if y_t α̃_t < 0;   α_t = y_t (1 − τ)C if y_t α̃_t > (1 − τ)C.   (16)

The updates for the hinge loss used in novelty detection are very similar.
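The clipping in (16) is a one-liner; the following sketch (our own, with the precomputed quantities f_t(x_t) and k(x_t, x_t) passed in as arguments) returns the coefficient of k(x_t, ·) for the implicit binary hinge update.

```python
def ilk_hinge_alpha(f_t_x, k_xx, y_t, gamma, tau, C):
    """Coefficient alpha_t for the implicit binary hinge update (16).

    f_t_x = f_t(x_t), k_xx = k(x_t, x_t), y_t in {-1, +1}.
    First solve for the value that would zero the loss, then clip so
    that the gradient condition (15) still holds.
    """
    alpha_hat = y_t * (gamma - (1 - tau) * y_t * f_t_x) / k_xx
    clipped = min(max(y_t * alpha_hat, 0.0), (1 - tau) * C)
    return y_t * clipped
```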
Graph Structured Loss. The graph-structured loss on a label domain can be written as

L(x_t, y_t, f) = ( −f(x_t, y_t) + max_{ỹ≠y_t} ( Δ(y_t, ỹ) + f(x_t, ỹ) ) )₊.   (17)

Here, the margin of separation between labels is given by Δ(y_t, ỹ), which in turn depends on the graph structure of the output space. This is a very general loss, which includes the binary and multiclass hinge losses as special cases (see e.g. [3]). We briefly summarize the update equations for this case. Let y* = argmax_{ỹ≠y_t} { Δ(y_t, ỹ) + f_t(x_t, ỹ) } denote the best runner-up label for the current instance x_t. Then set α_{t,y_t} = −α_{t,y*} = α_t, use k_t(y, y′) to denote k((x_t, y), (x_t, y′)), and write

α̃_t = ( −(1 − τ) f_t(x_t, y_t) + Δ(y_t, y*) + (1 − τ) f_t(x_t, y*) ) / ( k_t(y_t, y_t) + k_t(y*, y*) − 2 k_t(y_t, y*) ).

The updates are now given by

α_t = 0 if α̃_t < 0;   α_t = α̃_t if α̃_t ∈ [0, (1 − τ)C];   α_t = (1 − τ)C if α̃_t > (1 − τ)C.   (18)

Logistic Regression Loss. The logistic regression loss and its gradient can be written as

L(x_t, y_t, f) = log(1 + exp(−y_t f(x_t))),   ∂_f L(x_t, y_t, f) = −y_t k(x_t, ·) / (1 + exp(y_t f(x_t))),

respectively. Using (9) and (12), we obtain

α_t = (1 − τ) C y_t / (1 + exp(y_t (1 − τ) f_t(x_t) + α_t y_t k(x_t, x_t))).

Although this equation does not give a closed-form solution, the value of α_t can still be obtained by using a numerical root-finding routine, such as those described in [5].

2.1 ILK and SILK Algorithms

We refer to the algorithm that performs implicit updates as ILK, for "implicit online learning with kernels". The update equations of ILK enjoy certain advantages. For example, using (11) it is easy to see that an exponential decay term can be naturally incorporated to down-weight past observations:

f_{t+1} = Σ_{i=1}^{t} Σ_{ỹ∈Y} (1 − τ)^{t−i} α_{i,ỹ} k((x_i, ỹ), ·).   (19)

Intuitively, the parameter τ ∈ (0, 1) (determined by η and λ) trades off between the regularizer and the loss on the current sample. In the case of hinge losses, both binary and graph structured, the weight |α_t| is always upper bounded by (1 − τ)C, which ensures limited influence from outliers (cf. (16) and (18)).

A major drawback of the ILK algorithm described above is that the size of the kernel expansion grows linearly with the number of data points up to time t (see (10)). In many practical domains where real-time prediction is important (for example, video surveillance), storing all the past observations and their coefficients is prohibitively expensive. Therefore, following Kivinen et al. [2] and Vishwanathan et al. [3], one can truncate the function expansion by storing only a few relevant past observations. We call this version of our algorithm SILK, for "sparse ILK". Specifically, the SILK algorithm maintains a buffer of size ω. Each new point is inserted into the buffer with coefficient α_t. Once the buffer limit ω is exceeded, the point with the lowest coefficient value is discarded to maintain a bound on memory usage. This scheme is more effective than the straightforward least recently used (LRU) strategy proposed in Kivinen et al. [2] and Vishwanathan et al. [3]. It is relatively straightforward to show that the difference between the true predictor and its truncated version obtained by storing only ω expansion coefficients decreases exponentially as the buffer size ω increases [2].
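A minimal sketch of this bounded-memory rule follows (our own illustration; α_t is assumed to have been computed by one of the loss-specific updates above, and the list-of-pairs representation is an assumption).

```python
def silk_step(buffer, x_t, alpha_t, tau, omega):
    """Bounded-memory SILK update on a list of [x, coeff] entries.

    Past weights decay by (1 - tau), as in (19); once the buffer
    exceeds omega entries, the point with the smallest |coefficient|
    is evicted.
    """
    for entry in buffer:
        entry[1] *= (1.0 - tau)
    buffer.append([x_t, alpha_t])
    if len(buffer) > omega:
        j = min(range(len(buffer)), key=lambda i: abs(buffer[i][1]))
        buffer.pop(j)
    return buffer
```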
3 Theoretical Analysis

In this section we primarily focus on analyzing the graph-structured loss (17), establishing relative loss bounds and analyzing the rate of convergence of ILK and SILK. Our proof techniques adopt those of Kivinen et al. [2]. Due to space constraints, we leave some details and analysis to the full version of the paper. Although the bounds we obtain are similar to those obtained in [2], our experimental results clearly show that ILK and SILK are stronger than the NORMA strategy of [2] and its truncated variant.

3.1 Mistake Bound

We begin with a technical definition.

Definition 1. A sequence of hypotheses {(f_1, ..., f_T) : f_t ∈ H} is said to be (T, B, D₁, D₂) bounded if it satisfies ||f_t||²_H ≤ B² for all t ∈ {1, ..., T}, Σ_t ||f_t − f_{t+1}||_H ≤ D₁, and Σ_t ||f_t − f_{t+1}||²_H ≤ D₂ for some B, D₁, D₂ ≥ 0. The set of all (T, B, D₁, D₂) bounded hypothesis sequences is denoted as F(T, B, D₁, D₂).

Given a fixed sequence of observations {(x₁, y₁), ..., (x_T, y_T)}, and a sequence of hypotheses {(f₁, ..., f_T) ∈ F}, the number of errors M is defined as M := |{t : Δf(x_t, y_t, y_t*) ≤ 0}|, where Δf(x_t, y_t, y_t*) = f(x_t, y_t) − f(x_t, y_t*) and y_t* is the best runner-up label. To keep the equations succinct, we denote Δk_t((y_t, y), ·) := k((x_t, y_t), ·) − k((x_t, y), ·), and Δk_t((y_t, y), (y_t, y)) := ||Δk_t((y_t, y), ·)||²_H = k_t(y_t, y_t) − 2 k_t(y_t, y) + k_t(y, y). In the following we bound the number of mistakes M made by ILK by the cumulative loss of an arbitrary sequence of hypotheses from F(T, B, D₁, D₂).

Theorem 2. Let {(x₁, y₁), ..., (x_T, y_T)} be an arbitrary sequence of observations such that Δk_t((y_t, y), (y_t, y)) ≤ X² holds for any t, any y, and for some X > 0. For an arbitrary sequence of hypotheses (g₁, ..., g_T) ∈ F(T, B, D₁, D₂) with average margin μ = (1/|E|) Σ_{t∈E} ( Δg_t(x_t, y_t, y_t^{g}) − Δ(y_t, y_t^{g}) ) over the set of error trials E, and bounded cumulative loss K := Σ_t L(x_t, y_t, g_t), the number of mistakes of the sequence of hypotheses (f₁, ..., f_T) generated by ILK with learning rate η_t = η, η = (1/B)√(D₂/T), is upper-bounded by

M ≤ K/μ + 2S/μ² + 2 (S/μ²)^{1/2} (S/μ² + K/μ)^{1/2},   (20)

where S = (X²/4)(B² + B D₁ + B √(T D₂)), μ > 0, and y_t^{g} denotes the best runner-up label under hypothesis g_t.

When considering the stationary distribution in a separable (noiseless) scenario, this theorem allows us to obtain a mistake bound that is reminiscent of the Perceptron convergence theorem. In particular, if we assume the sequence of hypotheses (g₁, ..., g_T) ∈ F(T, B, D₁ = 0, D₂ = 0) and the cumulative loss K = 0, we obtain a bound on the number of mistakes

M ≤ B²X²/μ².   (21)

3.2 Convergence Analysis

The following theorem asserts that under mild assumptions, the cumulative risk Σ_{t=1}^T R(x_t, y_t, f_t) of the hypothesis sequence produced by ILK converges to the minimum risk of the batch learning counterpart g* := argmin_{g∈H} Σ_{t=1}^T R(x_t, y_t, g) at a rate of O(T^{−1/2}).

Theorem 3. Let {(x₁, y₁), ..., (x_T, y_T)} be an arbitrary sequence of observations such that Δk_t((y_t, y_t), (y_t, y_t)) ≤ X² holds for any t, any y. Denote by (f₁, ..., f_T) the sequence of hypotheses produced by ILK with learning rate η_t = η t^{−1/2}, by Σ_{t=1}^T R(x_t, y_t, f_t) the cumulative risk of this sequence, and by Σ_{t=1}^T R(x_t, y_t, g) the batch cumulative risk of (g, ..., g), for any g ∈ H. Then

Σ_{t=1}^T R(x_t, y_t, f_t) ≤ Σ_{t=1}^T R(x_t, y_t, g) + a T^{1/2} + b,

where U = CX/√λ, a = 4λC²X² + 2U²√λ, and b = U²/(2η) are constants. In particular, if g* = argmin_{g∈H} Σ_{t=1}^T R(x_t, y_t, g), we obtain

(1/T) Σ_{t=1}^T R(x_t, y_t, f_t) ≤ (1/T) Σ_{t=1}^T R(x_t, y_t, g*) + O(T^{−1/2}).   (22)

Essentially the same theorem holds for SILK, but now with a slightly larger constant a = 4λ(1 + ω²)C²X² + 2U²√λ. In addition, denote by g* the minimizer of the batch cumulative risk Σ_t R(x_t, y_t, g), and by f* the minimizer of the minimum expected risk, with R(f*) := min_f E_{(x,y)∼P(x,y)} R(x, y, f). As stated in [6] for the structured risk minimization framework, as the sample size T grows, T → ∞, we obtain g* → f* in probability. This subsequently guarantees the convergence of the average regularized risk of ILK and SILK to R(f*). The upper bound in the above theorem can be directly plugged into Corollary 2 of Cesa-Bianchi et al. [7] to obtain bounds on the generalization error of ILK. Let f̄ denote the average hypothesis produced by averaging over all hypotheses f₁, ..., f_T. Then for any δ ∈ (0, 1), with probability at least 1 − δ, the expected risk of f̄ is upper bounded by the risk of the best hypothesis chosen in hindsight plus a term which grows as O((1/T)^{1/4}).

[Figure 1 plots: left, the synthetic two-class data; middle, average cumulative mistakes of NORMA, ILK, SILK, ILK(0) and truncated NORMA(0), roughly in the range 140 to 180; right, per-trial mistakes of NORMA against mistakes of ILK]

Figure 1: The left panel depicts a synthetic data sequence containing two classes (blue crosses and red diamonds; see the zoomed-in portion in the bottom-left corner), with each class being sampled from a mixture of two drifting Gaussian distributions. Performance comparison of ILK vs. NORMA and truncated NORMA on this data: average cumulative error over 100 trials (middle), and average cumulative error for each trial (right).

4 Experiments

We evaluate the performance of ILK and SILK by comparing them to NORMA [2] and its truncated variant. On OCR data, we also compare our algorithms to SVMD, a sophisticated step-size adaptation algorithm in RKHS presented in [3]. For a fair comparison we tuned the parameters of each algorithm separately and report the best results. In addition, we fixed the margin to γ = 1 for all our loss functions.
Binary Classification on Synthetic Sequences. The aim here is to demonstrate that ILK is better than NORMA in coping with non-stationary distributions. Each trial of our experiment works with 2000 two-dimensional instances sampled from a non-stationary distribution (see Figure 1), and the task is to classify the sampled points into one of two classes. The central panel of Figure 1 compares the number of errors made by various algorithms, averaged over 100 trials. Here, ILK and SILK make fewer mistakes than NORMA and truncated NORMA. We also tested two other algorithms: ILK(0), obtained by setting the decay factor τ to zero, and similarly NORMA(0). As expected, both these variants make more mistakes because they are unable to forget the past, which is crucial for obtaining good performance in a non-stationary environment. To further compare the performance of ILK and NORMA we plot the relative errors of these two algorithms in the right panel of Figure 1. As can be seen, ILK outperforms NORMA on this simple non-stationary problem.

Novelty Detection on Video Sequences. As a significant application, we applied SILK to a background subtraction problem in video data analysis. The goal is to detect the moving foreground objects (such as cars, persons, etc.) against relatively static background scenes in real time. The challenge in this application is to cope with variations in lighting as well as jitter due to shaking of the camera. We formulate the problem as a novelty detection task using a network of classifiers, one for each pixel. For this task we compare the performance of SILK vs. truncated NORMA. (The ILK and NORMA algorithms are not suitable since their storage requirements grow linearly.) A constant buffer size ω = 20 is used for both algorithms in this application. We report further implementation details in the full version of this paper.

The first task is to identify people, under varying lighting conditions, in an indoor video sequence taken with a static camera. The left-hand panel of Figure 2 plots the ROC curves of NORMA and SILK, which demonstrates the overall better performance of SILK. We sampled one of the initial frames after the light was switched off and back on. The results are shown in the right panel of Figure 2. As can be seen, SILK is able to recover from the change in lighting condition better than NORMA, and is able to identify foreground objects reasonably close to the ground truth.

[Figure 2 plots: left, ROC curves (true positive rate vs. false positive rate) for SILK and NORMA; right, frame 1353 shown with the ground truth, NORMA, and SILK detections]

Figure 2: Performance comparison of SILK vs. truncated NORMA on a background subtraction (moving object detection) task, with varying lighting conditions. ROC curve (left) and a comparison of algorithms immediately after the lights have been switched off and on (right).

Figure 3: Performance of SILK on a road traffic sequence (moving car detection) task, with a jittery camera. Two random frames and the performance of SILK on those frames are depicted.

Our second experiment is a traffic sequence taken by a camera that shakes irregularly, which creates a challenging problem for any novelty detection algorithm. As seen from the randomly chosen frames plotted in Figure 3, SILK manages to obtain a visually plausible detection result. We cannot report a quantitative comparison with other methods in this case, due to the lack of manually labeled ground-truth data.
Binary and Multiclass Classification on OCR Data. We present two sets of experiments on the MNIST dataset. The aim of the first set of experiments is to show that SILK is competitive with NORMA and SVMD on a simple binary task. The data is split into two classes comprising the digits 0–4 and 5–9, respectively. A polynomial kernel of degree 9 and a buffer size of ω = 128 are employed for all three algorithms. Figure 4 (a) plots the current average error rate, i.e., the total number of errors on the examples seen so far divided by the iteration number. As can be seen, after the initial oscillations have died out, SILK consistently outperforms SVMD and NORMA, achieving a lower average error after one pass through the dataset. Figure 4 (b) examines the effect of buffer size on SILK. As expected, smaller buffer sizes result in larger truncation error and hence worse performance. With increasing buffer size the asymptotic average error decreases. For the 10-way multiclass classification task we set ω = 128, and used a Gaussian kernel following [3]. Figure 4 (c) shows that SILK consistently outperforms NORMA and SVMD, while the trend with increasing buffer size is repeated, as shown in Figure 4 (d). In both experiments, we used the parameters for NORMA and SVMD reported in [3], and set λ = 0.00005 and C = 100 for SILK.

Figure 4: Performance comparison of different algorithms over one run of the MNIST dataset. (a) Online binary classification. (b) Performance of SILK using different buffer sizes. (c) Online 10-way multiclass classification. (d) Performance of SILK on three different buffer sizes.

5 Outlook and Discussion

In this paper we presented a general recipe for performing implicit online updates in an RKHS. Specifically, we showed that for many popular loss functions these updates can be computed efficiently. We then presented a sparse version of our algorithm which uses limited basis expansions to approximate the function. For the graph-structured loss we also showed loss bounds and rates of convergence. Experiments on real-life datasets demonstrate that our algorithm is able to track non-stationary targets, and outperforms existing algorithms.

For the binary hinge loss, when τ = 0 the proposed update formula for α_t (16) reduces to the PA-I algorithm of Crammer et al. [8]. Curiously enough, the motivation for the updates in both cases seems completely different. While we use an implicit update formula, Crammer et al. [8] use a Lagrangian formulation and a passive-aggressive strategy. Furthermore, the loss functions they handle are generally linear (hinge loss and its various generalizations), while our updates can handle other non-linear losses such as the quadratic or logistic loss.

Our analysis of loss bounds is admittedly straightforward given current results. The use of more sophisticated analysis and the extension of our bounds to other non-linear loss functions are ongoing. We are also applying our techniques to video analysis applications by exploiting the structure of the output space.

Acknowledgements

We thank Xinhua Zhang, Simon Guenter, Nic Schraudolph and Bob Williamson for carefully proofreading the paper, pointing us to many references, and helping us improve the presentation style. National ICT Australia is funded by the Australian Government's Department of Communications, Information Technology and the Arts and the Australian Research Council through Backing Australia's Ability and the ICT Center of Excellence program.
This work is supported by the IST Program of the European Community, under the PASCAL Network of Excellence, IST-2002-506778.

References

[1] J. Kivinen and M. K. Warmuth. Exponentiated gradient versus gradient descent for linear predictors. Information and Computation, 132(1):1–64, 1997.
[2] J. Kivinen, A. J. Smola, and R. C. Williamson. Online learning with kernels. IEEE Transactions on Signal Processing, 52(8), 2004.
[3] S. V. N. Vishwanathan, N. N. Schraudolph, and A. J. Smola. Step size adaptation in reproducing kernel Hilbert space. Journal of Machine Learning Research, 7, 2006.
[4] R. T. Rockafellar. Convex Analysis, volume 28 of Princeton Mathematics Series. Princeton University Press, 1970.
[5] W. H. Press, S. A. Teukolsky, W. T. Vetterling, and B. P. Flannery. Numerical Recipes in C: The Art of Scientific Computing (2nd ed.). Cambridge University Press, Cambridge, 1992. ISBN 0-521-43108-5.
[6] V. Vapnik. Statistical Learning Theory. John Wiley and Sons, New York, 1998.
[7] N. Cesa-Bianchi, A. Conconi, and C. Gentile. On the generalization ability of on-line learning algorithms. IEEE Trans. Information Theory, 50(9):2050–2057, 2004.
[8] K. Crammer, O. Dekel, J. Keshet, S. Shalev-Shwartz, and Y. Singer. Online passive-aggressive algorithms. Journal of Machine Learning Research, 7:551–585, 2006.
Unsupervised Regression with Applications to Nonlinear System Identification

Ali Rahimi, Intel Research Seattle, Seattle, WA 98105 ([email protected])
Ben Recht, California Institute of Technology, Pasadena, CA 91125 ([email protected])

Abstract

We derive a cost functional for estimating the relationship between high-dimensional observations and the low-dimensional process that generated them with no input-output examples. Limiting our search to invertible observation functions confers numerous benefits, including a compact representation and no suboptimal local minima. Our approximation algorithms for optimizing this cost functional are fast and give diagnostic bounds on the quality of their solution. Our method can be viewed as a manifold learning algorithm that utilizes a prior on the low-dimensional manifold coordinates. The benefits of taking advantage of such priors in manifold learning and searching for the inverse observation functions in system identification are demonstrated empirically by learning to track moving targets from raw measurements in a sensor network setting and in an RFID tracking experiment.

1 Introduction

Measurements from sensor systems typically serve as a proxy for latent variables of interest. To recover these latent variables, the parameters of the sensor system must first be determined. When pairs of measurements and their corresponding latent variables are available, fully supervised regression techniques may be applied to learn a mapping between latent states and measurements. In many applications, however, latent states cannot be observed and only a diffuse prior on them is available. In such cases, marginalizing over the latent variables and searching for the model parameters using Expectation Maximization (EM) has become a popular approach [3, 9, 19]. Unfortunately, such algorithms are prone to local minima and require very careful initialization in practice.

Using a simple change-of-variable model, we derive an approximation algorithm for the Unsupervised Regression problem: estimating the nonlinear relationship between latent states and their observations when no example pairs are available, when the observation function is invertible, and when the measurement noise is small. Our method is not susceptible to local minima and provides a guarantee on the quality of the recovered observation function. We identify conditions under which our estimate of the mapping is asymptotically consistent and empirically evaluate the quality of our solutions and their stability under variations of the prior. Because our algorithm takes advantage of an explicit prior on the latent variables, it recovers latent variables more accurately than manifold learning algorithms when applied to similar tasks.

Our method may be applied to estimate the observation function in nonlinear dynamical systems by enforcing a Markovian dynamics prior over the latent states. We demonstrate this approach to nonlinear system identification by learning to track a moving object in a field of completely uncalibrated sensor nodes whose measurement functions are unknown. Given that the object moves smoothly over time, our algorithm learns a function that maps the raw measurements from the sensor network to the target's location. In another experiment, we learn to track Radio Frequency ID (RFID) tags given a sequence of voltage measurements induced by the tag in a set of antennae.
Given only these measurements and that the tag moves smoothly over time, we can recover a mapping from the voltages to the position of the tag. These results are surprising because no parametric sensor model is available in either scenario. We are able to recover the measurement model up to an affine transform given only raw measurement sequences and a diffuse prior on the state sequence.

2 A diffeomorphic warping model for unsupervised regression

We assume that the set X = {x_i}_{i=1,...,N} of latent variables is drawn (not necessarily iid) from a known distribution, p_X(X) = p_X(x₁, ..., x_N). The set of measurements Y = {y_i}_{i=1,...,N} is the output of an unknown invertible nonlinearity applied to each latent variable, y_i = f₀(x_i). We assume that observations, y_i ∈ R^D, are higher dimensional than latent variables x_i ∈ R^d. Computing a MAP estimate of f₀ requires marginalizing over X and maximizing over f. EM, or some other form of coordinate ascent on a Jensen bound of the likelihood, is a common way of estimating the parameters of this model, but such methods suffer from local minima.

Because we have assumed that f₀ is invertible and that there is no observation noise, this process describes a change of variables. The true distribution p_Y(Y) over Y can be computed in closed form using a generalization of the standard change of variables formula (see [14, thm 9.3.1] and [7, chap 11]):

p_Y(Y) = p_Y(Y; f₀) = p_X(f₀⁻¹(y₁), ..., f₀⁻¹(y_N)) ∏_{i=1}^{N} det( ∇f(f₀⁻¹(y_i))′ ∇f(f₀⁻¹(y_i)) )^{−1/2}.   (1)

The determinant corrects the warping of each infinitesimal volume element around f₀⁻¹(y_i) by accounting for the stretching induced by the nonlinearity. The change of variables formula immediately yields a likelihood over f, circumventing the need for integrating over the latent variables. We assume f₀ diffeomorphically maps a ball in R^d containing the data onto its image. In this case, there exists a function g defined on an open set containing the image of f such that g(f(x)) = x and ∇g∇f = I for all x in the open set [5]. Consequently, we can substitute g for f⁻¹ in (1) and, taking advantage of the identity det(∇f′∇f)⁻¹ = det ∇g∇g′, write its log likelihood as

l_Y(Y; g) = log p_Y(Y; g) = log p_X(g(y₁), ..., g(y_N)) + (1/2) Σ_{i=1}^{N} log det( ∇g(y_i) ∇g(y_i)′ ).   (2)

For many common priors p_X, the maximum likelihood g yields an asymptotically consistent estimate of the true distribution p_Y. When certain conditions on p_X are met (including stationarity, ergodicity, and kth-order Markov approximability), a generalized version of the Shannon-McMillan-Breiman theorem [1] guarantees that log p_Y(Y; g) asymptotically converges to the relative entropy rate between the true p_Y(Y) and p_Y(Y; g). This quantity is maximized when these two distributions are equal. Therefore, if the true p_Y follows the change of variable model (1), the recovered g converges to the true f₀⁻¹ in the sense that they both describe a change of variable from the prior distribution p_X to the distribution p_Y. Note that although our generative model assumes no observation noise, some noise in Y can be tolerated if we constrain our search over smooth functions g. This way, small perturbations in y due to observation noise produce small perturbations in g(y).
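To make the objective (2) concrete, the following toy Python sketch (our own; the linear form of g, the iid prior, and all names are assumptions made for illustration) evaluates the change-of-variables log-likelihood for a linear map g(y) = Wy, whose Jacobian is the constant matrix W, so the correction term reduces to (N/2) log det(WW′).

```python
import numpy as np

def change_of_variables_loglik(Y, W, log_px):
    """Evaluate (2) for the linear map g(y) = W y.

    Y: N x D observations; W: d x D; log_px: log-density on an N x d array.
    Because g is linear, grad g(y_i) = W at every point, so the logdet
    correction is the same for each observation.
    """
    X = Y @ W.T
    _, logdet = np.linalg.slogdet(W @ W.T)   # log det(grad g grad g')
    return log_px(X) + 0.5 * Y.shape[0] * logdet

# iid standard-normal prior, one simple choice of p_X:
log_px = lambda X: -0.5 * np.sum(X ** 2) - 0.5 * X.size * np.log(2 * np.pi)
rng = np.random.default_rng(0)
Y = rng.normal(size=(100, 3))
print(change_of_variables_loglik(Y, rng.normal(size=(2, 3)), log_px))
```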
3 Approximation algorithms for finding the inverse mapping

We constrain our search for g to a subset of smooth functions by requiring that g have a finite representation as a weighted sum of positive definite kernels k centered on the observed data, g(y) = Σ_{i=1}^{N} c_i k(y, y_i), with the weight vectors c_i ∈ R^d. Accordingly, applying g to the set of observations gives g(Y) = CK, where C = [c₁ ... c_N] and K is the kernel matrix with K_ij = k(y_i, y_j). In addition, ∇g(y) = CΨ(y), where Ψ(y) is an N × D matrix whose ith row is ∂k(y_i, y)/∂y. We tune the smoothness of g by regularizing (2) with the RKHS norm [17] of g. This norm has the form ||g||²_k = tr CKC′, and the regularization parameter is set to λ/2. For simplicity, we require p_X to be a Gaussian with mean zero and inverse covariance Ω_X, but we note our methods can be extended to any log-concave distribution. Substituting into (2) and adding the smoothness penalty on g, we obtain:

max_C  −vec(KC′)′ Ω_X vec(KC′) − λ tr CKC′ + Σ_{i=1}^{N} log det( C Ψ(y_i) Ψ(y_i)′ C′ ),   (3)

where the vec(·) operator stacks up the columns of its matrix argument into a column vector. Equation (3) is not concave in C and is likely to be hard to maximize exactly. This is because log det(AA′) is not concave for A ∈ R^{d×D}. Since the cost is non-concave, gradient descent methods may converge to local minima. Such local minima, in addition to the burdensome time and storage requirements, rule out descent strategies for optimizing (3).

Our first algorithm for approximately solving this optimization problem constructs a semidefinite relaxation using a standard approach that replaces outer products of vectors with positive semidefinite matrices. Rewrite (3) as

max_C  −tr( M vec(C′) vec(C′)′ ) + Σ_{i=1}^{N} log det[ tr( J_i^{kl} vec(C′) vec(C′)′ ) ]_{kl},   (4)

M = (I_d ⊗ K) Ω_X (I_d ⊗ K) + λ (I_d ⊗ K),   J_i^{kl} = E^{lk} ⊗ Ψ(y_i)Ψ(y_i)′,   (5)

where the kl-th entry of the matrix argument of the logdet is as specified, and the matrix E^{ij} is zero everywhere except for a 1 in its ij-th entry. This optimization is equivalent to

max_{Z⪰0}  −tr(MZ) + Σ_{i=1}^{N} log det[ tr( J_i^{kl} Z ) ]_{kl},   (6)

subject to the additional constraint that rank(Z) = 1. Dropping the rank constraint yields a concave relaxation of (3). Standard interior point methods [20] or subgradient methods [2] can efficiently compute the optimal Z for this relaxed problem. A set of coefficients C can then be extracted from the top eigenvectors of the optimal Z, yielding an approximate solution to (3). Since (6) without the rank constraint is a relaxation of (3), the optimum of (6) is an upper bound on that of (3). Thus we can bound the difference in the value of the extracted solution and that of the global maximum of (3). As we will see in the following section, this method produces high quality solutions for a diverse set of learning problems.
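As one hedged illustration of how the rank-relaxed version of (6) might be posed with an off-the-shelf convex modeling tool (this is our own sketch, assuming CVXPY is installed and that M and the J_i^{kl} blocks of (5) have been precomputed; it is not the solver used in this work), consider:

```python
import numpy as np
import cvxpy as cp

def sdp_relaxation(M, J, d, N):
    """Rank-relaxed SDP (6): maximize -tr(MZ) + sum_i log det [tr(J_i^{kl} Z)]_{kl}.

    M: (Nd x Nd) array; J[i][k][l]: the J_i^{kl} blocks from (5).
    Returns a rounded vec(C') taken from the top eigenvector of Z*.
    """
    n = M.shape[0]
    Z = cp.Variable((n, n), PSD=True)
    logdets = []
    for i in range(N):
        # d x d affine matrix whose (k, l) entry is tr(J_i^{kl} Z)
        rows = [[cp.trace(J[i][k][l] @ Z) for l in range(d)]
                for k in range(d)]
        logdets.append(cp.log_det(cp.bmat(rows)))
    cp.Problem(cp.Maximize(-cp.trace(M @ Z) + sum(logdets))).solve()
    w, v = np.linalg.eigh(Z.value)          # round via the top eigenvector
    return np.sqrt(max(w[-1], 0.0)) * v[:, -1]
```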
The expected covariance of X, denoted by ? ? X only influences the final solution the block diagonals of ??1 . However, the particular choice of ? X up to a scaling and rotation on g, so in practice, we set it to the identity matrix. We thus obtain the following optimization problem: 0 min vec (KC0 ) ?X vec (KC0 ) + ?trCKC0 C s.t. 1 ?X CK(CK)0 = ? N 1 CK1 = 0, N (7) (8) (9) where 1 is a column vector of 1s. This optimization problem searches for a g that transforms observations into variables that are given high probability by pX and match its stationary statistics. This is a quadratic minimization with a single quadratic constraint and, after eliminating the linear constraints with a change of variables, can be solved as a generalized eigenvalue problem [4]. 4 Related Work Manifold learning algorithms and unsupervised nonlinear system identification algorithms solve variants of the unsupervised regression problem considered here. Our method provides a statistical model that augments manifold learning algorithms with a prior on latent variables. Our spectral algorithm from Section 3 reduces to a variant of KPCA [15] when X are drawn iid from a spherical Gaussian. By adopting a nearest-neighbors form for g instead of the RBF form, we obtain an algorithm that is similar to embedding step of LLE [12, chap 5]. In addition to our use of dynamics, a notable difference between our method and principal manifold methods [16] is that instead of learning a mapping from states to observations, we learn mappings from observations to states. This reduces the storage and computational requirements when processing high-dimensional data. As far as we are aware, in the manifold learning literature, only Jenkins and Mataric [6] explicitly take temporal coherency into account, by increasing the affinity of temporally adjacent points and applying Isomap [18]. State-of-the-art nonlinear system identification techniques seek to recover all the parameters of a continuous hidden Markov chain with nonlinear state transitions and observation functions given noisy observations [3, 8, 9, 19]. Because these models are so rich and have so many unknowns, these algorithms resort to coordinate ascent (for example, via EM), making them susceptible to local minima. In addition, each iteration of coordinate ascent requires some form of nonlinear smoothing over the latent variables, which is itself both computationally costly and becomes prone to local minima when the estimated observation function becomes non-invertible during the iterations. Further, because mappings from low-dimensional to high-dimensional vectors require many parameters to represent, existing approaches tend to be unsuitable for large-scale sensor network or image analysis problems. Our algorithms do not have local minima and represent the more compact inverse observation function where high-dimensional observations appear only in pairwise kernel evaluations. Comparisons with a semi-supervised variant of these algorithms [13] show that weak priors on the latent variables are extremely informative and that additional labeled data is often only necessary to fix the coordinate system. 5 Experiments The following experiments show that latent states and observation functions can be accurately and efficiently recovered up to a linear coordinate transformation given only raw measurements and a generic prior over the latent variables. We compare against various manifold learning and nonlinear system identification algorithms. 
4 Related Work

Manifold learning algorithms and unsupervised nonlinear system identification algorithms solve variants of the unsupervised regression problem considered here. Our method provides a statistical model that augments manifold learning algorithms with a prior on latent variables. Our spectral algorithm from Section 3 reduces to a variant of KPCA [15] when X are drawn iid from a spherical Gaussian. By adopting a nearest-neighbors form for g instead of the RBF form, we obtain an algorithm that is similar to the embedding step of LLE [12, chap 5]. In addition to our use of dynamics, a notable difference between our method and principal manifold methods [16] is that instead of learning a mapping from states to observations, we learn mappings from observations to states. This reduces the storage and computational requirements when processing high-dimensional data. As far as we are aware, in the manifold learning literature, only Jenkins and Mataric [6] explicitly take temporal coherency into account, by increasing the affinity of temporally adjacent points and applying Isomap [18].

State-of-the-art nonlinear system identification techniques seek to recover all the parameters of a continuous hidden Markov chain with nonlinear state transitions and observation functions given noisy observations [3, 8, 9, 19]. Because these models are so rich and have so many unknowns, these algorithms resort to coordinate ascent (for example, via EM), making them susceptible to local minima. In addition, each iteration of coordinate ascent requires some form of nonlinear smoothing over the latent variables, which is itself both computationally costly and prone to local minima when the estimated observation function becomes non-invertible during the iterations. Further, because mappings from low-dimensional to high-dimensional vectors require many parameters to represent, existing approaches tend to be unsuitable for large-scale sensor network or image analysis problems. Our algorithms do not have local minima and represent the more compact inverse observation function, where high-dimensional observations appear only in pairwise kernel evaluations. Comparisons with a semi-supervised variant of these algorithms [13] show that weak priors on the latent variables are extremely informative and that additional labeled data is often only necessary to fix the coordinate system.

5 Experiments

The following experiments show that latent states and observation functions can be accurately and efficiently recovered up to a linear coordinate transformation given only raw measurements and a generic prior over the latent variables. We compare against various manifold learning and nonlinear system identification algorithms. We also show that our algorithm is robust to variations in the choice of the prior. As a measure of quality, we report the affine registration error, the average residual per data point after registering the recovered latent variables with their ground truth values using an affine transformation:

err = min_{A,b} √( (1/N) Σ_{t=1}^{N} ||A x_t + b − x̄_t||² ),

where x̄_t is the ground truth setting for x_t. All of our experiments use a spherical Gaussian kernel. To define the Gaussian prior p_X, we start with a linear Gaussian Markov chain s_t = A s_{t−1} + ω_t, where A and the covariance of ω are block diagonal and define d Markov chains that evolve independently from each other according to Newtonian motion. The latent variables x_t extract the position components of s_t. The inverse covariance matrix corresponding to this process can be obtained in closed form. More details and additional experiments can be found in [12].
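For reference, the closed form is block tridiagonal, and one hedged way to assemble it is sketched below (our own code; the particular state transition A, noise scales, and initial precision are assumptions chosen to encode 1D Newtonian position-plus-velocity motion).

```python
import numpy as np

def chain_inverse_covariance(A, Qinv, P0inv, T):
    """Joint inverse covariance of s_t = A s_{t-1} + w_t, w_t ~ N(0, Q).

    Expanding -log p(s_1, ..., s_T) gives a block-tridiagonal matrix:
    each transition adds Qinv, A'Qinv A on the diagonal and -Qinv A,
    -A'Qinv on the off-diagonals.
    """
    k = A.shape[0]
    Om = np.zeros((T * k, T * k))
    Om[:k, :k] = P0inv
    for t in range(1, T):
        i, j = t * k, (t - 1) * k
        Om[i:i + k, i:i + k] += Qinv
        Om[j:j + k, j:j + k] += A.T @ Qinv @ A
        Om[i:i + k, j:j + k] -= Qinv @ A
        Om[j:j + k, i:i + k] -= A.T @ Qinv
    return Om

dt = 1.0
A = np.array([[1.0, dt], [0.0, 1.0]])        # Newtonian motion
Qinv = np.linalg.inv(np.diag([1e-4, 1e-2]))  # assumed noise scales
Omega = chain_inverse_covariance(A, Qinv, np.eye(2), T=100)
```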
For comparison, the log-likelihood of KPCA's solution was $-1.69 \times 10^{-2}$, significantly less likely than our solutions or the upper bound.

5.1 Learning to track in an uncalibrated sensor network

We consider an artificial distributed sensor network scenario where many sensor nodes are deployed randomly in a field in order to track a moving target (Figure 2(a)). The location of the sensor nodes is unknown, and the sensors are uncalibrated, so that it is not known how the position of the target maps to the reported measurements. This situation arises when it is not feasible to calibrate each sensor prior to deployment or when variations in environmental conditions affect each sensor differently. Given only the raw measurements produced by the network from watching a smoothly moving target, we wish to learn a mapping from these measurements to the location of the target, even though no functional form for the measurement model is available. A similar problem was considered by [11], who sought to recover the location of sensor nodes using off-the-shelf manifold learning algorithms.

Each latent state $x_t$ is the unknown position of the target at time t. The unknown function $f(x_t)$ gives the set of measurements $y_t$ reported by the sensor network at time t. Figure 2(b) shows the time series of measurements from observing the target. In this case, measurements were generated by having each sensor s report its true distance $d_{st}$ to the target at time t and passing it through a random nonlinearity of the form $\alpha_s \exp(-\beta_s d_{st})$. Note that only f, not the measurement function of each sensor, needs to be invertible. This is equivalent to requiring that a memoryless mapping from measurements to positions must exist.

Figure 2: (a) A target followed a smooth trajectory (dotted line) in a field of 100 randomly placed uncalibrated sensors with random and unknown observation functions (circles). (b) Time series of measurements produced by the sensor network in response to the target's motion. (c) The recovered trajectory given only raw sensor measurements, and no information about the observation function (other than smoothness and invertibility). It is recovered up to scaling and a rotation. (d) To test the recovered mapping further, the target was made to follow a zigzag pattern. (e) Output of g on the resulting measurements. The resulting trajectory is again similar to the ground truth zigzag, up to minor distortion. (f) The mapping obtained by KPCA cannot recover the zigzag, because KPCA does not utilize the prior on latent states.

Assuming only that the target vaguely follows linear-Gaussian dynamics, and given only the time series of the raw measurements from the sensor network, our learning algorithm finds a transformation that maps observations from the sensor network to the position of the target up to a linear coordinate transform (Figure 2(c)).
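To make the setup concrete, here is a sketch of how such measurements can be synthesized (the sensor count follows the figure; the gain and decay ranges are illustrative assumptions, not the values used in the experiment):

    import numpy as np

    rng = np.random.default_rng(0)

    def sensor_measurements(track, n_sensors=100):
        """Simulate an uncalibrated sensor field: sensor s at an unknown
        position reports alpha_s * exp(-beta_s * d_st), where d_st is its
        distance to the target at time t.  track: T x 2 target positions."""
        sensors = rng.uniform(-1, 1, size=(n_sensors, 2))  # unknown locations
        alpha = rng.uniform(0.5, 1.5, size=n_sensors)      # random gains
        beta = rng.uniform(0.5, 3.0, size=n_sensors)       # random decay rates
        d = np.linalg.norm(track[:, None, :] - sensors[None, :, :], axis=2)
        return alpha * np.exp(-beta * d)                   # T x n_sensors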
The recovered function g implicitly performs all the triangulation necessary for recovering the position of the target, even though the position or characteristics of the sensors were not known a priori. The bottom row of Figure 2 tests the recovered g by applying it to a new measurement set. To show that this sensor network problem is not trivial, the figure also shows the output of the mapping obtained by KPCA.

5.2 Learning to Track with the Sensetable

The Sensetable is a hardware platform for tracking the position of radio frequency identification (RFID) tags. It consists of 10 antennae woven into a flat surface 30 × 30 cm. As an RFID tag moves along the flat surface, the strength of the RF signal induced by the RFID tag in each antenna is reported, producing a time series of 10 numbers. We wish to learn a mapping from these 10 voltage measurements to the 2D position of the RFID tag. Previously, such a mapping was recovered by hand, by meticulous physical modeling of this system, followed by trial-and-error to refine these mappings; a process that took about 3 months in total [10]. We show that it is possible to recover this mapping automatically, up to an affine transformation, given only the raw time series of measurements generated by moving the RFID tag by hand on the Sensetable for about 5 minutes. This is a challenging task because the relationship between the tag's position and the observed measurements is highly oscillatory (Figure 3(a)). Once it is learned, we can use the mapping to track RFID tags. This experiment serves as a real-world instantiation of the sensor network setup of the previous section, in that each antenna effectively acts as an uncalibrated sensor node with an unknown and highly oscillatory measurement function.

Figure 3(b) shows the ground truth trajectory of the RFID tag in this data set. Given only the 5 minute-long time series of raw voltage measurements, our algorithm recovered the trajectory shown in Figure 3(c). These recovered coordinates are scaled down and flipped about both axes as compared to the ground truth coordinates. There is also some additional shrinkage in the upper right corner, but the coordinates are otherwise recovered accurately, with an affine registration error of 1.8 cm per pixel. Figure 4 shows the result of LLE, KPCA, Isomap and ST-Isomap on this data set under their best parameter settings (again found by a grid search on each algorithm's search space). None of these algorithms recover low-dimensional coordinates that resemble the ground truth. LLE, in addition to collapsing the coordinates to one dimension, exhibits severe folding, obtaining an affine registration error of 8.5 cm.

Figure 3: (a) The output of the Sensetable over a six second period, while moving the tag from the left edge of the table to the right edge. The observation function is highly complex and oscillatory. (b) The ground truth trajectory of the tag. Brighter points have greater ground truth y-value. (c) The trajectory recovered by our spectral algorithm is correct up to flips about both axes, a scale change, and some shrinkage along the edge.

Figure 4: From left to right, the trajectories recovered by LLE, KPCA, Isomap, and ST-Isomap (parameters from the panel titles: LLE k=15, Isomap K=7). All of these trajectories exhibit folding and severe distortions.
KPCA also exhibited folding and large holes, with an affine registration error of 7.2 cm. Of these, Isomap performed the best, with an affine registration error of 3.4 cm, though it exhibited some folding and a large hole in the center. Isomap with temporal coherency performed similarly, with a best affine registration error of 3.1 cm. Smoothing the output of these algorithms using the prior sometimes improves their accuracy by a few millimeters, but more often diminishes their accuracy by causing overshoots.

To further test the mapping recovered by our algorithm, we traced various trajectories with an RFID tag and passed the resulting voltages through the recovered g. Figure 5 plots the results (after a flip about the y-axis). These shapes resemble the trajectories we traced. Noise in the recovered coordinates is due to measurement noise.

The algorithm is robust to perturbations in $p_X$. To demonstrate this, we generated 2000 random perturbations of the parameters of the inverse covariance of X used to generate the Sensetable results, and evaluated the resulting affine registration error. The random perturbations were generated by scaling the components of $A$ and the diagonal elements of the covariance of $\omega$ over four orders of magnitude using a log uniform scaling. The affine registration error was below 3.6 cm for 38% of these 2000 perturbations. Typically, only the parameters of the kernel need to be tuned. In practice, we simply choose the kernel bandwidth parameter so that the minimum entry in K is approximately 0.1.

Figure 5: Tracking RFID tags using the recovered mapping.
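Under the spherical Gaussian kernel used throughout, $k(y, y') = \exp(-\|y - y'\|^2 / 2\sigma^2)$, this bandwidth heuristic has a closed form, since the smallest entry of K corresponds to the largest pairwise distance. A minimal sketch under that assumed parameterization:

    import numpy as np
    from scipy.spatial.distance import pdist

    def pick_bandwidth(Y, target_min=0.1):
        """Choose sigma so that the smallest entry of the Gaussian kernel
        matrix over observations Y (N x D) is approximately target_min:
        exp(-d2_max / (2 sigma^2)) = target_min."""
        d2_max = pdist(Y, 'sqeuclidean').max()
        return np.sqrt(d2_max / (-2.0 * np.log(target_min)))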
6 Conclusions and Future Work

We have shown how to recover the latent variables in a dynamical system given an approximate prior on the dynamics of these variables and observations of these states through an unknown invertible nonlinearity. The requirement that the observation function be invertible is similar to the requirement in manifold learning algorithms that the manifold not intersect itself. Our algorithm enhances manifold learning algorithms by leveraging a prior on the latent variables. Because we search for a mapping from observations to unknown states (as opposed to from states to observations), we can devise algorithms that are stable and avoid local minima. We applied this methodology to learning to track objects given only raw measurements from sensors, with no constraints on the observation model other than invertibility and smoothness. We are currently evaluating various ways to relax the invertibility requirement on the observation function by allowing invertibility up to a linear subspace. We are also exploring different prior models, and experimenting with ways to jointly optimize over g and the parameters of $p_X$.

References

[1] P. H. Algoet and T. M. Cover. A sandwich proof of the Shannon-McMillan-Breiman theorem. The Annals of Probability, 16:899-909, 1988.
[2] Aharon Ben-Tal and Arkadi Nemirovski. Non-Euclidean restricted memory level method for large-scale convex optimization. Mathematical Programming, 102:407-456, 2005.
[3] Z. Ghahramani and S. Roweis. Learning nonlinear dynamical systems using an EM algorithm. In Advances in Neural Information Processing Systems (NIPS), 1998.
[4] G. Golub and C. F. Van Loan. Matrix Computations. The Johns Hopkins University Press, 1989.
[5] V. Guillemin and A. Pollack. Differential Topology. Prentice Hall, Englewood Cliffs, New Jersey, 1974.
[6] O. Jenkins and M. Mataric. A spatio-temporal extension to Isomap nonlinear dimension reduction. In International Conference on Machine Learning (ICML), 2004.
[7] F. Jones. Advanced Calculus. http://www.owlnet.rice.edu/~fjones, unpublished.
[8] A. Juditsky, H. Hjalmarsson, A. Benveniste, B. Delyon, L. Ljung, J. Sjöberg, and Q. Zhang. Nonlinear black-box models in system identification: Mathematical foundations. Automatica, 31(12):1725-1750, 1995.
[9] N. D. Lawrence. Gaussian process latent variable models for visualisation of high dimensional data. In Advances in Neural Information Processing Systems (NIPS), 2004.
[10] J. Patten, H. Ishii, J. Hines, and G. Pangaro. Sensetable: A wireless object tracking platform for tangible user interfaces. In CHI, 2001.
[11] N. Patwari and A. O. Hero. Manifold learning algorithms for localization in wireless sensor networks. In International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2004.
[12] A. Rahimi. Learning to Transform Time Series with a Few Examples. PhD thesis, Massachusetts Institute of Technology, Computer Science and AI Lab, Cambridge, Massachusetts, USA, 2005.
[13] A. Rahimi, B. Recht, and T. Darrell. Learning appearance manifolds from video. In Computer Vision and Pattern Recognition (CVPR), 2005.
[14] I. K. Rana. An Introduction to Measure Theory and Integration. AMS, second edition, 2002.
[15] B. Schölkopf, A. Smola, and K.-R. Müller. Nonlinear component analysis as a kernel eigenvalue problem. Neural Computation, 10:1299-1319, 1998.
[16] A. Smola, S. Mika, B. Schölkopf, and R. C. Williamson. Regularized principal manifolds. Journal of Machine Learning Research, 1:179-209, 2001.
[17] T. Evgeniou, M. Pontil, and T. Poggio. Regularization networks and support vector machines. Advances in Computational Mathematics, 2000.
[18] J. B. Tenenbaum, V. de Silva, and J. C. Langford. A global geometric framework for nonlinear dimensionality reduction. Science, 290(5500):2319-2323, 2000.
[19] H. Valpola and J. Karhunen. An unsupervised ensemble learning method for nonlinear dynamic state-space models. Neural Computation, 14(11):2647-2692, 2002.
[20] Lieven Vandenberghe, Stephen Boyd, and Shao-Po Wu. Determinant maximization with linear matrix inequality constraints. SIAM Journal on Matrix Analysis and Applications, 19(2):499-533, 1998.
Reconfigurable Neural Net Chip with 32K Connections

H. P. Graf, R. Janow, D. Henderson, and R. Lee
AT&T Bell Laboratories, Room 4G320, Holmdel, NJ 07733

Abstract

We describe a CMOS neural net chip with a reconfigurable network architecture. It contains 32,768 binary, programmable connections arranged in 256 'building block' neurons. Several 'building blocks' can be connected to form long neurons with up to 1024 binary connections or to form neurons with analog connections. Single- or multi-layer networks can be implemented with this chip. We have integrated this chip into a board system together with a digital signal processor and fast memory. This system is currently in use for image processing applications in which the chip extracts features such as edges and corners from binary and gray-level images.

1 INTRODUCTION

A key problem for a hardware implementation of neural nets is to find the proper network architecture. With a fixed network structure only few problems can be solved efficiently. Therefore, we opted for a programmable architecture that can be changed under software control. A large, fully interconnected network can, in principle, implement any architecture, but this usually wastes a lot of the connections since many have to be set to zero. To make better use of the silicon, other designs implemented a programmable architecture, either by connecting several chips with switching blocks (Mueller89), or by placing switches between blocks of synapses on one chip (Satyanarayana90). The present design (Graf90) consists of building blocks that can be connected to form many different network configurations. Single-layer nets or multi-layer nets can be implemented. The connections can be binary or can have an analog depth of up to four bits.

We designed this neural net chip mainly for pattern recognition applications, which typically require nets far too large for a single chip. However, the nets can often be structured so that the neurons have local receptive fields, and many neurons share the same receptive field. Such nets can be split into smaller parts that fit onto a chip, and the small nets are then scanned sequentially over an image. The circuit has been optimized for this type of network by adding shift registers for the data transport.

The neural net chip implementation uses a mixture of analog and digital electronics. The weights, the neuron states and all the control signals are digital, while summing the contributions of all the weights is performed in analog form. All the data going on and off the chip are digital, which makes the integration of the network into a digital system straightforward.

2 THE CIRCUIT ARCHITECTURE

2.1 The Building Block

Figure 1: One of the building blocks, a "neuron"

Figure 1 shows schematically one of the building blocks. It consists of an array of 128 connections which receive input signals from other neurons or from external sources. The weights as well as the inputs have binary values, +1 or -1. The output of a connection is a current representing the result of the multiplication of a weight with a state, and on a wire the currents from all the connections are summed. This sum is multiplied with a programmable factor and can be added to the currents of other neurons. The result is compared with a reference and is thresholded in a comparator. A total of 256 such building blocks are contained on the chip.
Up to 8 of the building blocks can be connected to form a single neuron with up to 1024 connections. The network is not restricted to binary connections. Connections with four bits of analog depth are obtained by joining four building blocks and by setting each of the multipliers to a different value: 1, 1/2, 1/4, 1/8 (see Figure 2). In this case four binary connections, one in each building block, form one connection with an analog depth of four bits.

Figure 2: Connecting four building blocks to form connections with four bits of resolution

Figure 3: Photomicrograph of the neural net chip

Alternatively, the network can be configured for two-bit input signals and two-bit weights or four-bit inputs and one-bit weights. The multiplications of the input signals with the weights are four-quadrant multiplications, whether binary signals are used or multi-bit signals. With this approach only one scaling multiplier is needed per neuron, instead of one per connection as would be the case if connections were implemented with multiplying D/A converters.

The transfer function of a neuron is provided by the comparator. With a single comparator the transfer function has a hard threshold. Other types of transfer functions can be obtained when several building blocks are connected. Then, several comparators receive the same analog input and for each comparator a different reference can be selected (compare Figure 2). In this way, for example, eight comparators may work as a three-bit A/D converter. Other transfer functions, such as sigmoids, can be approximated by selecting appropriate thresholds for the comparators.

The neurons are arranged in groups of 16. For each group there is one register of 128 bits providing the input data. The whole network contains 16 such groups, split in two halves, each with eight groups. These groups of neurons can be recognized in Figure 3, which shows a photomicrograph of the circuit. The chip contains 412,000 transistors and measures 4.5 mm × 7 mm. It is fabricated in 0.9 µm CMOS technology with one level of poly and two levels of metal.

2.2 Moving Data Through The Circuit

From a user's point of view the chip consists of the four different types of registers listed in Table 1. Programming of the chip consists in moving the data over a high-speed bus of 128 bits width between these registers. Results produced by the network can be loaded directly into data-input registers and can be used for a next computation. In this way some multi-layer networks can be implemented without loading data off chip between layers.

Table 1: Registers in the neural net chip

  REGISTER                 FUNCTION
  Shift register           Input and output of the data
  Data-input registers     Provide input to the connections
  Configuration registers  Determine the connectivity of the network
  Result registers         Contain the output of the network

In a typical operation 16 bits are loaded from the outside into a shift register. From that register the main bus distributes the data through the whole circuit. They are loaded into one or several of the data-input registers. The analog computation is then started and the results are latched into the result registers. These results are loaded either into data-input registers if a network with feedback or a multi-layer network is implemented, or they are loaded into the output shift register and off chip.
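The following sketch illustrates the arithmetic of this scheme in software (a behavioral model of our own, not a circuit description): four blocks of 128 binary connections, scaled by 1, 1/2, 1/4 and 1/8, sum onto one comparator.

    import numpy as np

    SCALES = [1.0, 0.5, 0.25, 0.125]   # the four programmable multipliers

    def building_block(weights, inputs, scale):
        """One building block: 128 binary weights (+1/-1) times 128 binary
        inputs (+1/-1), summed on the wire and scaled by the multiplier."""
        return scale * np.dot(weights, inputs)

    def four_bit_neuron(weight_bits, inputs, reference=0.0):
        """Four blocks joined into one neuron whose effective weights have
        four bits of analog depth; the comparator applies a hard threshold.
        weight_bits: 4 x 128 array of +/-1; inputs: length-128 array of +/-1."""
        total = sum(building_block(w, inputs, s)
                    for w, s in zip(weight_bits, SCALES))
        return 1 if total > reference else -1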
In addition to the main bus, the chip contains two 128 bit wide shift registers, one through each half of the connection matrix. All the shift registers were added to speed up the operation when networks with local fields of view are scanned over a signal. In such an application shift registers drastically reduce the amount of new data that have to be loaded into the chip from one run to the next. For example, when an input field of 16 × 16 pixels is scanned over an image, only 16 new data values have to be loaded for each run instead of 256. Loading the data on and off the chip is often the speed-limiting operation. Therefore, it is important to provide some support in hardware.

3 TEST RESULTS

The speed of the circuit is limited by the time it takes the analog computation to settle to its final value. This operation requires less than 50 ns. The chip can be operated with instruction cycles of 100 ns, where in the first 50 ns the analog computation settles down and during the following 50 ns the results are read out. Simultaneously with reading out the results, new data are loaded into the data-input registers. In this way 32k one-bit multiply-accumulates are executed every 100 ns, which amounts to 320 billion connections/second.

The accuracy of the analog computation is about ±5%. This means, for example, that a comparator whose threshold is set to a value of 100 connections may already turn on when it receives the current from 95 connections. This limited accuracy is due to mismatches of the devices used for the analog computation. However, the threshold for each comparator may be individually adjusted at the cost of dedicating neurons to the task. Then a threshold can be adjusted to ±1%. The operation of the digital part of the network and the analog part has been synchronized in such a way that the noise generated by the digital part has a minimal effect on the analog computation.

4 THE BOARD SYSTEM

A system was developed to use the neural net chip as a coprocessor of a workstation with a VME bus. A schematic of this system is shown in Figure 4. Beside the neural net chip, the board contains a digital signal processor to control the whole system and 256k of fast memory. Pictures are loaded from the host into the board's memory and are then scanned with the neural net chip. The results are loaded back into the board memory and from there to the host. Loading pictures over the VME bus limits the overall speed of this system to about one frame of 512 × 512 pixels per second. Although this corresponds to less than 10% of the chip's maximum data throughput, operations such as scanning an image with 32 16 × 16 kernels can be done in one second. The same operation would take around 30 minutes on the workstation alone. Therefore, this system represents a very useful tool for image processing, in particular for developing algorithms. Its architecture makes it very flexible since part of a problem can be solved by the digital signal processor and the computationally intensive parts on the neural net chip. An extra data path for the signals will be added later to take full advantage of the neural net's speed.

Figure 4: Schematic of the board system for the neural net chip (EPROM, 256k SRAM, DSP32C, VME interface, and the NET32K chip on shared address and data buses)

Figure 5: Result of a feature extraction application. Left image: Edges extracted from the milling cutter.
Right image: The crosses mark where corners were detected. A total of 16 features were extracted simultaneously with detectors of 16 × 16 pixels in size.

5 APPLICATIONS

Figure 5 shows the result of an application where the net is used for extracting simultaneously edges and corners from a gray-level image. The network actually handles only a small resolution in the pixel values. Therefore, the picture is first half-toned with a standard error-diffusion algorithm and then the halftoned image is scanned with the network. To extract these features, kernels with three levels in the weights are loaded into the network. One neuron with 256 two-bit connections represents one kernel. There are a total of 16 such kernels in the network, each one tuned to a corner or an edge of a different orientation. For each comparator an extra neuron is used to set the threshold. This whole task fills 50% of the chip.

Edges and corners are important features that are often used to identify objects or to determine their positions and orientations. We are applying them now to segment complex images. Convolutional algorithms have long been recognized as reliable methods for extracting features. However, they are computationally very expensive, so that often special-purpose hardware is required. To our knowledge, no other circuit can extract such a large number of features at a comparable rate.

This application demonstrates how a large number of connections can compensate for a limited resolution in the weights and the states. We took a gray-level image and clipped its pixels to binary values. Despite this coarse quantization of the signal the relevant information can be extracted reliably. Since many connections are contributing to each result, uncorrelated errors due to quantization are averaged out. The key to a good result is to make sure that the quantization errors are indeed uncorrelated, at least approximately.

This circuit has been designed with pattern matching applications in mind. However, its flexibility makes it suitable for a much wider range of applications. In particular, since its connections as well as its architecture can be changed fast, on the order of 100 ns, it can be integrated in an adaptive or a learning system.

Acknowledgements

We acknowledge many stimulating discussions with the other members of the neural network group at AT&T in Holmdel. Part of this work was supported by the USASDC under contract #DASG60-88-0044.

References

H. P. Graf & D. Henderson. (1990) A Reconfigurable CMOS Neural Network. In Digest IEEE Int. Solid State Circuits Conf., 144-145.

P. Mueller, J. van der Spiegel, D. Blackman, T. Chiu, T. Clare, J. Dao, Ch. Donham, T. P. Hsieh & M. Loinaz. (1989) A Programmable Analog Neural Computer and Simulator. In D. S. Touretzky (ed.), Advances in Neural Information Processing Systems 1, 712-719. San Mateo, CA: Morgan Kaufmann.

S. Satyanarayana, Y. Tsividis & H. P. Graf. (1990) A Reconfigurable Analog VLSI Neural Network Chip. In D. S. Touretzky (ed.), Advances in Neural Information Processing Systems 2, 758-768. San Mateo, CA: Morgan Kaufmann.
Analysis of Contour Motions

Ce Liu    William T. Freeman    Edward H. Adelson
Computer Science and Artificial Intelligence Laboratory
Massachusetts Institute of Technology
Cambridge, MA 02139, USA
{celiu,billf,adelson}@csail.mit.edu

Abstract

A reliable motion estimation algorithm must function under a wide range of conditions. One regime, which we consider here, is the case of moving objects with contours but no visible texture. Tracking distinctive features such as corners can disambiguate the motion of contours, but spurious features such as T-junctions can be badly misleading. It is difficult to determine the reliability of motion from local measurements, since a full rank covariance matrix can result from both real and spurious features. We propose a novel approach that avoids these points altogether, and derives global motion estimates by utilizing information from three levels of contour analysis: edgelets, boundary fragments and contours. Boundary fragments are chains of orientated edgelets, for which we derive motion estimates from local evidence. The uncertainties of the local estimates are disambiguated after the boundary fragments are properly grouped into contours. The grouping is done by constructing a graphical model and marginalizing it using importance sampling. We propose two equivalent representations in this graphical model, reversible switch variables attached to the ends of fragments and fragment chains, to capture both local and global statistics of boundaries. Our system is successfully applied to both synthetic and real video sequences containing high-contrast boundaries and textureless regions. The system produces good motion estimates along with properly grouped and completed contours.

1 Introduction

Humans can reliably analyze visual motion under a diverse set of conditions, including textured as well as featureless objects. Computer vision algorithms have focused on conditions of texture, where junction or corner-like image structures are assumed to be reliable features for tracking [5, 4, 17]. But under other conditions, these features can generate spurious motions. T-junctions caused by occlusion can move in an image very differently than either of the objects involved in the occlusion event [11]. To properly analyze motions of featureless objects requires a different approach.

The spurious matching of T-junctions has been explained in [18] and [9]. We briefly restate it using the simple two-bar stimulus in Figure 1 (from [18]). The gray bar is moving rightward in front of the leftward-moving black bar (a). If we analyze the motion locally, i.e. match to the next frame in a local circular window, the flow vectors of the corner and line points are as displayed in Figure 1(b). The T-junctions located at the intersections of the two bars move downwards, but there is no such motion by the depicted objects.

Figure 1: Illustration of the spurious T-junction motion. (a) The front gray bar is moving to the right and the black bar behind is moving to the left [18]. (b) Based on a local window matching, the eight corners of the bars show the correct motion, whereas the T-junctions show spurious downwards motion. (c) Using the boundary-based representation our system is able to correctly estimate the motion and generate the illusory boundary as well.

One approach to handling the spurious motions of corners or T-junctions has been to detect such junctions and remove them from the motion analysis [18, 12]. However, T-junctions are often very difficult to detect in a static image from local, bottom-up information [9]. Motion at occluding boundaries has been studied, for example in [1]. The boundary motion is typically analyzed locally,
which can again lead to spurious junction trackings. We are not aware of an existing algorithm that can properly analyze the motions of featureless objects.

In this paper, we use a boundary-based approach which does not rely on motion estimates at corners or junctions. We develop a graphical model which integrates local information and assigns probabilities to candidate contour groupings in order to favor motion interpretations corresponding to the motions of the underlying objects. Boundary completion and discounting the motions of spurious features result from optimizing the graphical model states to explain the contours and their motions. Our system is able to automatically detect and group the boundary fragments, analyze the motion correctly, as well as exploit both static and dynamic cues to synthesize the illusory boundaries (c).

We represent the boundaries at three levels of grouping: edgelets, boundary fragments and contours, where a fragment is a chain of edgelets and a contour is a chain of fragments. Each edgelet within a boundary fragment has a position and an orientation and carries local evidence for motion. The main task of our model is then to group the boundary fragments into contours so that the local motion uncertainties associated with the edgelets are disambiguated and occlusion or other spurious feature events are properly explained. The result is a specialized motion tracking algorithm that properly analyzes the motions of textureless objects.

Our system consists of four conceptual steps, discussed over the next three sections (the last two steps happen together while finding the optimal states in the graphical model):

(a) Boundary fragment extraction: Boundary fragments are detected in the first frame.
(b) Edgelet tracking with uncertainties: Boundary fragments are broken into edgelets, and, based on local evidence, the probability distribution is found for the motion of each edgelet of each boundary fragment.
(c) Grouping boundary fragments into contours: Boundary fragments are grouped, using both temporal and spatial cues.
(d) Motion estimation: The final fragment groupings disambiguate motion uncertainties and specify the final inferred motions.

We restrict the problem to two-frame motion analysis, though the algorithm can easily be extended to multiple frames.

2 Boundary Fragment Extraction

Extracting boundaries from images is a nontrivial task by itself. We use a simple algorithm for boundary extraction, analyzing oriented energy using steerable filters [3] and tracking the boundary in a manner similar to that of the Canny edge detector [2]. A more sophisticated boundary detector can be found in [8]; occluding boundaries can also be detected using special cameras [13]. However, for our motion algorithm designed to handle the special case of textureless objects, we find that our simple boundary detection algorithm works well.

Mathematically, given an image $I$, we seek to obtain a set of fragments $B = \{b_i\}$, where each fragment $b_i$ is a chain of edgelets $b_i = \{e_{ik}\}_{k=1}^{n_i}$. Each edgelet $e_{ik} = \{p_{ik}, \theta_{ik}\}$ is a particle which embeds both location $p_{ik} \in R^2$ and orientation $\theta_{ik} \in [0, 2\pi)$ information.
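In code, this representation might look as follows (a sketch in our own notation; the motion fields are filled in by the edgelet tracker of Section 3):

    from dataclasses import dataclass, field
    from typing import List, Optional
    import numpy as np

    @dataclass
    class Edgelet:
        """e_ik = {p_ik, theta_ik}: a particle with location in R^2 and
        orientation in [0, 2*pi).  mu/sigma hold the flow distribution
        N(mu_ik, Sigma_ik) once Section 3's tracker has run."""
        p: np.ndarray                       # (2,) location
        theta: float                        # orientation
        mu: Optional[np.ndarray] = None     # (2,) mean flow
        sigma: Optional[np.ndarray] = None  # (2, 2) flow covariance

    @dataclass
    class Fragment:
        """b_i: a chain of edgelets; a contour is in turn a chain of fragments."""
        edgelets: List[Edgelet] = field(default_factory=list)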
We use H4 and G4 steerable filters [3] to filter the image and obtain orientation energy per pixel. These filters are selected because they describe the orientation energies well even at corners. For each pixel we find the maximum energy orientation and check if it is a local maximum within a slice perpendicular to this orientation. If that is true and the maximum energy is above a threshold $T_1$, we call this point a primary boundary point. We collect a pool of primary boundary points after running this test for all the pixels. We find the primary boundary point with the maximum orientation energy from the pool and do bidirectional contour tracking, consisting of prediction and projection steps. In the prediction step, the current edgelet generates a new one by following its orientation with a certain step size. In the projection step, the orientation is locally maximized both in the orientation bands and within a small spatial window. The tracking is stopped if the energy is below a threshold $T_2$ or if the turning angle is above a threshold. The primary boundary points that are close to the tracked trajectory are removed from the pool. This process is repeated until the pool is empty. The two thresholds $T_1$ and $T_2$ play the same roles as those in Canny edge detection [2]. While the boundary tracker should stop at sharp corners, it can turn around and continue tracking. We run a postprocess to break the boundaries by detecting points of curvature local maxima which exceed a curvature threshold.

3 Edgelet Tracking with Uncertainties

We next break the boundary contours into very short edgelets and obtain the probabilities, based on local motion of the boundary fragment, for the motion vector at each edgelet. We cannot use conventional algorithms, such as Lucas-Kanade [5], for local motion estimation since they rely on corners. The orientation $\theta_{ik}$ for each edgelet was obtained during boundary fragment extraction. We obtain the motion vector by finding the spatial offsets of the edgelet which match the orientation energy along the boundary fragment in this orientation. We fit a Gaussian distribution $N(\mu_{ik}, \Sigma_{ik})$ of the flow, weighted by the orientation energy in the window. The mean and covariance matrix are added to the edgelet: $e_{ik} = \{p_{ik}, \theta_{ik}, \mu_{ik}, \Sigma_{ik}\}$. This procedure is illustrated in Figure 2.

Figure 2: The local motion vector is estimated for each contour in isolation by selectively comparing orientation energies across frames. (a) A T-junction of the two-bar example showing the contour orientation for this motion analysis. (b) The other frame. (c) The relevant orientation energy along the boundary fragment, both for the 2nd frame. A Gaussian pdf is fit to estimate flow, weighted by the oriented energy. (d) Visualization of the Gaussian pdf. The possible contour motions are unaffected by the occluding contour at a different orientation and no spurious motion is detected at this junction.

Grouping the boundary fragments allows the motion uncertainties to be resolved. We next discuss the mathematical model of grouping as well as the computational approach.
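The fit itself is just an energy-weighted mean and covariance over the candidate offsets; a minimal sketch (the names and the window scan are ours):

    import numpy as np

    def edgelet_flow_distribution(offsets, energies):
        """Fit N(mu, Sigma) for one edgelet: each candidate spatial offset
        (rows of `offsets`, M x 2) is weighted by the orientation energy it
        matches in the second frame (`energies`, length M)."""
        w = energies / energies.sum()
        mu = w @ offsets
        d = offsets - mu
        sigma = (w[:, None] * d).T @ d      # weighted 2x2 covariance
        return mu, sigma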
4 Boundary Fragment Grouping and Motion Estimation

4.1 Two Equivalent Representations for Fragment Grouping

The essential part of our model is to find the connections between the boundary fragments. There are two possible representations for grouping. One representation is the connection of each end of the boundary fragment. We formulate the probability of this connection to model the local saliency of contours. The other, equivalent representation is a chain of fragments that forms a contour, on which global statistics are formulated, e.g. structural saliency [16]. Similar local and global modeling of contour saliency was proposed in [14]; in [7], both edge saliency and curvilinear continuity were used to extract closed contours from static images. In [15], contour ends are grouped using loopy belief propagation to interpret contours.

The connections between fragment ends are modeled by switch variables. For each boundary fragment $b_i$, we use a binary variable $\{0, 1\}$ to denote the two ends of the fragment, i.e. $b_i^{(0)} = e_{i1}$ and $b_i^{(1)} = e_{i,n_i}$. Let switch variable $S(i, t_i) = (j, t_j)$ denote the connection from $b_i^{(t_i)}$ to $b_j^{(t_j)}$. This connection is exclusive, i.e. each end of the fragment should either connect to one end of another fragment, or simply have no connection. An exclusive switch is further called reversible, i.e. if $S(i, t_i) = (j, t_j)$, then $S(j, t_j) = (i, t_i)$, or in a more compact form

  $S(S(i, t_i)) = (i, t_i).$   (1)

When there is no connection to $b_i^{(t_i)}$, we simply set $S(i, t_i) = (i, t_i)$. We use the binary function $\delta[S(i, t_i) - (j, t_j)]$ to indicate whether there is a connection between $b_i^{(t_i)}$ and $b_j^{(t_j)}$. The set of all the switches is denoted as $S = \{S(i, t_i) \,|\, i = 1:N, \, t_i = 0, 1\}$. We say $S$ is reversible if every switch variable satisfies Eqn. (1). The reversibility of switch variables is shown in Figure 3 (b) and (c).

Figure 3: A simple example illustrating switch variables, reversibility and fragment chains. The color arrows show the switch variables. The empty circle indicates end 0 and the filled indicates end 1. (a) shows three boundary fragments. Theoretically $b_1^{(0)}$ can connect to any of the other ends including itself, (b). However, the switch variable is exclusive, i.e. there is only one connection to $b_1^{(0)}$, and reversible, i.e. if $b_1^{(0)}$ connects to $b_3^{(0)}$, then $b_3^{(0)}$ should also connect to $b_1^{(0)}$, as shown in (c). Figures (d) and (e) show two of the legal contour groupings for the boundary fragments: two open contours and a closed loop contour.

From the values of the switch variables we can obtain contours, which are chains of boundary fragments. A fragment chain is defined as a series of end points $c = \{(b_{i_1}^{(x_1)}, b_{i_1}^{(\bar{x}_1)}), \cdots, (b_{i_m}^{(x_m)}, b_{i_m}^{(\bar{x}_m)})\}$, where $\bar{x} = 1 - x$ denotes the opposite end. The chain is specified by fragment labels $\{i_1, \cdots, i_m\}$ and end labels $\{x_1, \cdots, x_m\}$. It can be either open or closed. The order of the chain is determined by the switch variables. Each end appears in the chain at most once. The notation of a chain is not unique: two open chains are identical if the fragment and end labels are reversed, and two closed chains are identical if they match each other by rotating one of them. These identities are guaranteed by the reversibility of the switch variables. A set of chains $C = \{c_i\}$ can be uniquely extracted based on the values of the reversible switch variables, as illustrated in Figure 3 (d) and (e).
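The extraction is a simple traversal of the switch variables. A sketch of one way to do it (our own encoding: each end is a pair (i, t), and S is a dict satisfying the reversibility of Eqn. (1)):

    def other_end(e):
        i, t = e
        return (i, 1 - t)

    def extract_chains(S):
        """Extract fragment chains from reversible switches.  S[(i, t)] is
        the end that (i, t) connects to; S[(i, t)] == (i, t) means the end
        is free.  Open chains are started from free ends; whatever remains
        unvisited afterwards belongs to closed loops."""
        visited, chains = set(), []
        starts = [e for e in S if S[e] == e] + list(S)
        for start in starts:
            if start in visited:
                continue
            chain, e = [], start
            while e not in visited:
                visited.add(e)
                o = other_end(e)
                visited.add(o)
                chain.append((e, o))       # traverse the fragment from e to o
                if S[o] == o:              # free end: open chain finished
                    break
                e = S[o]                   # hop to the connected fragment
            chains.append(chain)
        return chains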
4.2 The Graphical Model

Given the observation $O$ (the two images) and the boundary fragments $B$, we want to estimate the flow vectors $V = \{v_i\}$, $v_i = \{v_{ik}\}$, where each $v_{ik}$ is associated with edgelet $e_{ik}$, and the grouping variables $S$ (switches) or equivalently $C$ (fragment chains). Since the grouping variable $S$ plays an essential role in the problem, we shall first infer $S$ and then infer $V$ based on $S$.

4.2.1 The Graph for Boundary Fragment Grouping

We use two equivalent representations for boundary grouping, switch variables and chains. We use $\delta[S(S(i, t_i)) - (i, t_i)]$ for each end to enforce the reversibility. Suppose otherwise $S(i_1, t_{i_1}) = S(i_2, t_{i_2}) = (j, t_j)$ for $i_1 \neq i_2$. Let $S(j, t_j) = (i_1, t_{i_1})$ without loss of generality; then $\delta[S(S(i_2, t_{i_2})) - (i_2, t_{i_2})] = 0$, which means that the switch variables are not reversible.

We use a function $\psi(S(i, t_i); B, O)$ to measure the distribution of $S(i, t_i)$, i.e. how likely $b_i^{(t_i)}$ connects to the end of another fragment. Intuitively, two ends should be connected if

- Motion similarity: the distributions of the motion of the two end edgelets are similar;
- Curve smoothness: the illusory boundary connecting the two ends is smooth;
- Contrast consistency: the brightness contrast at the two ends is consistent.

We write $\psi(\cdot)$ as a product of three terms, one enforcing each criterion. We shall follow the example in Figure 4 to simplify the notation, where the task is to compute $\psi(S(1, 0) = (2, 0))$.

Figure 4: An illustration of local saliency computation. (a) Without loss of generality we assume the two ends to be $b_1^{(0)}$ and $b_2^{(0)}$. (b) The KL divergence between the distributions of flow vectors is used to measure the motion similarity. (c) An illusory boundary $\gamma$ is generated by minimizing the energy of the curve. The sum of squared curvatures is used to measure the curve smoothness. (d) The means of the local patches located at the two ends are extracted, i.e. $h_{11}$ and $h_{12}$ from $b_1^{(0)}$, $h_{21}$ and $h_{22}$ from $b_2^{(0)}$, to compute contrast consistency.

The first term is the KL divergence between the two Gaussian distributions of the flow vectors,

  $\exp\{-\lambda_{KL}\, \mathrm{KL}(N(\mu_{11}, \Sigma_{11}), N(\mu_{21}, \Sigma_{21}))\},$   (2)

where $\lambda_{KL}$ is a scaling factor. The second term is the local saliency measure on the illusory boundary $\gamma$ that connects the two ends. The illusory boundary is simply generated by minimizing the energy of the curve. The saliency is defined as

  $\exp\left\{-\lambda_\gamma \int_\gamma \left(\frac{d\theta}{ds}\right)^2 ds\right\},$   (3)

where $\theta(s)$ is the slope along the curve and $d\theta/ds$ is the local curvature [16]; $\lambda_\gamma$ is a scaling factor. The third term is computed by extracting the means of local patches located at the two ends,

  $\exp\left\{-\frac{d_{\max}}{2\sigma_{\max}^2} - \frac{d_{\min}}{2\sigma_{\min}^2}\right\},$   (4)

where $d_1 = (h_{11} - h_{21})^2$, $d_2 = (h_{12} - h_{22})^2$, and $d_{\max} = \max(d_1, d_2)$, $d_{\min} = \min(d_1, d_2)$. $\sigma_{\max} > \sigma_{\min}$ are the scale parameters. $h_{11}, h_{12}, h_{21}, h_{22}$ are the means of the pixel values of the four patches located at the two end points. For self connection we simply set a constant value: $\psi(S(i, t_i) = (i, t_i)) = \text{const}$.

We use a function $\eta(c_i; B, O)$ to model the structural saliency of contours. It was discovered in [10] that convex occluding contours are more salient, and additional T-junctions along the contour may increase or decrease the occlusion perception. Here we simply enforce that a contour should have no self-intersection: $\eta(c_i; B, O) = 1$ if there is no self-intersection and $\eta(c_i; B, O) = 0$ otherwise. Thus, the (discrete) graphical model favoring the desired fragment grouping is

  $\Pr(S; B, O) = \frac{1}{Z_S} \prod_{i=1}^{N} \prod_{t_i=0}^{1} \psi(S(i, t_i); B, O)\, \delta[S(S(i, t_i)) - (i, t_i)] \cdot \prod_{j=1}^{M} \eta(c_j; B, O),$   (5)

where $Z_S$ is a normalization constant. Note that this model measures both the switch variables $S(i, t_i)$, for local saliency, and the fragment chains $c_j$, to enforce global structural saliency.
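For reference, the KL divergence between the two Gaussian flow distributions in Eqn. (2) has the standard closed form; a sketch:

    import numpy as np

    def kl_gaussian(mu0, S0, mu1, S1):
        """KL(N(mu0, S0) || N(mu1, S1)) for the 2-D edgelet flow
        distributions; the motion-similarity term is exp(-lambda_KL * KL)."""
        S1inv = np.linalg.inv(S1)
        d = mu1 - mu0
        k = len(mu0)
        return 0.5 * (np.trace(S1inv @ S0) + d @ S1inv @ d - k
                      + np.log(np.linalg.det(S1) / np.linalg.det(S0)))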
4.2.2 Gaussian MRF on Flow Vectors

Given the fragment grouping, we model the flow vectors $V$ as a Gaussian Markov random field (GMRF). The edgelet displacement within each boundary fragment should be smooth and match the observation along the fragment. The probability density is formulated as

  $\phi(v_i; b_i) = \prod_{k=1}^{n_i} \exp\{-(v_{ik} - \mu_{ik})^\top \Sigma_{ik}^{-1} (v_{ik} - \mu_{ik})\} \prod_{k=1}^{n_i - 1} \exp\left\{-\frac{1}{2\sigma_v^2} \|v_{ik} - v_{i,k+1}\|^2\right\},$   (6)

where $\mu_{ik}$ and $\Sigma_{ik}$ are the motion parameters of each edgelet estimated in Sect. 3. We use $V(i, t_i)$ to denote the flow vector of end $t_i$ of fragment $b_i$. We define $V(S(i, t_i)) = V(j, t_j)$ if $S(i, t_i) = (j, t_j)$. Intuitively the flow vectors of the two ends should be similar if they are connected, or mathematically

  $\xi(V(i, t_i), V(S(i, t_i))) = \begin{cases} 1 & \text{if } S(i, t_i) = (i, t_i), \\ \exp\left\{-\frac{1}{2\sigma_v^2} \|V(i, t_i) - V(S(i, t_i))\|^2\right\} & \text{otherwise.} \end{cases}$   (7)

The (continuous) graphical model of the flow vectors is therefore defined as

  $\Pr(V|S; B) = \frac{1}{Z_V} \prod_{i=1}^{N} \phi(v_i; b_i) \prod_{t_i=0}^{1} \xi(V(i, t_i), V(S(i, t_i))),$   (8)

where $Z_V$ is a normalization constant. When $S$ is given, this is a GMRF which can be solved by least squares.

4.3 Inference

Having defined the graphical model to favor the desired motion and grouping interpretations, we need to find the state parameters that best explain the image observations. The natural decomposition of $S$ and $V$ in our graphical model,

  $\Pr(V, S; B, O) = \Pr(S; B, O) \cdot \Pr(V|S; B, O),$   (9)

where $\Pr(S; B, O)$ and $\Pr(V|S; B, O)$ are defined in Eqn. (5) and (8) respectively, lends itself to two-step inference. We first infer the boundary grouping $S$, and then infer $V$ based on $S$. The second step is simply to solve a least squares problem, since $\Pr(V|S; B, O)$ is a GMRF. This approach does not globally optimize Eqn. (9) but results in a reasonable solution because $V$ strongly depends on $S$.

The density function $\Pr(S; B, O)$ is not a random field, so we use importance sampling [6] to obtain the marginal distribution $\Pr(S(i, t_i); B, O)$. The proposal density of each switch variable is set to be

  $q(S(i, t_i) = (j, t_j)) = \frac{1}{Z_q}\, \psi(S(i, t_i) = (j, t_j))\, \psi(S(j, t_j) = (i, t_i)),$   (10)

where $\psi(\cdot)$ has been normalized to sum to 1 for each end. We found that this bidirectional measure is crucial for taking valid samples. To sample the proposal density, we first randomly select a boundary fragment, and connect it to other fragments based on $q(S(i, t_i))$ to form a contour (a chain of boundary fragments). Each end is sampled only once, to ensure reversibility. This procedure is repeated until no fragment is left. In the importance step we run the binary function $\eta(c_i)$ to check that each contour has no self-intersection. If $\eta(c_i) = 0$ then this sample is rejected. The marginal distributions are estimated from the samples. Lastly, the optimal grouping is obtained by replacing random sampling with selecting the maximum-probability connection over the estimated marginal distributions. The number of samples needed depends on the number of fragments. In practice we find that $n^2$ samples are sufficient for $n$ fragments.
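The least-squares step has a particularly simple form for a single fragment in isolation (ignoring the end-compatibility terms $\xi$); a sketch that assembles and solves the normal equations of Eqn. (6):

    import numpy as np

    def fragment_flow_map(mu, sigma, sigma_v=0.5):
        """MAP flow for one fragment under Eqn. (6): data terms
        (v_k - mu_k)' Sigma_k^{-1} (v_k - mu_k) plus pairwise smoothness
        ||v_k - v_{k+1}||^2 / (2 sigma_v^2).  mu: list of (2,) means;
        sigma: list of (2,2) covariances.  End terms (xi) are omitted."""
        n = len(mu)
        A, b = np.zeros((2 * n, 2 * n)), np.zeros(2 * n)
        for k in range(n):                       # data terms
            P = np.linalg.inv(sigma[k])
            A[2*k:2*k+2, 2*k:2*k+2] += P
            b[2*k:2*k+2] += P @ mu[k]
        lam = 1.0 / (2 * sigma_v**2)             # smoothness terms
        for k in range(n - 1):
            for i in (0, 1):
                a, c = 2*k + i, 2*(k+1) + i
                A[a, a] += lam; A[c, c] += lam
                A[a, c] -= lam; A[c, a] -= lam
        return np.linalg.solve(A, b).reshape(n, 2)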
5 Experimental Results

Figure 6 shows the boundary extraction, grouping, and motion estimation results of our system for both real and synthetic examples¹. All the results are generated using the same parameter settings. The algorithm is implemented in MATLAB, and the running time varies from ten seconds to a few minutes, depending on the number of the boundary fragments found in the image.

The two-bar example in Figure 1(a) yields fourteen detected boundary fragments in Figure 6(a) and two contours in (b). The estimated motion matches the ground truth at the T-junctions. The fragments belonging to the same contour are plotted in the same color, and the illusory boundaries are synthesized as shown in (c). The boundaries are warped according to the estimated flow and displayed in (d). The hallucinated illusory boundaries in frames 1 (c) and 2 (d) are plausible amodal completions.

The second example is the Kanizsa square, where the frontal white square moves to the bottom right. Twelve fragments are detected in (a) and five contours are grouped in (b). The estimated motion and generated illusory boundary also match the ground truth and human perception. Notice that the arcs tend to connect to other ones if we do not impose the structural saliency ψ(·).

We apply our system to a video of a dancer (Figure 5 (a) and (b)). In this stimulus the right leg moves downwards, but there is a weak occluding boundary at the intersection of the legs. Eleven boundary fragments are extracted in (a) and five contours are extracted in (b). The estimated motion (b) matches the ground truth. The hallucinated illusory boundaries in (c) and (d) correctly connect the occluded boundary of the right leg and the invisible boundary of the left leg.

The final row shows challenging images of a rotating chair (Figure 5 (c) and (d)), also showing proper contour completion and motion analysis. Thirty-seven boundary fragments are extracted and seven contours are grouped. Completing the occluded contours of this image would be nearly impossible working only from a static image. Exploiting motion as well as static information, our system is able to complete the contours properly. Note that traditional motion analysis algorithms fail at estimating motion for these examples (see supplementary videos) and would thus also fail at correctly grouping the objects based on the motion cues.

¹ The results can be viewed online at http://people.csail.mit.edu/celiu/contourmotions/

Figure 5: Input images for the non-synthetic examples of Figure 6: (a) Dancer frame 1; (b) Dancer frame 2; (c) Chair frame 1; (d) Chair frame 2. The dancer's right leg is moving downwards and the chair is rotating (note the changing space between the chair's arms).

Figure 6: Experimental results for some synthetic and real examples. The same parameter settings were used for all examples. Column (a): Boundary fragments are extracted using our boundary tracker. The red dots are the edgelets and the green ones are the boundary fragment ends. Column (b): Boundary fragments are grouped into contours and the flow vectors are estimated. Each contour is shown in its own color. Columns (c) and (d): the illusory boundaries are generated for the first and second frames. The gaps between the fragments belonging to the same contour are linked exploiting both static and motion cues in Eq. (5).

6 Conclusion

We propose a novel boundary-based representation to estimate motion under the challenging visual conditions of moving textureless objects. Ambiguous local motion measurements are resolved through a graphical model relating edgelets, boundary fragments, completed contours, and their motions. Contours are grouped and their motions analyzed simultaneously, leading to the correct handling of otherwise spurious occlusion and T-junction features. The motion cues help the contour completion task, allowing completion of contours that would be difficult or impossible using only low-level information in a static image. A motion analysis algorithm such as this one, which correctly handles featureless contour motions, is an essential element in a visual system's toolbox of motion analysis methods.

References
[1] M. J. Black and D. J. Fleet. Probabilistic detection and tracking of motion boundaries. International Journal of Computer Vision, 38(3):231–245, 2000.
[2] J. Canny. A computational approach to edge detection. IEEE Trans. Pat. Anal. Mach. Intel., 8(6):679–698, Nov 1986.
[3] W. T. Freeman and E. H. Adelson. The design and use of steerable filters. IEEE Trans. Pat. Anal. Mach. Intel., 13(9):891–906, Sep 1991.
[4] B. K. P. Horn and B. G. Schunck. Determining optical flow. Artificial Intelligence, 17:185–203, 1981.
[5] B. Lucas and T. Kanade. An iterative image registration technique with an application to stereo vision. In Proceedings of the International Joint Conference on Artificial Intelligence, pages 674–679, 1981.
[6] D. MacKay. Information Theory, Inference, and Learning Algorithms. Cambridge University Press, 2003.
[7] S. Mahamud, L. Williams, K. Thornber, and K. Xu. Segmentation of multiple salient closed contours from real images. IEEE Trans. Pat. Anal. Mach. Intel., 25(4):433–444, 2003.
[8] D. Martin, C. Fowlkes, and J. Malik. Learning to detect natural image boundaries using local brightness, color, and texture cues. IEEE Trans. Pat. Anal. Mach. Intel., 26(5):530–549, May 2004.
[9] J. McDermott. Psychophysics with junctions in real images. Perception, 33:1101–1127, 2004.
[10] J. McDermott and E. H. Adelson. The geometry of the occluding contour and its effect on motion interpretation. Journal of Vision, 4(10):944–954, 2004.
[11] J. McDermott and E. H. Adelson. Junctions and cost functions in motion interpretation. Journal of Vision, 4(7):552–563, 2004.
[12] S. J. Nowlan and T. J. Sejnowski. A selection model for motion processing in area MT of primates. The Journal of Neuroscience, 15(2):1195–1214, 1995.
[13] R. Raskar, K.-H. Tan, R. Feris, J. Yu, and M. Turk. Non-photorealistic camera: depth edge detection and stylized rendering using multi-flash imaging. ACM Trans. Graph. (SIGGRAPH), 23(3):679–688, 2004.
[14] X. Ren, C. Fowlkes, and J. Malik. Scale-invariant contour completion using conditional random fields. In Proceedings of the International Conference on Computer Vision, pages 1214–1221, 2005.
[15] E. Saund. Logic and MRF circuitry for labeling occluding and thinline visual contours. In Advances in Neural Information Processing Systems 18, pages 1153–1160, 2006.
[16] A. Shashua and S. Ullman. Structural saliency: the detection of globally salient structures using a locally connected network. In Proceedings of the International Conference on Computer Vision, pages 321–327, 1988.
[17] J. Shi and C. Tomasi. Good features to track. In IEEE Conference on Computer Vision and Pattern Recognition, pages 593–600, 1994.
[18] Y. Weiss and E. H. Adelson. Perceptually organized EM: A framework for motion segmentation that combines information about form and motion. Technical Report 315, M.I.T. Media Lab, 1995.
Parameter Expanded Variational Bayesian Methods

Tommi S. Jaakkola
MIT CSAIL
32 Vassar Street
Cambridge, MA 02139
tommi@csail.mit.edu

Yuan (Alan) Qi
MIT CSAIL
32 Vassar Street
Cambridge, MA 02139
alanqi@csail.mit.edu

Abstract

Bayesian inference has become increasingly important in statistical machine learning. Exact Bayesian calculations are often not feasible in practice, however. A number of approximate Bayesian methods have been proposed to make such calculations practical, among them the variational Bayesian (VB) approach. The VB approach, while useful, can nevertheless suffer from slow convergence to the approximate solution. To address this problem, we propose Parameter-eXpanded Variational Bayesian (PX-VB) methods to speed up VB. The new algorithm is inspired by parameter-expanded expectation maximization (PX-EM) and parameter-expanded data augmentation (PX-DA). Similar to PX-EM and -DA, PX-VB expands a model with auxiliary variables to reduce the coupling between variables in the original model. We analyze the convergence rates of VB and PX-VB and demonstrate the superior convergence rates of PX-VB in variational probit regression and automatic relevance determination.

1 Introduction

A number of approximate Bayesian methods have been proposed to offset the high computational cost of exact Bayesian calculations. Variational Bayes (VB) is one popular method of approximation. Given a target probability distribution, variational Bayesian methods approximate the target distribution with a factored distribution. While factoring omits dependencies present in the target distribution, the parameters of the factored approximation can be adjusted to improve the match. Specifically, the approximation is optimized by minimizing the KL-divergence between the factored distribution and the target. This minimization can often be carried out iteratively, one component update at a time, despite the fact that the target distribution may not lend itself to exact Bayesian calculations. Variational Bayesian approximations have been widely used in Bayesian learning (e.g., (Jordan et al., 1998; Beal, 2003; Bishop & Tipping, 2000)).

Variational Bayesian methods nevertheless suffer from slow convergence when the variables in the factored approximation are actually strongly coupled in the original model. The same problem arises in the popular Gibbs sampling algorithm. The sampling process converges slowly in cases where the variables are strongly correlated. The slow convergence can be alleviated by data augmentation (van Dyk & Meng, 2001; Liu & Wu, 1999), where the idea is to identify an optimal reparameterization (within a family of possible reparameterizations) so as to remove coupling. Similarly, in a deterministic context, Liu et al. (1998) proposed over-parameterization of the model to speed up EM convergence.

Our work here is inspired by DA sampling and PX-EM. Our approach uses auxiliary parameters to speed up the deterministic approximation of the target distribution. Specifically, we propose the Parameter-eXpanded Variational Bayesian (PX-VB) method. The original model is modified by auxiliary parameters that are optimized in conjunction with the variational approximation. The optimization of the auxiliary parameters corresponds to a parameterized joint optimization of the variational components; the role of the new updates is to precisely remove otherwise strong functional couplings between the components, thereby facilitating fast convergence.
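To make the component-at-a-time updates concrete, here is a minimal Python illustration of mean-field VB on a toy target: a zero-mean bivariate Gaussian approximated by a fully factorized q(x₁)q(x₂). The target, its correlation ρ, and the stopping rule are illustrative choices of ours; the point is that strong coupling (|ρ| near 1) makes the coordinate updates converge slowly, which is exactly the problem PX-VB is designed to address.

    import numpy as np

    # Target: zero-mean bivariate Gaussian with correlation rho.
    rho = 0.99
    Lam = np.linalg.inv(np.array([[1.0, rho], [rho, 1.0]]))  # precision matrix

    # Mean-field VB for a Gaussian target: each factor q(x_i) is Gaussian with
    # variance 1/Lam[i, i]; only the means are updated in coordinate sweeps.
    m = np.array([5.0, -5.0])  # arbitrary initialization
    for t in range(10000):
        m_old = m.copy()
        m[0] = -(Lam[0, 1] / Lam[0, 0]) * m[1]  # update q(x1) given q(x2)
        m[1] = -(Lam[1, 0] / Lam[1, 1]) * m[0]  # update q(x2) given q(x1)
        if np.abs(m - m_old).max() < 1e-12:
            break
    print(f"converged to {m} after {t + 1} sweeps")  # slow when |rho| is near 1

Each sweep contracts the error by a factor of ρ², so for ρ = 0.99 the loop needs well over a thousand sweeps, while for weakly coupled variables it converges almost immediately.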
2 An illustrative example

Consider a toy Bayesian model, which has been considered by Liu and Wu (1999) for sampling:

    p(y|w, z) = N(y | w + z, 1),    p(z) = N(z | 0, D),    (1)

where D is a known hyperparameter and p(w) ∝ 1. The task is to compute the posterior distribution of w. Suppose we use a VB method to approximate p(w|y), p(z|y) and p(w, z|y) by q(w), q(z) and q(w, z) = q(w)q(z), respectively. The approximation is optimized by minimizing KL(q(w)q(z) ‖ p(y|w, z)p(z)) (the second argument need not be normalized). The general forms of the component updates are given by

    q(w) ∝ exp(⟨ln p(y|w, z)p(z)⟩_{q(z)}),    (2)
    q(z) ∝ exp(⟨ln p(y|w, z)p(z)⟩_{q(w)}).    (3)

It is easy to derive the updates in this case:

    q(w) = N(w | y − ⟨z⟩, 1),    q(z) = N(z | (y − ⟨w⟩)/(1 + D⁻¹), 1/(1 + D⁻¹)).    (4)

Now let us analyze the convergence of the mean parameter of q(w), ⟨w⟩ = y − ⟨z⟩. Iteratively,

    ⟨w⟩ = (D⁻¹/(1 + D⁻¹)) y + ⟨w⟩/(1 + D⁻¹) = D⁻¹[ (1 + D⁻¹)⁻¹ y + (1 + D⁻¹)⁻² y + · · · ] = y.

The variational estimate ⟨w⟩ converges to y, which actually is the true posterior mean (for this toy problem, p(w|y) = N(w | y, 1 + D)). Furthermore, if D is large, ⟨w⟩ converges slowly. Note that the variance parameter of q(w) converges to 1 in one iteration, though it underestimates the true posterior variance 1 + D. Intuitively, the convergence speed of ⟨w⟩ and q(w) suffers from the strong coupling between the updates of w and z: the update information has to go through a feedback loop w → z → w → · · · .

To alleviate the coupling, we expand the original model with an additional parameter μ:

    p(y|w, z) = N(y | w + z, 1),    p(z|μ) = N(z | μ, D).    (5)

The expanded model reduces to the original one when μ equals the null value μ₀ = 0. Now, having computed q(z) given μ = 0, we minimize KL(q(w)q(z) ‖ p(y|w, z)p(z|μ)) over μ and obtain the minimizer μ = ⟨z⟩. Then, we reduce the expanded model to the original one by applying the reduction rule z_new = z − μ = z − ⟨z⟩, w_new = w + μ = w + ⟨z⟩. Correspondingly, we change the measures of q(w) and q(z):

    q(w + ⟨z⟩) → q(w_new) = N(w_new | y, 1),    q(z − ⟨z⟩) → q(z_new) = N(z_new | 0, 1/(1 + D⁻¹)).    (6)

Thus, the PX-VB method converges. Here μ breaks the update loop between q(w) and q(z) and plays the role of a correction force; it corrects the update trajectories of q(w) and q(z) and makes them point directly to the convergence point.

3 The PX-VB Algorithm

In the general PX-VB formulation, we over-parameterize the model p(x̃, D) to get p_α(x, D), where the original model is recovered for some default values of the auxiliary parameters, α = α₀. The algorithm consists of the typical VB updates relative to p_α(x, D), the optimization of the auxiliary parameters α, and a reduction step to turn the model back to the original form where α = α₀. This last reduction step has the effect of jointly modifying the components of the factored variational approximation. Put another way, we push the change in p_α(x, D), due to the optimization of α, into the variational approximation instead. Changing the variational approximation in this manner permits us to return the model to its original form and set α = α₀. Specifically, we first expand p(x̃, D) to obtain p_α(x, D). Then at the t-th iteration:

1. The q(x_s) are updated sequentially. Note that the approximate distribution q(x) = ∏_s q(x_s).
2. We minimize KL(q(x) ‖ p_α(x, D)) over the auxiliary parameters α. This optimization can be done jointly with some components of the variational distribution, if feasible.
3. The expanded model is reduced to the original model through reparameterization. Accordingly, we change q^(t+1)(x) to q^(t+1)(x̃) such that

    KL(q^(t+1)(x̃) ‖ p_{α₀}(x̃, D)) = KL(q(x) ‖ p_{α^(t+1)}(x, D)),    (7)

where q^(t+1)(x̃) are the modified components of the variational approximation.
4. Set α = α₀.

Since each update of PX-VB decreases or maintains the KL divergence KL(q(x) ‖ p(x, D)), which is lower bounded, PX-VB reaches a stationary point of KL(q(x) ‖ p(x, D)). Empirically, PX-VB often achieves a solution similar to what VB achieves, with faster convergence.

A simple strategy to implement PX-VB is to use a mapping S_α, parameterized by α, over the variables x̃. After sequentially optimizing over the components {q(x_s)}, we maximize ⟨ln p_α(x)⟩_{q(x)} over α. Then, we reduce p_α(x, D) to p(x̃, D) and q(x) to q(x̃) through the inverse mapping of S_α, M_α ≡ S_α⁻¹. Since we optimize α after optimizing {q(x̃_s)}, the mapping S_α should change at least two components of x; otherwise, the optimization over α will do nothing, since we have already optimized over each q(x̃_s). If we jointly optimize α and one component q(x_s), it suffices (albeit need not be optimal) for the mapping S_α to change only q(x_s).

Algorithmically, PX-VB bears a strong similarity to PX-EM (Liu et al., 1998). They both expand the original model, and both are based on lower bounding the KL-divergence. However, the key difference is that the reduction step in PX-VB changes the lower-bounding distributions {q(x_s)}, while in PX-EM the reduction step is performed only for the parameters in p(x, D). We also note that the PX-VB reduction step via M_α leaves the KL-divergence (lower bound on the likelihood) invariant, while in PX-EM the likelihood of the observed data remains the same after the reduction. Because of these differences, general EM acceleration methods (e.g., (Salakhutdinov et al., 2003)) cannot be directly applied to speed up VB convergence.

In the following sections, we present PX-VB methods for two popular Bayesian models: probit regression for data classification and automatic relevance determination (ARD) for feature selection and sparse learning.

3.1 Bayesian Probit regression

Probit regression is a standard classification technique (see, e.g., (Liu et al., 1998) for the maximum likelihood estimation). Here we demonstrate the use of variational Bayesian methods to train probit models. The data likelihood for probit regression is

    p(t|X, w) = ∏_n Φ(t_n wᵀx_n),

where X = [x₁, . . . , x_N] and Φ is the standard normal cumulative distribution function. We can rewrite the likelihood in an equivalent form:

    p(z_n|w, x_n) = N(z_n | wᵀx_n, 1),    p(t_n|z_n) = 𝟙(t_n z_n > 0).    (8)

Given a Gaussian prior over the parameter, p(w) = N(w | 0, v₀I), we wish to approximate the posterior distribution p(w, z|X, t) by q(w, z) = q(w) ∏_n q(z_n). Minimizing KL(q(w) ∏_n q(z_n) ‖ p(w, z, t|X)), we obtain the following VB updates:

    q(z_n) = TN(z_n | ⟨w⟩ᵀx_n, 1, t_n z_n),    (9)
    q(w) = N(w | (XXᵀ + v₀⁻¹I)⁻¹ X⟨z⟩, (XXᵀ + v₀⁻¹I)⁻¹),    (10)

where TN(z_n | ⟨w⟩ᵀx_n, 1, t_n z_n) stands for a truncated Gaussian, such that TN(z_n | ⟨w⟩ᵀx_n, 1, t_n z_n) ∝ N(z_n | ⟨w⟩ᵀx_n, 1) when t_n z_n > 0, and it equals 0 otherwise.

To speed up the convergence of the above iterative updates, we apply the PX-VB method. First, we expand the original model p(w̃, z̃, t|X) to p_c(w, z, t|X) with the mapping
    w = c w̃,    z = c z̃,    (11)

such that

    p_c(z_n|w, x_n) = N(z_n | wᵀx_n, c²),    p(w) = N(w | 0, c²v₀I).    (12)

Setting c = c₀ = 1 in the expanded model, we update q(z_n) and q(w) as before, via (9) and (10). Then, we minimize KL(q(z)q(w) ‖ p_c(w, z, t|X)) over c, yielding

    c² = [ Σ_n (⟨z_n²⟩ − 2⟨z_n⟩⟨w⟩ᵀx_n + x_nᵀ⟨wwᵀ⟩x_n) + v₀⁻¹ tr⟨wwᵀ⟩ ] / (N + M),    (13)

where M is the dimension of w. In the degenerate case where v₀ = ∞, the denominator of the above equation becomes N instead of N + M. Since this equation can be efficiently calculated, the extra computational cost induced by the auxiliary variable is small. We omit the details. The transformation back to p_{c₀} can be made via the inverse map

    ŵ = w/c,    ẑ = z/c.    (14)

Accordingly, we change q(w) to obtain a new posterior approximation q_c(ŵ):

    q_c(ŵ) = N(ŵ | (XXᵀ + v₀⁻¹I)⁻¹ X⟨z⟩/c, (XXᵀ + v₀⁻¹I)⁻¹/c²).    (15)

We do not actually need to compute q_c(z_n) if this component will be optimized next. By changing variables w to ŵ through (14), the KL divergence between the approximate and exact posteriors remains the same. After obtaining the new approximations q_c(ŵ) and q(z̃_n), we reset c = c₀ = 1 for the next iteration. Though similar to the PX-EM updates for the probit regression problem (Liu et al., 1998), the PX-VB updates are geared towards providing an approximate posterior distribution.

We use both synthetic data and a kidney biopsy data set (van Dyk & Meng, 2001) as numerical examples for probit regression. We set v₀ = ∞ in the experiment. The comparison of convergence speeds for VB and PX-VB is illustrated in Figure 1.

Figure 1: Comparison between VB and PX-VB for probit regression on the synthetic (a) and kidney-biopsy (b) data sets; the curves plot log(‖w^{t+1} − w^t‖) against the number of iterations. PX-VB converges significantly faster than VB. Note that the Y axis shows the difference between two consecutive estimates of the posterior mean of the parameter w.

For the synthetic data, we randomly sample a classifier and use it to define the data labels for sampled inputs. We have 100 training and 500 test data points, each of which has 20 features. The kidney data set has 55 data points, each of which is a 3-dimensional vector. On the synthetic data, PX-VB converges immediately while the VB updates are slow to converge. Both PX-VB and VB trained classifiers achieve zero test error. On the kidney biopsy data set, PX-VB converges in 507 iterations, while VB converges in 7518 iterations. In other words, PX-VB requires 15 times fewer iterations than VB. In terms of CPU time, which reflects the extra computational cost induced by the auxiliary variables, PX-VB is 14 times more efficient. Among all these runs, PX-VB and VB achieve very similar estimates of the model parameters and the same prediction results. In sum, with a simple modification of the VB updates, we significantly improve the convergence speed of variational Bayesian estimation for the probit model.

3.2 Automatic Relevance Determination

Automatic relevance determination (ARD) is a powerful Bayesian sparse learning technique (MacKay, 1992; Tipping, 2000; Bishop & Tipping, 2000). Here, we focus on variational ARD proposed by Bishop and Tipping (2000) for sparse Bayesian regression and classification. The likelihood for ARD regression is

    p(t|X, w, τ) = ∏_n N(t_n | wᵀφ_n, τ⁻¹),

where φ_n is a feature vector based on x_n, such as
φ_n = [k(x₁, x_n), . . . , k(x_N, x_n)]ᵀ, where k(x_i, x_j) is a nonlinear basis function. For example, we can choose a radial basis function k(x_i, x_j) = exp(−‖x_i − x_j‖/(2σ²)), where σ is the kernel width.

In ARD, we assign a Gaussian prior to the model parameters w: p(w|α) = ∏_{m=0}^{M} N(w_m | 0, α_m⁻¹), where the inverse variances diag(α) follow a factorized Gamma distribution:

    p(α) = ∏_m Gamma(α_m | a, b) = ∏_m b^a α_m^{a−1} e^{−b α_m} / Γ(a),    (16)

where a and b are hyperparameters of the model. The posterior does not have a closed form. Let us approximate p(w, α, τ|X, t) by a factorized distribution q(w, α, τ) = q(w)q(α)q(τ). The sequential VB updates on q(τ), q(w) and q(α) are described by Bishop and Tipping (2000). The variational RVM achieves good generalization performance as demonstrated by Bishop and Tipping (2000). However, its training based on the VB updates can be quite slow. We apply PX-VB to address this issue.

First, we expand the original model p(w̃, α̃, τ̃|X, t) via

    w = w̃/r,    (17)

while maintaining α̃ and τ̃ unchanged. Consequently, the data likelihood and the prior on w become

    p_r(t|w, X, τ) = ∏_n N(t_n | r wᵀφ_n, τ⁻¹),    p_r(w|α) = ∏_{m=0}^{M} N(w_m | 0, r⁻²α_m⁻¹).    (18)

Setting r = r₀ = 1, we update q(τ) and q(α) as in the regular VB. Then, we want to jointly optimize over q(w) and r. Instead of performing a fully joint optimization, we optimize q(w) and r separately at the same time. This gives

    r = ( g + √(g² + 16Mf) ) / (4f),    (19)

where f = ⟨τ⟩ Σ_n φ_nᵀ⟨wwᵀ⟩φ_n + Σ_m ⟨w_m²⟩⟨α_m⟩ and g = 2⟨τ⟩ Σ_n ⟨w⟩ᵀφ_n t_n, and where ⟨w⟩ and ⟨wwᵀ⟩ are the first- and second-order moments of the previous q(w). Since both f and g are built from quantities that have been computed previously in the VB updates, the added computational cost for r is negligible overall.

The separate optimization over q(w) and r often decreases the KL divergence, but it cannot guarantee to achieve a smaller KL divergence than optimization over q(w) alone would achieve. If the regular update over q(w) achieves a smaller KL divergence, we reset r = 1.

Given r and q(w), we use w̃ = rw to reduce the expanded model to the original one. Correspondingly, we change q(w) = N(w | μ_w, Σ_w) via this reduction rule to obtain q_r(w̃) = N(w̃ | rμ_w, r²Σ_w).

We can also introduce another auxiliary variable s such that α = α̃/s. Similar to the above procedure, we optimize over s the expected log joint probability of the expanded model, and at the same time update q(α). Then we change q(α) back to q_s(α̃) using the inverse mapping α̃ = sα. Due to the space limitation, we skip the details here.
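As an illustration of how cheap the auxiliary update is, the sketch below computes r from Eq. (19) given the current variational moments. The function name and argument layout are our own assumptions; the moments ⟨w⟩, ⟨wwᵀ⟩, ⟨α⟩ and ⟨τ⟩ are exactly the quantities already maintained by the regular VB updates.

    import numpy as np

    def px_scale_r(Phi, t, E_tau, E_w, Cov_w, E_alpha):
        """Auxiliary scale r of Eq. (19) from the current variational moments.

        Phi: (N, M) design matrix with rows phi_n (including any bias column);
        E_w, Cov_w: mean and covariance of q(w); E_alpha: mean of q(alpha);
        E_tau: mean of q(tau).  The interface is illustrative.
        """
        E_wwT = Cov_w + np.outer(E_w, E_w)      # second moment <w w^T>
        M = Phi.shape[1]
        # f = <tau> sum_n phi_n^T <w w^T> phi_n + sum_m <w_m^2><alpha_m>
        f = E_tau * np.einsum("nm,mk,nk->", Phi, E_wwT, Phi) \
            + np.sum(np.diag(E_wwT) * E_alpha)
        # g = 2 <tau> sum_n <w>^T phi_n t_n
        g = 2.0 * E_tau * np.sum((Phi @ E_w) * t)
        return (g + np.sqrt(g ** 2 + 16.0 * M * f)) / (4.0 * f)

Since f > 0, the returned value is the positive root of the quadratic stationarity condition behind Eq. (19), so r is always well defined.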
The auxiliary variables r and s change the individual approximate posteriors q(w) and q(α) separately. We can also combine these two variables into one and use it to adjust q(w) and q(α) jointly. Specifically, we introduce the variable c:

    w̃ = w/c,    α̃ = c²α.

Setting c = c₀ = 1, we perform the regular updates over q(τ), q(w) and q(α). Then we optimize over c the expected log joint probability of the expanded model. We cannot find a closed-form solution for the maximization, but we can efficiently compute its gradient and Hessian. Therefore, we perform a few steps of Newton updates to partially optimize c. Again, the additional computational cost for calculating c is small. Then, using the inverse mapping, we reduce the expanded model to the original one and adjust both q(w) and q(α) accordingly. Empirically, this approach can achieve faster convergence than using auxiliary variables on q(w) and q(α) separately, as demonstrated in Figure 2(a) and (b).

Figure 2: Convergence comparison between VB and PX-VB for ARD regression on synthetic data (a, b) and gene expression data (c); the curves plot log(‖w^{t+1} − w^t‖) against the number of iterations. The PX-VB results in (a) and (c) are based on independent auxiliary variables on w and α. The PX-VB result in (b) is based on the auxiliary variable that correlates both w and α. The added computational cost for PX-VB in each iteration is negligible overall.

We compare the convergence speed of VB and PX-VB for the ARD model on both synthetic data and gene expression data. The synthetic data are sampled from the function sinc(x) = sin(x)/x for x ∈ (−10, 10) with added Gaussian noise. We use RBF kernels for the feature expansion φ_n, with kernel width 3. VB and PX-VB provide basically identical predictions. For the gene expression data, we apply ARD to analyze the relationship between binding motifs and the expression of their target genes. For this task, we use 3rd-order polynomial kernels. The results of the convergence comparison are shown in Figure 2. With a small modification of the VB updates, we increase the convergence speed significantly. Though we demonstrate the PX-VB improvement only for ARD regression, the same technique can be used to speed up ARD classification.

4 Convergence properties of VB and PX-VB

In this section, we analyze the convergence of VB and PX-VB, and their convergence rates. Define the mapping q^(t+1) = M(q^(t)) as one VB update of all the approximate distributions. Define an objective function as the unnormalized KL divergence:

    Q(q) = ∫ ∏_i q_i(x) log( ∏_i q_i(x) / p(x) ) dx + ( ∫ p(x) dx − ∫ ∏_i q_i(x) dx ).    (20)

It is easy to check that minimizing Q(q) gives the same updates as VB, which minimizes the KL divergence. Based on Theorem 2.1 of Luo and Tseng (1992), an iterative application of this mapping to minimize Q(q) results in at least linear convergence to an element q* in the solution set.

Define the mapping q^(t+1) = M_x(q^(t)) as one PX-VB update of all the approximate distributions. The convergence of PX-VB follows from similar arguments, i.e., θ = [qᵀ αᵀ]ᵀ converges to [q*ᵀ α₀ᵀ]ᵀ, where α are the expanded model parameters and α₀ is the null value in the original model.

4.1 Convergence rate of VB and PX-VB

The matrix rate of convergence DM(q) is defined by:

    q^(t+1) − q* = DM(q)ᵀ (q^(t) − q*),    (21)

where DM(q) = ( ∂M_j(q)/∂q_i ). Define the global rate of convergence for q: r = lim_{t→∞} ‖q^(t+1) − q*‖ / ‖q^(t) − q*‖. Under certain regularity conditions, r equals the largest eigenvalue of DM(q). The smaller r is, the faster the algorithm converges. Define the constraint set g_s as the constraints for the s-th update. Then the following theorem holds:

Theorem 4.1 The matrix convergence rate for VB is:

    DM(q*) = ∏_{s=1}^{S} P_s,    (22)

where P_s = B_s [ B_sᵀ D²Q(q*)⁻¹ B_s ]⁻¹ B_sᵀ D²Q(q*)⁻¹ and B_s = ∇g_s(q*).

Proof: Define θ as the current approximation q. Let G_s(θ) be the q that minimizes the objective function Q(q) under the constraint g_s(q) = g_s(θ) = [θ_{\s}]. Let M₀(q) = q and, for all 1 ≤ s ≤ S,

    M_s(q) = G_s(M_{s−1}(q)).    (23)

Then by construction of VB, we have q^(t+s/S) = M_s(q^(t)), s = 1, . . . , S, and DM(q*) = DM_S(q*). At the stationary points, q* = M_s(q*) for all s. We differentiate both sides of equation (23) and evaluate them at q = q*:

    DM_s(q*) = DM_{s−1}(q*) DG_s(M_{s−1}(q*)) = DM_{s−1}(q*) DG_s(q*).

It follows that
    DM(q*) = ∏_{s=1}^{S} DG_s(q*).    (24)

To calculate DG_s(q*), we differentiate the constraint g_s(G_s(θ)) = g_s(θ) and evaluate both sides at θ = q*, such that

    DG_s(q*) B_s = B_s.    (25)

Similarly, we differentiate the Lagrange equation DQ_s(G(θ)) − ∇g_s(G(θ)) λ_s(θ) = 0 and evaluate both sides at θ = q*. This yields

    DG_s(q*) D²Q_s(q*) − Dλ_s(q*) B_sᵀ = 0.    (26)

Equation (26) holds because ∂²g_s/(∂q_i ∂q_j) = 0. Combining (25) and (26) yields

    DG_s(q*) = B_s [ B_sᵀ D²Q_s(q*)⁻¹ B_s ]⁻¹ B_sᵀ D²Q_s(q*)⁻¹. ∎    (27)

In the s-th update we fix q_{\s}, i.e., g_s(q) = q_{\s}. Therefore, B_s is an identity matrix with its s-th column removed, B_s = I_{:,\s}, where I is the identity matrix and \s means without the s-th column. Denote C_s = D²Q_s(q*)⁻¹. Without loss of generality, we set s = S and write C = C_S. It is easy to obtain

    B_Sᵀ C B_S = C_{\S,\S},    (28)

where \S, \S means without row S and column S. Inserting (28) into (27) yields

    P_S = DG_S(q*) = [ I_{d−1}   −C_{\S,\S}⁻¹ C_{\S,S} ; 0   0 ] = [ I_{d−1}   −D²Q_{\S,S} (D²Q_{S,S})⁻¹ ; 0   0 ],    (29)

where I_{d−1} is a (d−1)-by-(d−1) identity matrix, D²Q_{\S,S} = ∂²Q(q(x) ‖ p(x))/(∂q_{\S}ᵀ ∂q_S) and D²Q_{S,S} = ∂²Q(q(x) ‖ p(x))/(∂q_Sᵀ ∂q_S). Notice that we use Schur complements to obtain (29). Similar to the calculation of P_S via (29), we can derive P_s for s = 1, . . . , S − 1, with structures similar to P_S.

The above results help us understand the convergence speed of VB. For example, we have

    q^(t+1) − q* = P_Sᵀ · · · P_1ᵀ (q^(t) − q*).

For q_S,

    q_S^(t+1) − q_S* = [ −(D²Q_{S,S})⁻¹ D²Q_{S,\S}   0 ] (q^(t+(S−1)/S) − q*).    (30)

Clearly, if we view D²Q_{S,\S} as the correlation between q_S and q_{\S}, then the smaller this "correlation", the faster the convergence. In the extreme case, if there is no correlation between q_S and q_{\S}, then q_S^(t+1) − q_S* = 0 after the first iteration. The global convergence rate is bounded by the maximum component convergence rate, and generally there are many components with convergence rate equal to the global rate. Therefore, the instant convergence of q_S can help improve the global convergence rate.

For PX-VB, we can compute the matrix rate of convergence similarly. In the toy example in Section 2, PX-VB introduces an auxiliary variable μ which has zero correlation with w, leading to instant convergence of the algorithm. This suggests that PX-VB improves the convergence by reducing the correlation among the {q_s}. Rigorously speaking, the reduction step in PX-VB implicitly defines a mapping from q to q̃ through the auxiliary variables α: (q, p_{α₀}) → (q, p_α) → (q̃, p_{α₀}). Denote this mapping as M_α, such that q̃ = M_α(q). Then we have

    DM_x(q*) = DG_1(q*) · · · DG_α(q*) · · · DG_S(q*).

It is known that the spectral norm has the submultiplicative property ‖EF‖ ≤ ‖E‖ ‖F‖, where E and F are two matrices. Thus, as long as the largest eigenvalue of M_α is smaller than 1, PX-VB converges faster than VB. The choice of α affects the convergence rate by controlling the eigenvalues of this mapping: the smaller the largest eigenvalue of M_α, the faster PX-VB converges. In practice, we can check this eigenvalue to make sure the constructed PX-VB algorithm enjoys a fast convergence rate.

5 Discussion

We have provided a general approach to speeding up the convergence of variational Bayesian learning. Faster convergence is guaranteed theoretically provided that the Jacobian of the transformation from auxiliary parameters to variational components has spectral norm bounded by one. This property can be verified in each case separately.
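That check can also be done numerically. The sketch below estimates the largest eigenvalue of the Jacobian of one full VB sweep on the toy model of Section 2 by finite differences; it should match the analytical rate (1 + D⁻¹)⁻¹ derived there. The finite-difference step size and the choice D = 100 are arbitrary illustrative values.

    import numpy as np

    D = 100.0  # hyperparameter of the toy model in Section 2
    a = 1.0 / (1.0 + 1.0 / D)

    def vb_sweep(m):
        """One full VB sweep (Eq. (4)) on the mean parameters m = (<w>, <z>)."""
        y = 0.0  # the fixed point is invariant to y, so set it to zero
        w, z = m
        w = y - z            # update of q(w)
        z = a * (y - w)      # update of q(z)
        return np.array([w, z])

    # Finite-difference Jacobian of the sweep map at the fixed point (0, 0).
    eps = 1e-6
    J = np.column_stack([(vb_sweep(eps * e) - vb_sweep(np.zeros(2))) / eps
                         for e in np.eye(2)])
    rate = np.max(np.abs(np.linalg.eigvals(J)))
    print(f"estimated convergence rate {rate:.4f} vs. 1/(1 + 1/D) = {a:.4f}")

For a PX-VB variant, the same finite-difference estimate applied to the expanded sweep (including the auxiliary update and reduction) would show the rate dropping toward zero, consistent with the instant convergence observed in Section 2.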
Our empirical results show that the performance gain due to the auxiliary method is substantial.

Acknowledgments

T. S. Jaakkola was supported by the DARPA Transfer Learning program.

References

Beal, M. (2003). Variational algorithms for approximate Bayesian inference. Doctoral dissertation, Gatsby Computational Neuroscience Unit, University College London.

Bishop, C., & Tipping, M. E. (2000). Variational relevance vector machines. 16th UAI.

Jordan, M. I., Ghahramani, Z., Jaakkola, T. S., & Saul, L. K. (1998). An introduction to variational methods in graphical models. Learning in Graphical Models. http://www.ai.mit.edu/~tommi/papers.html.

Liu, C., Rubin, D. B., & Wu, Y. N. (1998). Parameter expansion to accelerate EM: the PX-EM algorithm. Biometrika, 85, 755–770.

Liu, J. S., & Wu, Y. N. (1999). Parameter expansion for data augmentation. Journal of the American Statistical Association, 94, 1264–1274.

Luo, Z. Q., & Tseng, P. (1992). On the convergence of the coordinate descent method for convex differentiable minimization. Journal of Optimization Theory and Applications, 72, 7–35.

MacKay, D. J. (1992). Bayesian interpolation. Neural Computation, 4, 415–447.

Salakhutdinov, R., Roweis, S. T., & Ghahramani, Z. (2003). Optimization with EM and Expectation-Conjugate-Gradient. Proceedings of the International Conference on Machine Learning.

Tipping, M. E. (2000). The relevance vector machine. NIPS (pp. 652–658). The MIT Press.

van Dyk, D. A., & Meng, X. L. (2001). The art of data augmentation (with discussion). Journal of Computational and Graphical Statistics, 10, 1–111.
Statistical Modeling of Images with Fields of Gaussian Scale Mixtures

Siwei Lyu and Eero P. Simoncelli
Howard Hughes Medical Institute, Center for Neural Science, and Courant Institute of Mathematical Sciences, New York University, New York, NY 10003

Abstract

The local statistical properties of photographic images, when represented in a multi-scale basis, have been described using Gaussian scale mixtures (GSMs). Here, we use this local description to construct a global field of Gaussian scale mixtures (FoGSM). Specifically, we model subbands of wavelet coefficients as a product of an exponentiated homogeneous Gaussian Markov random field (hGMRF) and a second independent hGMRF. We show that parameter estimation for FoGSM is feasible, and that samples drawn from an estimated FoGSM model have marginal and joint statistics similar to wavelet coefficients of photographic images. We develop an algorithm for image denoising based on the FoGSM model, and demonstrate substantial improvements over the current state-of-the-art denoising method based on the local GSM model.

Many successful methods in image processing and computer vision rely on statistical models for images, and it is thus of continuing interest to develop improved models, both in terms of their ability to precisely capture image structures, and in terms of their tractability when used in applications. Constructing such a model is difficult, primarily because of the intrinsic high dimensionality of the space of images. Two simplifying assumptions are usually made to reduce model complexity. The first is Markovianity: the density of a pixel conditioned on a small neighborhood is assumed to be independent of the rest of the image. The second assumption is homogeneity: the local density is assumed to be independent of its absolute position within the image. The set of models satisfying both of these assumptions constitutes the class of homogeneous Markov random fields (hMRFs).

Over the past two decades, studies of photographic images represented with multi-scale multi-orientation image decompositions (loosely referred to as "wavelets") have revealed striking non-Gaussian regularities and inter- and intra-subband dependencies. For instance, wavelet coefficients generally have highly kurtotic marginal distributions [1, 2], and their amplitudes exhibit strong correlations with the amplitudes of nearby coefficients [3, 4]. One model that can capture the non-Gaussian marginal behaviors is a product of non-Gaussian scalar variables [5]. A number of authors have developed non-Gaussian MRF models based on this sort of local description [6, 7, 8], among which the recently developed fields-of-experts model [7] has demonstrated impressive performance in denoising (albeit at an extremely high computational cost in learning model parameters).

An alternative model that can capture non-Gaussian local structure is a scale mixture model [9, 10, 11]. An important special case is the Gaussian scale mixture (GSM), which consists of a Gaussian random vector whose amplitude is modulated by a hidden scaling variable. The GSM model provides a particularly good description of local image statistics, and the Gaussian substructure of the model leads to efficient algorithms for parameter estimation and inference. Local GSM-based methods represent the current state-of-the-art in image denoising [12]. The power of GSM models should be substantially improved when extended to describe more than a small neighborhood of wavelet coefficients.
To this end, several authors have embedded local Gaussian mixtures into tree-structured MRF models [e.g., 13, 14]. In order to maintain tractability, these models are arranged such that coefficients are grouped in non-overlapping clusters, allowing a graphical probability model with no loops. Despite their global consistency, the artificially imposed cluster boundaries lead to substantial artifacts in applications such as denoising.

In this paper, we use a local GSM as a basis for a globally consistent and spatially homogeneous field of Gaussian scale mixtures (FoGSM). Specifically, the FoGSM is formulated as the product of two mutually independent MRFs: a positive multiplier field obtained by exponentiating a homogeneous Gaussian MRF (hGMRF), and a second hGMRF. We develop a parameter estimation procedure, and show that the model is able to capture important statistical regularities in the marginal and joint wavelet statistics of a photographic image. We apply the FoGSM to image denoising, demonstrating substantial improvement over the previous state-of-the-art results obtained with a local GSM model.

1 Gaussian scale mixtures

A GSM random vector x is formed as the product of a zero-mean Gaussian random vector u and an independent positive random variable z, as x =ᵈ √z u, where =ᵈ denotes equality in distribution. The density of x is determined by the covariance of the Gaussian vector, Σ, and the density of the multiplier, p_z(z), through the integral

    p(x) = ∫ N_x(0, zΣ) p_z(z) dz = ∫ [ 1/√((2πz)^d |Σ|) ] exp( −xᵀΣ⁻¹x/(2z) ) p_z(z) dz.    (1)

A key property of GSMs is that z determines the scale of the conditional covariance of x given z: conditioned on z, x is a Gaussian variable with zero mean and covariance zΣ. In addition, the normalized variable x/√z is a zero-mean Gaussian with covariance matrix Σ. The GSM model has been used to describe the marginal and joint densities of local clusters of wavelet coefficients, both within and across subbands [9], where the embedded Gaussian structure affords simple and efficient computation. This local GSM model has been used for denoising, by independently estimating each coefficient conditioned on its surrounding cluster [12]. This method achieves state-of-the-art performance, despite the fact that treating overlapping clusters as independent does not give rise to a globally consistent statistical model that satisfies all the local constraints.
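As a quick numerical illustration of these properties, the following Python sketch draws GSM samples x = √z u and verifies that they are heavy-tailed while the normalized samples are Gaussian. The lognormal choice for p_z(z) is one common option (and the one adopted for the multiplier field below), not the only possibility; the sample size and parameters are arbitrary.

    import numpy as np

    rng = np.random.default_rng(0)
    d, n = 2, 200_000

    # Gaussian substrate u with covariance Sigma, and a lognormal multiplier z.
    Sigma = np.array([[1.0, 0.6], [0.6, 1.0]])
    u = rng.multivariate_normal(np.zeros(d), Sigma, size=n)
    z = rng.lognormal(mean=0.0, sigma=1.0, size=n)

    x = np.sqrt(z)[:, None] * u          # GSM sample: x = sqrt(z) * u

    def excess_kurtosis(v):
        v = v - v.mean()
        return np.mean(v ** 4) / np.mean(v ** 2) ** 2 - 3.0

    print("excess kurtosis of u[:, 0]:", round(excess_kurtosis(u[:, 0]), 2))  # ~0
    print("excess kurtosis of x[:, 0]:", round(excess_kurtosis(x[:, 0]), 2))  # > 0
    # Dividing out the multiplier recovers a Gaussian: x / sqrt(z) has covariance Sigma.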
2 Fields of Gaussian scale mixtures

In this section, we develop fields of Gaussian scale mixtures (FoGSM) as a framework for modeling wavelet coefficients of photographic images. Analogous to the local GSM model, we use a latent multiplier field to modulate a homogeneous Gaussian MRF (hGMRF). Formally, we define a FoGSM x as the product of two mutually independent MRFs,

    x =ᵈ u ⊙ √z,    (2)

where u is a zero-mean hGMRF, and z is a field of positive multipliers that control the local coefficient variances. The operator ⊙ denotes element-wise multiplication, and the square root is applied element-wise. Note that x has one-dimensional GSM marginal distributions, while its components have dependencies captured by the MRF structures of u and z. Analogous to the local GSM, when conditioned on z, x is an inhomogeneous GMRF:

    p(x|z) ∝ ( |Q_u| / ∏_i z_i )^{1/2} exp{ −(1/2) xᵀ D(√z)⁻¹ Q_u D(√z)⁻¹ x } = ( |Q_u| / ∏_i z_i )^{1/2} exp{ −(1/2) (x ⊘ √z)ᵀ Q_u (x ⊘ √z) },    (3)

where Q_u is the inverse covariance matrix of u (also known as the precision matrix), ⊘ denotes element-wise division, and D(·) denotes the operator that forms a diagonal matrix from an input vector. Note also that the element-wise division of the two fields, x ⊘ √z, yields a hGMRF with precision matrix Q_u. To complete the FoGSM model, we need to specify the structure of the multiplier field z. For tractability, we use another hGMRF as a substrate, and map it into positive values by exponentiation, as was done in [10]. To be more specific, we model log(z) as a hGMRF with mean μ and precision matrix Q_z, where the log operator is applied element-wise, from which the density of z follows as:

    p_z(z) ∝ ( |Q_z|^{1/2} / ∏_i z_i ) exp{ −(1/2) (log z − μ)ᵀ Q_z (log z − μ) }.    (4)

This is a natural extension of the univariate lognormal prior used previously for the scalar multiplier in the local GSM model [12].

Fig. 1. Decomposition of a subband x from the image "boat" (left) into the normalized subband u (middle) and the multiplier field z (right, in the logarithm domain). Each image is rescaled individually to fill the full range of grayscale intensities.

The restriction to hGMRFs greatly simplifies computation with FoGSM. In particular, we take advantage of the fact that a 2D hGMRF with circular boundary handling has a sparse block-circulant precision matrix with a generating kernel θ specifying its nonzero elements. A block-circulant matrix is diagonalized by the Fourier transform, and its multiplication with a vector corresponds to convolution with the kernel θ. The diagonalizability with a fixed and efficiently computed transform makes parameter estimation, sampling, and inference with a hGMRF substantially more tractable than with a general MRF. Readers are referred to [15] for a detailed description of hGMRFs.

Parameter estimation: The estimation of the latent multiplier field z and the model parameters (μ, Q_z, Q_u) may be achieved by maximizing log p(x, z; Q_u, Q_z, μ) with an iterative coordinate-ascent method, which is guaranteed to converge. Specifically, based on the statistical dependency structures in the FoGSM model, the following three steps are repeated until convergence:

    (i) z^(t+1) = argmax_z [ log p(x|z; Q_u^(t)) + log p(z; Q_z^(t), μ^(t)) ]
    (ii) Q_u^(t+1) = argmax_{Q_u} log p(x|z^(t+1); Q_u)
    (iii) (Q_z^(t+1), μ^(t+1)) = argmax_{Q_z, μ} log p(z^(t+1); Q_z, μ)    (5)

According to the FoGSM model structure, steps (ii) and (iii) correspond to maximum likelihood estimates of the parameters of the hGMRFs x ⊘ √z^(t+1) and log z^(t+1), respectively. Because of this, both steps may be efficiently implemented by exploiting the diagonalization of the precision matrices with 2D Fourier transforms [15]. Step (i) in (5) may be implemented with conjugate gradient ascent [16].

To simplify description and computation, we introduce a new variable for the element-wise inverse square root of the multiplier: s = 1 ⊘ √z. The likelihood in (3) is then changed to:

    p(x|s) ∝ ∏_i s_i exp{ −(1/2) (x ⊙ s)ᵀ Q_u (x ⊙ s) } = ∏_i s_i exp{ −(1/2) sᵀ D(x) Q_u D(x) s }.    (6)

The joint density of s is obtained from (4), using the relations between densities of transformed variables, as

    p(s) ∝ ( 1/∏_i s_i ) exp{ −(1/2) (2 log s + μ)ᵀ Q_z (2 log s + μ) }.    (7)

Combining (6) and (7), step (i) in (5) is equivalent to computing s* = argmax_s [ log p(x|s; Q_u) + log p(s; Q_z, μ) ], which is further simplified into:

    argmin_s (1/2) sᵀ D(x) Q_u D(x) s + (1/2) (2 log s + μ)ᵀ Q_z (2 log s + μ),    (8)
and the optimal ẑ is then recovered as ẑ = 1 ⊘ (ŝ ⊙ ŝ). We optimize (8) with conjugate gradient [16]. Specifically, the negative gradient of the objective function in (8) with respect to s is

    −∂ log p(x|s)p(s)/∂s = D(x) Q_u D(x) s + 2 D(s)⁻¹ Q_z (2 log s + μ) = x ⊙ (θ_u ⊛ (x ⊙ s)) + 2 (θ_z ⊛ (2 log s + μ)) ⊘ s,

and the multiplication of any vector h with the Hessian matrix can be computed as:

    [−∂² log p(x|s)p(s)/∂s²] h = x ⊙ (θ_u ⊛ (x ⊙ h)) + 4 (θ_z ⊛ (h ⊘ s)) ⊘ s − 2 (θ_z ⊛ (2 log s + μ)) ⊙ h ⊘ (s ⊙ s).

Both operations can be expressed entirely in terms of element-wise operations (⊙ and ⊘) and 2D convolutions (⊛) with the generating kernels of the two precision matrices, θ_u and θ_z, which allows for efficient implementation.

Fig. 2. Empirical marginal log distributions of coefficients from a multi-scale decomposition of the photographic images Barbara, boat, house, and peppers (blue dot-dashed line), synthesized FoGSM samples from the same subband (red solid line), and a Gaussian with the same standard deviation (red dashed line).

3 Modeling photographic images

We have applied the FoGSM model to subbands of a multi-scale image representation known as a steerable pyramid [17]. This decomposition is a tight frame, constructed from oriented multi-scale derivative operators, and is overcomplete by a factor of 4K/3, where K is the number of orientation bands. Note that the marginal and joint statistics we describe are not specific to this decomposition, and are similar for other multi-scale oriented representations.

We fit a FoGSM model to each subband of a decomposed photographic image, using the algorithms described in the previous section. For the precision matrices Q_u and Q_z, we assumed a 5 × 5 Markov neighborhood (corresponding to a 5 × 5 convolution kernel), which was loosely chosen to optimize the trade-off between accuracy and overfitting. Figure 1 shows the result of fitting a FoGSM model to an example subband from the "boat" image (left panel). The subband is decomposed into the product of the u field (middle panel) and the z field (right panel, in the logarithm domain), along with the model parameters Q_u, μ and Q_z (not shown). Visually, the changing spatial variances are represented in the estimated log z field, and the estimated u is much more homogeneous than the original subband and has a marginal distribution close to Gaussian.¹ However, the log z field still has a non-Gaussian marginal distribution and is spatially inhomogeneous, suggesting limitations of FoGSM for modeling photographic image wavelet coefficients (see Discussion).

¹ This "Gaussianizing" behavior was first noted in photographic images by Ruderman [18], who observed that image derivative measurements that were normalized by a local estimate of their standard deviation had approximately Gaussian marginal distributions.

The statistical dependencies captured by the FoGSM model can be further revealed by examining marginal and joint statistics of samples synthesized with the estimated model parameters. A sample from FoGSM can be formed by multiplying samples of u and √z. The former is obtained by sampling from the hGMRF u, and the latter is obtained by element-wise exponentiation followed by an element-wise square root of a sample of the hGMRF log z. This procedure is again efficient for FoGSM due to the use of hGMRFs as building blocks [15].

Marginal distributions: We start by comparing the marginal distributions of the samples and the original subband. Figure 2 shows empirical histograms, in the log domain, of a particular subband
3 Modeling photographic images

We have applied the FoGSM model to subbands of a multi-scale image representation known as a steerable pyramid [17]. This decomposition is a tight frame, constructed from oriented multi-scale derivative operators, and is overcomplete by a factor of 4K/3, where K is the number of orientation bands. Note that the marginal and joint statistics we describe are not specific to this decomposition, and are similar for other multi-scale oriented representations. We fit a FoGSM model to each subband of a decomposed photographic image, using the algorithms described in the previous section. For the precision matrices Q_u and Q_z, we assumed a 5 × 5 Markov neighborhood (corresponding to a 5 × 5 convolution kernel), which was loosely chosen to optimize the tradeoff between accuracy and overfitting.

Figure 1 shows the result of fitting a FoGSM model to an example subband from the "boat" image (left panel). The subband is decomposed into the product of the u field (middle panel) and the z field (right panel, in the logarithm domain), along with model parameters Q_u, μ and Q_z (not shown). Visually, the changing spatial variances are represented in the estimated log z field, and the estimated u is much more homogeneous than the original subband and has a marginal distribution close to Gaussian.¹ However, the log z field still has a non-Gaussian marginal distribution and is spatially inhomogeneous, suggesting limitations of FoGSM for modeling photographic image wavelet coefficients (see Discussion).

The statistical dependencies captured by the FoGSM model can be further revealed by examining marginal and joint statistics of samples synthesized with the estimated model parameters. A sample from FoGSM can be formed by multiplying samples of u and √z. The former is obtained by sampling from the hGMRF u, and the latter is obtained from the element-wise exponentiation followed by an element-wise square root operation of a sample of the hGMRF log z. This procedure is again efficient for FoGSM due to the use of hGMRFs as building blocks [15].

Marginal distributions: We start by comparing the marginal distributions of the samples and the original subbands. Figure 2 shows empirical histograms in the log domain of a particular subband from four different photographic images (blue dot-dashed line), and those of the synthesized samples of FoGSM models learned from each corresponding subband (red solid line). For comparison, a Gaussian with the same standard deviation as the image subband is also displayed (red dashed line). Note that the synthesized samples have conspicuous non-Gaussian characteristics similar to the real subbands, exemplified by the high peak and heavy tails in the marginal distributions. On the other hand, they are typically less kurtotic than the real subbands. We believe this arises from the imprecise Gaussian approximation of log z (see Discussion).

¹ This "Gaussianizing" behavior was first noted in photographic images by Ruderman [18], who observed that image derivative measurements that were normalized by a local estimate of their standard deviation had approximately Gaussian marginal distributions.

Fig. 3. Examples of empirically observed distributions of wavelet coefficient pairs, compared with distributions from synthesized samples with the FoGSM model. [Columns: spatial separations Δ = 1 (close), Δ = 4 (near), Δ = 32 (far), orientation, scale; rows alternate real and simulated.] See text for details.

Joint distributions: In addition to one-dimensional marginal statistics, the FoGSM model is capable of capturing the joint behavior of wavelet coefficients. As described in [4, 9], wavelet coefficients of photographic images present non-Gaussian dependencies. Shown in the first and the third rows of Fig. 3 are empirical joint and conditional histograms for one subband of the "boat" image, for five pairs of coefficients, corresponding to basis functions with spatial separations of Δ = {1, 4, 32} samples, two orthogonal orientations and two adjacent scales. Contour lines in the joint histograms are drawn at equal intervals of log probability. Intensities in the conditional histograms correspond to probability, except that each column is independently rescaled to fill the full range of intensity. For a pair of adjacent coefficients, we observe an elliptical joint distribution and a "bow-tie" shaped conditional distribution. The latter is indicative of strong non-Gaussian dependencies. For coefficients that are distant, the dependency becomes weaker and the corresponding joint and conditional histograms become more separable, as would be expected for two independent random variables.

Random samples drawn from a FoGSM model, with parameters fitted to the corresponding subband, have statistical characteristics consistent with the general description of wavelet coefficients of photographic images. Shown in the second and the fourth rows of Fig. 3 are the joint and conditional histograms of synthesized samples from the FoGSM model estimated from the same subband as in the first and the third rows. Note that the joint and conditional histograms of the synthesized samples show a similar transition of spatial dependencies as the separation increases (columns 1, 2 and 3), suggesting that the FoGSM accounts well for pairwise joint dependencies of coefficients over a full range of spatial separations. On the other hand, the dependencies between subbands of different orientations and scales are not properly modeled by FoGSM (columns 4 and 5). This is especially true for subbands at different scales, which exhibit strong dependencies.

Fig. 4. Denoising results using the local GSM [12] and FoGSM. [Panels: original image; noisy image (σ = 50, PSNR = 14.15 dB); GSM-BLS (PSNR = 26.34 dB); FoGSM (PSNR = 27.01 dB).] Performances are evaluated in peak-signal-to-noise ratio (PSNR), 20 log₁₀(255/σₑ), where σₑ is the standard deviation of the error.

The current FoGSM model does not exhibit those dependencies, as only spatial neighbors are used in the 2D hGMRFs (see Discussion).
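The synthesized samples used in Figs. 2 and 3 only require draws from stationary hGMRFs, which the block-circulant structure makes cheap. Below is a minimal sketch of drawing one such sample, assuming the generating kernel yields a strictly positive spectrum and ignoring an overall scale constant; it is an illustration, not the authors' implementation.

```python
import numpy as np

def sample_hgmrf(xi, shape, rng=None):
    """Draw one sample from a zero-mean hGMRF with circular boundary handling.

    The 2D DFT of the (zero-padded, centered) generating kernel xi gives the
    eigenvalues of the precision matrix Q, so filtering white noise with
    lambda^{-1/2} yields a stationary sample with covariance proportional
    to Q^{-1}. Assumes the spectrum lambda is strictly positive.
    """
    rng = np.random.default_rng() if rng is None else rng
    pad = np.zeros(shape)
    kh, kw = xi.shape
    pad[:kh, :kw] = xi
    pad = np.roll(pad, (-(kh // 2), -(kw // 2)), axis=(0, 1))  # center kernel at origin
    lam = np.real(np.fft.fft2(pad))             # eigenvalues of Q
    w = rng.standard_normal(shape)              # white Gaussian noise
    return np.real(np.fft.ifft2(np.fft.fft2(w) / np.sqrt(lam)))

# A FoGSM sample is then u * sqrt(z), with u and log z drawn this way.
```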
4 Application to image denoising

Let y = x + w be a wavelet subband of an image that has been corrupted with white Gaussian noise of known variance. In an overcomplete wavelet domain such as the steerable pyramid, the white Gaussian noise is transformed into correlated Gaussian noise w ∼ N(0, Σ_w), whose covariance Σ_w can be derived from the basis functions of the pyramid transform. With FoGSM as a prior over x, commonly used denoising methods involve expensive high-dimensional integration: for instance, the maximum a posteriori estimate, x̂_MAP = argmax_x log p(x|y), requires a high-dimensional integral over z, and the Bayesian least squares estimate, x̂_BLS = E(x|y), requires a double high-dimensional integral over x and z. Although it is possible to optimize these criteria using Markov chain Monte Carlo sampling or other approximations, we instead develop a more efficient deterministic algorithm that takes advantage of the hGMRF structure in the FoGSM model. Specifically, we compute

  (x̂, ẑ, Q̂_u, Q̂_z, μ̂) = argmax_{x, z, Q_u, Q_z, μ} log p(x, z | y; Q_u, Q_z, μ)   (9)

and take x̂ as the optimal denoised subband. Note that the model parameters are learned within the inference process rather than in a separate parameter learning step. This strategy, known as partial optimal solution [19], greatly reduces the computational complexity.

We optimize (9) with coordinate ascent, iterating between maximizing each of (x, z, Q_u, Q_z, μ) while fixing the others. With fixed estimates of (z, Q_u, Q_z, μ), the optimization of x is

  argmax_x log p(x, z | y; Q_u, Q_z, μ) = argmax_x [ log p(x | z, y; Q_u, Q_z, μ) + log p(z | y; Q_u, Q_z, μ) ],

which reduces to argmax_x log p(x | z, y; Q_u), since the second term is independent of x and can be dropped from the optimization. Given the Gaussian structure of x given z, this step is equivalent to a Wiener filter (linear in y). Fixing (x, Q_u, Q_z, μ), the optimization of z is

  argmax_z log p(x, z | y; Q_u, Q_z, μ) = argmax_z [ log p(y | x, z; Q_u) + log p(x, z; Q_u, Q_z, μ) − log p(y; Q_u, Q_z, μ) ],

which is further reduced to argmax_z log p(x, z; Q_u, Q_z, μ). Here, the first term was dropped since y is independent of z when conditioned on x, and the last term was dropped since it is also independent of z. Therefore, optimizing z given (x, Q_u, Q_z, μ) is equivalent to the first step of the algorithm in Section 2, which can be implemented with efficient gradient descent. Finally, given (x, z), the FoGSM model parameters (Q_u, Q_z, μ) are estimated from the hGMRFs x ⊘ √z and log z, similar to the second and third steps of the algorithm in Section 2. However, to reduce the overall computation time, instead of a complete maximum likelihood estimation, these parameters are estimated with a maximum pseudo-likelihood procedure [20], which finds the parameters maximizing the product of all conditional distributions (which are 1D Gaussians in the GMRF case), followed by a projection onto the subspace of FoGSM parameters that results in positive definite precision matrices.

We tested this denoising method on a standard set of test images [12]. The noise-corrupted images were first decomposed into a steerable pyramid with multiple levels (5 levels for a 512 × 512 image and 4 levels for a 256 × 256 image) and 8 orientations. We assumed a FoGSM model for each subband, with a 5 × 5 neighborhood for the field u and a 1 × 1 neighborhood for the field log z. These sizes were chosen to provide a reasonable combination of performance and computational efficiency.
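The resulting optimization of (9) interleaves three updates per sweep. The driver below is a schematic, non-authoritative sketch of that control flow; `wiener_step`, `update_z`, and `fit_params` are hypothetical callables standing in for the Wiener filter in x, the gradient-based multiplier update of Section 2, and the maximum pseudo-likelihood parameter fit with projection.

```python
import numpy as np

def denoise_subband(y, wiener_step, update_z, fit_params, n_iter=20):
    """Coordinate-ascent driver for Eq. (9); the callables are hypothetical."""
    x = y.copy()                        # schematic init (e.g. from a local-GSM denoise)
    z = np.ones_like(y)
    Qu = Qz = None
    mu = 0.0
    for _ in range(n_iter):
        x = wiener_step(y, z, Qu)       # argmax_x log p(x | z, y): Wiener filter, linear in y
        z = update_z(x, Qu, Qz, mu)     # argmax_z log p(x, z): gradient descent step
        Qu, Qz, mu = fit_params(x, z)   # pseudo-likelihood fit + positive-definite projection
    return x                            # the denoised subband estimate
```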
We then estimate the optimal x with the algorithm described previously, with the initial values of x and z computed from the subband denoised with the local GSM model [12]. Shown in Fig. 4 is an example of denoising the "boat" image corrupted with simulated additive white Gaussian noise of strength σ = 50, corresponding to a peak-signal-to-noise ratio (PSNR) of 14.15 dB. We compare this with the local GSM method in [12], which, assuming a local GSM model for the neighborhood consisting of 3 × 3 spatial neighbors plus the parent in the next coarsest scale, computes a Bayes least squares estimate of each coefficient conditioned on its surrounding neighborhood. The FoGSM denoising achieves a substantial improvement (+0.68 in PSNR) and is seen to exhibit better contrast and continuation of oriented features (see Fig. 4). On the other hand, FoGSM introduces some noticeable artifacts in low-contrast areas, which are caused by numerical instability at locations with small z. We find that the improvement in terms of PSNR is consistent across photographic images and noise levels, as reported in Table 1. But even with a restricted neighborhood for the multiplier field, this PSNR improvement does come at a substantial computational cost. As a rough indication, running on a PowerPC G5 workstation with a 2.3 GHz processor and 16 GB of RAM, using unoptimized MATLAB (version R14) code, denoising a 512 × 512 image takes on average 4.5 hours (averaged over 5 images), and denoising a 256 × 256 image takes on average 2.4 hours (averaged over 2 images), to a convergence precision producing the reported results. Our preliminary investigation indicates that the slow running time is mainly due to the nature of coordinate ascent and the landscape of (9), which requires many iterations to converge.

5 Discussion

We have introduced fields of Gaussian scale mixtures as a flexible and efficient tool for modeling the statistics of wavelet coefficients of photographic images. We developed a feasible (although admittedly computationally costly) parameter estimation method, and showed that samples synthesized from the fitted FoGSM model are able to capture structures in the marginal and joint wavelet statistics of photographic images. Preliminary results of applying FoGSM to image denoising indicate substantial improvements over the state-of-the-art methods based on the local GSM model.

Although FoGSM has a structure that is similar to the local scale mixture model [9, 10], there is a fundamental difference between them. In FoGSM, hGMRF structures are enforced on u and log z, while the local scale mixture models impose minimal statistical structure on these variables. Because of this, our model easily extends to images of arbitrary size, while the local scale mixture models are essentially confined to describing small image patches (the curse of dimensionality, and the increase in computational cost, prevent one from scaling the patch size up). On the other hand, the close relation to the Gaussian MRF makes the analysis and computation of FoGSM significantly easier than for other non-Gaussian MRF based image models [6, 7, 5].

We envision, and are currently working on, a number of model improvements. First, the model should benefit from the introduction of more general Markov neighborhoods, including wavelet coefficients from subbands at other scales and orientations [4, 12], since the current model is clearly not accounting for these dependencies (see Fig. 3).
Secondly, the log transformation used to derive the multiplier field from a hGMRF is somewhat ad hoc, and we believe that substitution of another nonlinear transformation (e.g., a power law [14]) might lead to a more accurate description of the image statistics. Thirdly, the current denoising method estimates the model parameters during the process of denoising, which produces image-adaptive model parameters. We are exploring the possibility of using a set of generic model parameters learned a priori on a large set of photographic images, so that a generic statistical model for all photographic images based on FoGSM can be built. Finally, there exist residual inhomogeneous structures in the log z field (see Fig. 1) that can likely be captured by explicitly incorporating local orientation [21] or phase into the model. Finding tractable models and algorithms for handling such circular variables is challenging, but we believe their inclusion will result in substantial improvements in modeling and in denoising performance.

Table 1. Denoising results with FoGSM on different images and different noise levels. Shown in the table are PSNRs (20 log₁₀(255/σₑ), where σₑ is the standard deviation of the error) of the denoised images; in parentheses are the PSNRs of the same images denoised with a local GSM model [12].

  σ/PSNR        10/28.13        25/20.17        50/14.15        100/8.13
  Barbara       35.01 (34.01)   30.10 (29.07)   26.40 (25.45)   23.01 (22.61)
  barco         35.05 (34.42)   30.44 (29.73)   27.36 (26.63)   24.44 (23.84)
  boat          34.12 (33.58)   30.03 (29.34)   27.01 (26.35)   24.20 (23.79)
  fingerprint   33.28 (32.45)   28.45 (27.44)   25.11 (24.13)   21.78 (21.21)
  Flintstones   32.47 (31.78)   28.29 (27.48)   24.82 (24.02)   21.24 (20.49)
  house         35.63 (35.27)   31.64 (31.32)   28.51 (28.23)   25.33 (25.31)
  Lena          35.94 (35.60)   32.11 (31.70)   29.12 (28.62)   26.12 (25.77)
  peppers       34.38 (33.73)   29.78 (29.18)   26.43 (25.93)   23.17 (22.80)

References

[1] P. J. Burt. Fast filter transforms for image processing. Comp. Graph. Image Proc., 16:20–51, 1981.
[2] D. J. Field. Relations between the statistics of natural images and the response properties of cortical cells. J. Opt. Soc. Am., 4(12):2379–2394, 1987.
[3] B. Wegmann and C. Zetzsche. Statistical dependencies between orientation filter outputs used in human vision based image code. In Proc. Visual Comm. and Image Proc., volume 1360, pages 909–922, 1990.
[4] R. W. Buccigrossi and E. P. Simoncelli. Image compression via joint statistical characterization in the wavelet domain. IEEE Trans. on Image Proc., 8(12):1688–1701, 1999.
[5] Y. W. Teh, M. Welling, S. Osindero, and G. E. Hinton. Energy-based models for sparse overcomplete representations. J. of Machine Learning Res., 4:1235–1260, 2003.
[6] S. C. Zhu, Y. Wu, and D. Mumford. Filters, random fields and maximum entropy (FRAME): Towards a unified theory for texture modeling. Int'l J. Comp. Vis., 27(2):107–126, 1998.
[7] S. Roth and M. J. Black. Fields of experts: a framework for learning image priors. In IEEE Conf. on Comp. Vis. and Pat. Rec., volume 2, pages 860–867, 2005.
[8] P. Gehler and M. Welling. Products of "edge-perts". In Adv. in Neural Info. Proc. Systems (NIPS*05). MIT Press, 2006.
[9] M. J. Wainwright and E. P. Simoncelli. Scale mixtures of Gaussians and the statistics of natural images. In Adv. Neural Info. Proc. Sys. (NIPS*99), volume 12, pages 855–861, May 2000.
[10] Y. Karklin and M. S. Lewicki. A hierarchical Bayesian model for learning non-linear statistical regularities in non-stationary natural signals.
Neural Computation, 17(2):397–423, 2005.
[11] A. Hyvärinen, P. O. Hoyer, and M. Inki. Topographic ICA as a model of natural image statistics. In the First IEEE Int'l Workshop on Bio. Motivated Comp. Vis., London, UK, 2000.
[12] J. Portilla, V. Strela, M. J. Wainwright, and E. P. Simoncelli. Image denoising using scale mixtures of Gaussians in the wavelet domain. IEEE Trans. on Image Proc., 12(11):1338–1351, 2003.
[13] J. Romberg, H. Choi, and R. G. Baraniuk. Bayesian tree-structured image modeling using wavelet domain hidden Markov models. IEEE Trans. on Image Proc., 10(7):303–347, 2001.
[14] M. J. Wainwright, E. P. Simoncelli, and A. S. Willsky. Random cascades on wavelet trees and their use in modeling and analyzing natural imagery. Appl. and Comp. Harm. Ana., 11(1):89–123, 2001.
[15] H. Rue and L. Held. Gaussian Markov Random Fields: Theory and Applications. Monographs on Statistics and Applied Probability. Chapman and Hall/CRC, 2005.
[16] W. H. Press, S. A. Teukolsky, W. T. Vetterling, and B. P. Flannery. Numerical Recipes. Cambridge, 2nd edition, 2002.
[17] E. P. Simoncelli and W. T. Freeman. The steerable pyramid: A flexible architecture for multi-scale derivative computation. In IEEE Int'l Conf. on Image Proc., volume 3, pages 444–447, 1995.
[18] D. Ruderman. The statistics of natural images. Network: Comp. in Neural Sys., 5:598–605, 1994.
[19] M. Figueiredo and J. Leitão. Unsupervised image restoration and edge location using compound Gauss-Markov random fields and the MDL principle. IEEE Trans. on Image Proc., 6(8):1089–1122, 1997.
[20] J. Besag. On the statistical analysis of dirty pictures. J. of the Royal Stat. Soc., Series B, 48:259–302, 1986.
[21] D. K. Hammond and E. P. Simoncelli. Image denoising with an orientation-adaptive Gaussian scale mixture model. In Proc. 13th IEEE Int'l Conf. on Image Proc., pages 1433–1436, October 2006.
Hyperparameter Learning for Graph Based Semi-supervised Learning Algorithms

Xinhua Zhang*
Statistical Machine Learning Program, National ICT Australia, Canberra, Australia
and CSL, RSISE, ANU, Canberra, Australia
[email protected]

Wee Sun Lee
Department of Computer Science, National University of Singapore
3 Science Drive 2, Singapore 117543
[email protected]

(* This work was done when the author was at the National University of Singapore.)

Abstract

Semi-supervised learning algorithms have been successfully applied in many applications with scarce labeled data, by utilizing the unlabeled data. One important category is graph based semi-supervised learning algorithms, for which the performance depends considerably on the quality of the graph, or its hyperparameters. In this paper, we deal with the less explored problem of learning the graphs. We propose a graph learning method for the harmonic energy minimization method; this is done by minimizing the leave-one-out prediction error on labeled data points. We use a gradient based method and design an efficient algorithm which significantly accelerates the calculation of the gradient by applying the matrix inversion lemma and using careful pre-computation. Experimental results show that the graph learning method is effective in improving the performance of the classification algorithm.

1 Introduction

Recently, graph based semi-supervised learning algorithms have been used successfully in various machine learning problems including classification, regression, ranking, and dimensionality reduction. These methods create graphs whose vertices correspond to the labeled and unlabeled data, while the edge weights encode the similarity between each pair of data points. Classification is performed using these graphs by labeling unlabeled data in such a way that instances connected by large weights are given similar labels. Example graph based semi-supervised algorithms include min-cut [3], harmonic energy minimization [11], and the spectral graph transducer [8].

The performance of the classifier depends considerably on the similarity measure of the graph, which is normally defined in two steps. Firstly, the weights are defined locally in a pair-wise parametric form using functions that are essentially based on a distance metric, such as radial basis functions (RBF). It is argued in [7] that modeling error can degrade the performance of semi-supervised learning. As the distance metric is an important part of graph based semi-supervised learning, it is crucial to use a good distance metric. In the second step, smoothing is applied globally, typically based on a spectral transformation of the graph Laplacian [6, 10].

There have been only a few existing approaches which address the problem of graph learning. [13] learns a nonparametric spectral transformation of the graph Laplacian, assuming that the weight and distance metric are given. [9] learns the spectral parameters by performing evidence maximization using approximate inference and gradient descent. [12] uses evidence maximization and a Laplace approximation to learn simple parameters of the similarity function. Instead of learning one single good graph, [4] proposed building robust graphs by applying random perturbation and edge removal from an ensemble of minimum spanning trees. [1] combines graph Laplacians to learn a graph.
Closest to our work is [11], which learns different bandwidths for different dimensions by minimizing the entropy on unlabeled data; like the maximum margin motivation in transductive SVMs, the aim there is to get confident labeling of the data by the algorithm.

In this paper, we propose a new algorithm to learn the hyperparameters of the distance metric, or more specifically, the bandwidth for each dimension in the RBF form. In essence, these bandwidths are just model parameters, and normal model selection methods, including k-fold cross validation or leave-one-out (LOO) cross validation in the extreme case, can be used for selecting the bandwidths. Motivated by the same spirit, we base our learning algorithm on the aim of achieving low LOO prediction loss on the labeled data, i.e., each labeled point should be correctly classified by the other labeled points in a semi-supervised style with as high probability as possible. This idea is similar to [5], which learns multiple parameters for SVMs. Since most LOO style algorithms are plagued with prohibitive computational cost, an efficient algorithm is designed. With a simple regularizer, the experimental results show that learning the hyperparameters by minimizing the LOO loss is effective.

2 Graph Based Semi-supervised Learning

Suppose we have a set of labeled data points {(x_i, y_i)} for i ∈ L ≜ {1, ..., l}. In this paper, we only consider binary classification, i.e., y_i ∈ {1 (positive), 0 (negative)}. In addition, we also have a set of unlabeled data points {x_i} for i ∈ U ≜ {l+1, ..., l+u}. Denote n ≜ l + u. Suppose the dimensionality of the input feature vectors is m.

2.1 Graph Based Classification Algorithms

One of the earliest graph based semi-supervised learning algorithms is min-cut by [3], which minimizes:

  E(f) ≜ Σ_{i,j} w_ij (f_i − f_j)²   (1)

where the nonnegative w_ij encodes the similarity between instances i and j. The label f_i is fixed to y_i ∈ {1, 0} if i ∈ L. The optimization variables f_i (i ∈ U) are constrained to {1, 0}. This combinatorial optimization problem can be efficiently solved by the max-flow algorithm.

[11] relaxed the constraint f_i ∈ {1, 0} (i ∈ U) to real numbers. The optimal solution for the unlabeled data's soft labels can be written neatly as:

  f_U = (D_U − W_UU)^{−1} W_UL f_L = (I − P_UU)^{−1} P_UL f_L   (2)

where f_L is the vector of soft labels (fixed to y_i) for L, D ≜ diag(d_i) with d_i ≜ Σ_j w_ij, D_U is the submatrix of D associated with the unlabeled data, and P ≜ D^{−1} W. W_UU, W_UL, P_UU, and P_UL are defined by the block partitions:

  W = [ W_LL  W_LU ;  W_UL  W_UU ],   P = [ P_LL  P_LU ;  P_UL  P_UU ].

The solution (2) has a number of interesting properties pointed out by [11]. All f_i (i ∈ U) are automatically bounded by [0, 1], so it is also known as square interpolation. They can be interpreted using a Markov random walk on the graph. Imagine a graph with n nodes corresponding to the n data points. Define the probability of transferring from x_i to x_j as p_ij, which is actually the row-wise normalization of w_ij. The random walk starts from any unlabeled point, and stops once it hits any labeled point (absorbing boundary). Then f_i is the probability of hitting a positive labeled point. In this sense, the labeling of each unlabeled point is largely based on its neighboring labeled points, which helps to alleviate the problem of noisy data. (1) can also be interpreted as a quadratic energy function, and its minimizer is known to be harmonic: f_i (i ∈ U) equals the average of the f_j (j ≠ i) weighted by p_ij. So we call this algorithm Harmonic Energy Minimization (HEM).
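Computed densely, (2) is a single linear solve. A minimal NumPy sketch, ordering the l labeled points first as in the text (an illustration, not the authors' code):

```python
import numpy as np

def harmonic_solution(W, f_L, l):
    """Soft labels on the unlabeled points via Eq. (2):
    f_U = (I - P_UU)^{-1} P_UL f_L, with a dense solve for clarity.
    W is the full n x n weight matrix, labeled points first."""
    P = W / W.sum(axis=1, keepdims=True)       # row-normalized transition matrix
    P_UU, P_UL = P[l:, l:], P[l:, :l]
    u = W.shape[0] - l
    return np.linalg.solve(np.eye(u) - P_UU, P_UL @ f_L)
```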
By (1), f_U is independent of w_ii (i = 1, ..., n), so henceforth we fix w_ii = p_ii = 0. Finally, to translate the soft labels f_i into hard labels pos/neg, the simplest way is thresholding at 0.5, which works well when the two classes are well separated. [11] proposed another approach, called Class Mass Normalization (CMN), to make use of prior information such as the class ratio in the unlabeled data, estimated by that in the labeled data. Specifically, they normalize the soft labels to f_i^+ ≜ f_i / Σ_{j=1}^n f_j as the probabilistic score of being positive, and to f_i^− ≜ (1 − f_i) / Σ_{j=1}^n (1 − f_j) as the score of being negative. Suppose there are r^+ positive points and r^− negative points in the labeled data; then we classify x_i as positive iff f_i^+ r^+ > f_i^− r^−.

2.2 Basic Hyperparameter Learning Algorithms

One of the simplest parametric forms of w_ij is RBF:

  w_ij = exp( − Σ_d (x_{i,d} − x_{j,d})² / σ_d² )   (3)

where x_{i,d} is the d-th component of x_i (and likewise for f_{U,i} in (4)). The bandwidth σ_d has considerable influence on the classification accuracy. HEM uses one common bandwidth for all dimensions, which can be easily selected by cross validation. However, it is desirable to learn a different σ_d for each dimension; this allows a form of feature selection. [11] proposed learning the hyperparameters σ_d by minimizing the entropy on the unlabeled data points (we call it MinEnt):

  H(f_U) = − Σ_{i=1}^u ( f_{U,i} log f_{U,i} + (1 − f_{U,i}) log(1 − f_{U,i}) )   (4)

The optimization is conducted by gradient descent. To prevent numerical problems, they replaced P with P̃ = εU + (1 − ε)P, where ε ∈ [0, 1) and U is the uniform matrix with U_ij = n^{−1}.

3 Leave-one-out Hyperparameter Learning

In this section, we present the formulation and efficient calculation of our graph learning algorithm.

3.1 Formulation and Efficient Calculation

We propose a graph learning algorithm which is similar to minimizing the leave-one-out cross validation error. Suppose we hold out a labeled example x_t and predict its label by using the rest of the labeled and unlabeled examples. Making use of the result in (2), the soft label for x_t is s^T f_U^t (the first component of f_U^t), where

  s ≜ (1, 0, ..., 0)^T ∈ R^{u+1},   f_U^t ≜ (f_0^t, f_{l+1}^t, ..., f_n^t)^T.

Here, the value of f_U^t can be determined by f_U^t = (I − P̃_UU^t)^{−1} P̃_UL^t f_L^t, where

  f_L^t ≜ (f_1, ..., f_{t−1}, f_{t+1}, ..., f_l)^T,   p̃_ij ≜ (1 − ε) p_ij + ε/n,
  P_UU^t ≜ [ p_tt  p_tU ;  p_Ut  P_UU ],   p_Ut ≜ (p_{l+1,t}, ..., p_{n,t})^T,   p_tU ≜ (p_{t,l+1}, ..., p_{t,n}),

and P_UL^t is the matrix whose first row is (p_{t,1}, ..., p_{t,t−1}, p_{t,t+1}, ..., p_{t,l}) and whose remaining rows are (p_{i,1}, ..., p_{i,t−1}, p_{i,t+1}, ..., p_{i,l}) for i = l+1, ..., n; that is, x_t moves to the head of the unlabeled set, and column t is removed from the labeled part.

If x_t is positive, then we hope that f_{U,1}^t is as close to 1 as possible. Otherwise, if x_t is negative, we hope that f_{U,1}^t is as close to 0 as possible. So the cost function to be minimized can be written as:

  Q = Σ_{t=1}^l h_t( f_{U,1}^t ) = Σ_{t=1}^l h_t( s^T (I − P̃_UU^t)^{−1} P̃_UL^t f_L^t )   (5)

where h_t(x) is the cost function for instance t. We denote h_t(x) = h^+(x) for y_t = 1 and h_t(x) = h^−(x) for y_t = 0. Possible choices of h^+(x) include 1 − x, (1 − x)^a, a^{−x}, and − log x, with a > 1. Possible choices of h^−(x) include x, x^a, a^{x−1}, and − log(1 − x). Let Loo_loss(x_t, y_t) ≜ h_t( f_{U,1}^t ). To minimize Q, we use gradient-based optimization methods.
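Before turning to the efficient computation, the criterion (5) can be stated naively as l separate harmonic solves, one per held-out labeled point. The sketch below is a direct (deliberately unoptimized) NumPy rendering, with the h⁺, h⁻ defaults set to the squared losses used later in the experiments:

```python
import numpy as np

def loo_loss(P_tilde, f, l,
             h_pos=lambda x: (1 - x) ** 2,   # h+ for positive held-out points
             h_neg=lambda x: x ** 2):        # h- for negative held-out points
    """Leave-one-out loss Q of Eq. (5), computed naively in O(l u^3).

    P_tilde is the smoothed n x n transition matrix; f holds the labels
    of the first l (labeled) points. For each held-out t, x_t joins the
    unlabeled set (at the head) and its soft label is re-solved."""
    n = P_tilde.shape[0]
    Q = 0.0
    for t in range(l):
        U = [t] + list(range(l, n))                       # x_t first, then unlabeled
        L = [j for j in range(l) if j != t]               # remaining labeled points
        A = np.eye(len(U)) - P_tilde[np.ix_(U, U)]
        fU = np.linalg.solve(A, P_tilde[np.ix_(U, L)] @ f[L])
        Q += h_pos(fU[0]) if f[t] == 1 else h_neg(fU[0])
    return Q
```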
The gradient is, using the matrix identity dX^{−1} = −X^{−1} (dX) X^{−1}:

  ∂Q/∂σ_d = Σ_{t=1}^l h_t'( f_{U,1}^t ) s^T (I − P̃_UU^t)^{−1} ( (∂P̃_UU^t/∂σ_d) f_U^t + (∂P̃_UL^t/∂σ_d) f_L^t ).

Denoting (α^t)^T ≜ h_t'( f_{U,1}^t ) s^T (I − P̃_UU^t)^{−1} and noting P̃ = εU + (1 − ε)P, we have

  ∂Q/∂σ_d = (1 − ε) Σ_{t=1}^l (α^t)^T ( (∂P_UU^t/∂σ_d) f_U^t + (∂P_UL^t/∂σ_d) f_L^t ).   (6)

Since in both P_UU^t and P_UL^t the first row corresponds to x_t, and the i-th row (i ≥ 2) corresponds to x_{i+l−1}, denoting P_UN^t ≜ (P_UL^t  P_UU^t) makes sense, as each row of P_UN^t corresponds to a well defined single data point. Let all notation for P carry over to the corresponding W. We now use sw_i^t ≜ Σ_{k=1}^n w_UN^t(i, k) and Σ_{k=1}^n ∂w_UN^t(i, k)/∂σ_d (i = 1, ..., u+1) to denote the sums of these corresponding rows. Now (6) can be rewritten in ground terms by the following two equations:

  ∂p_U·^t(i, j)/∂σ_d = (sw_i^t)^{−1} ( ∂w_U·^t(i, j)/∂σ_d − p_U·^t(i, j) Σ_{k=1}^n ∂w_UN^t(i, k)/∂σ_d ),

where · can be U or L, and ∂w_ij/∂σ_d = 2 w_ij (x_{i,d} − x_{j,d})² / σ_d³ by (3). The naïve way to calculate the function value Q and its gradient is presented in Algorithm 1. We call the method leave-one-out hyperparameter learning (LOOHL).

Algorithm 1 Naïve form of LOOHL
1: function value Q ← 0, gradient g ← (0, ..., 0)^T ∈ R^m
2: for each t = 1, ..., l (leave-one-out loop for each labeled point) do
3:   f_L^t ← (f_1, ..., f_{t−1}, f_{t+1}, ..., f_l)^T,  f_U^t ← (I − P̃_UU^t)^{−1} P̃_UL^t f_L^t,
     Q ← Q + h_t(f_{U,1}^t),  (α^t)^T ← h_t'(f_{U,1}^t) s^T (I − P̃_UU^t)^{−1}
4:   for each d = 1, ..., m (for all feature dimensions) do
5:     ∂P_UU^t(i,j)/∂σ_d ← (sw_i^t)^{−1} ( ∂w_UU^t(i,j)/∂σ_d − p_UU^t(i,j) Σ_{k=1}^n ∂w_UN^t(i,k)/∂σ_d ),
       where sw_i^t = Σ_{k=1}^n w_UN^t(i,k),  i, j = 1, ..., u+1
6:     ∂P_UL^t(i,j)/∂σ_d ← (sw_i^t)^{−1} ( ∂w_UL^t(i,j)/∂σ_d − p_UL^t(i,j) Σ_{k=1}^n ∂w_UN^t(i,k)/∂σ_d ),
       i = 1, ..., u+1,  j = 1, ..., l−1
7:     g_d ← g_d + (1 − ε) (α^t)^T ( (∂P_UU^t/∂σ_d) f_U^t + (∂P_UL^t/∂σ_d) f_L^t )
8:   end for
9: end for

The computational complexity of the naïve algorithm is expensive: O(lu(mn + u²)) just to calculate the gradient once. Here we assume the cost of inverting a u × u matrix is O(u³). We reduce the two terms of this cost by using the matrix inversion lemma and careful pre-computation.

One part of the cost, O(lu³), stems from inverting I − P̃_UU^t, a (u+1) × (u+1) matrix, l times in (5). We note that for different t, I − P̃_UU^t differs only in the first row and first column. So there exist two vectors α, β ∈ R^{u+1} such that I − P̃_UU^{t₁} = (I − P̃_UU^{t₂}) + eα^T + βe^T, where e = (1, 0, ..., 0)^T ∈ R^{u+1}. With I − P̃_UU^t expressed in this form, we are ready to apply the matrix inversion lemma:

  (A + αβ^T)^{−1} = A^{−1} − A^{−1} α β^T A^{−1} / (1 + β^T A^{−1} α).   (7)

We only need to invert I − P̃_UU^t from scratch for t = 1, and then apply (7) twice for each t ≥ 2. The new total complexity related to matrix inversion is O(u³ + lu²).

The other part of the cost, O(lumn), can be reduced by careful pre-computation. Written out in detail, we have:

  ∂Q/∂σ_d = Σ_{t=1}^l Σ_{i=1}^{u+1} (α_i^t / sw_i^t) [ Σ_{j=1}^{u+1} (∂w_UU^t(i,j)/∂σ_d) f_{U,j}^t + Σ_{j=1}^{l−1} (∂w_UL^t(i,j)/∂σ_d) f_{L,j}^t
            − ( Σ_{k=1}^n ∂w_UN^t(i,k)/∂σ_d ) ( Σ_{j=1}^{u+1} p_UU^t(i,j) f_{U,j}^t + Σ_{j=1}^{l−1} p_UL^t(i,j) f_{L,j}^t ) ]
          ≜ Σ_{i=1}^n Σ_{j=1}^n β_ij ∂w_ij/∂σ_d.

The crucial observation is the existence of the β_ij, which are independent of the dimension index d. Therefore, they can be pre-computed efficiently. Algorithm 2 below presents the efficient approach to gradient calculation.

Algorithm 2 Efficient algorithm for gradient calculation
1: for i, j = 1, ..., n do
2:   for all feature dimensions d on which either x_i or x_j is nonzero do
3:     g_d ← g_d + β_ij · ∂w_ij/∂σ_d
4:   end for
5: end for
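The rank-one identity (7) is the workhorse of the fast inverse maintenance: two applications per held-out point update the stored inverse in O(u²). A small sketch of one such update (an illustration, not the authors' code):

```python
import numpy as np

def rank_one_update(A_inv, alpha, beta):
    """Matrix inversion lemma (7): returns inv(A + alpha beta^T) given inv(A).

    Two such updates turn inv(I - P_UU^t) into the inverse for the next
    held-out point, since consecutive leave-one-out systems differ only
    in the first row and first column."""
    Aa = A_inv @ alpha                 # A^{-1} alpha
    bA = beta @ A_inv                  # beta^T A^{-1}
    return A_inv - np.outer(Aa, bA) / (1.0 + beta @ Aa)
```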
Letting sw_i ≜ Σ_{k=1}^n w_ik and δ(·) be the Kronecker delta, we derive the form of β_ij as:

  β_ij = sw_i^{−1} Σ_{t=1}^l α_{i−l+1}^t ( f_{U,j−l+1}^t − Σ_{k=l+1}^n p_ik f_{U,k−l+1}^t − Σ_{k=1,k≠t}^l p_ik f_k − p_it f_{U,1}^t )   for i > l and j > l;

  β_ij = sw_i^{−1} Σ_{t=1}^l α_{i−l+1}^t ( f_{U,1}^t δ(t = j) + f_j δ(t ≠ j) − Σ_{k=l+1}^n p_ik f_{U,k−l+1}^t − Σ_{k=1,k≠t}^l p_ik f_k − p_it f_{U,1}^t )   for i > l and j ≤ l;

  β_ij = sw_i^{−1} α_1^i ( f_{U,j−l+1}^i − Σ_{k=l+1}^n p_ik f_{U,k−l+1}^i − Σ_{k=1,k≠i}^l p_ik f_k )   for i ≤ l and j > l;

  β_ij = sw_i^{−1} α_1^i ( f_j − Σ_{k=l+1}^n p_ik f_{U,k−l+1}^i − Σ_{k=1,k≠i}^l p_ik f_k )   for i ≤ l and j ≤ l,

and β_ii is fixed to 0 for all i, since we fix w_ii = p_ii = 0. All β_ij can be computed in O(u²l) time, and Algorithm 2 can be completed in O(n²m̄) time, where

  m̄ ≜ 2 n^{−1} (n − 1)^{−1} Σ_{1≤i<j≤n} |{ d ∈ 1...m | x_i or x_j is nonzero on feature d }|.

In many applications such as text classification and image pattern recognition, the data is very sparse and m̄ ≪ m. In sum, the computational cost has been reduced from O(lu(mn + u²)) to O(lnu + n²m̄ + u³). The space cost is mild, at O(n² + nm̄).

4 Regularizing the Graph Learning

Similar to the MinEnt method, purely applying LOOHL can lead to degenerate graphs. In this section, we show two such examples and then propose a simple approach which regularizes the graph learning process.

Figure 1: Examples of degenerate graphs learned by pure LOOHL.

Two degenerate graphs are shown in Figure 1. In example (a), the points with the same x_v coordinate are from the same class. For each labeled point, there is another labeled point from the opposite class which has the same x_h coordinate. So leave-one-out hyperparameter learning will push 1/σ_h to zero and 1/σ_v to infinity, i.e., all points can transfer only horizontally. Therefore the graph will effectively split into six disconnected sub-graphs, each sharing the same x_v coordinate, as shown in (a). So the desired gradual change of label from positive to negative along dimension x_v cannot appear. As a result, the point at the question mark cannot hit any labeled points and cannot be classified. One way to prevent such degenerate graphs is to prevent 1/σ_v from growing too large, e.g., with a regularizer such as Σ_d (1/σ_d)².

In example (b), although the negative points will encourage both horizontal and vertical walks, a horizontal walk will make the leave-one-out error large on the positive points. So the learned 1/σ_v will be far smaller than 1/σ_h, i.e., the result strongly encourages walking in the vertical direction and ignores the information from the horizontal direction. As a result, the point at the question mark will be labeled as positive, although by nearest-neighbor intuition it should be labeled as negative. We notice that the four negative points will be partitioned into two groups, as shown in the figure. In such a case, the regularizer Σ_d (1/σ_d)² will not help with utilizing dimensions that are informative. A different regularizer that encourages the use of more dimensions may be better in this case. One simple regularizer that has this property is to minimize the variance of the inverse bandwidths, Σ_d (1/σ_d − 1/σ̄)², where 1/σ̄ = m^{−1} Σ_d 1/σ_d, assuming that the mean is non-zero.

Table 1: Dataset properties. Sparsity is the average frequency with which features are zero in the whole dataset. The rightmost column gives the size of the whole dataset from which the labeled data in the experiment is sampled. Some data in the text dataset has unknown label, and is thus always used as unlabeled.

It is a priori unclear which regularizer will be better empirically, but for the datasets in our experiments, the minimum variance regularizer is overwhelmingly better, even when useless features are intentionally added to the datasets.
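The two penalties discussed above differ only in whether deviations are measured from zero or from the mean inverse bandwidth; a literal NumPy rendering of both (illustrative only):

```python
import numpy as np

def sum_square_penalty(inv_sigma):
    """Penalty sum_d (1/sigma_d)^2: keeps individual inverse bandwidths
    from growing too large, but does not reward using many dimensions."""
    return float(np.sum(inv_sigma ** 2))

def min_variance_penalty(inv_sigma):
    """Minimum-variance penalty sum_d (1/sigma_d - mean)^2: encourages
    spreading weight across many informative dimensions."""
    return float(np.sum((inv_sigma - inv_sigma.mean()) ** 2))
```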
Since the gradient based optimization can get stuck in local minima, it is advantageous to test several different parameter initializations. With this in mind, we implement a simple approximation to the minimum variance regularizer that tests different parameter initializations as well. We discretize σ̄ and minimize the leave-one-out loss plus Σ_d (1/σ_d − 1/σ̄)², where σ̄ is fixed a priori to several different possible values. We run with each different σ̄ and set all initial σ_d to σ̄. Then we choose the function produced by the value of σ̄ that has the smallest regularized cost function value. This process is similar to restarting from various values to avoid local minima, but now we are also trying different means of the estimated optimal bandwidth at the same time. A similar way to regularize is to use a Gaussian prior with mean σ^{−1} and minimize Q + C Σ_d (1/σ_d − 1/σ)² with respect to σ_d and σ simultaneously.
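The restart scheme can be written as a small wrapper around the optimizer; the sketch below shows only the control flow, with `fit` and `objective` as hypothetical helpers wrapping the gradient optimizer and the regularized criterion:

```python
def loohl_with_restarts(sigma_bar_grid, m, fit, objective):
    """One optimization run per candidate sigma_bar; keep the run with the
    smallest regularized objective. `fit` and `objective` are hypothetical
    stand-ins for the gradient optimizer and the criterion of this section.
    """
    best_val, best_sigma = float("inf"), None
    for sigma_bar in sigma_bar_grid:          # e.g. [0.05, 0.1, 0.15, 0.2, 0.25, 0.3]
        sigma0 = [sigma_bar] * m              # all sigma_d initialized to sigma_bar
        sigma = fit(sigma0, sigma_bar)        # minimize LOO loss + variance penalty
        val = objective(sigma, sigma_bar)
        if val < best_val:
            best_val, best_sigma = val, sigma
    return best_sigma
```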
5 Experimental Results

Using HEM as the base classifier, we compare the test accuracy of three model selection methods: LOOHL, 5-CV (tying all bandwidths and choosing the common value by 5-fold cross validation), and MinEnt, each with both thresholding and CMN. Since the topic of this paper is how to learn the hyperparameters of a graph, we pay more attention to how the performance of a given recognized classifier can be improved by learning the graph than to comparisons between different classifiers' performance, i.e., comparisons with other semi-supervised or supervised learning algorithms.

Ionosphere is from the UCI repository. The other four datasets used in the experiments are from the NIPS 2003 Workshop on feature selection challenge. Each of them has two versions: an original version and a probe version, which adds useless probe features in order to investigate the algorithm's performance in the presence of useless features (though at the current stage we do not use the algorithm as a feature selector). Since the workshop did not provide the original datasets, we downloaded the original datasets from other sites. Our original intention was to use the original versions that we downloaded and to reproduce the probe versions ourselves using the pre-processing described for the NIPS 2003 workshop, so that we could check the performance of the algorithms on datasets with and without redundant features. Unfortunately, we found that with our own pre-processing, the datasets with probes yield far different accuracies compared with the datasets with probes downloaded from the workshop web site. Thus we use the original version and the probe version downloaded from different sources, and the comparison between them should be made with care, though the demonstration of LOOHL's efficacy is not affected. The properties of the five datasets are summarized in Table 1.

We randomly pick the labeled subset L from all labeled data available, under the constraint that both classes must be present in L. The remaining labeled and unlabeled data are used as unlabeled data. For example, by saying |L| = 20 for the text dataset, we mean randomly picking 20 points from the 600 labeled data points as labeled, and labeling the other 1980 points using our algorithm. Finally, we calculate the prediction accuracy on the 580 (originally) labeled points. For other datasets, say cancer, testing is on 180 points, since we know the labels of all points. For each fixed |L|, this random test is conducted 10 times and the average accuracy is reported. Then |L| is varied. We normalized all input feature vectors to have length 1.

Figure 2: Accuracy of original and probe versions, in percentage, vs. number of labeled data. [Panels: (a) 4 vs 9 (original), (b) cancer (original), (c) text (original), (d) thrombin (original), (e) 4 vs 9 (probe), (f) cancer (probe), (g) text (probe), (h) thrombin (probe).]

The initial common bandwidth and the smoothing factor ε in MinEnt are selected by five-fold cross validation. For LOOHL, we fix h⁺(x) = (1 − x)² and h⁻(x) = x². The final objective function is:

  C₁ · Loo_loss_Normal + C₂ · Σ_d (1/σ_d − 1/σ̄)² / m,
  Loo_loss_Normal ≜ (2r⁺)^{−1} Σ_{y_i=1} Loo_loss(x_i, y_i) + (2r⁻)^{−1} Σ_{y_i=0} Loo_loss(x_i, y_i),   (8)

where there are r⁺ positive labeled examples and r⁻ negative labeled examples. For each C₁:C₂ ratio, we run on σ̄ = 0.05, 0.1, 0.15, 0.2, 0.25, 0.3 for all datasets and select the function that corresponds to the smallest objective function value for use in cross validation testing. The final C₁:C₂ value was picked by five-fold cross validation, with discrete levels at 10^{−i}, i = 1, 2, 3, 4, 5, since a strong regularizer is needed given the large number of features (variables) and the much smaller number of labeled points. The optimization solver we use is the Toolkit for Advanced Optimization [2].

From the results in Figure 2 and Figure 3, we can make the following observations and conclusions (observation 4 follows the figures below):

1. LOOHL generally outperforms 5-CV and MinEnt. Both LOOHL+Thrd and LOOHL+CMN outperform 5-CV and MinEnt (regardless of Thrd or CMN) on all datasets except thrombin and ionosphere, where either LOOHL+CMN or LOOHL+Thrd still performs best overall.

2. For 5-CV, CMN is almost always better than thresholding, except on the original form of the cancer and thrombin datasets, where CMN hurts 5-CV. In [11], it is claimed that although the theory of HEM is sound, CMN is still necessary to achieve reasonable performance because the underlying graph is often poorly estimated and may not reflect the classification goal, i.e., one should not rely exclusively on the graph. Since our LOOHL is aimed at learning a good graph, the ideal case is that the graph learned is suitable for our classification task, so that the improvement from CMN will not be large. In this sense, the difference between LOOHL+CMN and LOOHL+Thrd, compared with the difference between 5-CV+CMN and 5-CV+Thrd, can be viewed as an approximate indicator of how well the graph is learned by LOOHL. The efficacy of LOOHL can be clearly observed on the datasets 4vs9, cancer, text, ionosphere, and the original version of thrombin. In these cases, we see that LOOHL+Thrd already achieves high accuracy, and LOOHL+CMN does not offer much improvement, or even hurts performance due to inaccurate class ratio estimation. In fact, LOOHL+Thrd performs reliably well on all datasets. It is thus desirable to learn the bandwidth for each dimension of the feature vector, and there is then no longer any need to post-process using class ratio information.

3. The performance of MinEnt is generally inferior to 5-CV and LOOHL. MinEnt+Thrd has an equal chance of out-performing or losing to 5-CV+Thrd, while 5-CV+CMN is almost always better than MinEnt+CMN. Most of the time, MinEnt+CMN performs significantly better than MinEnt+Thrd, so we can conclude that MinEnt fails to learn a good graph. This may be due to convergence to a poor local minimum, or the idea of minimizing the entropy on unlabeled data may by itself be insufficient.
Figure 4: Accuracy comparison of priors in percentage between P ?2 minimizing sum of square inverse bandwidth d ?d and minimizing variance of inverse bandwidth. 4. For these datasets, assuming low variance of inverse bandwidth with discretization as regularizer is more reasonable than assuming that many features are irrelevant to the classification. This is even true for probe versions of the datasets. Figure 4 shows the comparison. 6 Conclusions In this paper, we proposed learning the graph for graph based semi-supervised learning by minimizing the leave-one-out prediction error, with a simple regularizer. Efficient gradient calculation algorithms are designed and the empirical result is encouraging. Acknowledgements This work is partially funded by the Singapore-MIT Alliance. National ICT Australia is funded through the Australian Government?s Backing Australia?s Ability initiative, in part through the Australian Research Council. References [1] Andreas Argyriou, Mark Herbster, and Massimiliano Pontil. Combining Graph Laplacians for SemiSupervised Learning. In NIPS 2005, Vancouver, Canada, 2005. [2] Steven Benson, Lois McInnes, Jorge Mor?e, and Jason Sarich. TAO User Manual ANL/MCS-TM-242, http://www.mcs.anl.gov/tao, 2005. [3] Avrin Blum, and Shuchi Chawla. Learning From Labeled and Unlabeled Data using Graph Mincuts. In ICML 2001. ? Carreira-Perpi?na? n, and Richard S. Zemel. Proximity Graphs for Clustering and Manifold Learn[4] Miguel A ing. In NIPS 2004. [5] Olivier Chapelle, Vladimir Vapnik, Olivier Bousquet, and Sayan Mukherjee. Choosing Multiple Parameters for Support Vector Machines. Machine Learning, 46, 131?159, 2002. [6] Olivier Chapelle, Jason Weston, and Bernhard Sch?olkopf. Cluster Kernels for Semi-Supervised Learning. In NIPS 2002. [7] Fabio G. Cozman, Ira Cohen, and Marcelo C. Cirelo. Semi-Supervised Learning of Mixture Models and Bayesian Networks. In ICML 2003. [8] Thorsten Joachims. Transductive Learning via Spectral Graph Partitioning. In ICML 2003. [9] Ashish Kapoor, Yuan Qi, Hyungil Ahn, and Rosalind Picard. Hyperparameter and Kernel Learning for Graph Based Semi-Supervised Classification. In NIPS 2005. [10] Alexander Smola, and Risi Kondor. Kernels and Regularization on Graphs. In COLT 2003. [11] Xiaojin Zhu, Zoubin Ghahramani, and John Lafferty. Semi-Supervised Learning Using Gaussian Fields and Harmonic Functions. In ICML 2003. [12] Xiaojin Zhu, John Lafferty, and Zoubin Ghahramani. Semi-Supervised Learning: From Gaussian Fields to Gaussian Processes. CMU Technical Report CMU-CS-03-175. [13] Xiaojin Zhu, Jaz Kandola, Zoubin Ghahramani, and John Lafferty. Non-parametric Transforms of Graph Kernels for Semi-Supervised Learning. In NIPS 2004.
Cross-Validation Optimization for Large Scale Hierarchical Classification Kernel Methods

Matthias W. Seeger
Max Planck Institute for Biological Cybernetics
P.O. Box 2169, 72012 Tübingen, Germany

Abstract
We propose a highly efficient framework for kernel multi-class models with a large and structured set of classes. Kernel parameters are learned automatically by maximizing the cross-validation log likelihood, and predictive probabilities are estimated. We demonstrate our approach on large scale text classification tasks with hierarchical class structure, achieving state-of-the-art results in an order of magnitude less time than previous work.

1 Introduction
In many real-world statistical problems, we would like to fit a model with a large number of dependent variables to a training sample with very many cases. For example, in multi-way classification problems with a structured label space, modern applications demand predictions on thousands of classes, and very large datasets become available. If n and C denote dataset size and number of classes respectively, nonparametric kernel methods like SVMs or Gaussian processes typically scale superlinearly in nC, if dependencies between the latent class functions are properly represented. Furthermore, most large scale kernel methods proposed so far refrain from solving the problem of learning hyperparameters (kernel or loss function parameters). The user has to run cross-validation schemes, which require frequent human interaction and are not suitable for learning more than a few hyperparameters.

In this paper, we propose a general framework for learning in probabilistic kernel classification models. While the basic model is standard, a major feature of our approach is the high computational efficiency with which the primary fitting (for fixed hyperparameters) is done, allowing us to deal with hundreds of classes and thousands of datapoints within a few minutes. The primary fitting scales linearly in C, and depends on n mainly via a fixed number of matrix-vector multiplications (MVM) with n × n kernel matrices. In many situations, these MVM primitives can be computed very efficiently, as will be demonstrated. Furthermore, we optimize hyperparameters automatically by minimizing the cross-validation log likelihood, making use of our primary fitting technology as inner loop in order to compute the CV criterion and its gradient. Our approach can be used to learn a large number of hyperparameters and does not need user interaction.

Our framework is generally applicable to structured label spaces, which we demonstrate here for hierarchical classification of text documents. The hierarchy is represented through an ANOVA setup. While the C latent class functions are fully dependent a priori, the scaling of our method stays within a factor of two compared to unstructured classification. We test our framework on the same tasks treated in [1], achieving comparable results in at least an order of magnitude less time. Our method estimates predictive probabilities for each test point, which can allow better predictions w.r.t. loss functions different from zero-one.

The primary fitting method is given in Section 2, the extension to hierarchical classification in Section 3. Hyperparameter learning is discussed in Section 4. Computational details are provided in Section 5. We present experimental results in Section 6. Our highly efficient implementation is publicly available, as project klr in the LHOTSE¹ toolbox for adaptive statistical models.
2 Penalized Multiple Logistic Regression

Our problem is to predict y ∈ {1,...,C} from x ∈ X, given some i.i.d. data D = {(x_i, y_i) | i = 1,...,n}. We use zero-one coding, i.e. y_i ∈ {0,1}^C, 1^T y_i = 1. We employ the multiple logistic regression model, consisting of C latent (unobserved) class functions u_c feeding into the multiple logistic (or softmax) likelihood P(y_{i,c} = 1 | x_i, u_i) = e^{u_c(x_i)} / Σ_{c'} e^{u_{c'}(x_i)}. We write u_c = f_c + b_c for intercept parameters b_c ∈ R and functions f_c living in a reproducing kernel Hilbert space (RKHS) with kernel K^{(c)}, and consider the penalized negative log likelihood

  Φ = −Σ_{i=1}^n log P(y_i | u_i) + (1/2) Σ_{c=1}^C ||f_c||_c² + (1/2) σ^{−2} ||b||²,

which we minimize for primary fitting. ||·||_c is the RKHS norm for kernel K^{(c)}. Details on such setups can be found in [4]. Our notation for nC vectors² (and matrices) uses the ordering y = (y_{1,1}, y_{2,1}, ..., y_{n,1}, y_{1,2}, ...). We set u = (u_c(x_i)) ∈ R^{nC}. ⊗ denotes the Kronecker product, 1 is the vector of all ones. Selection indexes I are applied to i only: y_I = (y_{i,c})_{i∈I,c} ∈ R^{|I|C}.

Since the likelihood depends on the f_c only through the f_c(x_i), every minimizer of Φ must be a kernel expansion: f_c = Σ_i α_{i,c} K^{(c)}(·, x_i) (representer theorem, see [4]). Plugging this in, the regularizer becomes (1/2) α^T K α + (1/2) σ^{−2} ||b||², where K^{(c)} = (K^{(c)}(x_i, x_j))_{i,j} ∈ R^{n,n} and K = diag(K^{(c)})_c is block-diagonal. We refer to this setup as the flat classification model. The b_c may be eliminated as b = σ² (I ⊗ 1^T) α. Thus, if K̃ = K + σ² (I ⊗ 1)(I ⊗ 1^T), then Φ becomes

  Φ = Φ_lh + (1/2) α^T K̃ α,  Φ_lh = −y^T u + 1^T l,  l_i = log 1^T exp(u_i),  u = K̃ α.  (1)

Φ is strictly convex in α (because the likelihood is log-concave), so it has a unique minimum point α̂. The corresponding kernel expansions are û_c = Σ_i α̂_{i,c} (K^{(c)}(·, x_i) + σ²). Estimates of the conditional probability on test points x_* are obtained by plugging the û_c(x_*) into the likelihood.

We note that this setup can also be seen as a MAP approximation to a Bayesian model, where the f_c are given independent Gaussian process priors, e.g. [7]. It is also related to the multi-class SVM [2], where −log P(y_i | u_i) is replaced by the margin loss −u_{y_i}(x_i) + max_c {u_c(x_i) + 1 − δ_{c,y_i}}. The negative log multiple logistic likelihood has similar properties, but is smooth as a function of u, and the primary fitting of α does not require constrained convex optimization.

We minimize Φ using the Newton-Raphson (NR) algorithm; the details are provided in Section 5. The complexity of our fitting algorithm is dominated by k₁(k₂ + 2) matrix-vector multiplications with K̃, where k₁ is the number of NR iterations and k₂ the number of linear conjugate gradient (LCG) steps for computing each Newton direction. Since NR is a second-order convergent method, k₁ can be chosen small. k₂ determines the quality of each Newton direction; for both, fairly small values are sufficient (see Section 6.2).
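As a concrete illustration of the quantities above, the following NumPy sketch evaluates the likelihood part Φ_lh and its gradient g = π − y under the class-major ordering. This is a minimal sketch with our own names, not the paper's actual implementation, which works through MVM primitives rather than dense vectors:

import numpy as np

def multilogit_parts(u, Y):
    # u: (n*C,) latents in the class-major ordering
    #    (u_1(x_1), ..., u_1(x_n), u_2(x_1), ...); Y: (n, C) zero-one coded labels.
    n, C = Y.shape
    U = u.reshape(C, n).T                              # back to an (n, C) layout
    m = U.max(axis=1, keepdims=True)                   # stabilized log-sum-exp
    l = (m + np.log(np.exp(U - m).sum(axis=1, keepdims=True))).ravel()
    Pi = np.exp(U - l[:, None])                        # pi_{i,c} = P(y_{i,c} = 1 | u_i)
    y = Y.T.ravel()                                    # class-major vectorization
    Phi_lh = -y @ u + l.sum()                          # -y^T u + 1^T l
    g = Pi.T.ravel() - y                               # gradient of Phi_lh w.r.t. u
    return Phi_lh, g

rng = np.random.default_rng(0)
n, C = 5, 3
Y = np.eye(C)[rng.integers(0, C, size=n)]
Phi_lh, g = multilogit_parts(rng.standard_normal(n * C), Y)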
3 Hierarchical Classification

So far we dealt with flat classification, the classes being independent a priori, with block-diagonal kernel matrix K. However, if the label set has a known structure³, we can benefit from representing it in the model. Here we focus on hierarchical classification, the label set {1,...,C} being the leaf nodes of a tree. Classes with lower common ancestor should be more closely related. In this Section, we propose a model for this setup and show how it can be dealt with in our framework with minor modifications and minor extra cost.

In flat classification, the latent class functions u_c are modelled as a priori independent, in that the regularizer (which plays the role of a log prior) is a sum of individual terms for each u_c, without any interaction terms. Analysis of variance (ANOVA) models go beyond this independent design; they have previously been applied to text classification by [1]. Let {0,...,P} be the nodes of the tree, 0 being the root, and the numbers assigned breadth first (1, 2, ... are the root's children). The tree is determined by P and n_p, p = 0,...,P, the number of children of node p. Let L be the set of leaf nodes, |L| = C. Assign a pair of latent functions u_p, ũ_p to each node except the root. The ũ_p are a priori independent, as in flat classification. u_p is the sum of the ũ_{p'}, p' running over the nodes (including p) on the path from the root to p. The class functions to be fed into the likelihood are the u_{L(c)} of the leaves. This setup represents similarities conditioned on the hierarchy. For example, if leaves L(c), L(c') have the common parent p, then u_{L(c)} = u_p + ũ_{L(c)} and u_{L(c')} = u_p + ũ_{L(c')}, so the class functions share the effect u_p. Since regularization forces all independent effects ũ_{p'} to be smooth, the classes c, c' are urged to behave similarly a priori.

Let u = (u_p(x_i))_{i,p}, ũ = (ũ_p(x_i))_{i,p} ∈ R^{nP}. The vectors are related as u = (Γ ⊗ I) ũ, Γ ∈ {0,1}^{P,P}. Importantly, Γ has a simple structure which allows MVM with Γ or Γ^T to be computed easily in O(P), without having to compute or store Γ explicitly. MVM with Γ is described in Algorithm 1, and MVM with Γ^T works in a similar manner [8]. Under the hierarchical model, the class functions u_{L(c)} are strongly dependent a priori. We may represent this prior coupling in our framework by simply plugging in the implied kernel matrix K:

  K = (Γ_{L,·} ⊗ I) K̆ (Γ_{L,·}^T ⊗ I),  (2)

where the inner K̆ is block-diagonal. K is not sparse and certainly not block-diagonal, but the important point is that we are still able to do kernel MVMs efficiently: pre- and post-multiplying by Γ is cheap, and K̆ is block-diagonal just as in the flat case.

Algorithm 1: Matrix-vector multiplication y = Γx
  y ← (). y_0 := 0. s := 0.
  for p = 0,...,P do
    if n_p > 0 (p not a leaf node) then
      Let J(p) = {s+1,...,s+n_p}.
      y ← (y^T, y_p 1^T + x_{J(p)}^T)^T.  s ← s + n_p.
    end if
  end for

We note that the step from flat to hierarchical classification requires minor modifications of existing code only. If code for representing a block-diagonal K is available, we can use it to represent the inner K̆, just replacing C by P.

Footnotes:
¹ See www.kyb.tuebingen.mpg.de/bs/people/seeger/lhotse/.
² In Matlab, reshape(y,n,C) would give the matrix (y_{i,c}) ∈ R^{n,C}.
³ Learning an unknown label set structure may be achieved by expectation maximization techniques, but this is subject to future work.
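A direct Python transcription of Algorithm 1 may help; this is a sketch with our own names, reduced to one scalar per node (the paper's version operates on length-n blocks per node via the ⊗I structure):

import numpy as np

def tree_mvm(x, n_children):
    # y = Gamma x for the breadth-first tree coding: y[p] accumulates the
    # independent effects along the path from the root to node p (y[0] = 0).
    # x[p-1] belongs to non-root node p; n_children[p] counts children of p.
    P = len(x)
    y = np.zeros(P + 1)
    s = 0
    for p in range(P + 1):
        if n_children[p] > 0:                          # p is an inner node
            for j in range(s + 1, s + n_children[p] + 1):
                y[j] = y[p] + x[j - 1]                 # child = parent path sum + own effect
            s += n_children[p]
    return y

# root 0 with children {1, 2}; node 1 with children {3, 4}; leaves are 2, 3, 4
print(tree_mvm(np.array([1.0, 2.0, 4.0, 8.0]), [2, 2, 0, 0, 0]))  # [0. 1. 2. 5. 9.]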
This simplicity carries through to the hyperparameter learning case (see Section 4). The cost of a kernel MVM is increased by a factor P/C < 2, which in most hierarchies in practice is close to 1. However, it would be wrong to claim that hierarchical classification in general comes as cheap as flat classification. The subtle issue is that the primary fitting becomes more costly, precisely because there is more coupling between the variables. In the flat case, the Hessian of Φ is close to block-diagonal. The LCG algorithm to compute Newton directions converges quickly, because it nearly decomposes into C independent ones, and fewer NR steps are required (see Section 5). In the hierarchical case, both LCG and NR need more iterations to attain the same accuracy. In numerical mathematics, much work has been done to approximately decouple linear systems by preconditioning. In some of these strategies, knowledge about the structure of the system matrix (in our case: the hierarchy) can be used to drive preconditioning. An important point for future research is to find a good preconditioning strategy for the system of Eq. 5. However, in all our experiments so far the fitting of the hierarchical model took less than twice the time required for the flat model on the same task. Some further extensions, such as learning with incomplete label information, are discussed in [8].

4 Hyperparameter Learning

In any model of interest, there will be free hyperparameters h, for example parameters of the kernels K^{(c)}. These were assumed to be fixed in the primary fitting method introduced in Section 2. In this Section, we describe a scheme for learning h which makes use of the primary fitting algorithm as inner loop. Note that such nested strategies are commonplace in Bayesian statistics, where (marginal) inference is typically used as a subroutine for parameter learning.

Recall that primary fitting consists of minimizing Φ of Eq. 1 w.r.t. α. If we minimize Φ w.r.t. h as well, we run into the problem of overfitting. A common remedy is to minimize the negative cross-validation log likelihood Ψ instead. Let {I_k} be a partition of {1,...,n}, with J_k = {1,...,n} \ I_k, and let Φ_{J_k} = u_{[J_k]}^T ((1/2) α_{[J_k]} − y_{J_k}) + 1^T l_{[J_k]} be the primary criterion on the subset J_k of the data. Here, u_{[J_k]} = K̃_{J_k} α_{[J_k]}. The α_{[J_k]} are independent variables, not part of a common α. The CV criterion is

  Ψ = Σ_k Ψ_{I_k},  Ψ_{I_k} = −y_{I_k}^T u_{[I_k]} + 1^T l_{[I_k]},  u_{[I_k]} = K̃_{I_k,J_k} α_{[J_k]},  (3)

where α_{[J_k]} minimizes Φ_{J_k}. Since for each k, we fit and evaluate on disjoint parts of y, Ψ is an unbiased estimator of the test negative log likelihood, and minimizing Ψ should be robust to overfitting.

In order to select h, we pick a fixed partition at random, then do gradient-based minimization of Ψ w.r.t. h. To this end, we keep the set {α_{[J_k]}} of primary variables, and iterate between re-fitting those for each fold I_k, and computing Ψ and ∇_h Ψ. The latter can be determined analytically, requiring us to solve a linear system with the Hessian matrix I + V_{[J_k]}^T K̃_{J_k} V_{[J_k]} already encountered during primary fitting (see Section 5). This means that the same LCG code used to compute Newton directions there can be applied here in order to compute the gradient of Ψ. The details are given in Section 5.

As for the complexity, suppose there are q folds. The update of the α_{[J_k]} requires q primary fitting applications, but since they are initialized with the previous values α_{[J_k]}, they do converge very rapidly, especially during later outer iterations. Computing Ψ based on the α_{[J_k]} comes basically for free. The gradient computation decomposes into two parts: accumulation, and kernel derivative MVMs. The accumulation part requires solving q systems of size ((q − 1)/q)·nC, thus q·k₃ kernel MVMs on the K̃_{J_k} if linear conjugate gradients (LCG) is used, k₃ being the number of LCG steps. We also need two buffer matrices E, F of q·nC elements each. Note that the accumulation step is independent of the number of hyperparameters.
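For concreteness, the CV criterion of Eq. (3) can be evaluated as follows. This is a sketch with our own names; the kernel matrix is formed densely purely for illustration (the paper never forms it, accessing it via MVM primitives):

import numpy as np

def cv_criterion(K_tilde, alphas, Y, folds):
    # Psi of Eq. (3). K_tilde: full (nC, nC) matrix in class-major order;
    # alphas[k] is alpha_[J_k]; folds is a list of held-out index arrays I_k.
    n, C = Y.shape
    cm = lambda I: (np.arange(C)[:, None] * n + np.asarray(I)[None, :]).ravel()
    Psi = 0.0
    for I, a in zip(folds, alphas):
        J = np.setdiff1d(np.arange(n), I)
        uI = K_tilde[np.ix_(cm(I), cm(J))] @ a         # u_[I_k] = K~_{I_k,J_k} alpha_[J_k]
        UI = uI.reshape(C, len(I)).T
        m = UI.max(axis=1, keepdims=True)              # stabilized l_[I_k]
        lI = m.ravel() + np.log(np.exp(UI - m).sum(axis=1))
        Psi += -Y[np.asarray(I)].T.ravel() @ uI + lI.sum()
    return Psi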
The kernel derivative MVM part consists of q derivative MVM calls for each independent component of h; see Section 5.1. As opposed to the accumulation part, this part consists of a simple large matrix operation and can be run very efficiently using specialized numerical linear algebra code. As shown in Section 5, the extension of hyperparameter learning to the hierarchical case of Section 3 is simply done by wrapping the accumulation part, the coding and additional memory effort being minimal. Given a method for computing Ψ and ∇_h Ψ, we plug these into a custom optimizer such as Quasi-Newton in order to learn h.

5 Computational Details

In this Section, we provide details for the general plan laid out above. It is precisely these which characterize our framework and allow us to apply a standard model to domains beyond its usual applications, but of interest to Machine Learning.

Recall Section 2. We minimize Φ by choosing search directions s, and doing line minimizations along α + λs, λ > 0. For the latter, we maintain the pair (α, u), u = K̃α. We have:

  ∇_u Φ = π − y + α,  π = exp(u − 1 ⊗ l),  i.e. π_{i,c} = P(y_{i,c} = 1 | u_i).  (4)

Given (α, u), Φ and ∇_u Φ can be computed in O(nC), without requiring MVMs. This suggests to perform the line search in u along the direction s̃ = K̃s; the corresponding α can be constructed from the final λ. Since kernel MVMs are significantly more expensive than these O(nC) operations, the line searches basically come for free! We choose search directions by Newton-Raphson (NR)⁴, since the Hessian of Φ is required anyway for hyperparameter learning. Let D = diag π, P = (1 ⊗ I)(1^T ⊗ I), and W = D − DPD. We have ∇∇_u Φ_lh = W, and g = ∇_u Φ_lh = π − y from Eq. 4. The NR system is (I + W K̃) α' = W u − g, with the NR direction being s = α' − α. If V = (I − DP) D^{1/2}, then W = V V^T, because (1^T ⊗ I) D (1 ⊗ I) = I. We see that α' = V β (using (1^T ⊗ I) g = 0), and we can obtain it from the equivalent symmetric system

  (I + V^T K̃ V) β = V^T u − D^{−1/2} (π − y),  α' = V β  (5)

(details are in [8]). Note that P x = (Σ_{c'} x^{(c')})_c, so that MVM with V can be done in O(nC). The NR direction is obtained by solving this system approximately by the linear conjugate gradients (LCG) method, requiring a MVM with the system matrix in each iteration, thus a single MVM with K̃. Our implementation includes diagonal preconditioning and numerical stability safeguards [8]. The NR system need not be solved to high accuracy (see Section 6.2). Initially, β = D^{−1/2} α, because then V β = α if only (1^T ⊗ I) α = 0, which is true if the initial α fulfils it.

We now show how to compute the gradient ∇_h Ψ for the CV criterion Ψ (Eq. 3). Note that α_{[J]} is determined by the stationary equation α_{[J]} + g_{[J]} = 0. Taking the derivative gives dα_{[J]} = −W_{[J]} ((dK̃_J) α_{[J]} + K̃_J (dα_{[J]})). We obtain a system for dα_{[J]} which is symmetrized as above: (I + V_{[J]}^T K̃_J V_{[J]}) β = −V_{[J]}^T (dK̃_J) α_{[J]}, dα_{[J]} = V_{[J]} β. Also, dΨ_I = (π_{[I]} − y_I)^T ((dK̃_{I,J}) α_{[J]} + K̃_{I,J} (dα_{[J]})). With

  s = I_{·,I} (π_{[I]} − y_I) − I_{·,J} V_{[J]} (I + V_{[J]}^T K̃_J V_{[J]})^{−1} V_{[J]}^T K̃_{J,I} (π_{[I]} − y_I),

we have that dΨ_I = (I_{·,J} α_{[J]})^T (dK̃) s. If we collect these vectors as columns of E, F ∈ R^{nC,q}, we have that dΨ = tr E^T (dK̃) F. In the hierarchical setup, we use Eq. 2: Ĕ = (Γ_{L,·}^T ⊗ I) E ∈ R^{nP,q}, F̆ accordingly, then dΨ = tr Ĕ^T (dK̆) F̆. Here, we build E, F in the buffers allocated for Ĕ, F̆, then transform them later in place.

⁴ Initial experiments with conjugate gradients in α gave very slow convergence, due to poor conditioning, but experiments with a different dual criterion are in preparation.
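The O(nC) structure of W = D − DPD and V = (I − DP) D^{1/2} is easy to exploit in code. A sketch with our own names (class-major vectors reshaped to (n, C); this simply instantiates the identities above, not the paper's implementation):

import numpy as np

def mvm_W(Pi, x):
    # W x with W = D - D P D in O(nC); Pi: (n, C) probabilities, x: (n*C,) class-major.
    n, C = Pi.shape
    DX = Pi * x.reshape(C, n).T                        # D x
    s = DX.sum(axis=1, keepdims=True)                  # sum over classes: (1^T kron I) D x
    return (DX - Pi * s).T.ravel()                     # D x - D P D x

def mvm_V(Pi, x):
    # V x with V = (I - D P) D^{1/2}
    n, C = Pi.shape
    H = np.sqrt(Pi) * x.reshape(C, n).T                # D^{1/2} x
    return (H - Pi * H.sum(axis=1, keepdims=True)).T.ravel()

def mvm_Vt(Pi, x):
    # V^T x = D^{1/2} (I - P D) x
    n, C = Pi.shape
    X = x.reshape(C, n).T
    s = (Pi * X).sum(axis=1, keepdims=True)            # P D x, one value per case
    return (np.sqrt(Pi) * (X - s)).T.ravel()

rng = np.random.default_rng(1)
Pi = rng.random((4, 3)); Pi /= Pi.sum(axis=1, keepdims=True)
x = rng.standard_normal(12)
assert np.allclose(mvm_W(Pi, x), mvm_V(Pi, mvm_Vt(Pi, x)))   # checks W = V V^T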
We finally mention some of the computational 'tricks' without which we could not have dealt with the largest tasks in Section 6.2 (for section B, a single nC vector requires 88M of memory). For the linear kernel (see Section 5.1), the main primitive A ↦ X X^T A can be coded very efficiently using a standard sparse matrix format for X. If A is stored row-major (a_{1,1}, a_{1,2}, ...), the computation becomes faster by a factor of 4 to 6 compared to the standard column-major format⁵. For hyperparameter learning, we work on subsets J_k and need MVMs with K̃_{J_k}. 'Covariance representation shuffling' permutes the representation s.t. K̃_{J_k} sits in the upper left part, and MVM can use flat rather than indexed code, which is many times faster. We also share memory blocks of size nC between LCG, gradient accumulation, and line searches, keeping the overall memory requirements at r·nC for a small constant r, and avoiding frequent reallocations.

5.1 Matrix-Vector Multiplication

MVM with K̃ is the bottleneck of our framework, and all efforts should be concentrated on this primitive. We can tap into much prior work in numerical mathematics. With many classes C, we may share kernels: K^{(c)} = v_c M^{(l_c)}, with v_c > 0 variance parameters and M^{(l)} independent correlation functions. Our generic implementation stores two symmetric matrices M^{(l)} in an n × n buffer. The linear kernel K^{(c)}(x, x') = v_c x^T x' is frequently used for text classification (see Section 6.2). If the data matrix X is sparse, kernel MVM can be done in much less than the generic O(C n²), typically in O(C n), requiring O(n) storage for X only, even if the dimension of x is way beyond n. If the K^{(c)} are isotropic kernels (depending on ||x − x'|| only) and the x are low-dimensional, MVM with K^{(c)} can be approximated using specialized nearest neighbour data structures such as KD trees [12, 9]. Again, the MVM cost is typically O(C n) in this case. For general kernels whose kernel matrices have a rapidly decaying eigenspectrum, one can approximate MVM by using low-rank matrices instead of the K^{(c)} [10], whence MVM is O(C n d), d the rank.

In Section 4 we also need MVM with the derivatives (∂/∂h_j) K^{(c)}. Note that (∂/∂ log v_c) K^{(c)} = K^{(c)}, reducing to kernel MVM. For isotropic kernels, K^{(c)} = f(A), a_{i,j} = ||x_i − x_j||, so (∂/∂h_j) K^{(c)} = g_j(A). If KD trees are used to approximate A, they can be used equivalently (and with little additional cost) for computing derivative MVMs.

⁵ The innermost vector operations work on contiguous chunks of memory, rather than strided ones, thus supporting cacheing or vector functions of the processor.
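For the sparse linear kernel, the primitive A ↦ X X^T A never forms the n × n kernel matrix. A SciPy-based sketch of the idea (names ours; the row-major storage trick above is delegated to the library here):

import numpy as np
from scipy import sparse

def linear_kernel_mvm(X, A, v):
    # MVM with the block-diagonal linear kernel, K^{(c)} = v_c X X^T, applied to
    # the columns of A (one per class) without ever forming the n x n matrix;
    # cost is O(C * nnz(X)) rather than O(C n^2).
    return (X @ (X.T @ A)) * v[None, :]

X = sparse.random(2000, 40000, density=2e-4, format="csr", random_state=0)
A = np.random.default_rng(0).standard_normal((2000, 8))
B = linear_kernel_mvm(X, A, np.full(8, 5.0))           # result has shape (2000, 8)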
6 Experiments

In this Section, we provide experimental results for our framework on data from remote sensing, and on a set of large text classification tasks with very many classes; the latter are hierarchical.

6.1 Flat Classification: Remote Sensing

We use the satimage remote sensing task from the statlog repository⁶. This task has been used in the extensive SVM multi-class study of [5], where it is among the datasets on which the different methods show the most variance. It has n = 4435 training cases, m = 2000 test cases, and C = 6 classes. We use the isotropic Gaussian (RBF) kernel

  K^{(c)}(x, x') = v_c exp(−(w_c / (2d)) ||x − x'||²),  v_c, w_c > 0,  x, x' ∈ R^d.  (6)

We compare the methods mc-sep (ours with separate kernels for each class; 12 hyperparameters), mc-tied (ours with a single shared kernel; 2 hyperparameters), and 1rest (one-against-rest: C binary classifiers are trained separately to discriminate c from the rest, and voted by log probability upon prediction; 12 hyperparameters). Note that 1rest is arguably the most efficient method which can be used for multi-class, because its binary classifiers can be fitted separately and in parallel. Even if run sequentially, 1rest requires less memory by a factor of C than a joint multi-class method. We use our 5-fold CV criterion Ψ for each method. Results here are averaged over ten randomly drawn 5-partitions of the training set (the same partitions are used for the different methods). The test error (in percent) of mc-sep is 7.81 vs. 8.01 for 1rest. The result for mc-sep is state-of-the-art; for example, the best SVM technique tested in [5] attained 7.65, and SVM one-against-rest attained 8.30 in this study. Note that while 1rest also may choose 12 independent kernel parameters, it does not make good use of this possibility, as opposed to mc-sep. mc-tied has test error 8.37, suggesting that tying kernels leads to significant degradation. ROC curves for the different methods are given in [8], showing that mc-sep also profits from estimating the predictive probabilities in a better way.

6.2 Hierarchical Classification: Patent Text Classification

We use the WIPO-alpha collection⁷ previously studied in [1], where patents (title and claim text) are to be classified w.r.t. the standard taxonomy IPC, a tree with 4 levels and 5229 nodes. Sections A, B, ..., H form the first level. As in [1], we concentrate on the 8 subtasks rooted at the sections, ranging from D (n = 1140, C = 160, P = 187) to B (n = 9794, C = 1172, P = 1319). We use linear kernels (see Section 5.1) with variance parameters v_c. All experiments are averaged over three training/test splits, different methods using the same ones. Ψ is used with a different 5-partition per section and split, the same across all methods.

Our method outputs a predictive p_j ∈ R^C for each test case x_j. The standard prediction y(x_j) = argmax_c p_{j,c} maximizes expected accuracy; classes are ranked as r_j(c) ≤ r_j(c') iff p_{j,c} ≥ p_{j,c'}. The test scores are the same as in [1]: accuracy (acc) m^{−1} Σ_j I_{y(x_j)=y_j}, precision (prec) m^{−1} Σ_j r_j(y_j)^{−1}, and parent accuracy (pacc) m^{−1} Σ_j I_{par(y(x_j))=par(y_j)}, par(c) being the parent of L(c). Let Δ(c, c') be half the length of the shortest path between leaves L(c), L(c'). The taxo-loss (taxo) is m^{−1} Σ_j Δ(y(x_j), y_j). These scores are motivated in [1]. For taxo-loss and parent accuracy, we better choose y(x_j) to minimize expected loss⁸, different from the standard prediction.

We compare methods F1, F2, H1, H2 (F: flat; H: hierarchical). F1: all v_c shared (1); H1: v_c shared across each level of the tree (3). F2, H2: v_c shared across each subtree rooted at the root's children (A: 15, B: 34, C: 17, D: 7, E: 7, F: 17, G: 12, H: 5). Recall that there are 3 accuracy parameters. For hyperparameter learning: k₁ = 8, k₂ = 4, k₃ = 15 (F1, F2); k₁ = 10, k₂ = 4, k₃ = 25 (H1, H2)⁹.

⁶ Available at http://www.niaad.liacc.up.pt/old/statlog/.
⁷ Raw data from www.wipo.int/ibis/datasets. Label hierarchy described at www.wipo.int/classifications/en. Thanks to L. Cai, T. Hofmann for providing us with the count data and dictionary. We did Porter stemming, stop word removal, and removal of empty categories. The attributes are bag-of-words over the dictionary of occurring words. All cases x_i were scaled to unit norm.
⁸ For parent accuracy, let p(j) be the node with maximal mass (under p_j) among its children which are leaves; then y(x_j) must be a child of p(j).
⁹ Except for section C, where k₁ = 14, k₂ = 6, k₃ = 35.
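The four test scores defined above can be computed directly from the predictive probabilities. A sketch under the standard argmax rule for all scores (the input encoding is our own assumption; the expected-loss variants for taxo and pacc are omitted):

import numpy as np

def wipo_scores(P_pred, y, parent, delta):
    # P_pred: (m, C) predictive probabilities; y: (m,) true class indices;
    # parent: (C,) parent node of each leaf L(c); delta: (C, C) half path lengths.
    yhat = P_pred.argmax(axis=1)                       # standard prediction
    acc = np.mean(yhat == y)
    order = np.argsort(-P_pred, axis=1)                # classes by decreasing p_{j,c}
    rank = np.argmax(order == y[:, None], axis=1) + 1  # r_j(y_j)
    prec = np.mean(1.0 / rank)
    pacc = np.mean(parent[yhat] == parent[y])
    taxo = np.mean(delta[yhat, y])
    return acc, prec, pacc, taxo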
Table 1: Results on tasks A-H. Methods F1, F2 flat; H1, H2 hierarchical. taxo[0-1] and pacc[0-1] use the argmax_c p_{j,c} rule, rather than minimizing expected loss.

  acc (%)       F1     H1     F2     H2
  A            40.6   41.9   40.5   41.9
  B            32.0   32.9   31.7   32.7
  C            33.7   34.7   34.1   34.5
  D            40.0   40.6   39.7   40.8
  E            33.0   34.2   32.8   34.1
  F            31.4   32.4   31.4   32.5
  G            40.1   40.7   40.2   40.7
  H            39.3   39.6   39.4   39.7

  prec (%)      F1     H1     F2     H2
  A            51.6   53.4   51.4   53.4
  B            41.8   43.8   41.6   43.7
  C            45.2   46.6   45.4   46.4
  D            52.4   54.1   52.2   54.3
  E            45.1   47.1   45.0   47.1
  F            42.8   44.9   42.8   45.0
  G            51.2   52.5   51.3   52.5
  H            52.4   53.3   52.5   53.4

  pacc (%)      F1     H1     F2     H2
  A            58.9   61.6   58.2   61.5
  B            53.6   56.4   52.7   56.6
  C            58.9   62.6   58.5   62.0
  D            64.6   67.0   64.4   67.1
  E            56.0   59.1   56.2   59.2
  F            56.8   59.7   56.8   59.8
  G            58.0   59.7   57.6   59.6
  H            61.6   62.5   61.8   62.5

  taxo          F1     H1     F2     H2
  A            1.27   1.19   1.29   1.19
  B            1.52   1.44   1.55   1.44
  C            1.34   1.26   1.35   1.27
  D            1.19   1.11   1.18   1.11
  E            1.39   1.31   1.38   1.31
  F            1.43   1.34   1.43   1.34
  G            1.32   1.26   1.32   1.26
  H            1.17   1.15   1.17   1.14

  taxo[0-1]     F1     H1     F2     H2
  A            1.28   1.19   1.29   1.18
  B            1.54   1.44   1.56   1.44
  C            1.33   1.26   1.32   1.26
  D            1.20   1.12   1.22   1.12
  E            1.43   1.33   1.44   1.34
  F            1.43   1.34   1.44   1.34
  G            1.32   1.26   1.32   1.26
  H            1.19   1.16   1.19   1.15

  pacc[0-1] (%) F1     H1     F2     H2
  A            57.2   61.3   56.9   61.4
  B            51.9   55.9   51.4   55.9
  C            58.6   61.8   58.9   61.6
  D            63.5   67.1   62.6   67.0
  E            54.0   58.2   53.5   57.9
  F            54.9   58.7   54.6   58.9
  G            56.8   59.2   56.6   58.9
  H            59.9   61.6   60.0   61.8

Table 2: Running times for tasks A-H. Method F1 flat, H1 hierarchical. CV Fold: re-optimization of the α_{[J]} and gradient accumulation for a single fold.

           Final NR (s)        CV Fold (s)
            F1       H1         F1       H1
  A        2030     3873        573      598
  B        3751     8657        873     1720
  C        4237     7422        719     1326
  D        56.3     118.5       9.32     20.2
  E        131.5    203.4       32.2     49.6
  F        1202     2871        426      568
  G        1342     2947        232      579
  H        971.7    1052        146      230

For final fitting: k₁ = 25, k₂ = 12 (F1, F2); k₁ = 30, k₂ = 17 (H1, H2). The optimization is started from v_c = 5 for all methods.

Results are given in Table 1. The hierarchical model outperforms the flat one consistently. While the differences in accuracy and precision are hardly significant (as also found in [1]), they (partly) are in taxo-loss and parent accuracy. Also, minimizing expected loss is consistently better than using the standard rule for the latter, although the differences are very small. H1 and H2 do not perform differently: choosing many different v_c in the linear kernel seems to offer no advantage here (but see Section 6.1). The results are very similar to the ones of [1]. However, for our method, the recommendation in [1] to use v_c = 1 leads to significantly worse results in all scores; the v_c chosen by our methods are generally larger.

In Table 2, we present running times¹⁰ for the final fitting and for a single fold during hyperparameter optimization (5 of them are required for Ψ, ∇_h Ψ). Cai and Hofmann [1] quote a final fitting time of 2200s on the D section, while we require 119s (more than 18 times faster). It is precisely this high efficiency of primary fitting which allows us to use it as inner loop for hyperparameter learning.

7 Discussion

We presented a general framework for very efficient large scale kernel multi-way classification with structured label spaces and demonstrated its features on hierarchical text classification tasks with many classes.
As shown for the hierarchical case, the framework is easily extended to novel structural priors or covariance functions, and while not shown here, it is also easy to extend it to different likelihoods (as long as they are log-concave). We solve the kernel parameter learning problem by optimizing the CV log likelihood, whose gradient can be computed within the framework. Our method provides estimates of the predictive distribution at test points, which may result in better predictions for non-standard losses or ROC curves. Efficient and easily extendable code is publicly available (see Section 1).

An extension to multi-label classification is planned. More advanced label set structures can be addressed, noting that Hessian-vector products can often be computed in about the same way as gradients. An application to label sequence learning is work in progress, which may even be combined with a hierarchical prior. Inferring a hierarchy from data is possible in principle, using expectation maximization techniques (note that the primary fitting can deal with target distributions y_i), as well as incorporating uncertain data.

Empirical Bayesian methods or approximate CV scores for hyperparameter learning have been proposed in [11, 3, 6], but they are orders of magnitude more expensive than our proposal here, and do not apply to a massive number of classes. Many multi-class SVM techniques are available (see [2, 5] for references). Here, fitting is a constrained convex problem, and often fairly sparse solutions (many zeros in α) are found. However, if the degree of sparsity is not large, the first-order conditional gradient methods typically applied can be slow¹¹. SVM methods typically do not come with efficient automatic kernel parameter learning schemes, and they do not provide estimates of predictive probabilities which are asymptotically correct.

¹⁰ Processor time on 64bit 2.33GHz AMD machines.

Acknowledgments
Thanks to Olivier Chapelle for many useful discussions. Supported in part by the IST Programme of the European Community, under the PASCAL Network of Excellence, IST-2002-506778.

References
[1] L. Cai and T. Hofmann. Hierarchical document categorization with support vector machines. In CIKM 13, pages 78-87, 2004.
[2] K. Crammer and Y. Singer. On the algorithmic implementation of multiclass kernel-based vector machines. J. M. Learn. Res., 2:265-292, 2001.
[3] P. Craven and G. Wahba. Smoothing noisy data with spline functions: Estimating the correct degree of smoothing by the method of generalized cross-validation. Numerische Mathematik, 31:377-403, 1979.
[4] P. J. Green and B. Silverman. Nonparametric Regression and Generalized Linear Models. Monographs on Statistics and Probability. Chapman & Hall, 1994.
[5] C.-W. Hsu and C.-J. Lin. A comparison of methods for multi-class support vector machines. IEEE Transactions on Neural Networks, 13:415-425, 2002.
[6] Y. Qi, T. Minka, R. Picard, and Z. Ghahramani. Predictive automatic relevance determination by expectation propagation. In Proceedings of ICML 21, 2004.
[7] M. Seeger. Gaussian processes for machine learning. International Journal of Neural Systems, 14(2):69-106, 2004.
[8] M. Seeger. Cross-validation optimization for structured Hessian kernel methods. Technical report, Max Planck Institute for Biological Cybernetics, Tübingen, Germany, 2006. See www.kyb.tuebingen.mpg.de/bs/people/seeger.
[9] Y. Shen, A. Ng, and M. Seeger. Fast Gaussian process regression using KD-trees. In Advances in NIPS 18, 2006.
[10] A. Smola and P. Bartlett.
Sparse greedy Gaussian process regression. In Advances in NIPS 13, pages 619-625, 2001.
[11] C. K. I. Williams and D. Barber. Bayesian classification with Gaussian processes. IEEE PAMI, 20(12):1342-1351, 1998.
[12] C. Yang, R. Duraiswami, and L. Davis. Efficient kernel machines using the improved fast Gauss transform. In Advances in NIPS 17, pages 1561-1568, 2005.

¹¹ These methods solve a very large number of small problems iteratively, as opposed to ours which does few expensive Newton steps. The latter kind, if feasible at all, often makes better use of hardware features such as cacheing and vector operations, and therefore is the preferred approach in numerical optimization.
2,254
3,045
Multiple timescales and uncertainty in motor adaptation

Konrad P. Körding
Rehabilitation Institute of Chicago, Northwestern University, Dept. PM&R
Chicago, IL 60611
[email protected]

Joshua B. Tenenbaum
Massachusetts Institute of Technology
Cambridge, MA 02139
[email protected]

Reza Shadmehr
Johns Hopkins University
Baltimore, MD 21205
[email protected]

Abstract
Our motor system changes due to causes that span multiple timescales. For example, muscle response can change because of fatigue, a condition where the disturbance has a fast timescale, or because of disease, where the disturbance is much slower. Here we hypothesize that the nervous system adapts in a way that reflects the temporal properties of such potential disturbances. According to a Bayesian formulation of this idea, movement error results in a credit assignment problem: which timescale is responsible for this disturbance? The adaptation schedule influences the behavior of the optimal learner, changing estimates at different timescales as well as the uncertainty. A system that adapts in this way predicts many properties observed in saccadic gain adaptation. It well predicts the timecourses of motor adaptation in cases of partial sensory deprivation and reversals of the adaptation direction.

1 Introduction
Saccades are rapid eye movements that shift the direction of gaze from one target to another. The eyes move so fast [1] that visual feedback can not usually be used during the movement. For that reason, without adaptation, any changes in the properties of the oculomotor plant would lead to inaccurate saccades [2]. Motor gain is the ratio of actual and desired movement distances. If the motor gain decreases to below one, then the nervous system must send a stronger command to produce a movement of the same size. Indeed, it has been observed that if saccades overshoot the target the gain tends to decrease, and if they undershoot, the gain tends to increase. The saccadic jump paradigm [3] is often used to probe such adaptation [4]: while the subject moves its eyes towards a target, the target is moved. This is not distinguishable to the subject from a change in the properties of the oculomotor plant [5]. Using this paradigm it is possible to probe the mechanism that is normally used to adapt to ongoing changes of the oculomotor plant.

1.1 Disturbances to the motor plant
Properties of the oculomotor plant may change due to a variety of disturbances, such as various kinds of fatigue and disease. The fundamental characteristic of these disturbances is that their effects unfold over a wide range of timescales. Here we model each disturbance as a random walk with a characteristic timescale (Figures 1A and B) over which the disturbance is expected to go away:

  disturbance_τ(t + 1) = (1 − 1/τ) disturbance_τ(t) + ε  (1)

where ε is drawn from a mean-zero normal distribution of width σ_τ, and τ is the timescale. The larger τ, the closer (1 − 1/τ) is to 1, and the longer a disturbance typically lasts.

1.2 Parameter choice
For the experiments that we want to explain, the only timescales that matter are those that are not much longer than the overall time of the experiments (because longer ones would already have been integrated out) and not much shorter than the time of an individual saccade (because those would average out). For that reason we chose the distribution of τ to be 30 values exponentially scaled between 1 and 33333 saccades. The distribution of expected gains thus only depends on the distribution of σ_τ, a characterization of how important disturbances are at various timescales.
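To make the generative assumptions concrete, here is a minimal Python sketch that simulates Eq. (1) forward for a bank of timescales; all names are ours, and the per-timescale noise widths follow the σ_τ = c/τ choice made just below:

import numpy as np

def simulate_disturbances(tau, sigma, T, rng=None):
    # Eq. (1): each timescale tau[k] carries an independent random walk that
    # decays with rate (1 - 1/tau[k]) and is driven by N(0, sigma[k]^2) noise.
    rng = np.random.default_rng() if rng is None else rng
    d = np.zeros((T, len(tau)))
    for t in range(1, T):
        d[t] = (1.0 - 1.0 / tau) * d[t - 1] + rng.normal(0.0, sigma)
    return d

tau = np.logspace(0.0, np.log10(33333.0), 30)   # 30 exponentially scaled timescales
d = simulate_disturbances(tau, sigma=0.002 / tau, T=4000)
gain = 1.0 + d.sum(axis=1)                      # the gain model of Eq. (2) below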
It seems plausible that disturbances that have a short timescale tend to be more variable than those that have a long timescale, and we choose σ_τ = c/τ, where c is one of the two free parameters of our model. Moreover, as we expect each disturbance to be relatively small, we assume linearity, so that the motor gain is simply one plus the sum of all the disturbances:

  gain(t) = 1 + Σ_τ disturbance_τ(t)  (2)

If the motor plant underwent such changes in its properties, and if the nervous system produced the same motor commands without adaptation, then saccade gain would differ from one, resulting in motor error. However, with each saccade, the brain observes consequences of the motor commands. We assume that this observation is corrupted by noise:

  observation(t) = gain(t) + w  (3)

where w is the observation noise, with width σ_w, the second free parameter of our model. Throughout this paper we choose σ_w = 0.05, which we estimated from the spread of saccade gains over typical periods of 200 saccades, and c = 0.002 because that yielded good fits to the data by Hopp and Fuchs [2]. We chose to model all data using the same set of parameters to avoid issues of overfitting.

1.3 Inference
Given this explicit model, Bayesian statistics allows deriving an optimal adaptation strategy. We observe that the system is equivalent to the generative model of the Kalman filter [6] with a diagonal transition matrix M = diag(1 − 1/τ), an observation matrix H that is a vector consisting of one 1 for each of the 30 potential disturbances, and a diagonal process noise matrix Q = diag(σ_τ²). Process noise is what is driving the changes of each of the disturbances. We obtain the solution that is well known from the Kalman filter literature. We use the Kalman filter toolbox written by Kevin Murphy to numerically solve these equations.

An optimally adapting system needs to explicitly represent the contribution of each timescale. Because the contribution of each timescale can never be known precisely, the Bayesian learner represents what it knows as a probability distribution. As the model is linear and the noises are Gaussian, it is sufficient to keep first and second order statistics. And so the learner represents what it knows about the contribution of each timescale as a best estimate, but also keeps a measure of uncertainty around this estimate (Fig 1C). Any point along the +0% gain line is a point where the fast and slow timescales cancel each other. There is a line associated with any possible gain (e.g. +30% and -30%). Every timestep the system starts with the belief that it has from the previous timestep (sketched in yellow) and combines this with information from the current saccade (sketched in blue) to come up with a new estimate (sketched in red). Two important changes happen to the belief of the learner over time: (1) when time passes, disturbances can be expected to get smaller, but at the same time our uncertainty about them increases; (2) when a movement error is observed, this biases the sum of the disturbances towards the observed error value and also decreases the uncertainty. These effects are sketched in Figure 1D. Normally the adaptation mechanism is responding to the small drifts that happen to the oculomotor plant, and the estimate from the saccade is largely overlapping with the prior belief and with the new belief.
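The filter itself is standard. The following NumPy sketch, our own minimal stand-in for the Kalman filter toolbox used in the paper, builds the model of Eqs. (1)-(3) and performs one update, skipping the correction step when no visual feedback is available:

import numpy as np

def make_model(n_scales=30, c=0.002, sigma_w=0.05):
    tau = np.logspace(0.0, np.log10(33333.0), n_scales)  # timescales in saccades
    M = np.diag(1.0 - 1.0 / tau)                         # transition: disturbances decay
    H = np.ones((1, n_scales))                           # gain - 1 = sum of disturbances
    Q = np.diag((c / tau) ** 2)                          # process noise covariance
    R = np.array([[sigma_w ** 2]])                       # observation noise covariance
    return M, H, Q, R

def kalman_step(m, S, M, H, Q, R, obs=None):
    # One saccade. m, S: mean and covariance of the belief over the disturbances;
    # obs: observed (gain - 1), or None when no visual feedback is available.
    m, S = M @ m, M @ S @ M.T + Q                        # predict: decay, uncertainty grows
    if obs is not None:
        G = S @ H.T @ np.linalg.inv(H @ S @ H.T + R)     # Kalman gain
        m = m + G @ (obs - H @ m)
        S = (np.eye(len(m)) - G @ H) @ S
    return m, S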
When the light is turned off, the estimate of each of the disturbances slowly creeps towards zero. At the same time, however, the uncertainty increases a lot, and larger uncertainty allows faster learning, because the new information is more precise than the prior information. In the saccadic jump paradigm the error is much larger than it would be during normal life; this is first interpreted by the learner as a fast change and, as it persists, progressively interpreted as a slow change. When the saccadic jumps end, the fast timescale goes negative fast and the slow timescale slowly approaches zero. In a reversal setting the fast timescale becomes very negative and the slow timescale goes towards zero. Already with two timescales the optimal learner can thus exhibit a large number of interesting properties.

[Figure 1: four panels A-D; axis labels include fast/slow disturbance [%], motor gain, state estimate, and time [saccades].]

Figure 1: A generative model for changes in the motor plant and the corresponding optimal inference. A) Various disturbances d evolve over time as independent random walks. The gain is a linear function of all these random walks. The observed error is a noisy version of the gain. B) An example of a system with two timescales (fast and slow), and the resulting gain. C) Optimal inference during a saccade adaptation experiment. For illustrative purposes, here we assume only two timescales. The yellow cloud represents the learner's belief about the current combination of disturbances (prior). The system observes a saccade with an error of +30%. The region about the blue line is the uncertainty about the observation (i.e., the likelihood). Combining this information with the prior belief (yellow) leads to the posterior estimate (red). After a single observation of the +30% condition, the most probable estimate thus is that it is a fast disturbance. D) The changes of estimates under various perturbations (columns: normal saccades, in the dark, saccadic jump, +30 saccades, washout, reversal). Here we simulated a saccade on every 10th time step of the model. Each column shows three consecutive trials (top to bottom). Only in the darkness case, saccades 1, 3 and 50 are shown. In the dark, parameter uncertainties increase because the learner is not allowed to make observations (sensory noise is effectively infinite). In a gain increase paradigm, initially most of the error is associated with the fast perturbations. After 30 saccades in the gain increase paradigm, most of the error is associated with slow perturbations. Washout trials that follow gain increase do not return the system to a naive state. Rather, estimates of fast and slow perturbations cancel each other. Gain decrease following gain increase training will mostly affect the fast system.

[Figure 2: six panels A-F (monkey experiments from Hopp and Fuchs and from Robinson et al vs. the Bayesian learner; axes include saccade gain, timecourse [saccades], saccade number, and day, with darkness periods marked).]

Figure 2: Saccadic gain adaptation in a target jump paradigm. A) Data replotted from Hopp and Fuchs [2] with permission. Each dot is one saccade; the thick lines are exponential fits to the intervals [0, 1400] and [1400, 2800]. Starting at saccade number 0, the target jumps 30% short of the target, giving the impression of muscles that are too strong. The gain then decreases until the manipulation is ended at saccade number 1400. B) The same plot is shown for the optimal Bayesian learner. Changes without feedback: C) Data reprinted from [7]. Normal saccadic gain change paradigm as in Figure 2, however now the monkey spends its nights without vision and the paradigm is continued for many days. D) The same plot as in C) but for the Bayesian learner. E) Comparison of the saccadic gain change timecourses obtained by fitting an exponential. F) The same figure as in E) for the Bayesian learner.
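The protocols in Figure 1D can be replayed with the filter sketched above: feed error observations during adaptation, and run prediction-only steps in darkness. A sketch in which the trial counts, the initial covariance, and the clamped observation are illustrative simplifications (in particular, we ignore the learner's own compensation of the error):

import numpy as np

M, H, Q, R = make_model()
rng = np.random.default_rng(0)
m, S = np.zeros(30), 100.0 * Q                  # broad illustrative initial belief

for t in range(1500):                           # target jumps: gain appears clamped at 0.7
    m, S = kalman_step(m, S, M, H, Q, R,
                       obs=np.array([-0.3 + rng.normal(0.0, 0.05)]))
for t in range(5000):                           # darkness: prediction steps only
    m, S = kalman_step(m, S, M, H, Q, R, obs=None)
# the gain estimate 1 + H @ m decays back towards 1 while tr(S) grows,
# so learning after the dark period is faster than naive learning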
2 Results: Comparison with experimental data

2.1 Saccadic gain adaptation
In an impressive range of experiments started by McLaughlin [3], investigators have examined how monkeys adapt their saccadic gain. Figure 2A shows how the gain changes over time so that saccades progressively become more precise. The rate of adaptation typically starts fast and then progressively gets slower. This is a classic pattern that is reflected in numerous motor adaptation paradigms [8, 9]. The same patterns are seen for the Bayesian multiscale learner (Figure 2B). Fast timescale disturbances are assumed to increase and decrease faster than slow timescale disturbances. Therefore, when the gain rapidly changes, it is a priori most likely that it will go away fast (Fig. 1D, saccadic jump). Between trials, the estimates of the fast disturbances decay fast, but this decay is smaller in the slower timescales. If the gain change is maintained, the relative contribution of the fast timescales diminishes in comparison to the slow timescales (Fig. 1D, +30 saccades). As fast timescales adapt fast but decay fast as well, and slow timescales adapt and decay slowly, this implies that the gain change is driven by progressively slower timescales, resulting in the transition from initially fast adaptation to progressively slower adaptation.

2.2 Saccadic gain adaptation after sensory deprivation
The effects of a wide range of timescales and uncertainty about the causes of changes of the oculomotor plant will largely be hidden if experiments are of a relatively short duration and no uncertainty is produced. However, in a recent experiment, Robinson et al analyzed saccadic gain adaptation [7] in a way that allowed insight into many timescales as well as into the way the nervous system deals with uncertainty. The adaptation target was set to -50%. The monkey adapted for about 1500 saccades every day for 21 consecutive days. Because of the long duration, many different timescales are involved in this process. Interestingly, during the rest of the day the monkey wore goggles that blocked vision. During these breaks monkeys will accumulate uncertainty about the state of their oculomotor plant. Figure 2C shows results from such an experiment and Figure 2D shows the results we are getting from the Bayesian learner. The results are surprisingly similar, given that we used the same model parameters inferred from the Hopp and Fuchs data. Two effects are visible in the data. (1) There are several timescales during adaptation: there is a fast (100 saccades) and a slow (10 days) timescale.
Closer examination of the data reveals a wide spectrum of timescales. (2) The state estimate is affected by the periods of darkness. During the breaks that are paired with darkness, the system decays back towards a gain of zero, as predicted by the model. Moreover, darkness leads to increased uncertainty. Increased uncertainty means that new information is relatively more precise than old information, which in turn leads to faster learning. Consequently, monkeys learn faster during the second day (after spending a night without feedback) than during the first (quantified in Figure 2E and F). The finding that the Bayesian learner seems to change faster than the monkey may be related to the context being somewhat different than in the Hopp and Fuchs experiment. The system seems to represent uncertainty and clearly represents the way the motor plant is expected to change in the absence of feedback. It has been proposed that the nervous system may use a set of integrators where one is learning fast and the other is learning slowly [10, 11]. The Bayesian learner, however, keeps a measure of uncertainty about its estimates. For that reason, only the Bayesian learner can explain the fact that sensory deprivation appears to enhance learning rates.

2.3 Gain adaptation with reversals
Kojima et al [12] reported a host of surprising behavioral results during saccade adaptation. In these experiments the adaptation direction was changed 3 times. The saccadic gain was initially increased, then decreased until it reached unity, and finally increased again (Figure 3A). The saccadic gain increased faster during the second gain-up session than during the first (Figure 3B). Therefore, the reversal learning did not wash out the system. The Bayesian learner shows a similar phenomenon and provides a rationale: at the end of the first gain-up session for the Bayesian learner, most of the gain change is associated with a slow timescale (Figure 3C). In the subsequent gain-down session, errors produce rapid changes in the fast timescales, so that by the time the gain estimate reaches unity, the fast and slow timescales have opposite estimates. Therefore, the gain-down session did not reset the system; instead, the latent variables store the history of adaptation. In the subsequent gain-up session, the rate of re-adaptation is faster than initial adaptation because the fast timescales decay upwards in between trials (Figure 3D). After about 100 saccades, the speed gain from the low frequencies is over and is turned into a slowed increase due to the decreased error term.

In a second experiment, Kojima et al [12] found that saccade gains could change despite the fact that the animal was provided with no feedback to guide its performance. In this experiment the monkeys were again trained in a gain-up followed by a gain-down session. Afterwards they spent some time in the dark. When they came out of the dark, their gain had spontaneously increased (Figure 3E). The same effect is seen for the Bayesian learner (Figure 3F). In the dark period, the system makes no observations and therefore cannot learn from error. However, the estimates are still affected by their timescales of change: the estimate moves up fast along the fast timescales but slowly along the slow timescales. At the start of the darkness period there is a positive upward and a negative downward disturbance inferred by the system (Figure 1D, reversal). Consequently, by the end of the dark period, the estimate has become gain-up, the gain learned in the initial session.
Figure 3: The double reversal paradigm. A) The gain is first adapted up until it reaches about 1.2 with a negative target jump of 35%. Then it is adapted down with a positive target jump of 35%. Once the gain reaches 1 again it is adapted up with a positive target jump again. Data reprinted from [12] with permission. B) The speed of adaptation is compared between the first adaptation and the second positive adaptation. C) The same as in A) for the Bayesian learner. D) The same as in B) for the Bayesian learner. E) Double reversal paradigm with darkness, reprinted from [12]. The gain used by the monkey is changing during this interval. F) The same graph is shown for the Bayesian learner.

3 Discussion

Traditional models of adaptation simply change motor commands to reduce prediction errors [13]. Our approach differs from traditional approaches in three major ways. (1) The system represents its knowledge of the properties of the motor system at different timescales and explicitly models how these disturbances evolve over time. (2) It represents the uncertainty it has about the magnitude of the disturbances. (3) It formulates the computational aim of adaptation in terms of optimally predicting ongoing changes in the properties of the motor plant. Several studies address each of these points individually. Multi-timescale learning is a classical phenomenon that has been described frequently [14, 8]. Two timescales had been proposed in the context of connectionist learning theory [11]. In the context of motor adaptation, Smith et al. [10] proposed a model where the motor system responds to error with two systems: one that is highly sensitive to error but rapidly forgets, and another that has poor sensitivity to error but strong retention. In the context of classical conditioning, it has been proposed that the nervous system should keep a measure of uncertainty about its current parameter estimates to allow an optimal combination of new information with current knowledge [15]. Even the earliest studies of oculomotor adaptation realized that the objective of adaptation is to allow precise movement with a relentlessly changing motor plant [3]. Our approach unifies these ideas in a consistent computational framework and explains a wide range of experiments.

Multi-timescale adaptation and learning is a near-universal phenomenon [14, 8, 16, 17]. Within psychology it was found that learning follows multiscale behavior [17]. It has been proposed that multiscale learning may arise from chunking effects [14, 18]. The work presented here suggests a different interpretation: multiscale learning in cognitive systems may be a result of a system that originally evolved to deal with ever-changing motor problems. Multiscale adaptation can also be seen in the way visual neurons adapt to changing visual stimuli [16].
The phenomenon of spontaneous recovery in classical conditioning [19, 20] is largely equivalent to the findings of Kojima et al. [12] and can also be explained within the Bayesian multiscale learner framework.

The presented model obviously does not explain all known effects in motor or even saccadic gain adaptation. For example, it has been found that adapting up usually has a somewhat different timecourse from adapting down [21, 16, 12]. Moreover, the adaptation speed of monkeys can be very different on one day than on another, and from one experimental setting to another (e.g. Figure 2E and F). In learning reach control, there is more direct evidence that people can actually modify their rates of adaptation as a function of the auto-correlations of the perturbation [22]. This can be seen as the system learning about the size of the change parameter in this theory. Moreover, we certainly estimate the uncertainty we have about a visual stimulus in a continuous fashion: uncertainty is smallest for a high-contrast stimulus in our fovea and progressively larger with decreasing contrast and increasing eccentricity.

An important question for further enquiry is how the nervous system solves problems that require multiple-timescale adaptation. The necessary effects could potentially be implemented directly by synapses that exhibit LTP with power-law characteristics [23, 24]. Alternatively, small groups of neurons may jointly represent the estimates along with their uncertainties. In summary, if we begin with the assumption that the nervous system optimally solves the problem of producing reliable movements with a motor plant that is affected by perturbations on multiple timescales, then the learner will exhibit numerous properties that appear to match those reported in saccade and reach adaptation experiments.

References

[1] W. Becker. Metrics. In R. H. Wurtz and M. Goldberg, editors, The Neurobiology of Saccadic Eye Movements, pages 13-67. Elsevier, Amsterdam, 1989.
[2] J. J. Hopp and A. F. Fuchs. The characteristics and neuronal substrate of saccadic eye movement plasticity. Prog Neurobiol, 72(1):27-53, 2004.
[3] S. C. McLaughlin. Parametric adjustment in saccadic eye movement. Percept. Psychophys., 2:359-362, 1967.
[4] J. Wallman and A. F. Fuchs. Saccadic gain modification: visual error drives motor adaptation. J Neurophysiol, 80(5):2405-16, 1998.
[5] D. O. Bahcall and E. Kowler. Illusory shifts in visual direction accompany adaptation of saccadic eye movements. Nature, 400(6747):864-6, 1999.
[6] R. E. Kalman. A new approach to linear filtering and prediction problems. J. of Basic Engineering (ASME), 82D:35-45, 1960.
[7] F. R. Robinson, R. Soetedjo, and C. Noto. Distinct short-term and long-term adaptation to reduce saccade size in monkey. J Neurophysiol, 2006.
[8] K. M. Newell. Motor skill acquisition. Annu Rev Psychol, 42:213-37, 1991.
[9] J. W. Krakauer, C. Ghez, and M. F. Ghilardi. Adaptation to visuomotor transformations: consolidation, interference, and forgetting. J Neurosci, 25(2):473-8, 2005.
[10] M. A. Smith, A. Ghazizadeh, and R. Shadmehr. Interacting adaptive processes with different timescales underlie short-term motor learning. PLoS Biol, 4(6):e179, 2006.
[11] G. Hinton and C. Plaut. Using fast weights to deblur old memories. In Proceedings of the 9th Annual Conference of the Cognitive Science Society, pages 177-186, Hillsdale, NJ, 1987. Erlbaum.
[12] Y. Kojima, Y. Iwamoto, and K. Yoshida. Memory of learning facilitates saccadic adaptation in the monkey. J Neurosci, 24(34):7531-9, 2004.
[13] K. A. Thoroughman and R. Shadmehr. Learning of action through adaptive combination of motor primitives. Nature, 407(6805):742-7, 2000.
[14] J. R. Anderson. The Adaptive Character of Thought. Erlbaum, Hillsdale, NJ, 1990.
[15] A. J. Yu and P. Dayan. Uncertainty, neuromodulation, and attention. Neuron, 46(4):681-92, 2005.
[16] A. L. Fairhall, G. D. Lewen, W. Bialek, and R. R. de Ruyter van Steveninck. Efficiency and ambiguity in an adaptive neural code. Nature, 412(6849):787-92, 2001.
[17] H. P. Bahrick, L. E. Bahrick, A. S. Bahrick, and P. E. Bahrick. Maintenance of foreign language vocabulary and the spacing effect. Psychological Science, 4:313-21, 1993.
[18] P. I. Pavlik and J. R. Anderson. An ACT-R model of the spacing effect. In F. Detje, D. Doerner, and H. Schaub, editors, Proceedings of the Fifth International Conference on Cognitive Modeling, pages 177-182, Bamberg, Germany, 2003. Universitaets-Verlag Bamberg.
[19] D. C. Brooks and M. E. Bouton. A retrieval cue for extinction attenuates spontaneous recovery. J Exp Psychol Anim Behav Process, 19(1):77-89, 1993.
[20] R. A. Rescorla. Spontaneous recovery varies inversely with the training-extinction interval. Learn Behav, 32(4):401-8, 2004.
[21] J. M. Miller, T. Anstis, and W. B. Templeton. Saccadic plasticity: parametric adaptive control by retinal feedback. J Exp Psychol Hum Percept Perform, 7(2):356-66, 1981.
[22] M. Smith, E. Hwang, and R. Shadmehr. Learning to learn: optimal adjustment of the rate at which the motor system adapts. In Proceedings of the Society for Neuroscience, 2004.
[23] C. A. Barnes. Memory deficits associated with senescence: a neurophysiological and behavioral study in the rat. J Comp Physiol Psychol, 93(1):74-104, 1979.
[24] S. Fusi, P. J. Drew, and L. F. Abbott. Cascade models of synaptically stored memories. Neuron, 45(4):599-611, 2005.
Approximate inference using planar graph decomposition

Amir Globerson  Tommi Jaakkola
Computer Science and Artificial Intelligence Laboratory
Massachusetts Institute of Technology, Cambridge, MA 02139
gamir,[email protected]

Abstract

A number of exact and approximate methods are available for inference calculations in graphical models. Many recent approximate methods for graphs with cycles are based on tractable algorithms for tree structured graphs. Here we base the approximation on a different tractable model, planar graphs with binary variables and pure interaction potentials (no external field). The partition function for such models can be calculated exactly using an algorithm introduced by Fisher and Kasteleyn in the 1960s. We show how such tractable planar models can be used in a decomposition to derive upper bounds on the partition function of non-planar models. The resulting algorithm also allows for the estimation of marginals. We compare our planar decomposition to the tree decomposition method of Wainwright et al., showing that it results in a much tighter bound on the partition function, improved pairwise marginals, and comparable singleton marginals.

Graphical models are a powerful tool for modeling multivariate distributions, and have been successfully applied in various fields such as coding theory and image processing. Applications of graphical models typically involve calculating two types of quantities, namely marginal distributions and MAP assignments. The evaluation of the model partition function is closely related to calculating marginals [12]. These three problems can rarely be solved exactly in polynomial time, and are provably computationally hard in the general case [1]. When the model conforms to a tree structure, however, all these problems can be solved in polynomial time. This has prompted extensive research into tree based methods. For example, the junction tree method [6] converts a graphical model into a tree by clustering nodes into cliques, such that the graph over cliques is a tree. The resulting maximal clique size (cf. tree width) may nevertheless be prohibitively large. Wainwright et al. [9, 11] proposed an approximate method based on trees known as tree reweighting (TRW). The TRW approach decomposes the potential vector of a graphical model into a mixture over spanning trees of the model, and then uses convexity arguments to bound various quantities, such as the partition function. One key advantage of this approach is that it provides bounds on the partition function value, a property which is not shared by approximations based on Bethe free energies [13].

In this paper we focus on a different class of tractable models: planar graphs. A graph is called planar if it can be drawn in the plane without crossing edges. Works in the 1960s by physicists Fisher [5] and Kasteleyn [7], among others, have shown that the partition function for planar graphs may be calculated in polynomial time. This, however, is true under two key restrictions. One is that the variables x_i are binary. The other is that the interaction potential depends only on x_i x_j (where x_i ∈ {±1}), and not on their individual values (i.e., the zero external field case). Here we show how the above method can be used to obtain upper bounds on the partition function for non-planar graphs. As in TRW, we decompose the potential of a non-planar graph into a sum over spanning planar models, and then use a convexity argument to obtain an upper bound on the log partition function.
The bound optimization is a convex problem, and can be solved in polynomial time. We compare our method with TRW on a planar graph with an external field, and show that it performs favorably with respect to both pairwise marginals and the bound on the partition function, and the two methods give similar results for singleton marginals.

1 Definitions and Notations

Given a graph G with n vertices and a set of edges E, we are interested in pairwise Markov Random Fields (MRF) over the graph G. A pairwise MRF [13] is a multivariate distribution over variables x = {x_1, ..., x_n} defined as

$$p(x) = \frac{1}{Z}\, e^{\sum_{ij \in E} f_{ij}(x_i, x_j)} \qquad (1)$$

where f_ij are a set of |E| functions, or interaction potentials, defined over pairs of variables. The partition function is defined as $Z = \sum_x e^{\sum_{ij \in E} f_{ij}(x_i, x_j)}$. Here we will focus on the case where x_i ∈ {±1}. Furthermore, we will be interested in interaction potentials which only depend on agreement or disagreement between the signs of their variables. We define those by

$$f_{ij}(x_i, x_j) = \frac{\theta_{ij}}{2}(1 + x_i x_j) = \theta_{ij}\, I(x_i = x_j) \qquad (2)$$

so that f_ij(x_i, x_j) is zero if x_i ≠ x_j and θ_ij if x_i = x_j. The model is then defined via the set of parameters θ_ij. We use θ to denote the vector of parameters θ_ij, and denote the partition function by Z(θ) to highlight its dependence on these parameters.

A graph G is defined as planar if it can be drawn in the plane without any intersection of edges [4]. With some abuse of notation, we define E as the set of line segments in R^2 corresponding to the edges in the graph. The regions of R^2 \ E are defined as the faces of the graph. The face which corresponds to an unbounded region is called the external face. Given a planar graph G, its dual graph G* is defined in the following way: the vertices of G* correspond to faces of G, and there is an edge between two vertices in G* iff the two corresponding faces in G share an edge. If the graph G is weighted, the weight on an edge in G* is the weight on the edge shared by the corresponding faces in G. A plane triangulation of a planar graph G is obtained from G by adding edges such that all the faces of the resulting graph have exactly three vertices. Thus a plane triangulated graph has a dual where all vertices have degree three. It can be shown that every plane graph can be plane triangulated [4]. We shall also need the notion of a perfect matching on a graph. A perfect matching on a graph G is defined as a set of edges H ⊆ E such that every vertex in G has exactly one edge in H incident on it. If the graph is weighted, the weight of the matching is defined as the product of the weights of the edges in the matching.

Finally, we recall the definition of a marginal polytope of a graph [12]. Consider an MRF over a graph G where f_ij are given by Equation 2. Denote the probability of the event I(x_i = x_j) under p(x) by μ_ij. The marginal polytope of G, denoted by M(G), is defined as the set of values μ_ij that can be obtained under some assignment to the parameters θ_ij. For a general graph G the polytope M(G) cannot be described using a polynomial number of inequalities. However, for planar graphs, it turns out that a set of O(n^3) constraints, commonly referred to as triangle inequalities, suffice to describe M(G) (see [3] page 434). The triangle inequalities are defined by

$$\mathrm{TRI}(n) = \{\mu_{ij} : \mu_{ij} + \mu_{jk} - \mu_{ik} \le 1,\ \mu_{ij} + \mu_{jk} + \mu_{ik} \ge 1,\ \forall i, j, k \in \{1, \ldots, n\}\} \qquad (3)$$

Note that the above inequalities actually contain variables μ_ij which do not correspond to edges in the original graph G. Thus the equality M(G) = TRI(n) should be understood as referring only to the values of μ_ij that correspond to edges in the graph. Importantly, the values of μ_ij for edges not in the graph need not be valid marginals for any MRF. In other words, M(G) is a projection of TRI(n) on the set of edges of G.
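To make the definitions concrete, the following brute-force snippet (exponential in n, so only for toy graphs) computes Z(θ) and the agreement marginals μ_ij for a small MRF with the potentials of Equation 2; the particular edge parameters are arbitrary:

```python
import itertools
import numpy as np

# Brute-force check of the definitions above on a tiny graph: Z(theta) and the
# agreement marginals mu_ij = Pr[x_i = x_j] for f_ij = theta_ij * I(x_i = x_j).
edges = {(0, 1): 0.5, (1, 2): -0.3, (0, 2): 0.8}   # arbitrary small example

def score(x):
    return sum(th for (i, j), th in edges.items() if x[i] == x[j])

states = list(itertools.product([-1, 1], repeat=3))
Z = sum(np.exp(score(x)) for x in states)
mu = {e: sum(np.exp(score(x)) for x in states if x[e[0]] == x[e[1]]) / Z
      for e in edges}
print("Z =", Z)
print("agreement marginals:", mu)
```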
It is well known that the marginal polytope for trees is described via pairwise constraints. It is thus interesting that for planar graphs, it is triplets, rather than pairwise constraints, that characterize the polytope. (The definition here is slightly different from that in [3], since here we refer to agreement probabilities, whereas [3] refers to disagreement probabilities; this polytope is also referred to as the cut polytope.) In this sense, planar graphs and trees may be viewed as a hierarchy of polytope complexity classes. It remains an interesting problem to characterize other structures in this hierarchy and their related inference algorithms.

2 Exact calculation of partition function using perfect matching

The seminal works of Kasteleyn [7] and Fisher [5] have shown how one can calculate the partition function for a binary MRF over a planar graph with pure interaction potentials. We briefly review Fisher's construction, which we will use in what follows. Our interpretation of the method differs somewhat from that of Fisher, but we believe it is more straightforward. The key idea in calculating the partition function is to convert the summation over values of x to the problem of calculating the sum of weights of all perfect matchings in a graph constructed from G, as shown below. In this section, we consider weighted graphs (graphs with numbers assigned to their edges). For the graph G associated with the pairwise MRF, we assign weights $w_{ij} = e^{2\theta_{ij}}$ to the edges.

The first step in the construction is to plane triangulate the graph G. Let us call the resulting graph G_T. We define an MRF on G_T by assigning a parameter θ_ij = 0 to the edges that have been added to G, and the corresponding weight w_ij = 1. Thus G_T essentially describes the same distribution as G, and therefore has the same partition function. We can thus restrict our attention to calculating the partition function for the MRF on G_T. As a first step in calculating a partition function over G_T, we introduce the following definition: a set of edges Ē in G_T is an agreement edge set (or AES) if for every triangle face F in G_T one of the following holds: the edges in F are all in Ē, or exactly one of the edges in F is in Ē. The weight of a set Ē is defined as the product of the weights of the edges in Ē.

It can be shown that there exists a bijection between pairs of assignments {x, -x} and agreement edge sets. The mapping from x to an edge set is simply the set of edges such that x_i = x_j. It is easy to see that this is an agreement edge set. The reverse mapping is obtained by finding an assignment x such that x_i = x_j iff the corresponding edge is in the agreement edge set. The existence of this mapping can be shown by induction on the number of (triangle) faces.

The contribution of a given assignment x to the partition function is $e^{\sum_{ij \in E} \theta_{ij} I(x_i = x_j)}$. If x corresponds to an AES denoted by Ē, it is easy to see that

$$e^{\sum_{ij \in E} \theta_{ij} I(x_i = x_j)} = e^{-\sum_{ij \in E} \theta_{ij}}\, e^{\sum_{ij \in \bar{E}} 2\theta_{ij}} = c \prod_{ij \in \bar{E}} w_{ij} \qquad (4)$$

where $c = e^{-\sum_{ij \in E} \theta_{ij}}$. Define Ω as the set of agreement edge sets. The above then implies that $Z(\theta) = 2c \sum_{\bar{E} \in \Omega} \prod_{ij \in \bar{E}} w_{ij}$, and is thus proportional to the sum of AES weights.
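The assignment-to-AES mapping is easy to check mechanically. The toy snippet below verifies, on a single triangle (which is its own plane triangulation), that every assignment induces an edge set containing all three or exactly one of the triangle's edges, and that assignments map to AESs in pairs {x, -x}:

```python
import itertools

# Toy check of the assignment <-> agreement-edge-set bijection on a single
# triangle. For every assignment the agreement set contains all three edges
# or exactly one of them.
edges = [(0, 1), (1, 2), (0, 2)]
seen = set()
for x in itertools.product([-1, 1], repeat=3):
    aes = frozenset((i, j) for (i, j) in edges if x[i] == x[j])
    assert len(aes) in (1, 3)          # the AES property for a triangle face
    seen.add(aes)
print(len(seen), "distinct AESs for", 2 ** 3, "assignments (pairs x, -x share one)")
```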
To sum over agreement edge sets, we use the following elegant trick introduced by Fisher [5]. Construct a new graph G_PM from the dual of G_T by introducing new vertices and edges according to the following rule: replace each original vertex with three vertices that are connected to each other, and assign a weight of one to the new edges. Next, consider the three neighbors of the original vertex (note that in the dual of G_T all vertices have degree three, since G_T is plane triangulated). Connect each of the three new vertices to one of these three neighbors, keeping the original weights on these edges. The transformation is illustrated in Figure 1.

Figure 1: Illustration of the graph transformations in Section 2 for a complete graph with four vertices. Left panel shows the original weighted graph (dotted edges and grey vertices) and its dual (solid edges and black vertices). Right panel shows the dual graph with each vertex replaced by a triangle (the graph G_PM in the text). Weights for dual graph edges correspond to the weights on the original graph.

The new graph G_PM has O(3n) vertices, and is also planar. It can be seen that there is a one-to-one correspondence between perfect matchings in G_PM and agreement edge sets in G_T. Define Λ to be the set of perfect matchings in G_PM. Then $Z(\theta) = 2c \sum_{M \in \Lambda} \prod_{ij \in M} w_{ij}$, where we have used the fact that all the new weights have a value of one. Thus, the partition function is a sum over the weights of perfect matchings in G_PM.

Finally, we need a way of summing over the weights of the set of perfect matchings in a graph. Kasteleyn [7] proved that for a planar graph G_PM, this sum may be obtained using the following sequence of steps:

- Direct the edges of the graph G_PM such that for every face (except possibly the external face), the number of edges on its perimeter oriented in a clockwise manner is odd. Kasteleyn showed that such a so-called Pfaffian orientation may be constructed in polynomial time for a planar graph (see also [8] page 322).
- Define the matrix P(G_PM) to be a skew-symmetric matrix such that P_ij = 0 if ij is not an edge, P_ij = w_ij if the arrow on edge ij runs from i to j, and P_ij = -w_ij otherwise.
- The sum over weighted matchings can then be shown to equal $\sqrt{|P(G_{PM})|}$.

The partition function is thus given by $Z(\theta) = 2c\sqrt{|P(G_{PM})|}$. To conclude this section we reiterate the following two key points: the partition function of a binary MRF over a planar graph with interaction potentials as in Equation 2 may be calculated in polynomial time by calculating the determinant of a matrix of size O(3n). An important outcome of this result is that the functional relation between Z(θ) and the parameters θ_ij is known, a fact we shall use in what follows.
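The last step can be illustrated on a toy graph. The sketch below assumes a 4-cycle drawn with vertices 0, 1, 2, 3 placed clockwise, for which directing three of the four edges clockwise around the single bounded face gives a valid Pfaffian orientation; the edge weights are arbitrary:

```python
import numpy as np

# Kasteleyn's determinant step on a toy 4-cycle: P is skew-symmetric with
# P_ij = +w_ij along the arrow and -w_ij against it; the sum of perfect-
# matching weights equals sqrt(|det P|).
w = {(0, 1): 2.0, (1, 2): 3.0, (2, 3): 5.0, (0, 3): 7.0}   # arrows i -> j
P = np.zeros((4, 4))
for (i, j), wij in w.items():
    P[i, j], P[j, i] = wij, -wij   # arrows 0->1, 1->2, 2->3, 0->3:
                                   # three edges run clockwise (odd), as required
matching_sum = np.sqrt(abs(np.linalg.det(P)))
# The 4-cycle has exactly two perfect matchings: {01, 23} and {12, 03}.
print(matching_sum, "==", w[(0, 1)] * w[(2, 3)] + w[(1, 2)] * w[(0, 3)])
```

Both sides print 31.0, matching the two-matching enumeration by hand.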
3 Partition function bounds via planar decomposition

Given a non-planar graph G over binary variables with a vector of interaction potentials θ, we wish to use the exact planar computation to obtain a bound on the partition function of the MRF on G. We assume for simplicity that the potentials on the MRF for G are given in the form of Equation 2. Thus, G violates the assumptions of the previous section only in its non-planarity. Define {G(r)} as a set of spanning planar subgraphs of G, i.e., each graph G(r) is planar and contains all the vertices of G and some of its edges. Denote by m the number of such graphs. Introduce the following definitions:

- θ^(r) is a set of parameters on the edges of G(r), and θ^(r)_ij is an element in this set. Z(θ^(r)) is the partition function of the MRF on G(r) with parameters θ^(r).
- θ̄^(r) is a set of parameters on the edges of G such that if edge (ij) is in G(r) then θ̄^(r)_ij = θ^(r)_ij, and otherwise θ̄^(r)_ij = 0.

Given a distribution ρ(r) on the graphs G(r) (i.e., ρ(r) ≥ 0 for r = 1, ..., m and ∑_r ρ(r) = 1), assume that the parameters for G(r) are such that

$$\theta = \sum_r \rho(r)\, \bar{\theta}^{(r)} \qquad (5)$$

Then, by the convexity of the log partition function, as a function of the model parameters, we have

$$\log Z(\theta) \le \sum_r \rho(r) \log Z(\theta^{(r)}) \equiv f(\rho, \theta, \theta^{(r)}) \qquad (6)$$

Since by assumption the graphs G(r) are planar, this bound can be calculated in polynomial time. Since this bound is true for any set of parameters θ^(r) which satisfies the condition in Equation 5 and for any distribution ρ(r), we may optimize over these two variables to obtain the tightest bound possible. Define the optimal bound for a fixed value of ρ(r) by g(ρ, θ) (optimization is w.r.t. θ^(r)):

$$g(\rho, \theta) = \min_{\theta^{(r)}:\ \sum_r \rho(r)\bar{\theta}^{(r)} = \theta} f(\rho, \theta, \theta^{(r)}) \qquad (7)$$

Also, define the optimum of the above w.r.t. ρ by h(θ):

$$h(\theta) = \min_{\rho(r) \ge 0,\ \sum_r \rho(r) = 1} g(\rho, \theta) \qquad (8)$$

Thus, h(θ) is the optimal upper bound for the given parameter vector θ. In the following section we argue that we can in fact find the global optimum of the above problem.

4 Globally Optimal Bound Optimization

First consider calculating g(ρ, θ) from Equation 7. Note that since log Z(θ^(r)) is a convex function of θ^(r), and the constraints are linear, the overall optimization is convex and can be solved efficiently. In the current implementation, we use a projected gradient algorithm [2]. The gradient of f(ρ, θ, θ^(r)) w.r.t. θ^(r) is given by

$$\frac{\partial f(\rho, \theta, \theta^{(r)})}{\partial \theta^{(r)}_{ij}} = \rho(r)\left[1 + e^{2\theta^{(r)}_{ij}} \left(P(G^{(r)}_{PM})^{-1}\right)_{k(i,j)} \mathrm{Sign}\!\left(P(G^{(r)}_{PM})\right)_{k(i,j)}\right] \qquad (9)$$

where k(i, j) returns the row and column indices of the element in the upper triangular matrix of P(G^(r)_PM) which contains the element $e^{2\theta^{(r)}_{ij}}$.

Since the optimization in Equation 7 is convex, it has an equivalent convex dual. Although we do not use this dual for optimization (because of the difficulty of expressing the entropy of planar models solely in terms of triplet marginals), it nevertheless allows some insight into the structure of the problem. The dual in this case is closely linked to the notion of the marginal polytope defined in Section 1. Using a derivation similar to [11], we arrive at the following characterization of the dual:

$$g(\rho, \theta) = \max_{\mu \in \mathrm{TRI}(n)}\ \mu \cdot \theta + \sum_r \rho(r)\, H(\theta^{(r)}(\mu)) \qquad (10)$$

where θ^(r)(μ) denotes the parameters of an MRF on G(r) such that its marginals are given by the restriction of μ to the edges of G(r), and H(θ^(r)(μ)) denotes the entropy of the MRF over G(r) with parameters θ^(r)(μ). The maximized function in Equation 10 is linear in ρ and thus g(ρ, θ) is a pointwise maximum over (linear) convex functions in ρ and is thus convex in ρ. It therefore has no local minima. Denote by θ^(r)_min(ρ) the set of parameters that minimizes Equation 7 for a given value of ρ. Using a derivation similar to that in [11], the gradient of g(ρ, θ) can be shown to be

$$\frac{\partial g(\rho, \theta)}{\partial \rho(r)} = H(\theta^{(r)}_{\min}(\rho)) \qquad (11)$$

Since the partition function for G(r) can be calculated efficiently, so can the entropy.
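Before summarizing the full algorithm, the decomposition bound of Equation 6 can be checked numerically with brute force on a toy graph; the subgraph split and the mixture weights below are arbitrary choices that satisfy Equation 5:

```python
import itertools
import numpy as np

# Numerical check of the decomposition bound (Equation 6) on a tiny graph,
# using brute-force log Z. Two subgraphs split the edges of a triangle;
# theta = sum_r rho(r) * theta_bar(r) holds by construction.
def logZ(edges):
    xs = itertools.product([-1, 1], repeat=3)
    return np.log(sum(np.exp(sum(th * (x[i] == x[j]) for (i, j), th in edges.items()))
                      for x in xs))

theta = {(0, 1): 0.7, (1, 2): -0.4, (0, 2): 0.9}
rho = [0.5, 0.5]
# subgraph 1 carries edges (0,1),(1,2); subgraph 2 carries (0,2); scale by 1/rho
sub1 = {(0, 1): theta[(0, 1)] / rho[0], (1, 2): theta[(1, 2)] / rho[0]}
sub2 = {(0, 2): theta[(0, 2)] / rho[1]}
bound = rho[0] * logZ(sub1) + rho[1] * logZ(sub2)
print("log Z =", logZ(theta), "<= bound =", bound)
```

Because log Z is convex in θ, the inequality holds for any such split; optimizing over the split and the mixture is exactly what Equations 7 and 8 do.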
We can now summarize the algorithm for calculating h(θ):

- Initialize ρ_0. Iterate:
- For ρ_t, find θ^(r) which solves the minimization in Equation 7.
- Calculate the gradient of g(ρ, θ) at ρ_t using the expression in Equation 11.
- Update ρ_{t+1} = ρ_t + αv, where v is a feasible search direction calculated from the gradient of g(ρ, θ) and the simplex constraints on ρ. The step size α is calculated via an Armijo line search.
- Halt when the change in g(ρ, θ) is smaller than some threshold.

Note that the minimization w.r.t. θ^(r) is not very time consuming since we can initialize it with the minimum from the previous step, and thus only a few iterations are needed to find the new optimum, provided the change in ρ is not too big. The above algorithm is guaranteed to converge to a global optimum of ρ [2], and thus we obtain the tightest possible upper bound on Z(θ) given our planar graph decomposition.

The procedure described here is asymmetric w.r.t. ρ and θ^(r). In a symmetric formulation the minimizing gradient steps could be carried out jointly or in an alternating sequence. The symmetric formulation can be obtained by decoupling ρ and θ^(r) in the bi-linear constraint $\sum_r \rho(r)\bar{\theta}^{(r)} = \theta$. Specifically, we introduce θ̃^(r) = θ^(r)ρ(r) and perform the optimization w.r.t. ρ and θ̃^(r). It can be shown that a stationary point of f(ρ, θ, θ̃^(r)) with the relevant (de-coupled) constraint is equivalent to the procedure described above. The advantage of this approach is that the exact minimization w.r.t. θ^(r) is not required before modifying ρ. Our experiments have shown, however, that the methods take comparable times to converge, although this may be a property of the implementation.

Figure 2: Illustration of planar subgraph construction for a rectangular lattice with external field. Original graph is shown on the left. The field vertex is connected to all vertices (edges not shown). The graph on the right results from isolating the 4th, 5th columns of the original graph (shown in grey), and connecting the field vertex to the external vertices of the three disconnected components. Note that the resulting graph is planar.

5 Estimating Marginals

The optimization problem as defined above minimizes an upper bound on the partition function. However, it may also be of interest to obtain estimates of the marginals of the MRF over G. To obtain marginal estimates, we follow the approach in [11]. We first characterize the optimum of Equation 7 for a fixed value of ρ. Deriving the Lagrangian of Equation 7 w.r.t. θ^(r), we obtain the following characterization of θ^(r)_min(ρ):

Marginal Optimality Criterion: For any two graphs G(r), G(s) such that the edge (ij) is in both graphs, the optimal parameter vector satisfies μ_ij(θ^(r)_min(ρ)) = μ_ij(θ^(s)_min(ρ)).

Thus, the optimal set of parameters for the graphs G(r) is such that every two graphs agree on the marginals of all the edges they share. This implies that at the optimum, there is a well defined set of marginals over all the edges. We use this set as an approximation to the true marginals.

A different method for estimating marginals uses the partition function bound directly. We first calculate partition function bounds on the sums $\alpha_i(1) = \sum_{x: x_i = 1} e^{\sum_{ij \in E} f_{ij}(x_i, x_j)}$ and $\alpha_i(-1) = \sum_{x: x_i = -1} e^{\sum_{ij \in E} f_{ij}(x_i, x_j)}$, and then normalize $\frac{\alpha_i(1)}{\alpha_i(1) + \alpha_i(-1)}$ to obtain an estimate for p(x_i = 1). This method has the advantage of being more numerically stable (since it does not depend on derivatives of log Z). However, it needs to be calculated separately for each variable, so that it may be time consuming if one is interested in marginals for a large set of variables.
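The second estimator is easy to state in code. The following brute-force version (feasible only for toy models; in the paper each constrained sum is replaced by its planar-decomposition bound) computes p(x_i = 1) from the two sums α_i(±1):

```python
import itertools
import numpy as np

# Brute-force version of the marginal estimate from Section 5: p(x_i = 1) is
# obtained from two constrained partition sums alpha_i(+1) and alpha_i(-1).
edges = {(0, 1): 0.4, (1, 2): -0.2, (0, 2): 0.6}   # arbitrary toy parameters

def alpha(i, v):
    xs = (x for x in itertools.product([-1, 1], repeat=3) if x[i] == v)
    return sum(np.exp(sum(th * (x[a] == x[b]) for (a, b), th in edges.items()))
               for x in xs)

p1 = alpha(0, 1) / (alpha(0, 1) + alpha(0, -1))
print("p(x_0 = 1) =", p1)
```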
6 Experimental Evaluation

We study the application of our Planar Decomposition (PDC) method to a binary MRF on a square lattice with an external field. The MRF is given by $p(x) \propto e^{\sum_{ij \in E} \theta_{ij} x_i x_j + \sum_{i \in V} \theta_i x_i}$, where V are the lattice vertices, and θ_i and θ_ij are parameters. Note that this interaction does not satisfy the conditions for exact calculation of the partition function, even though the graph is planar. This problem is in fact NP-hard [1]. However, it is possible to obtain the desired interaction form by introducing an additional variable x_{n+1} that is connected to all the original variables. Denote the corresponding graph by G_f. Consider the distribution $p(x, x_{n+1}) \propto e^{\sum_{ij \in E} \theta_{ij} x_i x_j + \sum_{i \in V} \theta_{i,n+1} x_i x_{n+1}}$, where θ_{i,n+1} = θ_i. It is easy to see that any property of p(x) (e.g., partition function, marginals) may be calculated from the corresponding property of p(x, x_{n+1}). The advantage of the latter distribution is that it has the desired interaction form. We can thus apply PDC by choosing planar subgraphs of the non-planar graph G_f.

Figure 3: Comparison of the TRW and Planar Decomposition (PDC) algorithms on a 7x7 square lattice. TRW results shown in red squares, and PDC in blue circles. Left column shows the error in the log partition bound. Middle column is the mean error for pairwise marginals, and right column is the error for the singleton marginal of the variable at the lattice center. Results in upper row are for field parameters drawn from U[-0.05, 0.05] and various interaction parameters. Results in the lower row are for interaction parameters drawn from U[-0.5, 0.5] and various field parameters. Error bars are standard errors calculated from 40 random trials.

There are clearly many ways to choose spanning planar subgraphs of G_f. Spanning subtrees are one option, and were used in [11]. Since our optimization is polynomial in the number of subgraphs, we preferred to use a number of subgraphs that is linear in √n. The key idea in generating these planar subgraphs is to generate disconnected components of the lattice and connect x_{n+1} only to the external vertices of these components. Here we generate three disconnected components by isolating two neighboring columns (or rows) from the rest of the graph, resulting in three components. This is illustrated in Figure 2. To this set of 2√n graphs, we add the independent-variables graph consisting only of edges from the field node to all the other nodes.

We compared the performance of the PDC and TRW methods on a 7x7 lattice. (Both the TRW and PDC bounds were optimized over the subgraph parameters as well as the mixture parameters ρ.) Since the exact partition function and marginals can be calculated for this case, we could compare both algorithms to the true values. The MRF parameters were set according to the two following scenarios: 1) Varying Interaction: the field parameters θ_i were drawn uniformly from U[-0.05, 0.05], and the interaction θ_ij from U[-ω, ω] where ω ∈ {0.2, 0.4, ..., 2}. This is the setting tested in [11]. 2) Varying Field: θ_i was drawn uniformly from U[-ω, ω], where ω ∈ {0.2, 0.4, ..., 2}, and θ_ij from U[-0.5, 0.5].
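The claim that p(x) is recoverable from the augmented model can be checked by brute force: flipping all variables including x_{n+1} leaves the energy unchanged, so p(x) = 2 p(x, x_{n+1} = 1). A toy verification with arbitrary parameters:

```python
import itertools
import numpy as np

# Brute-force check that the field-augmented model recovers p(x): with the
# auxiliary variable x_f coupled to every vertex via theta_{i,f} = theta_i,
# the pure-interaction model satisfies p(x) = 2 * p(x, x_f = 1).
n = 3
theta_e = {(0, 1): 0.4, (1, 2): -0.7}        # interactions
theta_v = [0.3, -0.2, 0.5]                   # external field

def energy(x, xf=None):
    e = sum(th * x[i] * x[j] for (i, j), th in theta_e.items())
    e += sum(theta_v[i] * x[i] * (1 if xf is None else xf) for i in range(n))
    return e

Z = sum(np.exp(energy(x)) for x in itertools.product([-1, 1], repeat=n))
Zf = sum(np.exp(energy(x, xf)) for x in itertools.product([-1, 1], repeat=n)
         for xf in (-1, 1))
x0 = (1, -1, 1)
print(np.exp(energy(x0)) / Z, "==", 2 * np.exp(energy(x0, 1)) / Zf)
```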
For each scenario, we calculated the following measures: 1) Normalized log partition error, $\frac{1}{49}(\log Z^{alg} - \log Z^{true})$. 2) Error in pairwise marginals, $\frac{1}{|E|}\sum_{ij \in E} |p^{alg}(x_i = 1, x_j = 1) - p^{true}(x_i = 1, x_j = 1)|$. Pairwise marginals were calculated jointly using the marginal optimality criterion of Section 5. 3) Error in singleton marginals. We calculated the singleton marginals for the innermost node in the lattice (i.e., coordinate [3, 3]), which intuitively should be the most difficult for the planar based algorithm. This marginal was calculated using two partition functions, as explained in Section 5; the same method was used for TRW. (Results using the marginal optimality criterion were worse for PDC, possibly due to its reduced numerical precision.) The reported error measure is $|p^{alg}(x_i = 1) - p^{true}(x_i = 1)|$. Results were averaged over 40 random trials.

Results for the two scenarios and different evaluation measures are given in Figure 3. It can be seen that the partition function bound for PDC is significantly better than TRW for almost all parameter settings, although the difference becomes smaller for large field values. Errors for the PDC pairwise marginals are smaller than those of TRW for all parameter settings. For the singleton parameters, TRW slightly outperforms PDC. This is not surprising since the field is modeled by every spanning tree in the TRW decomposition, whereas in PDC not all the structures model a given field. (In terms of running time, PDC optimization for a fixed value of ρ took about 30 seconds, which is still slower than the TRW message passing implementation.)

7 Discussion

We have presented a method for using planar graphs as the basis for approximating non-planar graphs such as planar graphs with external fields. While the restriction to binary variables limits the applicability of our approach, it remains relevant in many important applications, such as coding theory and combinatorial optimization. Moreover, it is always possible to convert a non-binary graphical model to a binary one by introducing additional variables. The resulting graph will typically not be planar, even when the original graph over k-ary variables is. However, the planar decomposition method can then be applied to this non-planar graph.

The optimization of the decomposition is carried out explicitly over the planar subgraphs, thus limiting the number of subgraphs that can be used in the approximation. In the TRW method this problem is circumvented since it is possible to implicitly optimize over all spanning trees. The reason this can be done for trees is that the entropy of an MRF over a tree may be written as a function of its marginal variables. We do not know of an equivalent result for planar graphs, and it remains a challenge to find one. It is however possible to combine the planar and tree decompositions into one single bound, which is guaranteed to outperform the tree or planar approximations alone.

The planar decomposition idea may in principle be applied to bounding the value of the MAP assignment. However, as in TRW, it can be shown that the solution is not dependent on the decomposition (as long as each edge appears in some structure), and the problem is equivalent to maximizing a linear function over the marginal polytope (which can be done in polynomial time for planar graphs).
However, such a decomposition may suggest new message passing algorithms, as in [10].

Acknowledgments

The authors acknowledge support from the Defense Advanced Research Projects Agency (Transfer Learning program). Amir Globerson is also supported by the Rothschild Yad-Hanadiv fellowship. The authors also wish to thank Martin Wainwright for providing his TRW code.

References

[1] F. Barahona. On the computational complexity of Ising spin glass models. J. Phys. A, 15(10):3241-3253, 1982.
[2] D. P. Bertsekas, editor. Nonlinear Programming. Athena Scientific, Belmont, MA, 1995.
[3] M. M. Deza and M. Laurent. Geometry of Cuts and Metrics. Springer-Verlag, 1997.
[4] R. Diestel. Graph Theory. Springer-Verlag, 1997.
[5] M. E. Fisher. On the dimer solution of planar Ising models. J. Math. Phys., 7:1776-1781, 1966.
[6] M. I. Jordan, editor. Learning in Graphical Models. MIT Press, Cambridge, MA, 1998.
[7] P. W. Kasteleyn. Dimer statistics and phase transitions. Journal of Math. Physics, 4:287-293, 1963.
[8] L. Lovasz and M. D. Plummer. Matching Theory, volume 29 of Annals of Discrete Mathematics. North-Holland, New York, 1986.
[9] M. J. Wainwright, T. Jaakkola, and A. S. Willsky. Tree-based reparameterization framework for analysis of sum-product and related algorithms. IEEE Trans. on Information Theory, 49(5):1120-1146, 2003.
[10] M. J. Wainwright, T. Jaakkola, and A. S. Willsky. MAP estimation via agreement on trees: message-passing and linear programming. IEEE Trans. on Information Theory, 51(11):1120-1146, 2005.
[11] M. J. Wainwright, T. Jaakkola, and A. S. Willsky. A new class of upper bounds on the log partition function. IEEE Trans. on Information Theory, 51(7):2313-2335, 2005.
[12] M. J. Wainwright and M. I. Jordan. Graphical models, exponential families, and variational inference. Technical report, UC Berkeley Dept. of Statistics, 2003.
[13] J. S. Yedidia, W. T. Freeman, and Y. Weiss. Constructing free-energy approximations and generalized belief propagation algorithms. IEEE Trans. on Information Theory, 51(7):2282-2312, 2005.
Multi-Instance Multi-Label Learning with Application to Scene Classification

Zhi-Hua Zhou  Min-Ling Zhang
National Laboratory for Novel Software Technology, Nanjing University, Nanjing 210093, China
{zhouzh,zhangml}@lamda.nju.edu.cn

Abstract

In this paper, we formalize multi-instance multi-label learning, where each training example is associated with not only multiple instances but also multiple class labels. Such a problem can occur in many real-world tasks, e.g. an image usually contains multiple patches each of which can be described by a feature vector, and the image can belong to multiple categories since its semantics can be recognized in different ways. We analyze the relationship between multi-instance multi-label learning and the learning frameworks of traditional supervised learning, multi-instance learning and multi-label learning. Then, we propose the MIMLBOOST and MIMLSVM algorithms, which achieve good performance in an application to scene classification.

1 Introduction

In traditional supervised learning, an object is represented by an instance (or feature vector) and associated with a class label. Formally, let X denote the instance space (or feature space) and Y the set of class labels. Then the task is to learn a function f : X → Y from a given data set {(x_1, y_1), (x_2, y_2), ..., (x_m, y_m)}, where x_i ∈ X is an instance and y_i ∈ Y the known label of x_i. Although the above formalization is prevailing and successful, there are many real-world problems which do not fit this framework well, where a real-world object may be associated with a number of instances and a number of labels simultaneously. For example, an image usually contains multiple patches, each of which can be represented by an instance, while in image classification such an image can belong to several classes simultaneously, e.g. an image can belong to mountains as well as Africa. Another example is text categorization, where a document usually contains multiple sections, each of which can be represented as an instance, and the document can be regarded as belonging to different categories if it is viewed from different aspects, e.g. a document can be categorized as scientific novel, Jules Verne's writing, or even books on travelling. Web mining is a further example, where each of the links can be regarded as an instance while the web page itself can be recognized as a news page, sports page, soccer page, etc.

In order to deal with such problems, in this paper we formalize multi-instance multi-label learning (abbreviated as MIML). In this learning framework, a training example is described by multiple instances and associated with multiple class labels. Formally, let X denote the instance space and Y the set of class labels. Then the task is to learn a function f_MIML : 2^X → 2^Y from a given data set {(X_1, Y_1), (X_2, Y_2), ..., (X_m, Y_m)}, where X_i ⊆ X is a set of instances {x^(i)_1, x^(i)_2, ..., x^(i)_{n_i}}, x^(i)_j ∈ X (j = 1, 2, ..., n_i), and Y_i ⊆ Y is a set of labels {y^(i)_1, y^(i)_2, ..., y^(i)_{l_i}}, y^(i)_k ∈ Y (k = 1, 2, ..., l_i). Here n_i denotes the number of instances in X_i and l_i the number of labels in Y_i. After analyzing the relationship between MIML and the frameworks of traditional supervised learning, multi-instance learning and multi-label learning, we propose two MIML algorithms, MIMLBOOST and MIMLSVM. Application to scene classification shows that solving some real-world problems in the MIML framework can achieve better performance than solving them in existing frameworks such as multi-instance learning and multi-label learning.
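Before proceeding, it may help to fix a concrete representation of a MIML example. A minimal sketch (the names are illustrative, not from the paper):

```python
from typing import List, Set, Tuple

# A minimal data structure for MIML examples as defined above: each example is
# a bag of instances (feature vectors) paired with a set of labels.
Instance = Tuple[float, ...]
MIMLExample = Tuple[List[Instance], Set[str]]

example: MIMLExample = (
    [(0.1, 0.7), (0.4, 0.2), (0.9, 0.5)],      # three patches of one image
    {"mountains", "Africa"},                    # the image's label set
)
bag, labels = example
print(len(bag), "instances,", len(labels), "labels")
```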
2 Multi-Instance Multi-Label Learning

We start by investigating the relationship between MIML and the frameworks of traditional supervised learning, multi-instance learning and multi-label learning, and then we develop some solutions.

Multi-instance learning [4] studies the problem where a real-world object described by a number of instances is associated with one class label. Formally, the task is to learn a function f_MIL : 2^X → {-1, +1} from a given data set {(X_1, y_1), (X_2, y_2), ..., (X_m, y_m)}, where X_i ⊆ X is a set of instances {x^(i)_1, x^(i)_2, ..., x^(i)_{n_i}}, x^(i)_j ∈ X (j = 1, 2, ..., n_i), and y_i ∈ {-1, +1} is the label of X_i. (According to notions used in multi-instance learning, (X_i, y_i) is a labeled bag while X_i is an unlabeled bag.) Multi-instance learning techniques have been successfully applied to diverse applications including scene classification [3, 7].

Multi-label learning [8] studies the problem where a real-world object described by one instance is associated with a number of class labels. Formally, the task is to learn a function f_MLL : X → 2^Y from a given data set {(x_1, Y_1), (x_2, Y_2), ..., (x_m, Y_m)}, where x_i ∈ X is an instance and Y_i ⊆ Y a set of labels {y^(i)_1, y^(i)_2, ..., y^(i)_{l_i}}, y^(i)_k ∈ Y (k = 1, 2, ..., l_i). (Although most works on multi-label learning assume that an instance can be associated with multiple valid labels, there are also works assuming that only one of the labels associated with an instance is correct [6]; we adopt the former assumption in this paper.) Multi-label learning techniques have also been successfully applied to scene classification [1].

In fact, the multi- learning frameworks result from the ambiguity in representing real-world objects. Multi-instance learning studies the ambiguity in the input space (or instance space), where an object has many alternative input descriptions, i.e. instances; multi-label learning studies the ambiguity in the output space (or label space), where an object has many alternative output descriptions, i.e. labels; while MIML considers the ambiguity in the input and output spaces simultaneously. We illustrate the differences among these learning frameworks in Figure 1.

Figure 1: Four different learning frameworks. (a) Traditional supervised learning; (b) multi-instance learning; (c) multi-label learning; (d) multi-instance multi-label learning.

Traditional supervised learning is evidently a degenerated version of multi-instance learning as well as a degenerated version of multi-label learning, while traditional supervised learning, multi-instance learning and multi-label learning are all degenerated versions of MIML. Thus, we can tackle MIML by identifying its equivalence in the traditional supervised learning framework, using multi-instance learning or multi-label learning as the bridge.

Solution 1: Using multi-instance learning as the bridge. We can transform a MIML learning task, i.e. to learn a function f_MIML : 2^X → 2^Y, into a multi-instance learning task, i.e. to learn a function f_MIL : 2^X × Y → {-1, +1}. For any y ∈ Y, f_MIL(X_i, y) = +1 if y ∈ Y_i and -1 otherwise. The proper labels for a new example X* can be determined according to Y* = {y | arg_{y∈Y}[f_MIL(X*, y) = +1]}. We can transform this multi-instance learning task further into a traditional supervised learning task, i.e. to learn a function f_SISL : X × Y → {-1, +1}, under a constraint specifying how to derive f_MIL(X_i, y) from f_SISL(x^(i)_j, y) (j = 1, ..., n_i). For any y ∈ Y, f_SISL(x^(i)_j, y) = +1 if y ∈ Y_i and -1 otherwise. Here the constraint can be $f_{MIL}(X_i, y) = \mathrm{sign}[\sum_{j=1}^{n_i} f_{SISL}(x^{(i)}_j, y)]$, which has been used in transforming multi-instance learning tasks into traditional supervised learning tasks [9]. (This constraint assumes that all instances contribute equally and independently to a bag's label, which is different from the standard multi-instance assumption that there is one "key" instance in a bag that triggers whether the bag's class label will be positive or negative. Nevertheless, it has been shown that this assumption is reasonable and effective [9]. Note that the standard multi-instance assumption does not always hold, e.g. the label Africa of an image is usually triggered by several patches jointly instead of by only one patch.) Note that other kinds of constraint can also be used here.
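A minimal sketch of the Solution 1 transformation, with an illustrative label set:

```python
from typing import List, Set, Tuple

# Sketch of the Solution-1 transformation: each MIML example (X_u, Y_u) becomes
# |Y| multi-instance bags [(X_u, y), Psi(X_u, y)], one per possible label y.
LABELS = ["mountains", "Africa", "desert"]     # illustrative label space Y

def miml_to_mil(X: List[tuple], Y: Set[str]) -> List[Tuple[tuple, str, int]]:
    # Psi(X, y) = +1 if y in Y else -1
    return [(tuple(X), y, +1 if y in Y else -1) for y in LABELS]

bags = miml_to_mil([(0.1, 0.7), (0.4, 0.2)], {"mountains"})
for bag, y, label in bags:
    print(y, label)
```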
We can transform this multi-instance learning task further into a traditional supervised learning task, i.e. learning a function f_SISL : X × Y → {−1, +1}, under a constraint specifying how to derive f_MIL(X_i, y) from f_SISL(x^(i)_j, y) (j = 1, …, n_i). For any y ∈ Y, f_SISL(x^(i)_j, y) = +1 if y ∈ Y_i and −1 otherwise. Here the constraint can be f_MIL(X_i, y) = sign[Σ_{j=1}^{n_i} f_SISL(x^(i)_j, y)], which has been used in transforming multi-instance learning tasks into traditional supervised learning tasks [9]. (This constraint assumes that all instances contribute equally and independently to a bag's label, which differs from the standard multi-instance assumption that one "key" instance in a bag triggers whether the bag's class label is positive or negative. Nevertheless, it has been shown that this assumption is reasonable and effective [9]. Note also that the standard multi-instance assumption does not always hold; e.g., the label Africa of an image is usually triggered by several patches jointly instead of by only one patch.) Other kinds of constraints can also be used here.

Solution 2: Using multi-label learning as the bridge. We can also transform an MIML learning task, i.e. learning a function f_MIML : 2^X → 2^Y, into a multi-label learning task, i.e. learning a function f_MLL : Z → 2^Y. For any z_i ∈ Z, f_MLL(z_i) = f_MIML(X_i) if z_i = ζ(X_i), where ζ : 2^X → Z. The proper labels for a new example X* can be determined as Y* = f_MLL(ζ(X*)). We can transform this multi-label learning task further into a traditional supervised learning task, i.e. learning a function f_SISL : Z × Y → {−1, +1}. For any y ∈ Y, f_SISL(z_i, y) = +1 if y ∈ Y_i and −1 otherwise. That is, f_MLL(z_i) = {y ∈ Y | f_SISL(z_i, y) = +1}. Here the mapping ζ can be implemented with constructive clustering, which has been used in transforming multi-instance bags into traditional single instances [11]. Other kinds of mappings can also be used.

3 Algorithms
In this section, we propose two algorithms for solving MIML problems: MIMLBoost follows the first solution described in Section 2, while MIMLSVM follows the second.

3.1 MIMLBoost
Given any set S, let |S| denote its size, i.e. the number of elements in S; given any predicate π, let [[π]] be 1 if π holds and 0 otherwise; given (X_i, Y_i), for any y ∈ Y, let Φ(X_i, y) = +1 if y ∈ Y_i and −1 otherwise, where Φ is a function Φ : 2^X × Y → {−1, +1}. The MIMLBoost algorithm is presented in Table 1. In the first step, each MIML example (X_u, Y_u) (u = 1, 2, …, m) is transformed into a set of |Y| multi-instance bags, i.e. {[(X_u, y_1), Φ(X_u, y_1)], [(X_u, y_2), Φ(X_u, y_2)], …, [(X_u, y_|Y|), Φ(X_u, y_|Y|)]}. Note that [(X_u, y_v), Φ(X_u, y_v)] (v = 1, 2, …, |Y|) is a labeled multi-instance bag, where (X_u, y_v) is a bag containing n_u instances, i.e. {(x^(u)_1, y_v), (x^(u)_2, y_v), …, (x^(u)_{n_u}, y_v)}, and Φ(X_u, y_v) ∈ {+1, −1} is the label of this bag. Thus, the original MIML data set is transformed into a multi-instance data set containing m × |Y| bags, i.e. {[(X_1, y_1), Φ(X_1, y_1)], …, [(X_1, y_|Y|), Φ(X_1, y_|Y|)], [(X_2, y_1), Φ(X_2, y_1)], …, [(X_m, y_|Y|), Φ(X_m, y_|Y|)]}. Let [(X^(i), y^(i)), Φ(X^(i), y^(i))] denote the i-th of these m × |Y| bags; that is, (X^(1), y^(1)) denotes (X_1, y_1), …, (X^(|Y|), y^(|Y|)) denotes (X_1, y_|Y|), …, (X^(m×|Y|), y^(m×|Y|)) denotes (X_m, y_|Y|), where (X^(i), y^(i)) contains n_i instances, i.e. {(x^(i)_1, y^(i)), (x^(i)_2, y^(i)), …, (x^(i)_{n_i}, y^(i))}. Then, from this data set a multi-instance learning function f_MIL can be learned, which accomplishes the desired MIML function because f_MIML(X*) = {y ∈ Y | sign[f_MIL(X*, y)] = +1}. Here we use MIBoosting [9] to implement f_MIL. For convenience, let (B, g) denote the bag [(X, y), Φ(X, y)].
Then the goal is to learn a function F(B) minimizing the bag-level exponential loss E_B E_{G|B}[exp(−g F(B))], which ultimately estimates the bag-level log-odds function (1/2) log [P(g = 1|B) / P(g = −1|B)].

Table 1: The MIMLBoost algorithm
1. Transform each MIML example (X_u, Y_u) (u = 1, 2, …, m) into |Y| multi-instance bags {[(X_u, y_1), Φ(X_u, y_1)], …, [(X_u, y_|Y|), Φ(X_u, y_|Y|)]}. Thus, the original data set is transformed into a multi-instance data set containing m × |Y| multi-instance bags, denoted by {[(X^(i), y^(i)), Φ(X^(i), y^(i))]} (i = 1, 2, …, m × |Y|).
2. Initialize the weight of each bag to W^(i) = 1/(m × |Y|) (i = 1, 2, …, m × |Y|).
3. Repeat for t = 1, 2, …, T iterations:
   3a. Set W^(i)_j = W^(i)/n_i (i = 1, 2, …, m × |Y|), assign the bag's label Φ(X^(i), y^(i)) to each of its instances (x^(i)_j, y^(i)) (j = 1, 2, …, n_i), and build an instance-level predictor h_t[(x^(i)_j, y^(i))] ∈ {−1, +1}.
   3b. For the i-th bag, compute the error rate e^(i) ∈ [0, 1] by counting the number of misclassified instances within the bag, i.e. e^(i) = Σ_{j=1}^{n_i} [[h_t[(x^(i)_j, y^(i))] ≠ Φ(X^(i), y^(i))]] / n_i.
   3c. If e^(i) < 0.5 for all i ∈ {1, 2, …, m × |Y|}, go to Step 4.
   3d. Compute c_t = argmin_{c_t} Σ_{i=1}^{m×|Y|} W^(i) exp[(2e^(i) − 1) c_t].
   3e. If c_t ≤ 0, go to Step 4.
   3f. Set W^(i) = W^(i) exp[(2e^(i) − 1) c_t] (i = 1, 2, …, m × |Y|) and re-normalize so that 0 ≤ W^(i) ≤ 1 and Σ_{i=1}^{m×|Y|} W^(i) = 1.
4. Return Y* = {y ∈ Y | sign[Σ_t Σ_j c_t h_t[(x*_j, y)]] = +1} (x*_j is the j-th instance of X*).

In each boosting round, the aim is to expand F(B) into F(B) + c f(B), i.e. to add a new weak classifier so that the exponential loss is minimized. Assuming all instances in a bag contribute equally and independently to the bag's label, one can derive f(B) = (1/n_B) Σ_j h(b_j), where h(b_j) ∈ {−1, +1} is the prediction of the instance-level classifier h(·) for the j-th instance in bag B, and n_B is the number of instances in B. It has been shown in [9] that the best f(B) to add can be obtained by seeking the h(·) which maximizes Σ_i Σ_{j=1}^{n_i} [(1/n_i) W^(i) g^(i) h(b^(i)_j)], given the bag-level weights W^(i) = exp(−g F(B)). By assigning each instance the label of its bag and the corresponding weight W^(i)/n_i, h(·) can be learned by minimizing the weighted instance-level classification error; this corresponds to Step 3a of MIMLBoost. When f(B) is found, the best multiplier c > 0 can be obtained by directly optimizing the exponential loss:

E_B E_{G|B}[exp(−g(F(B) + c f(B)))] = Σ_i W^(i) exp[−c (Σ_j g^(i) h(b^(i)_j)) / n_i] = Σ_i W^(i) exp[(2e^(i) − 1) c]

where e^(i) = (1/n_i) Σ_j [[h(b^(i)_j) ≠ g^(i)]] (computed in Step 3b). Minimizing this expectation corresponds to Step 3d, where numeric optimization techniques such as the quasi-Newton method can be used; a minimal sketch follows. Finally, the bag-level weights are updated in Step 3f according to the additive structure of F(B).
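For instance, Step 3d can be carried out with an off-the-shelf scalar optimizer; the sketch below is our own illustration (using NumPy arrays W and e for the bag weights and error rates, and SciPy's Brent line search standing in for the quasi-Newton method mentioned above):

    import numpy as np
    from scipy.optimize import minimize_scalar

    def best_multiplier(W, e):
        """Step 3d of MIMLBoost: c_t = argmin_c sum_i W_i * exp((2 e_i - 1) c)."""
        loss = lambda c: float(np.sum(W * np.exp((2.0 * e - 1.0) * c)))
        return minimize_scalar(loss).x

    # Step 3e then discards the round if the returned c_t is not positive,
    # and Step 3f reweights each bag by exp((2 e_i - 1) c_t) and re-normalizes.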
3.2 MIMLSVM
Given (X_i, Y_i) and z_i = ζ(X_i), where ζ : 2^X → Z, for any y ∈ Y let Ψ(z_i, y) = +1 if y ∈ Y_i and −1 otherwise, where Ψ is a function Ψ : Z × Y → {−1, +1}. The MIMLSVM algorithm is presented in Table 2. In the first step, the X_u of each MIML example (X_u, Y_u) (u = 1, 2, …, m) is collected and put into a data set Γ. Then, in the second step, k-medoids clustering is performed on Γ.

Table 2: The MIMLSVM algorithm
1. For the MIML examples (X_u, Y_u) (u = 1, 2, …, m), let Γ = {X_u | u = 1, 2, …, m}.
2. Randomly select k elements from Γ to initialize the medoids M_t (t = 1, 2, …, k); repeat until no M_t changes:
   2a. Γ_t = {M_t} (t = 1, 2, …, k).
   2b. For each X_u ∈ Γ \ {M_t | t = 1, 2, …, k}: index = argmin_{t ∈ {1,…,k}} d_H(X_u, M_t); Γ_index = Γ_index ∪ {X_u}.
   2c. M_t = argmin_{A ∈ Γ_t} Σ_{B ∈ Γ_t} d_H(A, B) (t = 1, 2, …, k).
3. Transform (X_u, Y_u) into a multi-label example (z_u, Y_u) (u = 1, 2, …, m), where z_u = (z_{u1}, z_{u2}, …, z_{uk}) = (d_H(X_u, M_1), d_H(X_u, M_2), …, d_H(X_u, M_k)).
4. For each y ∈ Y, derive a data set D_y = {(z_u, Ψ(z_u, y)) | u = 1, 2, …, m}, and then train an SVM h_y = SVMTrain(D_y).
5. Return Y* = {argmax_{y∈Y} h_y(z*)} ∪ {y | h_y(z*) ≥ 0, y ∈ Y}, where z* = (d_H(X*, M_1), d_H(X*, M_2), …, d_H(X*, M_k)).

Since each data item in Γ, i.e. X_u, is an unlabeled multi-instance bag instead of a single instance, we employ the Hausdorff distance [5] to measure distances. In detail, given two bags A = {a_1, a_2, …, a_{n_A}} and B = {b_1, b_2, …, b_{n_B}}, the Hausdorff distance between A and B is defined as

d_H(A, B) = max{ max_{a∈A} min_{b∈B} ‖a − b‖, max_{b∈B} min_{a∈A} ‖a − b‖ }

where ‖a − b‖ measures the distance between the instances a and b, here taken to be the Euclidean distance. After the clustering process, the data set Γ is divided into k partitions whose medoids are M_t (t = 1, 2, …, k). With the help of these medoids, we transform the original multi-instance example X_u into a k-dimensional numerical vector z_u, where the i-th (i = 1, 2, …, k) component of z_u is the distance between X_u and M_i, that is, d_H(X_u, M_i). In other words, z_{ui} encodes some structure information of the data, namely the relationship between X_u and the i-th partition of Γ. This process resembles the constructive clustering process used by [11] in transforming multi-instance examples into single-instance examples, except that in [11] the clustering is executed at the instance level while here we execute it at the bag level. Thus, the original MIML examples (X_u, Y_u) (u = 1, 2, …, m) are transformed into multi-label examples (z_u, Y_u) (u = 1, 2, …, m), which corresponds to Step 3 of MIMLSVM. Note that this transformation may lose information; nevertheless the performance of MIMLSVM is still good, which suggests that MIML is a powerful framework capturing more of the original information than other learning frameworks. Then, from this data set a multi-label learning function f_MLL can be learned, which accomplishes the desired MIML function because f_MIML(X*) = f_MLL(z*). Here we use MLSVM [1] to implement f_MLL. Concretely, MLSVM decomposes the multi-label learning problem into multiple independent binary classification problems (one per class), where each example associated with label set Y is regarded as a positive example when building the SVM for any class y ∈ Y, and as a negative example when building the SVM for any class y ∉ Y, as shown in Step 4 of MIMLSVM. A sketch of the bag-level Hausdorff distance follows.
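The following is a minimal NumPy rendering of the Hausdorff distance defined above, as used in Steps 2 and 3 (our own sketch; bags are assumed to be arrays whose rows are instances):

    import numpy as np

    def hausdorff(A, B):
        """Hausdorff distance between two bags of instances (rows of A and B),
        with Euclidean instance distance: max of the two directed max-min terms."""
        A, B = np.asarray(A, float), np.asarray(B, float)
        D = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)  # pairwise distances
        return max(D.min(axis=1).max(), D.min(axis=0).max())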
In making predictions, the T-Criterion [1] is used, which corresponds to Step 5 of the MIMLSVM algorithm. That is, the test example is labeled with all the class labels receiving positive SVM scores, except that when all the SVM scores are negative, the test example is labeled with the class label receiving the top (least negative) score, as sketched below.
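A minimal sketch of this rule (our own; `scores` is assumed to be a dict mapping each label y to the SVM output h_y(z*)):

    def t_criterion(scores):
        """T-Criterion of Step 5: every label with a non-negative SVM score,
        plus the top-scoring label as a fallback when all scores are negative."""
        top = max(scores, key=scores.get)
        return {top} | {y for y, s in scores.items() if s >= 0}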
4 Application to Scene Classification
The data set consists of 2,000 natural scene images belonging to the classes desert, mountains, sea, sunset, and trees, as shown in Table 3. Some images were from the Corel image collection while others were collected from the Internet. Over 22% of the images belong to multiple classes simultaneously.

Table 3: The image data set (d: desert, m: mountains, s: sea, su: sunset, t: trees)
label  # images | label  # images | label   # images | label     # images
d      340      | d+m    19       | m+su    19       | d+m+su    1
m      268      | d+s    5        | m+t     106      | d+su+t    3
s      341      | d+su   21       | s+su    172      | m+s+t     6
su     216      | d+t    20       | s+t     14       | m+su+t    1
t      378      | m+s    38       | su+t    28       | s+su+t    4

4.1 Comparison with Multi-Label Learning Algorithms
Since the scene classification task has been successfully tackled by multi-label learning algorithms [1], we compare the MIML algorithms with the established multi-label learning algorithms AdaBoost.MH [8] and MLSVM [1]. The former is the core of the successful multi-label learning system BoosTexter [8], while the latter has achieved excellent performance in scene classification [1]. For MIMLBoost and MIMLSVM, each image is represented as a bag of nine instances generated by the SBN method [7]. Here each instance actually corresponds to an image patch, and better performance can be expected with a better image-patch generation method. For AdaBoost.MH and MLSVM, each image is represented as a feature vector obtained by concatenating the instances used by MIMLBoost and MIMLSVM. A Gaussian-kernel LIBSVM [2] is used to implement MLSVM, where the cross-training strategy is used to build the classifiers and the T-Criterion is used to label the images [1]. The MIMLSVM algorithm is also realized with a Gaussian kernel, with the parameter k set to 20% of the number of training images. (In preliminary experiments, several percentage values were tested, ranging from 20% to 100% in steps of 20%; these values did not significantly affect the performance of MIMLSVM.) The instance-level predictor used in Step 3a of MIMLBoost is also a Gaussian-kernel LIBSVM (with default parameters). Since AdaBoost.MH and MLSVM make multi-label predictions, the compared algorithms are evaluated according to five multi-label evaluation metrics, as shown in Tables 4 to 7, where "↓" indicates "the smaller the better" and "↑" indicates "the bigger the better". Details of these evaluation metrics can be found in [8]. Tenfold cross-validation is performed and "mean ± std" is reported, with the best performance achieved by each algorithm shown in bold in the original tables. Note that since each boosting round of MIMLBoost performs more operations than one of AdaBoost.MH, for a fair comparison the number of boosting rounds used by AdaBoost.MH is set to ten times that used by MIMLBoost, so that their time costs are comparable.

Table 4: The performance of MIMLBoost with different boosting rounds
rounds | hamm. loss ↓ | one-error ↓ | coverage ↓ | rank. loss ↓ | ave. prec. ↑
5      | .202±.011    | .373±.045   | 1.026±.093 | .208±.028    | .764±.027
10     | .197±.010    | .362±.040   | 1.013±.109 | .191±.027    | .770±.026
15     | .195±.009    | .361±.034   | 1.004±.101 | .186±.025    | .772±.023
20     | .193±.008    | .355±.037   | .996±.102  | .183±.025    | .775±.024
25     | .189±.009    | .351±.039   | .989±.103  | .181±.026    | .777±.025

Table 5: The performance of AdaBoost.MH with different boosting rounds
rounds | hamm. loss ↓ | one-error ↓ | coverage ↓ | rank. loss ↓ | ave. prec. ↑
50     | .228±.013    | .473±.031   | 1.299±.099 | .263±.022    | .695±.022
100    | .234±.019    | .465±.042   | 1.292±.138 | .259±.030    | .698±.033
150    | .233±.020    | .465±.053   | 1.279±.140 | .255±.032    | .700±.033
200    | .232±.012    | .453±.031   | 1.269±.107 | .253±.022    | .706±.020
250    | .231±.018    | .451±.046   | 1.258±.137 | .250±.031    | .708±.030

Table 6: The performance of MIMLSVM with different γ used in the Gaussian kernel
γ   | hamm. loss ↓ | one-error ↓ | coverage ↓ | rank. loss ↓ | ave. prec. ↑
.1  | .181±.017    | .332±.036   | 1.024±.089 | .187±.018    | .780±.021
.2  | .180±.017    | .327±.033   | 1.022±.085 | .187±.018    | .783±.020
.3  | .188±.016    | .344±.032   | 1.065±.094 | .196±.020    | .772±.020
.4  | .193±.014    | .358±.030   | 1.080±.099 | .202±.022    | .764±.021
.5  | .196±.014    | .370±.033   | 1.109±.101 | .209±.023    | .757±.023

Table 7: The performance of MLSVM with different γ used in the Gaussian kernel
γ  | hamm. loss ↓ | one-error ↓ | coverage ↓ | rank. loss ↓ | ave. prec. ↑
1  | .200±.014    | .379±.032   | 1.125±.115 | .214±.020    | .751±.022
2  | .196±.013    | .368±.032   | 1.115±.122 | .211±.023    | .756±.022
3  | .195±.015    | .370±.034   | 1.129±.113 | .214±.022    | .754±.023
4  | .196±.016    | .372±.034   | 1.151±.122 | .220±.024    | .751±.023
5  | .202±.015    | .388±.032   | 1.181±.128 | .229±.026    | .741±.023

Comparing Tables 4 to 7, we find that both MIMLBoost and MIMLSVM are clearly better than AdaBoost.MH and MLSVM. Impressively, pair-wise t-tests at the .05 significance level reveal that the worst performance of MIMLBoost (with 5 boosting rounds) is still significantly better than the best performance of AdaBoost.MH (with 250 boosting rounds) on all the evaluation metrics, and is significantly better than the best performance of MLSVM (with γ = 2) in terms of coverage while comparable on the remaining metrics; the worst performance of MIMLSVM (with γ = .5) is comparable to the best performance of MLSVM and significantly better than the best performance of AdaBoost.MH on all the evaluation metrics. These observations confirm that formalizing the scene classification task as an MIML problem solved by MIMLBoost or MIMLSVM is better than formalizing it as a multi-label learning problem solved by AdaBoost.MH or MLSVM.

4.2 Comparison with Multi-Instance Learning Algorithms
Since the scene classification task has also been successfully tackled by multi-instance learning algorithms [7], we compare the MIML algorithms with the established multi-instance learning algorithms Diverse Density [7] and EM-DD [10]. The former is one of the most influential multi-instance learning algorithms and has achieved excellent performance in scene classification [7], while the latter has achieved excellent performance on multi-instance benchmark tests [10]. All the compared algorithms use the same input representation: each image is represented as a bag of nine instances generated by the SBN method [7]. The parameters of Diverse Density and EM-DD are set to the settings that resulted in their best reported performance [7, 10].
The MIMLBoost and MIMLSVM algorithms are implemented as described in Section 4.1, with 25 boosting rounds for MIMLBoost and γ = .2 for MIMLSVM. Since Diverse Density and EM-DD make single-label predictions, the compared algorithms are evaluated according to predictive accuracy, i.e. classification accuracy on the test set; for MIMLBoost and MIMLSVM, the top-ranked class is regarded as the single-label prediction. Tenfold cross-validation is performed and "mean ± std" is presented in Table 8, with the best performance on each image class shown in bold in the original table. Besides the predictive accuracies on each class, the overall accuracy is also presented, denoted by "overall".

Table 8: Predictive accuracy of MIMLBoost, MIMLSVM, Diverse Density and EM-DD
Image class | MIMLBoost | MIMLSVM   | Diverse Density | EM-DD
desert      | .869±.014 | .868±.026 | .768±.037       | .751±.047
mountains   | .791±.024 | .820±.022 | .721±.030       | .717±.036
sea         | .729±.026 | .730±.030 | .587±.038       | .639±.063
sunset      | .864±.033 | .883±.023 | .841±.036       | .815±.063
trees       | .801±.015 | .798±.017 | .781±.028       | .632±.060
overall     | .811±.022 | .820±.024 | .739±.034       | .711±.054

We can see from Table 8 that MIMLBoost achieves the best performance on the image classes desert and trees while MIMLSVM achieves the best performance on the remaining classes. Overall, MIMLSVM achieves the best performance. Pair-wise t-tests at the .05 significance level reveal that the overall performance of MIMLSVM is comparable to that of MIMLBoost, and both are significantly better than Diverse Density and EM-DD. These observations confirm that formalizing the scene classification task as an MIML problem solved by MIMLBoost or MIMLSVM is better than formalizing it as a multi-instance learning problem solved by Diverse Density or EM-DD.

5 Conclusion
In this paper, we formalize multi-instance multi-label learning, where an example is associated with multiple instances and multiple labels simultaneously. Although there have been works investigating the ambiguity of alternative input descriptions or alternative output descriptions associated with an object, this is the first work studying both ambiguities simultaneously. We show that an MIML problem can be solved by identifying its equivalence in the traditional supervised learning framework, using multi-instance learning or multi-label learning as the bridge. The proposed algorithms, MIMLBoost and MIMLSVM, achieve good performance in an application to scene classification. An interesting future issue is to develop MIML versions of other popular machine learning algorithms. Moreover, it remains an open problem whether MIML can be tackled directly, possibly by exploiting the connections between the instances and the labels. It would also be interesting to discover the relationship between the instances and the labels; by unravelling these mixed connections, we may gain a deeper understanding of ambiguity.

Acknowledgments
This work was supported by the National Science Foundation of China (60325207, 60473046).
References
[1] M. R. Boutell, J. Luo, X. Shen, and C. M. Brown. Learning multi-label scene classification. Pattern Recognition, 37(9):1757-1771, 2004.
[2] C.-C. Chang and C.-J. Lin. LIBSVM: A library for support vector machines. Technical report, Department of Computer Science and Information Engineering, National Taiwan University, Taipei, 2001.
[3] Y. Chen and J. Z. Wang. Image categorization by learning and reasoning with regions. Journal of Machine Learning Research, 5:913-939, 2004.
[4] T. G. Dietterich, R. H. Lathrop, and T. Lozano-Pérez. Solving the multiple-instance problem with axis-parallel rectangles. Artificial Intelligence, 89(1-2):31-71, 1997.
[5] G. A. Edgar. Measure, Topology, and Fractal Geometry. Springer, Berlin, 1990.
[6] R. Jin and Z. Ghahramani. Learning with multiple labels. In S. Becker, S. Thrun, and K. Obermayer, editors, Advances in Neural Information Processing Systems 15, pages 897-904. MIT Press, Cambridge, MA, 2003.
[7] O. Maron and A. L. Ratan. Multiple-instance learning for natural scene classification. In Proceedings of the 15th International Conference on Machine Learning, pages 341-349, Madison, WI, 1998.
[8] R. E. Schapire and Y. Singer. BoosTexter: A boosting-based system for text categorization. Machine Learning, 39(2-3):135-168, 2000.
[9] X. Xu and E. Frank. Logistic regression and boosting for labeled bags of instances. In H. Dai, R. Srikant, and C. Zhang, editors, Lecture Notes in Artificial Intelligence 3056, pages 272-281. Springer, Berlin, 2004.
[10] Q. Zhang and S. A. Goldman. EM-DD: An improved multi-instance learning technique. In T. G. Dietterich, S. Becker, and Z. Ghahramani, editors, Advances in Neural Information Processing Systems 14, pages 1073-1080. MIT Press, Cambridge, MA, 2002.
[11] Z.-H. Zhou and M.-L. Zhang. Solving multi-instance problems with classifier ensemble based on constructive clustering. Knowledge and Information Systems, in press.
Greedy Layer-Wise Training of Deep Networks

Yoshua Bengio, Pascal Lamblin, Dan Popovici, Hugo Larochelle
Université de Montréal, Montréal, Québec
{bengioy,lamblinp,popovicd,larocheh}@iro.umontreal.ca

Abstract
Complexity theory of circuits strongly suggests that deep architectures can be much more efficient (sometimes exponentially) than shallow architectures, in terms of the computational elements required to represent some functions. Deep multi-layer neural networks have many levels of non-linearities, allowing them to compactly represent highly non-linear and highly-varying functions. However, until recently it was not clear how to train such deep networks, since gradient-based optimization starting from random initialization often appears to get stuck in poor solutions. Hinton et al. recently introduced a greedy layer-wise unsupervised learning algorithm for Deep Belief Networks (DBN), a generative model with many layers of hidden causal variables. In the context of the above optimization problem, we study this algorithm empirically and explore variants to better understand its success and to extend it to cases where the inputs are continuous or where the structure of the input distribution is not revealing enough about the variable to be predicted in a supervised task. Our experiments also confirm the hypothesis that the greedy layer-wise unsupervised training strategy mostly helps the optimization by initializing weights in a region near a good local minimum, giving rise to internal distributed representations that are high-level abstractions of the input, bringing better generalization.

1 Introduction
Recent analyses (Bengio, Delalleau, & Le Roux, 2006; Bengio & Le Cun, 2007) of modern non-parametric machine learning algorithms that are kernel machines, such as Support Vector Machines (SVMs) and graph-based manifold and semi-supervised learning algorithms, suggest fundamental limitations of some learning algorithms. The problem is clear in kernel-based approaches when the kernel is "local" (e.g., the Gaussian kernel), i.e., when K(x, y) converges to a constant as ‖x − y‖ increases. These analyses point to the difficulty of learning "highly-varying functions", i.e., functions that have a large number of "variations" in the domain of interest, e.g., functions that would require a large number of pieces to be well represented by a piecewise-linear approximation. Since the number of pieces can be made to grow exponentially with the number of factors of variation in the input, this is connected with the well-known curse of dimensionality for classical non-parametric learning algorithms (for regression, classification and density estimation). If the shapes of all these pieces are unrelated, one needs enough examples for each piece in order to generalize properly. However, if those shapes are related and can be predicted from each other, "non-local" learning algorithms have the potential to generalize to pieces not covered by the training set. Such an ability would seem necessary for learning in complex domains such as Artificial Intelligence tasks (e.g., related to vision, language, speech, robotics). Kernel machines (not only those with a local kernel) have a shallow architecture, i.e., only two levels of data-dependent computational elements. This is also true of feedforward neural networks with a single hidden layer (which can become SVMs when the number of hidden units becomes large (Bengio, Le Roux, Vincent, Delalleau, & Marcotte, 2006)).
A serious problem with shallow architectures is that they can be very inefficient in terms of the number of computational units (e.g., bases, hidden units), and thus in terms of required examples (Bengio & Le Cun, 2007). One way to represent a highly-varying function compactly (with few parameters) is through the composition of many non-linearities, i.e., with a deep architecture. For example, the parity function with d inputs requires O(2^d) examples and parameters to be represented by a Gaussian SVM (Bengio et al., 2006), O(d^2) parameters for a one-hidden-layer neural network, O(d) parameters and units for a multi-layer network with O(log_2 d) layers, and O(1) parameters with a recurrent neural network. More generally, boolean functions (such as the function that computes the multiplication of two numbers from their d-bit representations) expressible by O(log d) layers of combinatorial logic with O(d) elements in each layer may require O(2^d) elements when expressed with only 2 layers (Utgoff & Stracuzzi, 2002; Bengio & Le Cun, 2007). When the representation of a concept requires an exponential number of elements, e.g., with a shallow circuit, the number of training examples required to learn the concept may also be impractical. Formal analyses of the computational complexity of shallow circuits can be found in (Hastad, 1987) or (Allender, 1996). They point in the same direction: shallow circuits are much less expressive than deep ones.

However, until recently, it was believed too difficult to train deep multi-layer neural networks. Empirically, deep networks were generally found to be not better, and often worse, than neural networks with one or two hidden layers (Tesauro, 1992). As this is a negative result, it has not been much reported in the machine learning literature. A reasonable explanation is that gradient-based optimization starting from random initialization may get stuck near poor solutions. An approach that has been explored with some success in the past is based on constructively adding layers, previously done using a supervised criterion at each stage (Fahlman & Lebiere, 1990; Lengellé & Denoeux, 1996). Hinton, Osindero, and Teh (2006) recently introduced a greedy layer-wise unsupervised learning algorithm for Deep Belief Networks (DBN), a generative model with many layers of hidden causal variables. The training strategy for such networks may hold great promise as a principle to help address the problem of training deep networks. Upper layers of a DBN are supposed to represent more "abstract" concepts that explain the input observation x, whereas lower layers extract "low-level features" from x. The network learns simpler concepts first, and builds on them to learn more abstract ones. This strategy, studied in detail here, has not yet been much exploited in machine learning. We hypothesize that three aspects of this strategy are particularly important: first, pre-training one layer at a time in a greedy way; second, using unsupervised learning at each layer in order to preserve information from the input; and finally, fine-tuning the whole network with respect to the ultimate criterion of interest.

We first extend DBNs and their component layers, Restricted Boltzmann Machines (RBM), so that they can more naturally handle continuous-valued inputs. Second, we perform experiments to better understand the advantage brought by the greedy layer-wise unsupervised learning.
The basic question to answer is whether or not this approach helps to solve a difficult optimization problem. In DBNs, RBMs are used as building blocks, but applying the same strategy with auto-encoders yielded similar results. Finally, we discuss a problem that occurs with the layer-wise greedy unsupervised procedure when the input distribution is not revealing enough of the conditional distribution of the target variable given the input variable, and we evaluate a simple and successful solution to this problem.

2 Deep Belief Nets
Let x be the input, and g^i the hidden variables at layer i, with joint distribution

P(x, g^1, g^2, …, g^ℓ) = P(x|g^1) P(g^1|g^2) ⋯ P(g^{ℓ−2}|g^{ℓ−1}) P(g^{ℓ−1}, g^ℓ),

where all the conditional layers P(g^i|g^{i+1}) are factorized conditional distributions for which computation of probability and sampling are easy. In Hinton et al. (2006) one considers the hidden layer g^i a binary random vector with n^i elements g^i_j:

P(g^i|g^{i+1}) = Π_{j=1}^{n^i} P(g^i_j | g^{i+1}),  with  P(g^i_j = 1 | g^{i+1}) = sigm(b^i_j + Σ_{k=1}^{n^{i+1}} W^i_{kj} g^{i+1}_k)   (1)

where sigm(t) = 1/(1 + e^{−t}), the b^i_j are the biases for unit j of layer i, and W^i is the weight matrix for layer i. If we denote g^0 = x, the generative model for the first layer P(x|g^1) also follows (1).

2.1 Restricted Boltzmann machines
The top-level prior P(g^{ℓ−1}, g^ℓ) is a Restricted Boltzmann Machine (RBM) between layer ℓ−1 and layer ℓ. To lighten notation, consider a generic RBM with input layer activations v (for visible units) and hidden layer activations h (for hidden units). It has the joint distribution P(v, h) = (1/Z) exp(h′Wv + b′v + c′h), where Z is the normalization constant for this distribution, b is the vector of biases for the visible units, c is the vector of biases for the hidden units, and W is the weight matrix for the layer. Minus the argument of the exponential is called the energy function,

energy(v, h) = −h′Wv − b′v − c′h.   (2)

We denote the RBM parameters together by θ = (W, b, c), and we denote Q(h|v) and P(v|h) the layer-to-layer conditional distributions associated with the above RBM joint distribution. These conditionals factorize like in (1) and give rise to P(v_k = 1|h) = sigm(b_k + Σ_j W_{jk} h_j) and Q(h_j = 1|v) = sigm(c_j + Σ_k W_{jk} v_k).

2.2 Gibbs Markov chain and log-likelihood gradient in an RBM
To obtain an estimator of the gradient of the log-likelihood of an RBM, we consider a Gibbs Markov chain on the (visible units, hidden units) pair of variables. Gibbs sampling from an RBM proceeds by sampling h given v, then v given h, etc. Denote v_t the t-th v sample from that chain, starting at t = 0 with v_0, the "input observation" for the RBM. Therefore, (v_k, h_k) for k → ∞ is a sample from the joint P(v, h). The log-likelihood of a value v_0 under the model of the RBM is

log P(v_0) = log Σ_h P(v_0, h) = log Σ_h e^{−energy(v_0, h)} − log Σ_{v,h} e^{−energy(v, h)}

and its gradient with respect to θ = (W, b, c) is

∂log P(v_0)/∂θ = −Σ_{h_0} Q(h_0|v_0) ∂energy(v_0, h_0)/∂θ + Σ_{v_k, h_k} P(v_k, h_k) ∂energy(v_k, h_k)/∂θ.

An unbiased sample of this gradient, for k → ∞, is

−∂energy(v_0, h_0)/∂θ + E[∂energy(v_k, h_k)/∂θ | v_k],

where h_0 is a sample from Q(h_0|v_0) and (v_k, h_k) is a sample of the Markov chain, and the expectation can be computed easily because P(h_k|v_k) factorizes. The idea of the Contrastive Divergence algorithm (Hinton, 2002) is to take k small (typically k = 1).
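As an illustration, a one-step Contrastive Divergence update for a binomial-binomial RBM can be written as below. This is our own minimal NumPy sketch of the estimator just derived, not the paper's Appendix pseudo-code verbatim; W is assumed to have one row per hidden unit:

    import numpy as np

    def sigm(t):
        return 1.0 / (1.0 + np.exp(-t))

    def rbm_update(x, eps, W, b, c, rng=np.random):
        """CD-1 for an RBM with energy(v,h) = -h'Wv - b'v - c'h.
        x: visible sample v0; eps: learning rate; b, c: visible/hidden biases."""
        q0 = sigm(c + W @ x)                    # Q(h = 1 | v0)
        h0 = (rng.rand(*q0.shape) < q0) * 1.0   # sample h0
        p1 = sigm(b + W.T @ h0)                 # P(v = 1 | h0)
        v1 = (rng.rand(*p1.shape) < p1) * 1.0   # sample v1
        q1 = sigm(c + W @ v1)                   # E[h | v1], closes the expectation
        W += eps * (np.outer(q0, x) - np.outer(q1, v1))  # positive minus negative phase
        b += eps * (x - v1)
        c += eps * (q0 - q1)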
A pseudo-code for Contrastive Divergence training (with k = 1) of an RBM with binomial input and hidden units is presented in the Appendix (Algorithm RBMupdate(x, ε, W, b, c)). This procedure is called repeatedly with v_0 = x sampled from the training distribution for the RBM. To decide when to stop, one may use a proxy for the training criterion, such as the reconstruction error −log P(v_1 = x | v_0 = x).

2.3 Greedy layer-wise training of a DBN
A greedy layer-wise training algorithm was proposed (Hinton et al., 2006) to train a DBN one layer at a time. One first trains an RBM that takes the empirical data as input and models it. Denote Q(g^1|g^0) the posterior over g^1 associated with that trained RBM (we recall that g^0 = x, with x the observed input). This gives rise to an "empirical" distribution p̂^1 over the first layer g^1, when g^0 is sampled from the data empirical distribution p̂: we have

p̂^1(g^1) = Σ_{g^0} p̂(g^0) Q(g^1|g^0).

Note that a 1-level DBN is an RBM. The basic idea of the greedy layer-wise strategy is that after training the top-level RBM of an ℓ-level DBN, one changes the interpretation of the RBM parameters to insert them in an (ℓ+1)-level DBN: the distribution P(g^{ℓ−1}|g^ℓ) from the RBM associated with layers ℓ−1 and ℓ is kept as part of the DBN generative model. In the RBM between layers ℓ−1 and ℓ, P(g^ℓ) is defined in terms of the parameters of that RBM, whereas in the DBN P(g^ℓ) is defined in terms of the parameters of the upper layers. Consequently, Q(g^ℓ|g^{ℓ−1}) of the RBM does not correspond to P(g^ℓ|g^{ℓ−1}) in the DBN, except when that RBM is the top layer of the DBN. However, we use Q(g^ℓ|g^{ℓ−1}) of the RBM as an approximation of the posterior P(g^ℓ|g^{ℓ−1}) for the DBN. The samples of g^{ℓ−1}, with empirical distribution p̂^{ℓ−1}, are converted stochastically into samples of g^ℓ with distribution p̂^ℓ through p̂^ℓ(g^ℓ) = Σ_{g^{ℓ−1}} p̂^{ℓ−1}(g^{ℓ−1}) Q(g^ℓ|g^{ℓ−1}). Although p̂^ℓ cannot be represented explicitly, it is easy to sample unbiasedly from it: pick a training example and propagate it stochastically through the Q(g^i|g^{i−1}) at each level. As a nice side benefit, one obtains an approximation of the posterior for all the hidden variables in the DBN, at all levels, given an input g^0 = x. Mean-field propagation (see below) gives a fast deterministic approximation of the posteriors P(g^ℓ|x). Note that if we consider all the layers of a DBN from level i to the top, we have a smaller DBN, which generates the marginal distribution P(g^i) for the complete DBN. The motivation for the greedy procedure is that a partial DBN with ℓ−i levels starting above level i may provide a better model for P(g^i) than does the RBM initially associated with level i itself. The greedy procedure is justified using a variational bound (Hinton et al., 2006): as a consequence of that bound, when inserting an additional layer that is initialized appropriately and has enough units, one can guarantee that initial improvements on the training criterion for the next layer (fitting p̂^ℓ) will yield improvement on the training criterion for the previous layer (likelihood with respect to p̂^{ℓ−1}). The greedy layer-wise training algorithm for DBNs is quite simple, as illustrated by the pseudo-code in Algorithm TrainUnsupervisedDBN of the Appendix.
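Reusing rbm_update and sigm from the sketch above, the greedy procedure itself is a short loop. This is our own sketch under our own naming (not the Appendix pseudo-code); `layers` is assumed to be a list of (W, b, c) triples, trained bottom-up on stochastically propagated samples:

    def train_unsupervised_dbn(data, layers, eps, n_updates, rng=np.random):
        """Greedy layer-wise pre-training: level i models samples propagated
        stochastically through the already-trained levels below it."""
        for i, (W, b, c) in enumerate(layers):
            for _ in range(n_updates):
                v = data[rng.randint(len(data))]
                for Wl, bl, cl in layers[:i]:              # lower, already-trained levels
                    q = sigm(cl + Wl @ v)
                    v = (rng.rand(*q.shape) < q) * 1.0     # sample from Q(g^l | g^{l-1})
                rbm_update(v, eps, W, b, c, rng)           # CD-1 step on level i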
2.4 Supervised fine-tuning
As a last training stage, it is possible to fine-tune the parameters of all the layers together. For example, Hinton et al. (2006) propose to use the wake-sleep algorithm (Hinton, Dayan, Frey, & Neal, 1995) to continue unsupervised training. Hinton et al. (2006) also propose to optionally use a mean-field approximation of the posteriors P(g^i|g^0), by replacing the samples g^{i−1}_j at level i−1 by their bit-wise mean-field expected values μ^{i−1}_j, with μ^i = sigm(b^i + W^i μ^{i−1}). According to these propagation rules, the whole network now deterministically computes internal representations as functions of the network input g^0 = x. After unsupervised pre-training of the layers of a DBN following Algorithm TrainUnsupervisedDBN (see Appendix), the whole network can be further optimized by gradient descent with respect to any deterministically computable training criterion that depends on these representations. For example, this can be used (Hinton & Salakhutdinov, 2006) to fine-tune a very deep auto-encoder, minimizing a reconstruction error. It is also possible to use this as the initialization of all except the last layer of a traditional multi-layer neural network, using gradient descent to fine-tune the whole network with respect to a supervised training criterion. Algorithm DBNSupervisedFineTuning in the Appendix contains pseudo-code for supervised fine-tuning, as part of the global supervised learning algorithm TrainSupervisedDBN. Note that better results were obtained when using a 20-fold larger learning rate for the supervised-criterion updates (here, squared error or cross-entropy) than for the contrastive divergence updates. A sketch of the mean-field pass is given below.
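A minimal sketch of the deterministic mean-field propagation just described (reusing sigm from the earlier sketch; `layers` holds (W, b, c) per level, with the hidden bias c playing the role of b^i in the text):

    def mean_field(x, layers):
        """Deterministic pass: mu^i = sigm(b^i + W^i mu^{i-1}), with mu^0 = x.
        Returns the top-level mean-field representation of input x."""
        mu = x
        for W, b, c in layers:
            mu = sigm(c + W @ mu)
        return mu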
3 Extension to continuous-valued inputs
With the binary units introduced for RBMs and DBNs in Hinton et al. (2006), one can "cheat" and handle continuous-valued inputs by scaling them to the (0,1) interval and considering each input continuous value as the probability for a binary random variable to take the value 1. This has worked well for pixel gray levels, but it may be inappropriate for other kinds of input variables. Previous work on continuous-valued inputs in RBMs includes (Chen & Murray, 2003), in which noise is added to sigmoidal units, and the RBM forms a special form of Diffusion Network (Movellan, Mineiro, & Williams, 2002). We concentrate here on simple extensions of the RBM framework in which only the energy function and the allowed range of values are changed.

Linear energy: exponential or truncated exponential. Consider a unit with value y in an RBM, connected to units z of the other layer. p(y|z) can be obtained from the terms in the exponential that contain y, which can be grouped into y a(z) for linear energy functions as in (2), where a(z) = b + w′z, with b the bias of unit y and w the vector of weights connecting unit y to units z. If we allow y to take any value in an interval I, the conditional density of y becomes

p(y|z) = exp(y a(z)) 1_{y∈I} / ∫ exp(v a(z)) 1_{v∈I} dv.

When I = [0, ∞), this is an exponential density with parameter a(z), and the normalizing integral equals −1/a(z), but it only exists if a(z) < 0 for all z. Computing the density, computing the expected value (= −1/a(z)) and sampling are then all easy. Alternatively, if I is a closed interval (as in many applications of interest), or if we would like to use such a unit as a hidden unit with a non-linear expected value, the above density is a truncated exponential. For simplicity we consider the case I = [0, 1] here, for which the normalizing integral, which always exists, is (exp(a(z)) − 1)/a(z). The conditional expectation of y given z is interesting because it has a sigmoidal-like, saturating and monotone non-linearity:

E[y|z] = 1/(1 − exp(−a(z))) − 1/a(z).

A sample from the truncated exponential is easily obtained from a uniform sample U, using the inverse cumulative distribution function F^{−1} of the conditional density y|z:

F^{−1}(U) = log(1 − U (1 − exp(a(z)))) / a(z).

In both the truncated and non-truncated cases, the Contrastive Divergence updates have the same form as for binomial units (input value times output value), since the updates depend only on the derivative of the energy with respect to the parameters. Only sampling is changed, according to the unit's conditional density.

Quadratic energy: Gaussian units. To obtain Gaussian-distributed units, one adds quadratic terms to the energy. Adding Σ_i d_i^2 y_i^2 gives rise to a diagonal covariance matrix between units of the same layer, where y_i is the continuous value of a Gaussian unit and d_i^2 is a positive parameter equal to the inverse of the variance of y_i. In this case the variance is unconditional, whereas the mean depends on the inputs of the unit: for a unit y with inputs z and inverse variance d^2, E[y|z] = a(z)/(2d^2). The Contrastive Divergence updates are easily obtained by computing the derivative of the energy with respect to the parameters. For the parameters in the linear terms of the energy function (e.g., b and w above), the derivatives have the same form (input unit value times output unit value) as for binomial units. For a quadratic parameter d > 0, the derivative is simply 2dy^2. Gaussian units were previously used as hidden units of an RBM (with binomial or multinomial inputs) applied to an information retrieval task (Welling, Rosen-Zvi, & Hinton, 2005). Our interest here is to use them for continuous-valued inputs. A sketch of the truncated-exponential sampler and mean described above follows.
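The inverse-CDF sampler and the conditional mean of the truncated exponential unit, transcribed directly from the formulas above (our own NumPy sketch; both expressions assume a(z) ≠ 0, near which they need a series expansion for numerical stability):

    import numpy as np

    def sample_truncated_exponential(a, rng=np.random):
        """Sample y in [0,1] with density proportional to exp(y*a), via
        F^{-1}(U) = log(1 - U*(1 - exp(a))) / a."""
        U = rng.rand(*np.shape(a))
        return np.log(1.0 - U * (1.0 - np.exp(a))) / a

    def truncated_exponential_mean(a):
        """E[y|z] = 1/(1 - exp(-a)) - 1/a: saturating, sigmoid-like in a."""
        return 1.0 / (1.0 - np.exp(-a)) - 1.0 / a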
Using continuous-valued hidden units. Although we have introduced RBM units with continuous values to better deal with the representation of input variables, they could also be considered for use in the hidden layers, replacing or complementing the binomial units used in the past. However, Gaussian and exponential hidden units have a weakness: mean-field propagation through a Gaussian unit gives rise to a purely linear transformation. Hence if we had only such linear hidden units in a multi-layered network, the mean-field propagation function mapping inputs to internal representations would be completely linear. In addition, in a DBN containing only Gaussian units, one could only model Gaussian data. On the other hand, combining Gaussian units with other types of units could be interesting. In contrast with Gaussian or exponential units, the conditional expectation of truncated exponential units is non-linear, and in fact involves a sigmoidal form of non-linearity applied to the weighted sum of their inputs.

Experiment 1. This experiment was performed on two data sets: the UCI repository Abalone data set (split into 2177 training examples, 1000 validation examples, and 1000 test examples) and a financial data set. The latter has real-valued input variables representing averages of returns and squared returns, for which the binomial approximation would seem inappropriate. The target variable is the next month's return of a Cotton futures contract. There are 13 continuous input variables that are averages of returns over different time windows up to 504 days, with 3135 training examples, 1000 validation examples, and 1000 test examples. The data set is publicly available at http://www.iro.umontreal.ca/~lisa/fin_data/. In Table 1 (rows 3 and 5), we show the improvements brought by DBNs with Gaussian inputs over DBNs with binomial inputs (with binomial hidden units in both cases). The networks have two hidden layers. All hyper-parameters are selected based on validation set performance.

[Figure 1: Training classification error vs. training iteration on the Cotton price task, for a deep network with no pre-training, a DBN with unsupervised pre-training, and a DBN with partially supervised pre-training. The figure illustrates the optimization difficulty of deep networks and the advantage of partially supervised training.]

Table 1: Mean squared prediction error on the Abalone task and classification error on the Cotton task, showing the improvement brought by Gaussian units.
                                            Abalone                Cotton
                                      train  valid  test    train  valid  test
Deep network with no pre-training      4.23   4.43   4.20   45.2%  42.9%  43.0%
Logistic regression                    n/a    n/a    n/a    44.0%  42.6%  45.0%
DBN, binomial inputs, unsupervised     4.59   4.60   4.47   44.0%  42.6%  45.0%
DBN, binomial inputs, partially sup.   4.39   4.45   4.28   43.3%  41.1%  43.7%
DBN, Gaussian inputs, unsupervised     4.25   4.42   4.19   35.7%  34.9%  35.8%
DBN, Gaussian inputs, partially sup.   4.23   4.43   4.18   27.5%  28.4%  31.4%

4 Understanding why the layer-wise strategy works
A reasonable explanation for the apparent success of the layer-wise training strategy for DBNs is that unsupervised pre-training helps to mitigate the difficult optimization problem of deep networks by better initializing the weights of all layers. Here we present experiments that support and clarify this.

Training each layer as an auto-encoder. We want to verify that the layer-wise greedy unsupervised pre-training principle can be applied when using an auto-encoder instead of the RBM as the layer building block. Let x be the input vector with x_i ∈ (0, 1). For a layer with weight matrix W, hidden-bias column vector b and input-bias column vector c, the reconstruction probability for bit i is p_i(x), with the vector of probabilities p(x) = sigm(c + W sigm(b + W′x)). The training criterion for the layer is the average of the negative log-likelihoods for predicting x from p(x). For example, if x is interpreted either as a sequence of bits or as a sequence of bit probabilities, we minimize the reconstruction cross-entropy

R = −Σ_i [x_i log p_i(x) + (1 − x_i) log(1 − p_i(x))].

We report several experimental results using this training criterion for each layer, in comparison to the contrastive divergence algorithm for an RBM. Pseudo-code for a deep network obtained by training each layer as an auto-encoder is given in the Appendix (Algorithm TrainGreedyAutoEncodingDeepNet); a gradient sketch appears below. One question that arises with auto-encoders, in comparison with RBMs, is whether they will fail to learn a useful representation when the number of units is not strictly decreasing from one layer to the next (since such networks could theoretically learn to be the identity and perfectly minimize the reconstruction error). However, our experiments suggest that networks with non-decreasing layer sizes generalize well. This might be due to weight decay and stochastic gradient descent preventing large weights: optimization falls into a local minimum which corresponds to a good transformation of the input (one that provides a good initialization for supervised training of the whole net).
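One stochastic-gradient step on the per-layer reconstruction cross-entropy R, with the tied weights p(x) = sigm(c + W sigm(b + W′x)) as above, can be sketched as follows (our own derivation of the gradient under this parameterization; W is (n_visible, n_hidden)):

    import numpy as np

    def sigm(t):
        return 1.0 / (1.0 + np.exp(-t))

    def autoencoder_step(x, W, b, c, eps):
        """SGD step on R = -sum_i [x_i log p_i + (1-x_i) log(1-p_i)]."""
        h = sigm(b + W.T @ x)                  # code layer
        p = sigm(c + W @ h)                    # reconstruction probabilities
        dp = p - x                             # grad of R wrt pre-activation of p
        dh = (W.T @ dp) * h * (1.0 - h)        # back-propagated through the code
        W -= eps * (np.outer(dp, h) + np.outer(x, dh))  # both tied-weight paths
        c -= eps * dp
        b -= eps * dh
        return float(-(x * np.log(p) + (1 - x) * np.log(1 - p)).sum())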
Greedy layer-wise supervised training. A reasonable question is whether the fact that each layer is trained in an unsupervised way is critical. An alternative algorithm is supervised, greedy and layer-wise: train each new hidden layer as the hidden layer of a one-hidden-layer supervised neural network NN (taking as input the output of the previously trained layers), then throw away the output layer of NN and use the parameters of its hidden layer as the pre-training initialization of the new top layer of the deep net, to map the output of the previous layers to a hopefully better representation. Pseudo-code for a deep network obtained by training each layer as the hidden layer of a supervised one-hidden-layer neural network is given in the Appendix (Algorithm TrainGreedySupervisedDeepNet).

Experiment 2. We compared the performance on the MNIST digit classification task obtained with five algorithms: (a) DBN, (b) deep network whose layers are initialized as auto-encoders, (c) the above-described supervised greedy layer-wise algorithm to pre-train each layer, (d) deep network with no pre-training (random initialization), (e) shallow network (one hidden layer) with no pre-training. The final fine-tuning is done by adding a logistic regression layer on top of the network and training the whole network by stochastic gradient descent on the cross-entropy with respect to the target classification. The networks have the following architecture: 784 inputs, 10 outputs, 3 hidden layers with variable numbers of hidden units, selected by validation set performance (typically selected layer sizes are between 500 and 1000). The shallow network has a single hidden layer. An L2 weight-decay hyper-parameter is also optimized. The DBN was slower to train and fewer experiments were performed, so longer training and more appropriately chosen sizes of layers and learning rates could yield better results (Hinton 2006, unpublished, reports 1.15% error on the MNIST test set).

Table 2: Classification error on the MNIST training, validation, and test sets, with the best hyper-parameters according to validation error, with and without pre-training, using purely supervised or purely unsupervised pre-training. In Experiment 3, the size of the top hidden layer was set to 20. On MNIST, differences of more than .1% are statistically significant.
                                      Experiment 2           Experiment 3
                                    train  valid  test     train  valid  test
DBN, unsupervised pre-training       0%    1.2%   1.2%      0%    1.5%   1.5%
Deep net, auto-associator pre-tr.    0%    1.4%   1.4%      0%    1.4%   1.6%
Deep net, supervised pre-training    0%    1.7%   2.0%      0%    1.8%   1.9%
Deep net, no pre-training          .004%   2.1%   2.4%     .59%   2.1%   2.2%
Shallow net, no pre-training       .004%   1.8%   1.9%     3.6%   4.7%   5.0%

The results in Table 2 suggest that the auto-encoding criterion can yield performance comparable to the DBN when the layers are finally tuned in a supervised fashion. They also clearly show that greedy unsupervised layer-wise pre-training gives much better results than the standard way to train a deep network (with no greedy pre-training) or a shallow network, and that, without pre-training, deep networks tend to perform worse than shallow networks.
The results also suggest that unsupervised greedy layer-wise pre-training can perform significantly better than purely supervised greedy layer-wise pre-training. A possible explanation is that the greedy supervised procedure is too greedy: in the learned hidden-unit representation it may discard some information about the target, information that cannot be captured easily by a one-hidden-layer neural network but could be captured by composing more hidden layers.

Experiment 3. However, there is something troubling in the Experiment 2 results (Table 2): all the networks, even those without greedy layer-wise pre-training, perform almost perfectly on the training set, which would appear to contradict the hypothesis that the main effect of the layer-wise greedy strategy is to help the optimization (with poor optimization one would expect poor training error). A possible explanation, coherent with our initial hypothesis and with the above results, is the following. Without pre-training, the lower layers are initialized poorly, but the top two layers can still learn the training set almost perfectly, because the output layer and the last hidden layer form a standard shallow but fat neural network. Consider the top two layers of the deep network with pre-training: they presumably take as input a better representation, one that allows for better generalization. Instead, the network without pre-training sees a "random" transformation of the input, one that preserves enough information about the input to fit the training set, but that does not help to generalize. To test this hypothesis, we performed a second series of experiments in which we constrained the top hidden layer to be small (20 hidden units). The Experiment 3 results (Table 2) clearly confirm the hypothesis. With no pre-training, training error degrades significantly when there are only 20 hidden units in the top hidden layer. In addition, the results obtained without pre-training were found to have extremely large variance, indicating high sensitivity to initial conditions. Overall, the results in the tables and in Figure 1 are consistent with the hypothesis that the greedy layer-wise procedure essentially helps to better optimize deep networks, probably by initializing the hidden layers so that they represent more meaningful representations of the input, which also yields better generalization.

Continuous training of all layers of a DBN. With the layer-wise training algorithm for DBNs (TrainUnsupervisedDBN in the Appendix), one element that we would like to dispense with is having to decide the number of training iterations for each layer. It would be good if we did not have to explicitly add layers one at a time, i.e., if we could train all layers simultaneously, while keeping the "greedy" idea that each layer is pre-trained to model its input, ignoring the effect of higher layers. To achieve this, it is sufficient to insert a line in TrainUnsupervisedDBN so that RBMupdate is called on all the layers and the stochastic hidden values are propagated all the way up. Experiments with this variant demonstrated that it works at least as well as the original algorithm. The advantage is that we can now use a single stopping criterion (for the whole network). Computation time is slightly greater, since we do more computations initially (on the upper layers), which might be wasted (before the lower layers converge to a decent representation), but time is saved on optimizing hyper-parameters. This variant may be more appealing for on-line training on very large data sets, where one would never cycle back on the training data. A sketch of the variant is given below.
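Reusing rbm_update and sigm from the earlier sketches, one update of the simultaneous variant could look as follows (our own interpretation of "insert a line in TrainUnsupervisedDBN", not the paper's code):

    def continuous_dbn_step(x, layers, eps, rng=np.random):
        """Call the CD update on every level at once, feeding each level a
        stochastic sample propagated up from the level below."""
        v = x
        for W, b, c in layers:
            rbm_update(v, eps, W, b, c, rng)
            q = sigm(c + W @ v)
            v = (rng.rand(*q.shape) < q) * 1.0   # stochastic hidden values fed upward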
This variant may be more appealing for on-line training on very large data-sets, where one would never cycle back on the training data.

5 Dealing with uncooperative input distributions

In classification problems such as MNIST where classes are well separated, the structure of the input distribution p(x) naturally contains much information about the target variable y. Imagine a supervised learning task in which the input distribution is mostly unrelated with y. In regression problems, which we are interested in studying here, this problem could be much more prevalent. For example, imagine a task in which x ~ p(x) and the target y = f(x) + noise (e.g., p is Gaussian and f is the sine function) with no particular relation between p and f. In such settings we cannot expect the unsupervised greedy layer-wise pre-training procedure to help in training deep supervised networks. To deal with such uncooperative input distributions, we propose to train each layer with a mixed training criterion that combines the unsupervised objective (modeling or reconstructing the input) and a supervised objective (helping to predict the target). A simple algorithm thus adds the updates on the hidden layer weights from the unsupervised algorithm (Contrastive Divergence or reconstruction error gradient) to the updates from the gradient on a supervised prediction error, using a temporary output layer, as with the greedy layer-wise supervised training algorithm. In our experiments it appeared sufficient to perform that partial supervision with the first layer only, since once the predictive information about the target is "forced" into the representation of the first layer, it tends to stay in the upper layers. The results in Figure 1 and Table 1 clearly show the advantage of this partially supervised greedy training algorithm, in the case of the financial dataset. Pseudo-code for partially supervising the first (or a later) layer is given in Algorithm TrainPartiallySupervisedLayer (in the Appendix).
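As a rough sketch of the mixed criterion (a stand-in for, not a transcription of, Algorithm TrainPartiallySupervisedLayer), the following numpy fragment adds a CD-1 update for the first-layer weights to the gradient of a squared prediction error computed through a temporary linear output layer; the squared-error choice, the learning rate and all names are illustrative assumptions.

    import numpy as np

    def sigm(a):
        return 1.0 / (1.0 + np.exp(-a))

    def partially_supervised_update(x, y, W, b, c, V, d, lr=0.01, rng=np.random):
        # W: hidden x visible weights; b, c: visible and hidden biases;
        # (V, d): temporary output layer mapping hidden units to target y.
        # Unsupervised part: one step of contrastive divergence (CD-1).
        h0 = sigm(c + W @ x)
        h_samp = (rng.random(h0.shape) < h0).astype(float)
        v1 = sigm(b + W.T @ h_samp)               # mean-field reconstruction
        h1 = sigm(c + W @ v1)
        dW_unsup = np.outer(h0, x) - np.outer(h1, v1)
        # Supervised part: squared error of the temporary output layer.
        err = (V @ h0 + d) - y
        g_h = (V.T @ err) * h0 * (1.0 - h0)
        W += lr * (dW_unsup - np.outer(g_h, x))   # combined update
        b += lr * (x - v1)
        c += lr * ((h0 - h1) - g_h)
        V -= lr * np.outer(err, h0)
        d -= lr * err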
6 Conclusion

This paper is motivated by the need to develop good training algorithms for deep architectures, since these can be much more representationally efficient than shallow ones such as SVMs and one-hidden-layer neural nets. We study Deep Belief Networks applied to supervised learning tasks, and the principles that could explain the good performance they have yielded. The three principal contributions of this paper are the following. First we extended RBMs and DBNs in new ways to naturally handle continuous-valued inputs, showing examples where much better predictive models can thus be obtained. Second, we performed experiments which support the hypothesis that the greedy unsupervised layer-wise training strategy helps to optimize deep networks, but suggest that better generalization is also obtained because this strategy initializes upper layers with better representations of relevant high-level abstractions. These experiments suggest a general principle that can be applied beyond DBNs, and we obtained similar results when each layer is initialized as an auto-associator instead of as an RBM. Finally, although we found that it is important to have an unsupervised component to train each layer (a fully supervised greedy layer-wise strategy performed worse), we studied supervised tasks in which the structure of the input distribution is not revealing enough of the conditional density of y given x. In that case the DBN unsupervised greedy layer-wise strategy appears inadequate, and we proposed a simple fix based on partial supervision that can yield significant improvements.

References

Allender, E. (1996). Circuit complexity before the dawn of the new millennium. In 16th Annual Conference on Foundations of Software Technology and Theoretical Computer Science, pp. 1-18. Lecture Notes in Computer Science 1180.
Bengio, Y., Delalleau, O., & Le Roux, N. (2006). The curse of highly variable functions for local kernel machines. In Weiss, Y., Schölkopf, B., & Platt, J. (Eds.), Advances in Neural Information Processing Systems 18, pp. 107-114. MIT Press, Cambridge, MA.
Bengio, Y., & Le Cun, Y. (2007). Scaling learning algorithms towards AI. In Bottou, L., Chapelle, O., DeCoste, D., & Weston, J. (Eds.), Large Scale Kernel Machines. MIT Press.
Bengio, Y., Le Roux, N., Vincent, P., Delalleau, O., & Marcotte, P. (2006). Convex neural networks. In Weiss, Y., Schölkopf, B., & Platt, J. (Eds.), Advances in Neural Information Processing Systems 18, pp. 123-130. MIT Press, Cambridge, MA.
Chen, H., & Murray, A. (2003). A continuous restricted Boltzmann machine with an implementable training algorithm. IEE Proceedings of Vision, Image and Signal Processing, 150(3), 153-158.
Fahlman, S., & Lebiere, C. (1990). The cascade-correlation learning architecture. In Touretzky, D. (Ed.), Advances in Neural Information Processing Systems 2, pp. 524-532, Denver, CO. Morgan Kaufmann, San Mateo.
Hastad, J. T. (1987). Computational Limitations for Small Depth Circuits. MIT Press, Cambridge, MA.
Hinton, G. E., Osindero, S., & Teh, Y. (2006). A fast learning algorithm for deep belief nets. Neural Computation, 18, 1527-1554.
Hinton, G. (2002). Training products of experts by minimizing contrastive divergence. Neural Computation, 14(8), 1771-1800.
Hinton, G., Dayan, P., Frey, B., & Neal, R. (1995). The wake-sleep algorithm for unsupervised neural networks. Science, 268, 1158-1161.
Hinton, G., & Salakhutdinov, R. (2006). Reducing the dimensionality of data with neural networks. Science, 313(5786), 504-507.
Lengellé, R., & Denoeux, T. (1996). Training MLPs layer by layer using an objective function for internal representations. Neural Networks, 9, 83-97.
Movellan, J., Mineiro, P., & Williams, R. (2002). A Monte-Carlo EM approach for partially observable diffusion processes: theory and applications to neural networks. Neural Computation, 14, 1501-1544.
Tesauro, G. (1992). Practical issues in temporal difference learning. Machine Learning, 8, 257-277.
Utgoff, P., & Stracuzzi, D. (2002). Many-layered learning. Neural Computation, 14, 2497-2539.
Welling, M., Rosen-Zvi, M., & Hinton, G. E. (2005). Exponential family harmoniums with an application to information retrieval. In Advances in Neural Information Processing Systems, Vol. 17, Cambridge, MA. MIT Press.
Doubly Stochastic Normalization for Spectral Clustering

Ron Zass and Amnon Shashua*

* School of Engineering and Computer Science, Hebrew University of Jerusalem, Jerusalem 91904, Israel.

Abstract

In this paper we focus on the issue of normalization of the affinity matrix in spectral clustering. We show that the difference between N-cuts and Ratio-cuts is in the error measure being used (relative-entropy versus L1 norm) in finding the closest doubly-stochastic matrix to the input affinity matrix. We then develop a scheme for finding the optimal, under Frobenius norm, doubly-stochastic approximation using Von Neumann's successive projections lemma. The new normalization scheme is simple and efficient and provides superior clustering performance over many of the standardized tests.

1 Introduction

The problem of partitioning data points into a number of distinct sets, known as the clustering problem, is central in data analysis and machine learning. Typically, a graph-theoretic approach to clustering starts with a measure of pairwise affinity Kij measuring the degree of similarity between points xi, xj, followed by a normalization step, followed by the extraction of the leading eigenvectors which form an embedded coordinate system from which the partitioning is readily available. In this domain there are three principal dimensions which make a successful clustering: (i) the affinity measure, (ii) the normalization of the affinity matrix, and (iii) the particular clustering algorithm. Common practice indicates that the former two are largely responsible for the performance whereas the particulars of the clustering process itself have a relatively smaller impact on the performance. In this paper we focus on the normalization of the affinity matrix. We first show that the existing popular methods Ratio-cut (cf. [1]) and Normalized-cut [7] employ an implicit normalization which corresponds to L1 and relative-entropy based approximations of the affinity matrix K to a doubly stochastic matrix. We then introduce a Frobenius norm (L2) normalization algorithm based on a simple successive projections scheme (based on Von Neumann's [5] successive projection lemma for finding the closest intersection of sub-spaces) which finds the closest doubly stochastic matrix under the least-squares error norm. We demonstrate the impact of the various normalization schemes on a large variety of data sets and show that the new normalization algorithm often induces a significant performance boost in standardized tests. Taken together, we introduce a new tuning dimension to clustering algorithms allowing better control of the clustering performance.

2 The Role of Doubly Stochastic Normalization

It has been shown in the past [11, 4] that K-means and spectral clustering are intimately related, where in particular [11] shows that the popular affinity matrix normalization such as employed by Normalized-cuts is related to a doubly-stochastic constraint induced by K-means. Since this background is a key to our work we will briefly introduce the relevant arguments and derivations.

Let x_i \in R^N, i = 1, ..., n, be points arranged in k (mutually exclusive) clusters \psi_1, ..., \psi_k with n_j points in cluster \psi_j and \sum_j n_j = n. Let K_{ij} = \kappa(x_i, x_j) be a symmetric positive-semi-definite affinity function, e.g. K_{ij} = \exp(-\|x_i - x_j\|^2 / \sigma^2). Then, the problem of finding the cluster assignments by maximizing

\max_{\psi_1,...,\psi_k} \sum_{j=1}^{k} \frac{1}{n_j} \sum_{(r,s) \in \psi_j} K_{r,s},    (1)

is equivalent to minimizing the "kernel K-means"
problem:

\min_{c_1,...,c_k; \psi_1,...,\psi_k} \sum_{j=1}^{k} \sum_{i \in \psi_j} \| \phi(x_i) - c_j \|^2,

where \phi(x_i) is a mapping associated with the kernel \kappa(x_i, x_j) = \phi(x_i)^T \phi(x_j) and c_j = (1/n_j) \sum_{i \in \psi_j} \phi(x_i) are the class centers. After some algebraic manipulations it can be shown that the optimization setup of eqn. 1 is equivalent to the matrix form:

\max_G \mathrm{tr}(G^T K G)   s.t.  G \ge 0,  G G^T 1 = 1,  G^T G = I    (2)

where G is the desired assignment matrix with G_{ij} = 1/\sqrt{n_j} if i \in \psi_j and zero otherwise, and 1 is a column vector of ones. Note that the feasible set of matrices satisfying the constraints G \ge 0, G G^T 1 = 1, G^T G = I are of this form for some partitioning \psi_1, ..., \psi_k. Note also that the matrix F = G G^T must be doubly stochastic (F is non-negative, symmetric and F1 = 1). Taken together, we see that the desire is to find a doubly-stochastic matrix F as close as possible to the input matrix K (in the sense that \sum_{ij} F_{ij} K_{ij} is maximized over all feasible F), such that the symmetric decomposition F = G G^T satisfies non-negativity (G \ge 0) and orthonormality constraints (G^T G = I).

To see the connection with spectral clustering, and N-cuts in particular, relax the non-negativity condition of eqn. 2 and define a two-stage approach: find the closest doubly stochastic matrix K' to K and we are left with a spectral decomposition problem:

\max_G \mathrm{tr}(G^T K' G)   s.t.  G^T G = I    (3)

where G contains the leading k eigenvectors of K'. We will refer to the process of transforming K to K' as a normalization step. In N-cuts, the normalization takes the form K' = D^{-1/2} K D^{-1/2} where D = diag(K1) (a diagonal matrix containing the row sums of K) [9]. In [11] it was shown that repeating the N-cuts normalization, i.e., setting up the iterative step K^{(t+1)} = D^{-1/2} K^{(t)} D^{-1/2} where D = diag(K^{(t)} 1) and K^{(0)} = K, converges to a doubly-stochastic matrix (a symmetric version of the well known "iterative proportional fitting procedure" [8]).

The conclusion of this brief background is to highlight the motivation for seeking a doubly-stochastic approximation to the input affinity matrix as part of the clustering process. The open issue is under what error measure the approximation is to take place. It is not difficult to show that repeating the N-cuts normalization converges to the global optimum under the relative entropy measure (see Appendix). Noting that spectral clustering optimizes the Frobenius norm, it seems less natural to have the normalization step optimize a relative entropy error measure. We will derive in this paper the normalization under the L1 norm and under the Frobenius norm. The purpose of the L1 norm is to show that the resulting scheme is equivalent to a ratio-cut clustering, thereby not introducing a new clustering scheme but only contributing to the unification and better understanding of the differences between the N-cuts and Ratio-cuts schemes. The Frobenius norm normalization is a new formulation and is based on a simple iterative scheme. The resulting normalization provides a new clustering performance which proves quite practical and boosts the clustering performance in many of the standardized tests we conducted.
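As a small aside, the iterated N-cuts step just described is easy to sketch; the following numpy fragment (with an illustrative iteration count and no convergence test) applies K^{(t+1)} = D^{-1/2} K^{(t)} D^{-1/2} repeatedly, and the iterates approach a doubly stochastic matrix as shown in [11].

    import numpy as np

    def ncuts_step(K):
        # One N-cuts normalization: K' = D^{-1/2} K D^{-1/2}, D = diag(K 1).
        d = 1.0 / np.sqrt(K.sum(axis=1))
        return K * np.outer(d, d)

    def re_doubly_stochastic(K, n_iter=100):
        # Repeating the step converges to the relative-entropy-optimal
        # doubly stochastic approximation (see the Appendix).
        X = K.copy()
        for _ in range(n_iter):
            X = ncuts_step(X)
        return X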
3 Ratio-cut and the L1 Normalization

Given that our desire is to find a doubly stochastic approximation K' to the input affinity matrix K, we begin with the L1 norm approximation:

Proposition 1 (ratio-cut) The closest doubly stochastic matrix K' under the L1 error norm is K' = K - D + I, which leads to the ratio-cut clustering algorithm, i.e., the partitioning of the data set into two clusters is determined by the second smallest eigenvector of the Laplacian D - K, where D = diag(K1).

Proof: Let r = \min_F \|K - F\|_1 s.t. F1 = 1, F = F^T, where \|A\|_1 = \sum_{ij} |A_{ij}| is the L1 norm. Since \|K - F\|_1 \ge \|(K - F)1\|_1 for any matrix F, we must have:

r \ge \|(K - F)1\|_1 = \|D1 - 1\|_1 = \|D - I\|_1.

Let F = K - D + I; then \|K - (K - D + I)\|_1 = \|D - I\|_1. If v is an eigenvector of the Laplacian D - K with eigenvalue \lambda, then v is also an eigenvector of K' = K - D + I with eigenvalue 1 - \lambda, and since (D - K)1 = 0 the smallest eigenvector v = 1 of the Laplacian is the largest of K', and the second smallest eigenvector of the Laplacian (the ratio-cut result) corresponds to the second largest eigenvector of K'.

What we have so far is that the difference between N-cuts and Ratio-cuts as two popular spectral clustering schemes is that the former uses the relative entropy error measure in finding a doubly stochastic approximation to K and the latter uses the L1 norm error measure (which turns out to be the negative Laplacian with an added identity matrix).

4 Normalizing under Frobenius Norm

Given that spectral clustering optimizes the Frobenius norm, there is a strong argument in favor of finding a Frobenius-norm optimum doubly stochastic approximation to K. The optimization setup is that of a quadratic linear programming (QLP). However, the special circumstances of our problem render the solution to the QLP to consist of a very simple iterative computation, as described next. The closest doubly-stochastic matrix K' under Frobenius norm is the solution to the following QLP:

K' = \mathrm{argmin}_F \|K - F\|_F^2   s.t.  F \ge 0,  F1 = 1,  F = F^T,    (4)

where \|A\|_F^2 = \sum_{ij} A_{ij}^2 is the Frobenius norm. We define next two sub-problems, each with a closed-form solution, and have our QLP solution derived by alternating successively between the two until convergence. Consider the affine sub-problem:

P_1(X) = \mathrm{argmin}_F \|X - F\|_F^2   s.t.  F1 = 1,  F = F^T    (5)

and the convex sub-problem:

P_2(X) = \mathrm{argmin}_F \|X - F\|_F^2   s.t.  F \ge 0.    (6)

We will use the Von Neumann [5] successive projection lemma stating that P_1 P_2 P_1 P_2 ... P_1(K) will converge onto the projection of K onto the intersection of the affine and conic subspaces described above.(1) Therefore, what remains to show is that the projections P_1 and P_2 can be solved efficiently (and in closed form). We begin with the solution for P_1. The Lagrangian corresponding to eqn. 5 takes the form:

L(F, \lambda_1, \lambda_2) = \mathrm{trace}(F^T F - 2 X^T F) - \lambda_1^T (F1 - 1) - \lambda_2^T (F^T 1 - 1),

where from the condition F = F^T we have that \lambda_1 = \lambda_2 = \lambda. Setting the derivative with respect to F to zero yields:

F = X + \lambda 1^T + 1 \lambda^T.

(1) Actually, the Von Neumann lemma applies only to linear subspaces. The extension to convex subspaces involves a "deflection" component described by Dykstra [3]. However, it is possible to show that for this specific problem the deflection component is redundant and the Von Neumann lemma still applies.

[Figure 1: Running times of the normalization algorithms. (a) The Frobenius scheme compared to a general Matlab QLP solver (seconds vs. number of data points). (b) Running time of the three normalization schemes (L1, Frobenius, Relative Entropy).]

Isolating \lambda by multiplying by 1 on both sides gives \lambda = (nI + 11^T)^{-1} (I - X) 1. Noting that (nI + 11^T)^{-1} = (1/n)(I - 11^T/(2n)),
we obtain a closed form solution:

P_1(X) = X + \left( \frac{1}{n} I + \frac{1^T X 1}{n^2} I - \frac{1}{n} X \right) 11^T - \frac{1}{n} 11^T X.    (7)

The projection P_2(X) can also be described in a simple closed-form manner. Let I_+ be the set of indices corresponding to non-negative entries of X and I_- the set of negative entries of X. The criterion function \|X - F\|_F^2 becomes:

\|X - F\|_F^2 = \sum_{(i,j) \in I_+} (X_{ij} - F_{ij})^2 + \sum_{(i,j) \in I_-} (X_{ij} - F_{ij})^2.

Clearly, the minimum energy over F \ge 0 is obtained when F_{ij} = X_{ij} for all (i,j) \in I_+ and zero otherwise. Let th_{\ge 0}(X) stand for the operator that zeroes out all negative entries of X. Then, P_2(X) = th_{\ge 0}(X). To conclude, the global optimum of eqn. 4, which returns the closest doubly stochastic matrix K' in Frobenius error norm to the input affinity matrix K, is obtained by repeating the following steps:

Algorithm 1 (Frobenius-optimal Doubly Stochastic Normalization) finds the closest doubly stochastic approximation in Frobenius error norm to a given matrix K (global optimum of eqn. 4).
1. Let X^{(0)} = K.
2. Repeat t = 0, 1, 2, ...
   (a) X^{(t+1)} = P_1(X^{(t)})
   (b) If X^{(t+1)} \ge 0 then stop and set K' = X^{(t+1)}, otherwise set X^{(t+1)} = th_{\ge 0}(X^{(t+1)}).

This algorithm is simple and very efficient. Fig. 1a shows the running time of the algorithm compared to an off-the-shelf QLP Matlab solver over random matrices of increasing size: one can see that the run-time of our algorithm is a fraction of the standard QLP solver's and scales very well with dimension. In fact the standard QLP solver can handle only small problem sizes. In Fig. 1b we plot the running times of all three normalization schemes: the L1 norm (computing the Laplacian), the relative entropy (the iterative D^{-1/2} K D^{-1/2}), and the Frobenius scheme presented in this section. The Frobenius scheme is more efficient than the relative-entropy normalization (which is the least efficient among the three).

5 Experiments

For the clustering algorithm into k >= 2 clusters we experimented with the spectral algorithms described in [10] and [6]. The latter uses the N-cuts normalization D^{-1/2} K D^{-1/2} followed by K-means on the embedded coordinates (the leading k eigenvectors of the normalized affinity) and the former uses a certain discretization scheme to turn the k leading eigenvectors into an indicator matrix. Both algorithms produced similar results, thus we focused on [10] while replacing the normalization with the three schemes presented above. We refer by "Ncuts" to the original normalization D^{-1/2} K D^{-1/2}, by "RE" to the iterative application of the original normalization (which is proven to converge to a doubly stochastic matrix [11]), by "L1" to the L1 doubly-stochastic normalization (which we have shown is equivalent to Ratio-cuts) and by "Frobenius" to the iterative Frobenius scheme based on Von Neumann's lemma described in Section 4. We also included a "None" field which corresponds to no normalization being applied.

Dataset        Kernel   k   Size   Dim.            Lowest Error Rate
                                          L1    Frobenius    RE    NCuts   None
SPECTF heart   RBF      2    267    44   27.5     19.2      27.5   27.5    29.5
Pima           RBF      2    768     8   36.2     35.2      34.9   35.2    35.4
Wine           RBF      3    178    13   38.8     27.0      34.3   29.2    27.5
SpamBase       RBF      2   4601    57   36.1     30.3      37.7   31.8    30.4
BUPA           Poly     2    345     6   37.4     37.4      41.7   41.7    37.4
WDBC           Poly     2    569    30   18.8     11.1      37.4   37.4    18.8

Table 1: UCI datasets used, together with some characteristics and the best result achieved using the different methods.
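A minimal numpy sketch of Algorithm 1 above, alternating the affine projection of eqn. (7) with the thresholding projection P2; the iteration cap and tolerance are illustrative choices, not taken from the paper.

    import numpy as np

    def frobenius_doubly_stochastic(K, max_iter=1000, tol=1e-9):
        n = K.shape[0]
        X = K.copy()
        I, J = np.eye(n), np.ones((n, n))      # J = 1 1^T
        for _ in range(max_iter):
            s = X.sum()                        # scalar 1^T X 1
            X = X + (I / n + (s / n ** 2) * I - X / n) @ J - J @ X / n   # P1, eqn (7)
            if X.min() >= -tol:                # iterate already non-negative: stop
                break
            X = np.maximum(X, 0.0)             # P2 = th_{>=0}
        return X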
Dataset            Kernel   k   Size   #PC            Lowest Error Rate
                                             L1    Frobenius    RE    NCuts   None
Leukemia           Poly     2     72    5   27.8     16.7      36.1   38.9    30.6
Lung               Poly     2    181    5   15.5      9.9      16.6   15.5    15.5
Prostate           RBF      2    136    5   40.4     19.9      43.4   40.4    40.4
Prostate Outcome   RBF      2     21    5   28.6      4.8      23.8   28.6    28.6

Table 2: Cancer datasets used, together with some characteristics and the best result achieved using the different methods.

We begin with evaluating the clustering quality obtained under the different normalization methods taken over a number of well studied datasets from the UCI repository(2). The data-sets are listed in Table 1 together with some of their characteristics. The best performance (lowest error rate) is presented in boldface. With the first four datasets we used an RBF kernel \exp(-\|x_i - x_j\|^2 / \sigma^2) for the affinity matrix, while for the latter two a polynomial kernel (x_i^T x_j + 1)^d was used. The kernel parameters were calibrated independently for each method and for each dataset. In most cases the best performance was obtained with the Frobenius norm approximation, but as a general rule the type of normalization depends on the data. Also worth noting are instances, such as Wine and SpamBase, where RE or Ncuts actually worsen the performance. In that case the RE performance is worse than Ncuts, as the entire normalization direction is counter-productive. When RE outperforms None it also outperforms Ncuts (as can be expected, since Ncuts is the first step in the iterative scheme of RE). With regard to tuning the affinity measure, we show in Fig. 2 the clustering performance of each dataset under each normalization scheme under varying kernel settings (\sigma and d values). Generally, the performance of the Frobenius normalization behaves in a smoother manner and is more stable under varying kernel settings than the other normalization schemes.

Our next set of experiments was over some well studied cancer data-sets(3). The data-sets are listed in Table 2 together with some of their characteristics. The column "#PC" refers to the number of principal components used in a PCA pre-processing for the purpose of dimensionality reduction prior to clustering. Note that better results can be achieved when using a more sophisticated preprocessing, but since the focus is on the performance of the clustering algorithms and not on the datasets, we prefer not to use the optimal pre-processing and leave the data noisy.

(2) http://www.ics.uci.edu/~mlearn/MLRepository.html
(3) All cancer datasets can be found at http://sdmc.i2r.a-star.edu.sg/rp/

[Figure 2: Error rate vs. similarity measure, for the UCI datasets listed in Table 1 (panels: SPECTF, Pima, Wine, SpamBase, BUPA, WDBC). L1 in magenta +; Frobenius in blue o; Relative Entropy in black; Normalized-Cuts in red.]

[Figure 3: Error rate vs. similarity measure, for the cancer datasets listed in Table 2 (panels: AML/ALL Leukemia, Lung Cancer, Prostate, Prostate Outcome). L1 in magenta +; Frobenius in blue o; Relative Entropy in black; Normalized-Cuts in red.]
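For orientation, here is a hedged sketch of where the normalization fits in the overall pipeline of eqn. (3): normalize the affinity, embed with the leading k eigenvectors, then cluster the embedded points. It uses plain k-means on the embedding, as in [6], rather than the discretization of [10] actually used in the experiments; frobenius_doubly_stochastic refers to the sketch given earlier.

    import numpy as np
    from scipy.linalg import eigh
    from sklearn.cluster import KMeans

    def spectral_cluster(K, k, normalize):
        Kp = normalize(K)                 # doubly stochastic approximation K'
        _, V = eigh(Kp)                   # eigenvalues in ascending order
        G = V[:, -k:]                     # leading k eigenvectors of K'
        return KMeans(n_clusters=k, n_init=10).fit_predict(G)

    # e.g. labels = spectral_cluster(K, 3, frobenius_doubly_stochastic)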
The AML/ALL Leukemia dataset is a challenging benchmark common in the cancer community, where the task is to distinguish between two types of Leukemia. The original dataset consists of 7129 coordinates probed from 6817 human genes, and we perform PCA to obtain the 5 leading principal components prior to clustering using a polynomial kernel. The Lung Cancer (Brigham and Women's Hospital, Harvard Medical School) dataset is another common benchmark that describes 12533 genes sampled from 181 tissues. The task is to distinguish between malignant pleural mesothelioma (MPM) and adenocarcinoma (ADCA) of the lung. The Prostate dataset consists of 12,600 coordinates representing different genes, where the task is to identify prostate samples as tumor or non-tumor. We use the first five principal components as input for clustering using an RBF kernel. The Prostate Outcome dataset uses the same genes from another set of prostate samples, where the task is to predict the clinical outcome (relapse or non-relapse for at least four years). Finally, Fig. 3 shows the clustering performance of each dataset under each normalization scheme under varying kernel settings (\sigma and d values).

6 Summary

Normalization of the affinity matrix is a crucial element in the success of spectral clustering. The type of normalization performed by N-cuts is a step towards a doubly-stochastic approximation of the affinity matrix under relative entropy [11]. In this paper we have extended the normalization via doubly-stochasticity in three ways: (i) we have shown that the difference between N-Cuts and Ratio-cuts is in the error measure used to find the closest doubly stochastic approximation to the input affinity matrix, (ii) we have introduced a new normalization scheme based on Frobenius norm approximation. The scheme involves a succession of simple computations, is very simple to implement and is efficient computation-wise, and (iii) through extensive experimentation on standard data-sets we have shown the importance of normalization to the performance of spectral clustering. In the experiments we have conducted, the Frobenius normalization had the upper hand in most cases. We have also shown that the relative-entropy normalization is not always the right approach, as in some data-sets the performance worsened after the relative-entropy normalization but never worsened when the Frobenius normalization was applied.

References

[1] P. K. Chan, M. D. F. Schlag, and J. Y. Zien. Spectral k-way ratio-cut partitioning and clustering. IEEE Transactions on Computer-aided Design of Integrated Circuits and Systems, 13(9):1088-1096, 1994.
[2] I. Csiszar. I-divergence geometry of probability distributions and minimization problems. The Annals of Probability, 3(1):146-158, 1975.
[3] R. L. Dykstra. An algorithm for restricted least squares regression. J. of the Amer. Stat. Assoc., 78:837-842, 1983.
[4] I. S. Dhillon, Y. Guan, and B. Kulis. Kernel k-means, spectral clustering and normalized cuts. In International Conference on Knowledge Discovery and Data Mining (KDD), pages 551-556, Aug. 2004.
[5] J. Von Neumann. Functional Operators Vol. II. Princeton University Press, 1950.
[6] A. Y. Ng, M. I. Jordan, and Y. Weiss. On spectral clustering: Analysis and an algorithm. In Proceedings of the conference on Neural Information Processing Systems (NIPS), 2001.
[7] J. Shi and J. Malik. Normalized cuts and image segmentation.
IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(8), 2000.
[8] R. Sinkhorn and P. Knopp. Concerning non-negative matrices and doubly stochastic matrices. Pacific J. Math., 21:343-348, 1967.
[9] Y. Weiss. Segmentation using eigenvectors: a unifying view. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 1999.
[10] S. X. Yu and J. Shi. Multiclass spectral clustering. In Proceedings of the International Conference on Computer Vision, 2003.
[11] R. Zass and A. Shashua. A unifying approach to hard and probabilistic clustering. In Proceedings of the International Conference on Computer Vision, Beijing, China, Oct. 2005.

A Normalized Cuts and Relative Entropy Normalization

The following proposition is an extension (symmetric version) of the claim about the iterative proportional fitting procedure converging in relative entropy error measure [2]:

Proposition 2 The closest doubly-stochastic matrix F under the relative-entropy error measure to a given symmetric matrix K, i.e., which minimizes

\min_F RE(F \| K)   s.t.  F \ge 0,  F = F^T,  F1 = 1,  F^T 1 = 1,

has the form F = DKD for some (unique) diagonal matrix D.

Proof: The Lagrangian of the problem is:

L = \sum_{ij} f_{ij} \ln \frac{f_{ij}}{k_{ij}} + \sum_{ij} k_{ij} - \sum_{ij} f_{ij} - \sum_i \lambda_i \left( \sum_j f_{ij} - 1 \right) - \sum_j \mu_j \left( \sum_i f_{ij} - 1 \right).

The derivative with respect to f_{ij} is:

\frac{\partial L}{\partial f_{ij}} = \ln f_{ij} + 1 - \ln k_{ij} - 1 - \lambda_i - \mu_j = 0,

from which we obtain:

f_{ij} = e^{\lambda_i} e^{\mu_j} k_{ij}.

Let D_1 = diag(e^{\lambda_1}, ..., e^{\lambda_n}) and D_2 = diag(e^{\mu_1}, ..., e^{\mu_n}); then we have F = D_1 K D_2. Since F = F^T and K is symmetric we must have D_1 = D_2.
Nonlinear physically-based models for decoding motor-cortical population activity

Gregory Shakhnarovich   Sung-Phil Kim   Michael J. Black
Department of Computer Science, Brown University, Providence, RI 02912
{gregory,spkim,black}@cs.brown.edu

Abstract

Neural motor prostheses (NMPs) require the accurate decoding of motor cortical population activity for the control of an artificial motor system. Previous work on cortical decoding for NMPs has focused on the recovery of hand kinematics. Human NMPs, however, may require the control of computer cursors or robotic devices with very different physical and dynamical properties. Here we show that the firing rates of cells in the primary motor cortex of non-human primates can be used to control the parameters of an artificial physical system exhibiting realistic dynamics. The model represents 2D hand motion in terms of a point mass connected to a system of idealized springs. The nonlinear spring coefficients are estimated from the firing rates of neurons in the motor cortex. We evaluate linear and nonlinear decoding algorithms using neural recordings from two monkeys performing two different tasks. We found that the decoded spring coefficients produced accurate hand trajectories compared with state-of-the-art methods for direct decoding of hand kinematics. Furthermore, using a physically-based system produced decoded movements that were more "natural" in that their frequency spectrum more closely matched that of natural hand movements.

1 Introduction

Neural motor prostheses (NMPs) aim to restore lost motor function to people with intact cerebral motor areas who, through disease or injury, have lost the ability to control their limbs. Central to these devices is a method for decoding the firing activity of motor cortical neurons to produce a voluntary control signal. A number of groups have recently demonstrated the real-time neural control of 2D or 3D computer cursors or simple robotic limbs in monkeys [1, 13, 18, 20, 22] and humans [6]. Previous work on decoding motor cortical signals, however, has focused on modeling the relationship between neural firing rates and simple hand kinematics including hand direction, speed, position, velocity, or acceleration [2, 4, 8, 10]. While the relationship between neural firing rates and hand kinematics is well established in able-bodied monkeys, the situation of a human NMP is quite different. For a paralyzed human, the NMP represents an artificial motor system with different physical properties than the intact human motor system. In particular, a human NMP may involve the control of devices as different as computer cursors or robotic wheelchairs. It remains an open question whether motor cortical neurons can successfully control such varied systems with dynamics that are quite different from human limbs. Here we propose a model that makes a first step toward neural control of novel artificial motor systems. We show that motor cortical firing rates can be nonlinearly related to the parameters of an idealized physical system. This provides an important proof-of-concept for human NMPs. Our model decodes the dynamics of hand movement directly from the neural activity. Ultimately, such a model should reflect the actuator being controlled. For a biological actuator this means the activation of individual muscles; for a robotic one, the forces and torques produced by the motors in the system. A model incorporating direct cortical control of dynamics has been proposed in [19].
There are two major distinctions between that work and ours. First, we consider the task of controlling an artificial system, rather than the subject's real limb. Second, applying the model in [19] in practice would require constructing a very complex biomechanical model and controlling its many degrees of freedom with a limited neural bandwidth. Here we propose a much simpler approach, which does not attempt to accurately model the musculoskeletal structure of the arm. Instead, it provides a computationally effective framework to model the dynamics of the limb moving in a two-dimensional plane. Our approach is inspired by the recent work of Hinton and Nair [5], which suggested a generative model for images of handwritten digits. In that work, observed images were assumed to have been generated by a pen connected to a set of springs, the trajectory of the pen controlled by varying the stiffness of the springs according to a digit-specific "motor program". The goal was to infer the motor program from an observed image, in order to classify the digit. In the context of neural decoding, the image observation is replaced with the recorded neural signal, from which we need to recover the "motor program", and thus the intended movement. This is where the parallels between our work and [5] end. One particularly important difference is that the neural decoder may be learned in a supervised procedure, where the ground truth for the movement associated with a given neural signal is known. An advantage of this spring-based model (SBM) over previous kinematics-based decoding methods is that the realistic dynamics of the model produce smoother recovered movement. We show that the motions are more natural in that they better match the power spectrum of true hand movements. This suggests that the control of a physical system (even an artificial one) may prove more natural for a human NMP. The experimental setup we consider in this paper involves an electrode array, implanted in the arm/hand area of the MI cortex of a behaving monkey [17]. The animals are trained to control the cursor by moving the endpoint of a two-link manipulandum constrained to a plane, much like a human would use a computer mouse [11, 13]. Neural data and hand kinematics were recorded from two monkeys performing two different tasks. The data was separated into training and testing segments and we quantitatively compared a variety of popular linear and nonlinear algorithms for decoding hand kinematics and the spring coefficients of our SBM. As expected, nonlinear methods tend to outperform linear ones. Moreover, movement reconstructed with the SBM has a power spectrum significantly closer to that of natural movement. These results suggest that the control of idealized physical systems with real-time nonlinear decoding algorithms may form the basis for a practical human NMP.

[Figure 1: Sketch of the spring-based model, with springs A and B acting along the x axis and C and D along the y axis of the work area [-L, L] x [-L, L], and accelerations a_x = k_A x_2 - k_B x_1 - \nu v_x and a_y = k_C y_1 - k_D y_2 - \nu v_y. The outer endpoints of the springs are assumed to slide without friction, so that A and B are always orthogonal to C and D. The rest length is assumed to be zero for all springs. Movement is controlled by varying the stiffness coefficients k_A, k_B, k_C and k_D.]

2 The spring-based model

Decoding neural activity in N cells involves estimating the values of a hidden state X(t) at time t given an observed sequence of firing rates Z(0), ..., Z(t) up to time t, with each Z(i) being a 1 x N vector.
The state here is typically taken to be hand position, velocity, etc. Methods described in the literature can be roughly divided into two classes. Generative methods formulate the likelihood of the observed firing rates conditioned on the state and use Bayesian inference methods such as the Kalman filter [21] or particle filter [3] to estimate the system state from observations. In contrast, direct (or discriminative) methods learn a function that maps firing rates over some preceding temporal window into hand kinematics. Various methods have been explored, including linear regression [1, 13], support-vector regression [15] and neural network algorithms [12, 20]. All these previous methods have focused on direct decoding of kinematic properties of the hand movement and have ignored the arm dynamics.

2.1 Parametrization of dynamics

Our approach to incorporating dynamics into the decoding process has been inspired by the model of [5], sketched out in Figure 1. Without loss of generality, let the work area (that fully contains the movement range) be an axis-aligned square [-L, L] x [-L, L]. The endpoint of the limb (wrist) is assumed to be connected to one end of four imaginary springs, the other end of which is sliding with no friction along rails forming the boundaries of the "work area". Thus, at every time instance each spring is parallel to one of the axes. The analysis of dynamics therefore can be easily decomposed into x and y components. Below we focus on the x component. All four springs are assumed to have a rest length of zero. Suppose that the position of the wrist at time t is [x(t), y(t)]. Then the springs A and B apply forces determined by Hooke's law, namely, k_A(t)(L - x(t)) and -k_B(t)(x(t) + L), where k_A(t) and k_B(t) are the stiffness coefficients of A and B at time t. To reflect physical constraints on movement in the real world, the model presumes a point mass m in the center of the wrist (i.e. at the cursor location). Furthermore, it is assumed that the movement is damped by a viscous force proportional to the instantaneous velocity, -\nu v_x(t). The viscosity is meant to represent both the medium resistance and the elasticity of the muscles. In summary, according to Newton's second law the acceleration of the hand at time t is given by

m \, a_x(t) = k_A(t)(L - x(t)) - k_B(t)(L + x(t)) - \nu \, v_x(t),    (1)

where v_x(t) is the instantaneous velocity of the wrist at time t along the x axis. Control of movement in this model is realized through varying the stiffness coefficients of the springs: given the current position of the wrist x, the desired acceleration a is achieved by setting k_A(t) and k_B(t) so as to solve (1). This solution is not unique, in general. We note, however, that the physiological meaning of the k's requires them to be non-negative, since the muscles cannot "push". This motivates us to introduce the total stiffness constraint

k_A + k_B = \kappa,    (2)

where \kappa is a constant chosen so that no feasible acceleration would yield negative k_A or k_B. We can now recover the underlying parameters K = [k_A, k_B, k_C, k_D] for the observed movement by applying (1) at each time step as follows. First we estimate the velocities \hat{v}_x(t) = x(t+1) - x(t) and accelerations \hat{a}_x(t) = \hat{v}_x(t+1) - \hat{v}_x(t). Then, we substitute (2) into (1), yielding

\hat{k}_A(t) = \frac{m \hat{a}_x(t) + \nu \hat{v}_x(t) + \kappa (L + x(t))}{2L}.    (3)

The value of k_B(t) is then uniquely determined from (2). Repeating these calculations for the y-axis produces the coefficients for springs C and D.
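To make the round trip concrete, here is a minimal numpy sketch of eqns (1)-(3): recovering k_A, k_B from an observed 1D trajectory, and forward-simulating the dynamics from decoded coefficients. The function names and the values of m, nu, kappa and L are illustrative assumptions (with the time step absorbed into unit bins), not settings from the paper.

    import numpy as np

    def spring_coeffs_from_trajectory(x, m=1.0, nu=0.5, kappa=4.0, L=15.0):
        # Eqn (3): estimate kA(t) from finite-difference velocity and
        # acceleration; kB then follows from the constraint (2).
        v = np.diff(x, prepend=x[0])       # finite-difference velocity
        a = np.diff(v, prepend=v[0])       # finite-difference acceleration
        kA = (m * a + nu * v + kappa * (L + x)) / (2.0 * L)
        return kA, kappa - kA

    def trajectory_from_spring_coeffs(kA, kB, m=1.0, nu=0.5, L=15.0, x0=0.0, v0=0.0):
        # Eqn (1), integrated with unit time steps: the damped point-mass
        # dynamics turn decoded stiffness coefficients into a smooth path.
        x, v, xs = x0, v0, []
        for ka, kb in zip(kA, kB):
            a = (ka * (L - x) - kb * (L + x) - nu * v) / m
            v += a
            x += v
            xs.append(x)
        return np.array(xs)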
2.2 Decoding neural activity

We now turn to our main goal: inferring the desired movement from a recorded neural signal. We treat this as a supervised learning task. In the training stage, we take a data set in which we have both the recorded neural signal Z(t) and the observed trajectory of hand positions X(t) associated with that signal. From this, we can learn a mapping g from the neural signal to the desired representation of movement. For direct kinematic decoding this means inference g : Z(t) -> X(t). For decoding with the SBM, this means inference of spring coefficients in the SBM, g : Z(t) -> K(t), followed by the calculation K(t) -> X(t) as described above. The SBM formulation also requires a preprocessing step for the training data: we need to convert the observed position trajectory X to the trajectory through K, according to (3). We have focused on two ways of constructing g, described below.

Linear filter. The linear filter (LF) approach [13] consists of modeling the mapping from firing rate to movement by a linear transformation W that is applied to a concatenated firing rate vector for a fixed history depth l:

\hat{X}(t) = x_0 + W \tilde{Z}(t),    (4)

where x_0 is a constant (bias) term and

\tilde{Z}(t) = [Z^T(t - l), ..., Z^T(t)]^T.    (5)

The dimension of \tilde{Z}(t), for a recording from N channels, is 1 x lN. The transformation W is fit to the training data by solving the least squares problem, and then used at the decoding stage to predict values of X. Application of the LF to the SBM is straightforward: the target of the mapping is in the space of coefficients K, rather than position X.

Support vector regression. Support Vector Machines (SVMs) are a widely popular learning architecture that relies on two key ideas: mapping the data into a (possibly infinite-dimensional) feature space using a kernel function, and optimizing a bound on generalization error. In the context of regression [16] this means using an \epsilon-insensitive loss function, which does not penalize training errors up to \epsilon, to fit a linear function in the feature space. SVMs also aim at reducing model complexity by penalizing the objective for the norm of the resulting function. The solution is finally expressed in terms of kernel functions involving a subset of the training examples (the support vectors). The key parameters that affect the performance of SVMs are the value of \epsilon, the tradeoff c that governs the penalty of training error, and the parameters of the kernel function. SVMs have been widely successful in many applications of machine learning. However, their application to the task of neural decoding has been limited to the directional center-out task [15]. Here we evaluate SV regression as a method for decoding more general 2D movement. Again, the SVM formulation is readily extended to the SBM (with the target functions being components of K).

Alternative decoders. A variety of other decoding approaches has been proposed in the literature. We conducted experiments with three additional algorithms: the Kalman filter [21], Multilayer Perceptrons [20] and Echo-state Networks, a recurrent neural network architecture [7]. The Kalman filter uses a linear model of the mapping of neural signals to movement, while the models underlying the other two methods are nonlinear. Our findings can be summarized as follows, for both kinematic decoding and decoding with the spring-based model. First, nonlinear methods perform significantly better than linear ones. Second, there was a trend for the Kalman filter to perform better than the linear filter. Third, among nonlinear methods SVM tended to perform better than the two neural network architectures. However, these latter differences could not be established with significance. In the following section, we focus on experiments with the linear filter (the de-facto standard decoding method today) and SVM, which achieved the overall best results in our experiments.
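A hedged least-squares sketch of the linear filter of eqns (4)-(5) in numpy; the helper names are illustrative, and l = 14 lagged bins (15 bins in total, matching the history length used in Section 3.1) is an assumption carried over from the experiments.

    import numpy as np

    def fit_linear_filter(Z, X, l=14):
        # Z: T x N binned firing rates; X: T x d targets (hand kinematics,
        # or the spring coefficients K(t) for the SBM variant).
        # Build Z~(t) = [Z(t-l), ..., Z(t)] (eqn 5) and solve least squares.
        T = Z.shape[0]
        Zt = np.stack([Z[t - l:t + 1].ravel() for t in range(l, T)])
        A = np.hstack([np.ones((Zt.shape[0], 1)), Zt])
        C, *_ = np.linalg.lstsq(A, X[l:], rcond=None)
        return C[0], C[1:]                 # bias x0 and weights W of eqn (4)

    def apply_linear_filter(Z, x0, W, l=14):
        T = Z.shape[0]
        Zt = np.stack([Z[t - l:t + 1].ravel() for t in range(l, T)])
        return x0 + Zt @ W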
Third, among nonlinear methods SVM tended to perform better than the two neural network architectures. However, these latter differences could not be established with significance. In the following section, we focus on experiments with the linear filter (the de-facto standard decoding method today) and SVM, which achieved the overall best results in our experiments. 3 Experiments We evaluated the performance of the proposed approach on data sets obtained from two behaving monkeys (Macaca Mulatta). The neural signal was obtained with a Cyberkinetics microelectrode array [9] (96 electrodes) implanted in the arm/hand area of MI cortex. The experimental animals performed the tasks described below. Sequential reaching movement , described in [13] Reach targets and a hand position feedback cursor were presented on a video screen in front of the monkey. When a reach target was presented Table 1: Details of experiments. Units: number of distinct units identified after spike sorting. Train, test: length of train and test sequences in seconds. Session CL-sequential LA-continuous CL-continuous Units 49 96 55 Train 623 244 448 Test 140 165 140 the animal?s task was to move a manipulandum so that the feedback cursor moved into the target and remained in the target for 500ms, at which time that target was extinguished and a new reach target was presented in a different location. Target locations were drawn i.i.d. from the uniform distribution over the screen surface. This was repeated for up to 10 targets per trial. Upon successful completion of a trial the animal received a juice reward. Hand kinematics and neural activity were simultaneously recorded while the animal performed the task. Continuous tracking , described in [14] Monkey was viewing a computer screen on which a visual target appeared in a random, but smooth, sequence of locations. The monkey was trained to follow the target?s position with a cursor, using a manipulandum, and received a reward for each successful trial (i.e. when the cursor remained within the target for a duration drawn for each target randomly between 3 and 10 seconds). The recorded neural activity was converted to spike trains by computer-assisted spike-sorting software, and the spike counts were calculated in non-overlapping 70ms windows. The hand kinematics (obtained by recording the 2D position of the manipulandum) were averaged within each window, to produce an aligned representation. 3.1 Evaluation protocol In each of the data sets, we selected a segment of the recording to train all the decoders, and a subsequent segment to test the decoding accuracy. Tuning of parameters (the kernel parameters of the SVM or the mass and viscosity of the spring model) was done on a held-out portion of the training segment. We built the firing rate history matrix by concatenating for each time step the firing rates for 15 bins. For instance, for monkey CL, continuous tracking, the dimension of the neural signal representation was 825 (55 channels ? 15 history bins). This firing rates were then normalized so that all values would be within [-1,1]. Basic statistics of the data used in the experiments are given in Table 1. We considered three evaluation criteria: Correlation coefficients (CC): between the estimated and true value for each of the two spatial coordinates over the entire trajectory: P ??t ) (xt ? x ?t )(? xt ? x . CC = qP t P 2 ??t )2 (x ? x ? ) (? x ? x t t t t t Mean absolute error (MAE): in the estimated position versus the ground truth: MAE = PN 1 ? t=1 kX(t) ? X(t)k. 
3.2 Results

The reported results for SVM were obtained with a quadratic kernel, k(x, y) = (x · y + 1)^2; the tradeoff term c was fixed to 100, and the insensitivity parameter ε was set to 5 for the spring coefficient and 2 for direct position decoding. The number of support vectors was between 20% and 65% of the training set size.

Table 2: Summary of results on the three datasets. MAE is given in cm, over a workspace of roughly 30×30 cm.

Decoder           | CL/sequential       | LA/continuous       | CL/continuous
                  | MAE   CCx   CCy     | MAE   CCx   CCy     | MAE   CCx   CCy
Linear-kinematics | 5.3   0.69  0.79    | 5.03  0.5   0.75    | 6.66  0.80  0.83
Linear-SBM        | 5.7   0.64  0.74    | 5.26  0.46  0.72    | 6.82  0.77  0.81
SVM-kinematics    | 4.45  0.80  0.85    | 4.44  0.60  0.82    | 3.82  0.86  0.86
SVM-SBM           | 4.91  0.76  0.81    | 4.69  0.55  0.80    | 4.05  0.83  0.84

Table 2 summarizes the MAE and CC measured on the test segment for each method. One observation is that SVM tends to outperform the linear filter, in line with previous observations [12, 15]. We believe that this is due to inherent nonlinearity in the underlying relationship, which is better captured by the SVM. Moreover, it is apparent that the decoding accuracy of the SBM is on par with that of the conventional kinematic decoding (the observed differences were not significant at the 0.05 level, measured over the per-bin position errors).

Figure 2: Example of power spectrum densities for true hand trajectory (dotted black), reconstruction with SVM-kinematics (dashed blue) and reconstruction with SVM-SBM (solid red). Estimated using Burg's algorithm (pburg in Matlab, order 4). Data from x coordinate, LA-continuous. (Axes: power/frequency in dB/sample versus normalized frequency in π rad/sample.)

Figure 3: A 1.5 second path segment, true (circles) and reconstructed (squares). Left: SVM on kinematics; right: SVM with SBM. Markers show position averaged in each 70 ms bin. Note the ragged form of the SVM-kinematics trajectory.

Results in Table 2, however, tell only a part of the story. Figure 3 shows, for a segment of 4.2 sec, a typical example of the movement reconstructed with SVM on kinematics versus SVM on spring coefficients. The accuracy in terms of deviation from ground truth is similar; however, the estimate produced by the direct kinematic decoding is significantly more "ragged". Such a discrepancy is not necessarily reflected in the standard measures of accuracy such as CC or MAE. Quantitatively, it can be assessed by calculating the L1 norm between the power spectrum densities of the true and reconstructed hand trajectories. The estimated values of this quantity in our experiments are shown in Table 3. These results reflect the relationship shown in Figure 2 (a typical case).
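As a rough illustration of this criterion, the sketch below compares log power spectral densities with an L1 norm. The paper estimates PSDs with Burg's method (Matlab's pburg, order 4); since SciPy has no built-in Burg estimator, Welch's method is used here purely as a stand-in, and the 70 ms bin width is assumed for the sampling rate, so absolute values will not match Table 3.

```python
import numpy as np
from scipy.signal import welch

def psd_l1_distance(x_true, x_est, fs=1.0 / 0.070):
    """L1 distance between log power spectral densities of the true and
    reconstructed coordinate traces (Welch PSD as a stand-in for Burg)."""
    f, p_true = welch(np.asarray(x_true, dtype=float), fs=fs)
    _, p_est = welch(np.asarray(x_est, dtype=float), fs=fs)
    # L1 norm between the energy distributions, taken in the log domain.
    return np.sum(np.abs(10.0 * np.log10(p_true) - 10.0 * np.log10(p_est)))
```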
Table 3: Estimated L1 norm between power spectrum density of true and reconstructed trajectories.

Decoder           | CL-sequential    | LA-continuous    | CL-continuous
                  |   x       y      |   x       y      |   x       y
Linear-kinematics | 147.41  154.80   | 199.24  206.61   | 49.68   44.37
Linear-SBM        |  71.58   68.24   |  72.99   80.37   | 35.68   43.72
SVM-kinematics    | 143.78  151.35   | 188.65  196.14   | 33.96   28.44
SVM-SBM           |  51.45   52.31   |  53.05   66.20   | 20.83   21.15

4 Discussion

The spring-based model proposed in this paper represents a first attempt to directly incorporate realistic physical constraints into a neural decoding model. Our experiments illustrate that the coefficients of an idealized physical system can be decoded from motor cortical firing rates, without statistically significant loss of decoding accuracy compared to more standard direct decoding of kinematics. An advantage of such an approach is that the physical properties of the system damp high-frequency motions, resulting in decoded movements that inherently have the properties of natural movement, with no ad hoc smoothing.

Future work should consider more sophisticated physical models such as a simulated robotic arm and a biophysically motivated musculoskeletal system. With the current state of the art in neural recording and decoding, recovering the parameters of such models may be challenging. In contrast, the approach presented here "summarizes" the effect of a more complicated system with just a few idealized muscle-like elements.

Additional experiments are also warranted. In particular, using a robotic feedback device we can simulate the physical system of springs presented here, such that the monkeys control a device with the properties of our model. We hypothesize that the accuracy of decoding spring coefficients from motor cortical activity in this condition will improve. This would suggest that matching the decoding model to the physical system being controlled improves decoding accuracy.

Finally, the real test of physically-based models will come in human NMP experiments. We plan to test human cursor control with kinematic and physically-based decoders. We hypothesize that the dynamics of the physically-based model will make it easier to control accurately (and perhaps provide a more satisfying experience for the user). This could be a first step toward the neural control of mechanical actuators in the physical world.

Acknowledgments

This work is partially supported by NIH-NINDS R01 NS 50867-01 as part of the NSF/NIH Collaborative Research in Computational Neuroscience Program and by the Office of Naval Research (award N0014-04-1-082). We also thank the European Neurobotics Program FP6-IST-001917. We thank Matthew Fellows and John Donoghue for providing data, and Reza Shadmehr for helpful conversations.
References

[1] J. M. Carmena, M. A. Lebedev, R. E. Crist, J. E. O'Doherty, D. M. Santucci, D. F. Dimitrov, P. G. Patil, C. S. Henriquez, and M. A. L. Nicolelis. Learning to control a brain-machine interface for reaching and grasping by primates. PLoS Biology, 1(2):001–016, 2003.
[2] D. Flament and J. Hore. Relations of motor cortex neural discharge to kinematics of passive and active elbow movements in the monkey. Journal of Neurophysiology, 60(4):1268–1284, 1988.
[3] Y. Gao, M. J. Black, E. Bienenstock, S. Shoham, and J. P. Donoghue. Probabilistic inference of hand motion from neural activity in motor cortex. In T. G. Dietterich, S. Becker, and Z. Ghahramani, editors, Advances in Neural Information Processing Systems 14, pages 213–220, Cambridge, MA, 2002. MIT Press.
[4] A. Georgopoulos, A. Schwartz, and R. Kettner. Neural population coding of movement direction. Science, 233:1416–1419, 1986.
[5] G. E. Hinton and V. Nair. Inferring motor programs from images of handwritten digits. In Advances in Neural Information Processing Systems, 2005.
[6] L. R. Hochberg, J. A. Mukand, G. I. Polykoff, G. M. Friehs, and J. P. Donoghue. Braingate neuromotor prosthesis: Nature and use of neural control signals. In Society for Neuroscience Abst. Program No. 520.17, Online, 2005.
[7] H. Jaeger. The "echo state" approach to analyzing and training recurrent neural networks. Technical Report GMD Report 148, German National Research Institute for Computer Science, 2001.
[8] R. Kettner, A. Schwartz, and A. Georgopoulos. Primary motor cortex and free arm movements to visual targets in three-dimensional space. III. Positional gradients and population coding of movement direction from various movement origins. Journal of Neuroscience, 8(8):2938–2947, 1988.
[9] E. Maynard, C. Nordhausen, and R. Normann. The Utah intracortical electrode array: A recording structure for potential brain-computer interfaces. Electroencephalography and Clinical Neurophysiology, 102:228–239, 1997.
[10] D. Moran and A. Schwartz. Motor cortical representation of speed and direction during reaching. Journal of Neurophysiology, 82(5):2676–2692, 1999.
[11] L. Paninski, M. Fellows, N. Hatsopoulos, and J. P. Donoghue. Spatiotemporal tuning of motor cortical neurons for hand position and velocity. Journal of Neurophysiology, 91:515–532, 2004.
[12] Y. N. Rao, S.-P. Kim, J. Sanchez, D. Erdogmus, J. Principe, J. Carmena, M. Lebedev, and M. Nicolelis. Learning mappings in brain-machine interfaces with echo state networks. In IEEE Int. Conf. on Acoustics, Speech, and Signal Processing, volume 5, pages 233–236, March 2005.
[13] M. D. Serruya, N. G. Hatsopoulos, L. Paninski, M. R. Fellows, and J. P. Donoghue. Brain-machine interface: Instant neural control of a movement signal. Nature, 416:141–142, 2002.
[14] S. Shoham, L. M. Paninski, M. R. Fellows, N. G. Hatsopoulos, J. P. Donoghue, and R. A. Normann. Statistical encoding model for a primary motor cortical brain-machine interface. IEEE Transactions on Biomedical Engineering, 52(7):1312–1322, 2005.
[15] L. Shpigelman, K. Crammer, R. Paz, E. Vaadia, and Y. Singer. A temporal kernel-based model for tracking hand-movements from neural activities. In Advances in Neural Information Processing Systems, Vancouver, BC, December 2005.
[16] A. J. Smola and B. Schölkopf. A tutorial on support vector regression. Statistics and Computing, 14:199–222, 2004.
[17] S. Suner, M. R. Fellows, C. Vargas-Irwin, G. K. Nakata, and J. P. Donoghue. Reliability of signals from a chronically implanted, silicon-based electrode array in non-human primate primary motor cortex. IEEE Trans. on Neural Systems and Rehab. Eng., 13(4):524–541, 2005.
[18] D. Taylor, S. Helms Tillery, and A. Schwartz. Direct cortical control of 3D neuroprosthetic devices. Science, 296(5574):1829–1832, 2002.
[19] E. Todorov. Direct cortical control of muscle activation in voluntary arm movements: a model. Nature Neuroscience, 3(4):391–398, April 2000.
[20] J. Wessberg, C. Stambaugh, J. Kralik, P. Beck, M. Laubach, J. Chapin, J. Kim, S. Biggs, M. Srinivasan, and M. Nicolelis. Real-time prediction of hand trajectory by ensembles of cortical neurons in primates. Nature, 408:361–365, 2000.
[21] W. Wu, Y. Gao, E. Bienenstock, J. P. Donoghue, and M. J. Black. Bayesian population decoding of motor cortical activity using a Kalman filter. Neural Computation, 18(1):80–118, 2006.
[22] W. Wu, A. Shaikhouni, J. P. Donoghue, and M. J. Black.
Closed-loop neural control of cursor motion using a Kalman filter. In Proc. IEEE Engineering in Medicine and Biology Society, pages 4126–4129, Sep 2004.
Large Margin Hidden Markov Models for Automatic Speech Recognition

Fei Sha
Computer Science Division, University of California, Berkeley, CA 94720-1776
[email protected]

Lawrence K. Saul
Department of Computer Science and Engineering, University of California (San Diego), La Jolla, CA 92093-0404
[email protected]

Abstract

We study the problem of parameter estimation in continuous density hidden Markov models (CD-HMMs) for automatic speech recognition (ASR). As in support vector machines, we propose a learning algorithm based on the goal of margin maximization. Unlike earlier work on max-margin Markov networks, our approach is specifically geared to the modeling of real-valued observations (such as acoustic feature vectors) using Gaussian mixture models. Unlike previous discriminative frameworks for ASR, such as maximum mutual information and minimum classification error, our framework leads to a convex optimization, without any spurious local minima. The objective function for large margin training of CD-HMMs is defined over a parameter space of positive semidefinite matrices. Its optimization can be performed efficiently with simple gradient-based methods that scale well to large problems. We obtain competitive results for phonetic recognition on the TIMIT speech corpus.

1 Introduction

As a result of many years of widespread use, continuous density hidden Markov models (CD-HMMs) are very well matched to current front and back ends for automatic speech recognition (ASR) [21]. Typical front ends compute real-valued feature vectors from the short-time power spectra of speech signals. The distributions of these acoustic feature vectors are modeled by Gaussian mixture models (GMMs), which in turn appear as observation models in CD-HMMs. Viterbi decoding is used to solve the problem of sequential classification in ASR: namely, the mapping of sequences of acoustic feature vectors to sequences of phonemes and/or words, which are modeled by state transitions in CD-HMMs.

The simplest method for parameter estimation in CD-HMMs is the Expectation-Maximization (EM) algorithm. The EM algorithm is based on maximizing the joint likelihood of observed feature vectors and label sequences. It is widely used due to its simplicity and scalability to large data sets, which are common in ASR. A weakness of this approach, however, is that the model parameters of CD-HMMs are not optimized for sequential classification: in general, maximizing the joint likelihood does not minimize the phoneme or word error rates, which are more relevant metrics for ASR.

Noting this weakness, many researchers in ASR have studied alternative frameworks for parameter estimation based on conditional maximum likelihood [11], minimum classification error [4] and maximum mutual information [20]. The learning algorithms in these frameworks optimize discriminative criteria that more closely track actual error rates, as opposed to the EM algorithm for maximum likelihood estimation. These algorithms do not enjoy the simple update rules and relatively fast convergence of EM, but carefully and skillfully implemented, they lead to lower error rates [13, 20].

Recently, in a new approach to discriminative acoustic modeling, we proposed the use of "large margin GMMs" for multiway classification [15]. Inspired by support vector machines (SVMs), the learning algorithm in large margin GMMs is designed to maximize the distance between labeled examples and the decision boundaries that separate different classes [19].
Under mild assumptions, the required optimization is convex, without any spurious local minima. In contrast to SVMs, however, large margin GMMs are very naturally suited to problems in multiway (as opposed to binary) classification; also, they do not require the kernel trick for nonlinear decision boundaries. We showed how to train large margin GMMs as segment-based phonetic classifiers, yielding significantly lower error rates than maximum likelihood GMMs [15]. The integrated large margin training of GMMs and transition probabilities in CD-HMMs, however, was left as an open problem. We address that problem in this paper, showing how to train large margin CD-HMMs in the more general setting of sequential (as opposed to multiway) classification. In this setting, the GMMs appear as acoustic models whose likelihoods are integrated over time by Viterbi decoding. Experimentally, we find that large margin training of HMMs for sequential classification leads to significant improvement beyond the frame-based and segment-based discriminative training in [15].

Our framework for large margin training of CD-HMMs builds on ideas from many previous studies in machine learning and ASR. It has similar motivation as recent frameworks for sequential classification in the machine learning community [1, 6, 17], but differs in its focus on the real-valued acoustic feature representations used in ASR. It has similar motivation as other discriminative paradigms in ASR [3, 4, 5, 11, 13, 20], but differs in its goal of margin maximization and its formulation of the learning problem as a convex optimization over positive semidefinite matrices. The recent margin-based approach of [10] is closest in terms of its goals, but entirely different in its mechanics; moreover, its learning is limited to the mean parameters in GMMs.

2 Large margin GMMs for multiway classification

Before developing large margin HMMs for ASR, we briefly review large margin GMMs for multiway classification [15]. The problem of multiway classification is to map inputs $x \in \mathbb{R}^d$ to labels $y \in \{1, 2, \ldots, C\}$, where $C$ is the number of classes. Large margin GMMs are trained from a set of labeled examples $\{(x_n, y_n)\}_{n=1}^N$. They have many parallels to SVMs, including the goal of margin maximization and the use of a convex surrogate to the zero-one loss [19]. Unlike SVMs, where classes are modeled by half-spaces, in large margin GMMs the classes are modeled by collections of ellipsoids. For this reason, they are more naturally suited to problems in multiway as opposed to binary classification.

Sections 2.1–2.3 review the basic framework for large margin GMMs: first, the simplest setting in which each class is modeled by a single ellipsoid; second, the formulation of the learning problem as a convex optimization; third, the general setting in which each class is modeled by two or more ellipsoids. Section 2.4 presents results on handwritten digit recognition.

2.1 Parameterization of the decision rule

The simplest large margin GMMs model each class by a single ellipsoid in the input space. The ellipsoid for class $c$ is parameterized by a centroid vector $\mu_c \in \mathbb{R}^d$ and a positive semidefinite matrix $\Psi_c \in \mathbb{R}^{d \times d}$ that determines its orientation. Also associated with each class is a nonnegative scalar offset $\theta_c \ge 0$. The decision rule labels an example $x \in \mathbb{R}^d$ by the class whose centroid yields the smallest Mahalanobis distance:

$$y = \arg\min_c \left\{ (x - \mu_c)^\top \Psi_c (x - \mu_c) + \theta_c \right\}. \qquad (1)$$
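A literal transcription of the decision rule in eq. (1) may help fix the notation; the function name and container conventions below are ours, not from the paper.

```python
import numpy as np

def classify(x, mus, psis, thetas):
    # Pick the class whose ellipsoid gives the smallest score
    # (x - mu_c)^T Psi_c (x - mu_c) + theta_c, as in eq. (1).
    scores = [(x - mu) @ psi @ (x - mu) + theta
              for mu, psi, theta in zip(mus, psis, thetas)]
    return int(np.argmin(scores))
```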
The decision rule in eq. (1) is merely an alternative way of parameterizing the maximum a posteriori (MAP) label in traditional GMMs with mean vectors $\mu_c$, covariance matrices $\Psi_c^{-1}$, and prior class probabilities $p_c$, given by $y = \arg\max_c \{\, p_c\, \mathcal{N}(\mu_c, \Psi_c^{-1}) \,\}$.

The argument on the right hand side of the decision rule in eq. (1) is nonlinear in the ellipsoid parameters $\mu_c$ and $\Psi_c$. As shown in [15], however, a useful reparameterization yields a simpler expression. For each class $c$, the reparameterization collects the parameters $\{\mu_c, \Psi_c, \theta_c\}$ in a single enlarged matrix $\Phi_c \in \mathbb{R}^{(d+1) \times (d+1)}$:

$$\Phi_c = \begin{bmatrix} \Psi_c & -\Psi_c \mu_c \\ -\mu_c^\top \Psi_c & \mu_c^\top \Psi_c \mu_c + \theta_c \end{bmatrix}. \qquad (2)$$

Note that $\Phi_c$ is positive semidefinite. Furthermore, if $\Phi_c$ is strictly positive definite, the parameters $\{\mu_c, \Psi_c, \theta_c\}$ can be uniquely recovered from $\Phi_c$. With this reparameterization, the decision rule in eq. (1) simplifies to:

$$y = \arg\min_c \left\{ z^\top \Phi_c\, z \right\} \quad \text{where} \quad z = \begin{bmatrix} x \\ 1 \end{bmatrix}. \qquad (3)$$

The argument on the right hand side of the decision rule in eq. (3) is linear in the parameters $\Phi_c$. In what follows, we will adopt the representation in eq. (3), implicitly constructing the "augmented" vector $z$ for each input vector $x$. Note that eq. (3) still yields nonlinear (piecewise quadratic) decision boundaries in the vector $z$.

2.2 Margin maximization

Analogous to learning in SVMs, we find the parameters $\{\Phi_c\}$ that minimize the empirical risk on the training data, i.e., parameters that not only classify the training data correctly, but also place the decision boundaries as far away as possible. The margin of a labeled example is defined as its distance to the nearest decision boundary. If possible, each labeled example is constrained to lie at least one unit distance away from the decision boundary to each competing class:

$$\forall c \ne y_n, \quad z_n^\top (\Phi_c - \Phi_{y_n})\, z_n \ge 1. \qquad (4)$$

Fig. 1 illustrates this idea. Note that in the "realizable" setting where these constraints can be simultaneously satisfied, they do not uniquely determine the parameters $\{\Phi_c\}$, which can be scaled to yield arbitrarily large margins. Therefore, as in SVMs, we propose a convex optimization that selects the "smallest" parameters that satisfy the large margin constraints in eq. (4). In this case, the optimization is an instance of semidefinite programming [18]:

$$\min \sum_c \operatorname{trace}(\Psi_c) \qquad (5)$$
$$\text{s.t.}\quad 1 + z_n^\top (\Phi_{y_n} - \Phi_c)\, z_n \le 0, \quad \forall c \ne y_n,\ n = 1, 2, \ldots, N$$
$$\Phi_c \succeq 0, \quad c = 1, 2, \ldots, C$$

Note that the trace of the matrix $\Psi_c$ appears in the above objective function, as opposed to the trace of the matrix $\Phi_c$, as defined in eq. (2); minimizing the former imposes the scale regularization only on the inverse covariance matrices of the GMM, while the latter would improperly regularize the mean vectors as well. The constraints $\Phi_c \succeq 0$ restrict the matrices to be positive semidefinite.

The objective function must be modified for training data that lead to infeasible constraints in eq. (5). As in SVMs, we introduce nonnegative slack variables $\xi_{nc}$ to monitor the amount by which the margin constraints in eq. (4) are violated [15]. The objective function in this setting balances the margin violations versus the scale regularization:

$$\min \sum_{nc} \xi_{nc} + \gamma \sum_c \operatorname{trace}(\Psi_c) \qquad (6)$$
$$\text{s.t.}\quad 1 + z_n^\top (\Phi_{y_n} - \Phi_c)\, z_n \le \xi_{nc}, \quad \xi_{nc} \ge 0, \quad \forall c \ne y_n,\ n = 1, 2, \ldots, N$$
$$\Phi_c \succeq 0, \quad c = 1, 2, \ldots, C$$

where the balancing hyperparameter $\gamma > 0$ is set by some form of cross-validation. This optimization is also an instance of semidefinite programming.
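The reparameterization of eq. (2) and the slack values of eq. (6) are easy to sketch. Everything below follows the equations above, but the helper names are ours, and no claim is made about the authors' actual implementation.

```python
import numpy as np

def build_phi(mu, psi, theta):
    # Assemble the (d+1) x (d+1) matrix Phi_c of eq. (2) from (mu_c, Psi_c, theta_c).
    d = len(mu)
    phi = np.empty((d + 1, d + 1))
    phi[:d, :d] = psi
    phi[:d, d] = -psi @ mu
    phi[d, :d] = -mu @ psi
    phi[d, d] = mu @ psi @ mu + theta
    return phi

def slack_values(phis, X, y):
    # Slack xi_nc of eq. (6): max(0, 1 + z^T (Phi_{y_n} - Phi_c) z) for each
    # example n and each competing class c != y_n, with z = [x; 1] as in eq. (3).
    slacks = []
    for x, yn in zip(X, y):
        z = np.append(x, 1.0)
        slacks.append({c: max(0.0, 1.0 + z @ (phis[yn] - phi) @ z)
                       for c, phi in enumerate(phis) if c != yn})
    return slacks
```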
2.3 Softmax margin maximization for multiple mixture components

Lastly we review the extension to mixture modeling where each class is represented by multiple ellipsoids [15]. Let $\Phi_{cm}$ denote the matrix for the $m$th ellipsoid (or mixture component) in class $c$. We imagine that each example $x_n$ has not only a class label $y_n$, but also a mixture component label $m_n$. Such labels are not provided a priori in the training data, but we can generate "proxy" labels by fitting GMMs to the examples in each class by maximum likelihood estimation, then for each example, computing the mixture component with the highest posterior probability. In the setting where each class is represented by multiple ellipsoids, the goal of learning is to ensure that each example is closer to its "target" ellipsoid than the ellipsoids from all other classes. Specifically, for a labeled example $(x_n, y_n, m_n)$, the constraint in eq. (4) is replaced by the $M$ constraints:

$$\forall c \ne y_n, \; \forall m, \quad z_n^\top (\Phi_{cm} - \Phi_{y_n m_n})\, z_n \ge 1, \qquad (7)$$

Figure 1: Decision boundary in a large margin GMM: labeled examples lie at least one unit of distance away.

Table 1: Test error rates on MNIST digit recognition: maximum likelihood versus large margin GMMs.

mixture | 1    | 2    | 4    | 8
EM      | 4.2% | 3.4% | 3.0% | 3.3%
margin  | 1.4% | 1.4% | 1.2% | 1.5%

where $M$ is the number of mixture components (assumed, for simplicity, to be the same for each class). We fold these multiple constraints into a single one by appealing to the "softmax" inequality: $\min_m a_m \ge -\log \sum_m e^{-a_m}$. Specifically, using the inequality to derive a lower bound on $\min_m z_n^\top \Phi_{cm} z_n$, we replace the $M$ constraints in eq. (7) by the stricter constraint:

$$\forall c \ne y_n, \quad -\log \sum_m e^{-z_n^\top \Phi_{cm} z_n} - z_n^\top \Phi_{y_n m_n} z_n \ge 1. \qquad (8)$$

We will use a similar technique in section 3 to handle the exponentially many constraints that arise in sequential classification. Note that the inequality in eq. (8) implies the inequality of eq. (7) but not vice versa. Also, though nonlinear in the matrices $\{\Phi_{cm}\}$, the constraint in eq. (8) is still convex.

The objective function in eq. (6) extends straightforwardly to this setting. It balances a regularizing term that sums over ellipsoids versus a penalty term that sums over slack variables, one for each constraint in eq. (8). The optimization is given by:

$$\min \sum_{nc} \xi_{nc} + \gamma \sum_{cm} \operatorname{trace}(\Psi_{cm}) \qquad (9)$$
$$\text{s.t.}\quad 1 + z_n^\top \Phi_{y_n m_n} z_n + \log \sum_m e^{-z_n^\top \Phi_{cm} z_n} \le \xi_{nc}, \quad \xi_{nc} \ge 0, \quad \forall c \ne y_n,\ n = 1, \ldots, N$$
$$\Phi_{cm} \succeq 0, \quad c = 1, \ldots, C,\ m = 1, \ldots, M$$

This optimization is not an instance of semidefinite programming, but it is convex. We discuss how to perform the optimization efficiently for large data sets in appendix A.

2.4 Handwritten digit recognition

We trained large margin GMMs for multiway classification of MNIST handwritten digits [8]. The MNIST data set has 60000 training examples and 10000 test examples. Table 1 shows that the large margin GMMs yielded significantly lower test error rates than GMMs trained by maximum likelihood estimation. Our best results are comparable to the best SVM results (1.0–1.4%) on deskewed images [8] that do not make use of prior knowledge. For our best model, with four mixture components per digit class, the core training optimization over all training examples took five minutes on a PC. (Multiple runs of this optimization on smaller validation sets, however, were also required to set two hyperparameters: the regularizer for model complexity, and the termination criterion for early stopping.)
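The softmax lower bound used in eqs. (8) and (9) is just a numerically stable soft-min over the mixture components; a minimal sketch (names ours) follows.

```python
import numpy as np
from scipy.special import logsumexp

def softmin_score(z, phis_c):
    # Softmax lower bound on min_m z^T Phi_cm z from eq. (8):
    # -log sum_m exp(-z^T Phi_cm z), computed stably via logsumexp.
    quads = np.array([z @ phi @ z for phi in phis_c])
    return -logsumexp(-quads)
```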
3 Large margin HMMs for sequential classification

In this section, we extend the framework in the previous section from multiway classification to sequential classification. Particularly, we have in mind the application to ASR, where GMMs are used to parameterize the emission densities of CD-HMMs. Strictly speaking, the GMMs in our framework cannot be interpreted as emission densities because their parameters are not constrained to represent normalized distributions. Such an interpretation, however, is not necessary for their use as discriminative models.

In sequential classification by CD-HMMs, the goal is to infer the correct hidden state sequence $y = [y_1, y_2, \ldots, y_T]$ given the observation sequence $X = [x_1, x_2, \ldots, x_T]$. In the application to ASR, the hidden states correspond to phoneme labels, and the observations are acoustic feature vectors. Note that if an observation sequence has length $T$ and each label can belong to $C$ classes, then the number of incorrect state sequences grows as $O(C^T)$. This combinatorial explosion presents the main challenge for large margin methods in sequential classification: how to separate the correct hidden state sequence from the exponentially large number of incorrect ones.

The section is organized as follows. Section 3.1 explains the way that margins are computed for sequential classification. Section 3.2 describes our algorithm for large margin training of CD-HMMs. Details are given only for the simple case where the observations in each hidden state are modeled by a single ellipsoid. The extension to multiple mixture components closely follows the approach in section 2.3 and can be found in [14, 16]. Margin-based learning of transition probabilities is likewise straightforward but omitted for brevity. Both these extensions were implemented, however, for the experiments on phonetic recognition in section 3.3.

3.1 Margin constraints for sequential classification

We start by defining a discriminant function over state (label) sequences of the CD-HMM. Let $a(i, j)$ denote the transition probabilities of the CD-HMM, and let $\Phi_s$ denote the ellipsoid parameters of state $s$. The discriminant function $D(X, s)$ computes the score of the state sequence $s = [s_1, s_2, \ldots, s_T]$ on an observation sequence $X = [x_1, x_2, \ldots, x_T]$ as:

$$D(X, s) = \sum_t \log a(s_{t-1}, s_t) - \sum_{t=1}^{T} z_t^\top \Phi_{s_t} z_t. \qquad (10)$$

This score has the same form as the log-probability $\log P(X, s)$ in a CD-HMM with Gaussian emission densities. The first term accumulates the log-transition probabilities along the state sequence, while the second term accumulates "acoustic scores" computed as the Mahalanobis distances to each state's centroid. In the setting where each state is modeled by multiple mixture components, the acoustic scores from individual Mahalanobis distances are replaced with "softmax" distances of the form $-\log \sum_{m=1}^{M} e^{-z_t^\top \Phi_{s_t m} z_t}$, as described in section 2.3 and [14, 16].

We introduce margin constraints in terms of the above discriminant function. Let $H(s, y)$ denote the Hamming distance (i.e., the number of mismatched labels) between an arbitrary state sequence $s$ and the target state sequence $y$. Earlier, in section 2 on multiway classification, we constrained each labeled example to lie at least one unit distance from the decision boundary to each competing class; see eq. (4). Here, by extension, we constrain the score of each target sequence to exceed that of each competing sequence by an amount equal to or greater than the Hamming distance:

$$\forall s \ne y, \quad D(X, y) - D(X, s) \ge H(s, y). \qquad (11)$$

Intuitively, eq. (11) requires that the (log-likelihood) gap between the score of an incorrect sequence $s$ and the target sequence $y$ should grow in proportion to the number of individual label errors. The appropriateness of such proportional constraints for sequential classification was first noted by [17].
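For a single fixed state sequence, the discriminant of eq. (10) with the mixture-based acoustic scores can be computed directly. The sketch below assumes `phis[s]` lists the matrices Phi_sm for state s, ignores any initial-state term, and uses names of our choosing.

```python
import numpy as np
from scipy.special import logsumexp

def discriminant(Z, states, log_a, phis):
    # D(X, s) of eq. (10): summed log-transition scores plus per-frame
    # acoustic scores; Z holds the augmented vectors z_t as rows.
    score = 0.0
    for t in range(1, len(states)):
        score += log_a[states[t - 1], states[t]]
    for z, s in zip(Z, states):
        quads = np.array([z @ phi @ z for phi in phis[s]])
        # Mixture case: the acoustic term is -(-log sum_m exp(-z^T Phi_sm z)).
        score += logsumexp(-quads)
    return score
```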
3.2 Softmax margin maximization for sequential classification

The challenge of large margin sequence classification lies in the exponentially large number of constraints, one for each incorrect sequence $s$, embodied by eq. (11). We will use the same softmax inequality, previously introduced in section 2.3, to fold these multiple constraints into one, thus considerably simplifying the optimization required for parameter estimation. We first rewrite the constraint in eq. (11) as:

$$-D(X, y) + \max_{s \ne y} \left\{ H(s, y) + D(X, s) \right\} \le 0 \qquad (12)$$

We obtain a more manageable constraint by substituting a softmax upper bound for the max term and requiring that the inequality still hold:

$$-D(X, y) + \log \sum_{s \ne y} e^{H(s, y) + D(X, s)} \le 0 \qquad (13)$$

Note that eq. (13) implies eq. (12) but not vice versa. As in the setting for multiway classification, the objective function for sequential classification balances two terms: one regularizing the scale of the GMM parameters, the other penalizing margin violations. Denoting the training sequences by $\{X_n, y_n\}_{n=1}^N$ and the slack variables (one for each training sequence) by $\xi_n \ge 0$, we obtain the following convex optimization:

$$\min \sum_n \xi_n + \gamma \sum_{cm} \operatorname{trace}(\Psi_{cm}) \qquad (14)$$
$$\text{s.t.}\quad -D(X_n, y_n) + \log \sum_{s \ne y_n} e^{H(s, y_n) + D(X_n, s)} \le \xi_n, \quad \xi_n \ge 0, \quad n = 1, 2, \ldots, N$$
$$\Phi_{cm} \succeq 0, \quad c = 1, \ldots, C,\ m = 1, \ldots, M$$

It is worth emphasizing several crucial differences between this optimization and previous ones [4, 11, 20] for discriminative training of CD-HMMs for ASR. First, the softmax large margin constraint in eq. (13) is a differentiable function of the model parameters, as opposed to the "hard" maximum in eq. (12) and the number of classification errors in the MCE training criteria [4]. The constraint and its gradients with respect to GMM parameters $\Phi_{cm}$ and transition parameters $a(\cdot, \cdot)$ can be computed efficiently using dynamic programming, by a variant of the standard forward-backward procedure in HMMs [14]. Second, due to the reparameterization in eq. (2), the discriminant function $D(X_n, y_n)$ and the softmax function are convex in the model parameters. Therefore, the optimization in eq. (14) can be cast as a convex optimization, avoiding spurious local minima [14]. Third, the optimization not only increases the log-likelihood gap between correct and incorrect state sequences, but also drives the gap to grow in proportion to the number of individually incorrect labels (which we believe leads to more robust generalization). Finally, compared to the large margin framework in [17], the softmax handling of the exponentially large number of margin constraints makes it possible to train on larger data sets. We discuss how to perform the optimization efficiently in appendix A.
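The softmax over exponentially many sequences in eq. (13) factorizes over time, so it can be evaluated by a forward recursion in the log domain, as the text notes. The sketch below sums over all state sequences (including s = y, whose single term can be subtracted afterwards), omits initial-state terms, and treats `acoustic(j, z)` as the per-frame score of state j; it illustrates the recursion and is not the authors' code.

```python
import numpy as np
from scipy.special import logsumexp

def log_softmax_sum(Z, y, log_a, acoustic):
    # Forward recursion for log sum_s exp(H(s, y) + D(X, s)); the Hamming
    # distance H(s, y) is accumulated one frame at a time as (s_t != y_t).
    T, C = len(Z), log_a.shape[0]
    alpha = np.array([acoustic(j, Z[0]) + float(j != y[0]) for j in range(C)])
    for t in range(1, T):
        alpha = np.array([logsumexp(alpha + log_a[:, j])
                          + acoustic(j, Z[t]) + float(j != y[t])
                          for j in range(C)])
    return logsumexp(alpha)
```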
3.3 Phoneme recognition

We used the TIMIT speech corpus [7, 9, 12] to perform experiments in phonetic recognition. We followed standard practices in preparing the training, development, and test data. Our signal processing front-end computed 39-dimensional acoustic feature vectors from 13 mel-frequency cepstral coefficients and their first and second temporal derivatives. In total, the training utterances gave rise to roughly 1.2 million frames, all of which were used in training.

We trained baseline maximum likelihood recognizers and two different types of large margin recognizers. The large margin recognizers in the first group were "low-cost" discriminative CD-HMMs whose GMMs were merely trained for frame-based classification. In particular, these GMMs were estimated by solving the optimization in eq. (8), then substituted into first-order CD-HMMs for sequence decoding. The large margin recognizers in the second group were fully trained for sequential classification. In particular, their CD-HMMs were estimated by solving the optimization in eq. (14), generalized to multiple mixture components and adaptive transition parameters [14, 16].

In all the recognizers, the acoustic feature vectors were labeled by 48 phonetic classes, each represented by one state in a first-order CD-HMM. For each recognizer, we compared the phonetic state sequences obtained by Viterbi decoding to the "ground-truth" phonetic transcriptions provided by the TIMIT corpus. For the purpose of computing error rates, we followed standard conventions in mapping the 48 phonetic state labels down to 39 broader phone categories. We computed two different types of phone error rates, one based on Hamming distance, the other based on edit distance. The former was computed simply from the percentage of mismatches at the level of individual frames. The latter was computed by aligning the Viterbi and ground truth transcriptions using dynamic programming [9] and summing the substitution, deletion, and insertion error rates from the alignment process. The "frame-based" phone error rate computed from Hamming distances is more closely tracked by our objective function for large margin training, while the "string-based" phone error rate computed from edit distances provides a more relevant metric for ASR.

Tables 2 and 3 show the results of our experiments. For both types of error rates, and across all model sizes, the best performance was consistently obtained by large margin CD-HMMs trained for sequential classification. Moreover, among the two different types of large margin recognizers, utterance-based training generally yielded significant improvement over frame-based training.

Table 2: Frame-based phone error rates, from Hamming distance, of different recognizers. See text for details.

mixture (per state)  | 1   | 2   | 4   | 8
baseline (EM)        | 45% | 45% | 42% | 41%
margin (frame)       | 37% | 36% | 35% | 34%
margin (utterance)   | 30% | 29% | 28% | 27%

Table 3: String-based phone error rates, from edit distance, of different recognizers. See text for details.

mixture (per state)  | 1     | 2     | 4     | 8
baseline (EM)        | 40.1% | 36.5% | 34.7% | 32.7%
margin (frame)       | 36.3% | 33.5% | 32.6% | 31.0%
margin (utterance)   | 31.2% | 30.8% | 29.8% | 28.2%

Discriminative learning of CD-HMMs is an active research area in ASR. Two types of algorithms have been widely used: maximum mutual information (MMI) [20] and minimum classification error (MCE) [4]. In [16], we compare the large margin training proposed in this paper to both MMI and MCE systems for phoneme recognition trained on the exact same acoustic features. There we find that the large margin approach leads to lower error rates, owing perhaps to the absence of local minima in the objective function and/or the use of margin constraints based on Hamming distances.
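The string-based error rate rests on a standard edit-distance alignment; a self-contained sketch of that dynamic program (our naming) follows.

```python
import numpy as np

def edit_distance(ref, hyp):
    # Minimum number of substitutions, deletions and insertions turning the
    # reference phone string into the decoded one; dividing by len(ref)
    # gives the string-based phone error rate described above.
    m, n = len(ref), len(hyp)
    d = np.zeros((m + 1, n + 1), dtype=int)
    d[:, 0] = np.arange(m + 1)
    d[0, :] = np.arange(n + 1)
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            d[i, j] = min(d[i - 1, j] + 1,                               # deletion
                          d[i, j - 1] + 1,                               # insertion
                          d[i - 1, j - 1] + (ref[i - 1] != hyp[j - 1]))  # substitution
    return int(d[m, n])
```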
4 Discussion

Discriminative learning of sequential models is an active area of research in both ASR [10, 13, 20] and machine learning [1, 6, 17]. This paper makes contributions to lines of work in both communities. First, in distinction to previous work in ASR, we have proposed a convex, margin-based cost function that penalizes incorrect decodings in proportion to their Hamming distance from the desired transcription. The use of the Hamming distance in this context is a crucial insight from the work of [17] in the machine learning community, and it differs profoundly from merely penalizing the log-likelihood gap between incorrect and correct transcriptions, as commonly done in ASR. Second, in distinction to previous work in machine learning, we have proposed a framework for sequential classification that naturally integrates with the infrastructure of modern speech recognizers. Using the softmax function, we have also proposed a novel way to monitor the exponentially many margin constraints that arise in sequential classification. For real-valued observation sequences, we have shown how to train large margin HMMs via convex optimizations over their parameter space of positive semidefinite matrices. Finally, we have demonstrated that these learning algorithms lead to improved sequential classification on data sets with over one million training examples (i.e., phonetically labeled frames of speech). In ongoing work, we are applying our approach to large vocabulary ASR and other tasks such as speaker identification and visual object recognition.

A Solver

The optimizations in eqs. (5), (6), (9) and (14) are convex: specifically, in terms of the matrices that parameterize large margin GMMs and HMMs, the objective functions are linear, while the constraints define convex sets. Despite being convex, however, these optimizations cannot be managed by off-the-shelf numerical optimization solvers or generic interior point methods for problems as large as the ones in this paper. We devised our own special-purpose solver for these purposes.

For simplicity, we describe our solver for the optimization of eq. (6), noting that it is easily extended to eqs. (9) and (14). To begin, we eliminate the slack variables and rewrite the objective function in terms of the hinge loss function: $\text{hinge}(z) = \max(0, z)$. This yields the objective function:

$$\mathcal{L} = \sum_{n,\, c \ne y_n} \text{hinge}\!\left( 1 + z_n^\top (\Phi_{y_n} - \Phi_c)\, z_n \right) + \gamma \sum_c \operatorname{trace}(\Psi_c), \qquad (15)$$

which is convex in terms of the positive semidefinite matrices $\Phi_c$. We minimize $\mathcal{L}$ using a projected subgradient method [2], taking steps along the subgradient of $\mathcal{L}$, then projecting the matrices $\{\Phi_c\}$ back onto the set of positive semidefinite matrices after each update. This method is guaranteed to converge to the global minimum, though it typically converges very slowly. For faster convergence, we precede this method with an unconstrained conjugate gradient optimization in the square-root matrices $\{\Omega_c\}$, where $\Phi_c = \Omega_c \Omega_c^\top$. The latter optimization is not convex, but in practice it rapidly converges to an excellent starting point for the projected subgradient method.
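The projection step of this solver is the usual eigenvalue clipping onto the positive semidefinite cone; a minimal sketch (ours, not the authors' solver) follows.

```python
import numpy as np

def project_psd(phi):
    # Project a (nearly) symmetric matrix onto the PSD cone by zeroing
    # negative eigenvalues, as in the projected subgradient method above.
    phi = 0.5 * (phi + phi.T)  # symmetrize against numerical drift
    w, v = np.linalg.eigh(phi)
    return (v * np.maximum(w, 0.0)) @ v.T
```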
Acknowledgment

This work was supported by the National Science Foundation under grant number 0238323. We thank F. Pereira, K. Crammer, and S. Roweis for useful discussions and correspondence. Part of this work was conducted while both authors were affiliated with the University of Pennsylvania.

References

[1] Y. Altun, I. Tsochantaridis, and T. Hofmann. Hidden Markov support vector machines. In T. Fawcett and N. Mishra, editors, Proceedings of the Twentieth International Conference on Machine Learning (ICML 2003), pages 3–10, Washington, DC, USA, 2003. AAAI Press.
[2] D. P. Bertsekas. Nonlinear Programming. Athena Scientific, 2nd edition, 1999.
[3] P. S. Gopalakrishnan, D. Kanevsky, A. Nádas, and D. Nahamoo. An inequality for rational functions with applications to some statistical estimation problems. IEEE Trans. Info. Theory, 37(1):107–113, 1991.
[4] B.-H. Juang and S. Katagiri. Discriminative learning for minimum error classification. IEEE Trans. Sig. Proc., 40(12):3043–3054, 1992.
[5] S. Kapadia, V. Valtchev, and S. Young. MMI training for continuous phoneme recognition on the TIMIT database. In Proc. of ICASSP 93, volume 2, pages 491–494, Minneapolis, MN, 1993.
[6] J. Lafferty, A. McCallum, and F. C. N. Pereira. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proc. 18th International Conf. on Machine Learning (ICML 2001), pages 282–289. Morgan Kaufmann, San Francisco, CA, 2001.
[7] L. F. Lamel, R. H. Kassel, and S. Seneff. Speech database development: design and analysis of the acoustic-phonetic corpus. In L. S. Baumann, editor, Proceedings of the DARPA Speech Recognition Workshop, pages 100–109, 1986.
[8] Y. LeCun, L. Jackel, L. Bottou, A. Brunot, C. Cortes, J. Denker, H. Drucker, I. Guyon, U. Muller, E. Sackinger, P. Simard, and V. Vapnik. Comparison of learning algorithms for handwritten digit recognition. In F. Fogelman and P. Gallinari, editors, Proceedings of the International Conference on Artificial Neural Networks, pages 53–60, 1995.
[9] K. F. Lee and H. W. Hon. Speaker-independent phone recognition using hidden Markov models. IEEE Transactions on Acoustics, Speech, and Signal Processing, 37(11):1641–1648, 1988.
[10] X. Li, H. Jiang, and C. Liu. Large margin HMMs for speech recognition. In Proceedings of ICASSP 2005, pages 513–516, Philadelphia, 2005.
[11] A. Nádas. A decision-theoretic formulation of a training problem in speech recognition and a comparison of training by unconditional versus conditional maximum likelihood. IEEE Transactions on Acoustics, Speech and Signal Processing, 31(4):814–817, 1983.
[12] T. Robinson. An application of recurrent nets to phone probability estimation. IEEE Transactions on Neural Networks, 5(2):298–305, 1994.
[13] J. Le Roux and E. McDermott. Optimization methods for discriminative training. In Proceedings of the Ninth European Conference on Speech Communication and Technology (EuroSpeech 2005), pages 3341–3344, Lisbon, Portugal, 2005.
[14] F. Sha. Large margin training of acoustic models for speech recognition. PhD thesis, University of Pennsylvania, 2007.
[15] F. Sha and L. K. Saul. Large margin Gaussian mixture modeling for phonetic classification and recognition. In Proceedings of ICASSP 2006, pages 265–268, Toulouse, France, 2006.
[16] F. Sha and L. K. Saul. Comparison of large margin training to other discriminative methods for phonetic recognition by hidden Markov models. In Proceedings of ICASSP 2007, Hawaii, 2007.
[17] B. Taskar, C. Guestrin, and D. Koller. Max-margin Markov networks. In S. Thrun, L. Saul, and B. Schölkopf, editors, Advances in Neural Information Processing Systems (NIPS 16). MIT Press, Cambridge, MA, 2004.
[18] L. Vandenberghe and S. P. Boyd. Semidefinite programming. SIAM Review, 38(1):49–95, March 1996.
[19] V. Vapnik. Statistical Learning Theory. Wiley, N.Y., 1998.
[20] P. C. Woodland and D. Povey. Large scale discriminative training of hidden Markov models for speech recognition. Computer Speech and Language, 16:25–47, 2002.
[21] S. J. Young. Acoustic modelling for large vocabulary continuous speech recognition. In K. Ponting, editor, Computational Models of Speech Pattern Processing, pages 18–39. Springer, 1999.
2,261
3,052
Logarithmic Online Regret Bounds for Undiscounted Reinforcement Learning

Peter Auer, Ronald Ortner
University of Leoben, Franz-Josef-Strasse 18, 8700 Leoben, Austria
{auer,rortner}@unileoben.ac.at

Abstract

We present a learning algorithm for undiscounted reinforcement learning. Our interest lies in bounds for the algorithm's online performance after some finite number of steps. In the spirit of similar methods already successfully applied for the exploration-exploitation tradeoff in multi-armed bandit problems, we use upper confidence bounds to show that our UCRL algorithm achieves logarithmic online regret in the number of steps taken with respect to an optimal policy.

1 Introduction

1.1 Preliminaries

Definition 1. A Markov decision process (MDP) M on a finite set of states S with a finite set of actions A available in each state s ∈ S consists of (i) an initial distribution μ0 over S, (ii) the transition probabilities p(s, a, s') that specify the probability of reaching state s' when choosing action a in state s, and (iii) the payoff distributions with mean r(s, a) and support in [0, 1] that specify the random reward for choosing action a in state s.

A policy on an MDP M is a mapping π : S → A. We will mainly consider unichain MDPs, in which under any policy any state can be reached (after a finite number of transitions) from any state. For a policy π let μ_π be the stationary distribution induced by π on M.¹ The average reward of π then is defined as

  ρ(M, π) := Σ_{s∈S} μ_π(s) r(s, π(s)).   (1)

A policy π* is called optimal on M if ρ(M, π) ≤ ρ(M, π*) =: ρ*(M) =: ρ* for all policies π.

Our measure for the quality of a learning algorithm is the total regret after some finite number of steps. When a learning algorithm A executes action a_t in state s_t at step t obtaining reward r_t, then R_T := Σ_{t=0}^{T−1} r_t − T ρ* denotes the total regret of A after T steps. The total regret R_T^ε with respect to an ε-optimal policy (i.e., a policy whose return differs from ρ* by at most ε) is defined accordingly.

¹ Every policy π induces a Markov chain C_π on M. If C_π is ergodic with transition matrix P, then there exists a unique invariant and strictly positive distribution μ_π, such that independent of μ0 one has μ_n = μ0 P̄_n → μ_π, where P̄_n = (1/n) Σ_{j=1}^{n} P^j. If C_π is not ergodic, μ_π will depend on μ0.
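To make Definition 1 concrete, the following is a minimal sketch (ours, not part of the paper) that evaluates the average reward ρ(M, π) of Eq. (1) for a small unichain MDP, approximating the stationary distribution μ_π of the induced chain C_π by power iteration. The example MDP, the policy, and all names are made up for illustration.

import numpy as np

# Hypothetical 2-state, 2-action MDP: p[s, a, s'] and r[s, a].
p = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.5, 0.5], [0.1, 0.9]]])
r = np.array([[1.0, 0.0],
              [0.5, 0.8]])

def average_reward(p, r, pi, iters=10_000):
    """Eq. (1): rho(M, pi) = sum_s mu_pi(s) * r(s, pi(s))."""
    n = p.shape[0]
    # Transition matrix of the Markov chain C_pi induced by pi.
    P = np.array([p[s, pi[s]] for s in range(n)])
    mu = np.full(n, 1.0 / n)      # arbitrary initial distribution
    for _ in range(iters):        # power iteration towards mu_pi
        mu = mu @ P
    return float(sum(mu[s] * r[s, pi[s]] for s in range(n)))

pi = np.array([0, 1])             # a deterministic policy S -> A
print(average_reward(p, r, pi))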
1.2 Discussion

We would like to compare this approach with the various PAC-like bounds in the literature as given for the E3-algorithm of Kearns, Singh [1] and the R-Max algorithm of Brafman, Tennenholtz [2] (cf. also [3]). Both take as inputs (among others) a confidence parameter δ and an accuracy parameter ε. The algorithms then are shown to yield ε-optimal return after time polynomial in 1/ε and 1/δ (among others) with probability 1 − δ. In contrast, our algorithm has no such input parameters and converges to an optimal policy with expected logarithmic online regret in the number of steps taken. Obviously, by using a decreasing sequence ε_t, online regret bounds for E3 and R-Max can be achieved. However, it is not clear whether such a procedure can give logarithmic online regret bounds. We rather conjecture that these bounds either will not be logarithmic in the total number of steps (if ε_t decreases quickly) or that the dependency on the parameters of the MDP, in particular on the distance between the reward of the best and a second best policy, won't be polynomial (if ε_t decreases slowly). Moreover, although our UCRL algorithm shares the "optimism under uncertainty" maxim with R-Max, our mechanism for the exploitation-exploration tradeoff is implicit, while E3 and R-Max have to distinguish between "known" and "unknown" states explicitly. Finally, in their original form both E3 and R-Max need a policy's ε-return mixing time T_ε as an input parameter. The knowledge of this parameter then is eliminated by calculating the ε-optimal policy for T_ε = 1, 2, . . ., so that sooner or later the correct ε-return mixing time is reached. This is sufficient to obtain polynomial PAC-bounds, but seems to be intricate for practical purposes. Moreover, as noted in [2], at some time step the assumed T_ε may be exponential in the true T_ε, which makes policy computation exponential in T_ε. Unlike that, we need our mixing time parameter only in the analysis. This makes our algorithm rather simple and intuitive.

Recently, more refined performance measures such as the sample complexity of exploration [3] were introduced. Strehl and Littman [4] showed that in the discounted setting, efficiency in the sample complexity implies efficiency in the average loss. However, average loss is defined in respect to the actually visited states, so that small average loss does not guarantee small total regret, which is defined in respect to the states visited by an optimal policy. For this average loss polylogarithmic online bounds were shown for the MBIE algorithm [4], while more recently logarithmic bounds for delayed Q-learning were given in [5]. However, discounted reinforcement learning is a bit simpler than undiscounted reinforcement learning, as depending on the discount factor only a finite number of steps is relevant. This makes discounted reinforcement learning similar to the setting with trials of constant length from a fixed initial state [6]. For this case logarithmic online regret bounds in the number of trials have already been given in [7].

Since we measure performance during exploration, the exploration vs. exploitation dilemma becomes an important issue. In the multi-armed bandit problem, similar exploration-exploitation tradeoffs were handled with upper confidence bounds for the expected immediate returns [8, 9]. This approach has been shown to allow good performance during the learning phase, while still converging fast to a nearly optimal policy. Our UCRL algorithm takes into account the state structure of the MDP, but is still based on upper confidence bounds for the expected return of a policy. Upper confidence bounds have been applied to reinforcement learning in various places and different contexts, e.g. interval estimation [10, 11], action elimination [12], or PAC-learning [6]. Our UCRL algorithm is similar to Strehl, Littman's MBIE algorithm [10, 4], but our confidence bounds are different, and we are interested in the undiscounted case. Another paper with a similar approach is Burnetas, Katehakis [13]. The basic idea of their rather complex index policies is to choose the action with maximal return in some specified confidence region of the MDP's probability distributions. The online regret of their algorithm is asymptotically logarithmic in the number of steps, which is best possible. Our UCRL algorithm is simpler and achieves logarithmic regret not only asymptotically but uniformly over time. Moreover, unlike in the approach of [13], knowledge about the MDP's underlying state structure is not needed.
More recently, online reinforcement learning with changing rewards chosen by an adversary was considered under the presumption that the learner has full knowledge of the transition probabilities [14]. The given algorithm achieves the best possible regret of O(√T) after T steps.

In the subsequent Sections 2 and 3 we introduce our UCRL algorithm and show that its expected online regret in unichain MDPs is O(log T) after T steps. In Section 4 we consider problems that arise when the underlying MDP is not unichain.

2 The UCRL Algorithm

To select good policies, we keep track of estimates for the average rewards and the transition probabilities. For each step t let

  N_t(s, a) = |{0 ≤ τ < t : s_τ = s, a_τ = a}|,
  R_t(s, a) = Σ_{0 ≤ τ < t: s_τ = s, a_τ = a} r_τ,
  P_t(s, a, s') = |{0 ≤ τ < t : s_τ = s, a_τ = a, s_{τ+1} = s'}|,

be the number of steps when action a was chosen in state s, the sum of rewards obtained when choosing this action, and the number of times the transition was to state s', respectively. From these numbers we immediately get estimates for the average rewards and transition probabilities,

  r̂_t(s, a) := R_t(s, a) / N_t(s, a),   p̂_t(s, a, s') := P_t(s, a, s') / N_t(s, a),

provided that the number of visits in (s, a), N_t(s, a) > 0. In general, these estimates will deviate from the respective true values. However, together with appropriate confidence intervals they may be used to define a set M_t of plausible MDPs. Our algorithm then chooses an optimal policy π̂_t for an MDP M̂_t with maximal average reward ρ̂*_t := ρ*(M̂_t) among the MDPs in M_t. That is,

  M̂_t := argmax_{M∈M_t} ρ*(M), and π̂_t := argmax_π ρ(M̂_t, π).

More precisely, we want M_t to be a set of plausible MDPs in the sense that

  P{ρ* > ρ̂*_t} < t^(−θ)   (2)

for some θ > 2. Essentially, condition (2) means that it is unlikely that the true MDP M is not in M_t. Actually, M_t is defined to contain exactly those unichain MDPs M' whose transition probabilities p'(·,·,·) and rewards r'(·,·) satisfy for all states s, s' and actions a

  r'(s, a) ≤ r̂_t(s, a) + sqrt( log(2 t^θ |S||A|) / (2 N_t(s, a)) ),   (3)
  |p'(s, a, s') − p̂_t(s, a, s')| ≤ sqrt( log(4 t^θ |S|²|A|) / (2 N_t(s, a)) ).   (4)

Conditions (3) and (4) describe confidence bounds on the rewards and transition probabilities of the true MDP M such that (2) is implied (cf. Section 3.1 below). The intuition behind the algorithm is that if a non-optimal policy is followed, then this is eventually observed and something about the MDP is learned. In the proofs we show that this learning happens sufficiently fast to approach an optimal policy with only logarithmic regret. As switching policies too often may be harmful, and estimates don't change very much after few steps, our algorithm discards the policy π̂_t only if there was considerable progress concerning the estimates p̂(s, π̂_t(s), s') or r̂(s, π̂_t(s)). That is, UCRL sticks to a policy until the length of some of the confidence intervals given by conditions (3) and (4) is halved. Only then a new policy is calculated. We will see below (cf. Section 3.3) that this condition limits the number of policy changes without paying too much for not changing to an optimal policy earlier. Summing up, Figure 1 displays our algorithm.

Notation: Set conf_p(t, s, a) := min{1, sqrt( log(4 t^θ |S|²|A|) / (2 N_t(s, a)) )} and conf_r(t, s, a) := min{1, sqrt( log(2 t^θ |S||A|) / (2 N_t(s, a)) )}.
Initialization:
• Set t = 0.
• Set N_0(s, a) := R_0(s, a) := P_0(s, a, s') := 0 for all s, a, s'.
• Observe first state s_0.
For rounds k = 1, 2, . . . do
Initialize round k:
1. Set t_k := t.
2. Recalculate estimates r̂_t(s, a) and p̂_t(s, a, s') according to r̂_t(s, a) := R_t(s, a)/N_t(s, a) and p̂_t(s, a, s') := P_t(s, a, s')/N_t(s, a), provided that N_t(s, a) > 0. Otherwise set r̂_t(s, a) := 1 and p̂_t(s, a, s') := 1/|S|.
3. Calculate new policy π̂_{t_k} := argmax_π max{ρ(M, π) : M ∈ M_t}, where M_t consists of plausible unichain MDPs M' with rewards r'(s, a) ≤ r̂_t(s, a) + conf_r(t, s, a) and transition probabilities |p'(s, a, s') − p̂_t(s, a, s')| ≤ conf_p(t, s, a).
Execute chosen policy π̂_{t_k}:
4. While conf_r(t, S, A) > conf_r(t_k, S, A)/2 and conf_p(t, S, A) > conf_p(t_k, S, A)/2 do
(a) Choose action a_t := π̂_{t_k}(s_t).
(b) Observe obtained reward r_t and next state s_{t+1}.
(c) Update:
• Set N_{t+1}(s_t, a_t) := N_t(s_t, a_t) + 1.
• Set R_{t+1}(s_t, a_t) := R_t(s_t, a_t) + r_t.
• Set P_{t+1}(s_t, a_t, s_{t+1}) := P_t(s_t, a_t, s_{t+1}) + 1.
• All other values N_{t+1}(s, a), R_{t+1}(s, a), and P_{t+1}(s, a, s') are set to N_t(s, a), R_t(s, a), and P_t(s, a, s'), respectively.
(d) Set t := t + 1.

Figure 1: The UCRL algorithm.

Remark 1. The optimal policy π̂ in the algorithm can be efficiently calculated by a modified version of value iteration (cf. [15]).
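As a complement to Figure 1, the following is a minimal sketch (ours, not the authors' implementation) of the bookkeeping UCRL needs: the counts N_t, R_t, P_t, the resulting estimates, and the confidence widths conf_r and conf_p from Eqs. (3) and (4). The value θ = 2.5 is an arbitrary choice satisfying θ > 2, and all names are our own; step 3 of Figure 1 (the optimistic policy computation by modified value iteration) is deliberately left out.

import numpy as np

S, A, theta = 3, 2, 2.5            # theta > 2, arbitrary illustration value
N = np.zeros((S, A))               # visit counts N_t(s, a)
R = np.zeros((S, A))               # accumulated rewards R_t(s, a)
P = np.zeros((S, A, S))            # transition counts P_t(s, a, s')

def update(s, a, reward, s_next):
    """Step 4(c) of Figure 1: update the counts after one observed step."""
    N[s, a] += 1; R[s, a] += reward; P[s, a, s_next] += 1

def estimates():
    """r_hat and p_hat; entries with N = 0 get the optimistic defaults."""
    r_hat = np.where(N > 0, R / np.maximum(N, 1), 1.0)
    p_hat = np.where(N[:, :, None] > 0,
                     P / np.maximum(N, 1)[:, :, None], 1.0 / S)
    return r_hat, p_hat

def conf_r(t):
    """Width of the reward confidence interval, Eq. (3)."""
    return np.minimum(1.0, np.sqrt(np.log(2 * t**theta * S * A)
                                   / (2 * np.maximum(N, 1))))

def conf_p(t):
    """Width of the transition confidence interval, Eq. (4)."""
    return np.minimum(1.0, np.sqrt(np.log(4 * t**theta * S**2 * A)
                                   / (2 * np.maximum(N, 1))))

update(0, 1, 0.7, 2)
r_hat, p_hat = estimates()
print(r_hat[0, 1], p_hat[0, 1], conf_r(t=2)[0, 1], conf_p(t=2)[0, 1])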
3 Analysis for Unichain MDPs

3.1 An Upper Bound on the Optimal Reward

We show that with high probability the true MDP M is contained in the set M_t of plausible MDPs.

Lemma 1. For any t, any reward r(s, a) and any transition probability p(s, a, s') of the true MDP M we have

  P{ r̂_t(s, a) < r(s, a) − sqrt( log(2 t^θ |S||A|) / (2 N_t(s, a)) ) } < t^(−θ) / (2|S||A|),   (5)
  P{ |p̂_t(s, a, s') − p(s, a, s')| > sqrt( log(4 t^θ |S|²|A|) / (2 N_t(s, a)) ) } < t^(−θ) / (2|S|²|A|).   (6)

Proof. By Chernoff-Hoeffding's inequality.

Using the definition of M_t as given by (3) and (4) and summing over all s, a, and s', Lemma 1 shows that M ∈ M_t with high probability. This implies that the maximal average reward ρ̂*_t assumed by our algorithm when calculating a new policy at step t is an upper bound on ρ*(M) with high probability.

Corollary 1. For any t: P{ρ* > ρ̂*_t} < t^(−θ).

3.2 Sufficient Precision and Mixing Times

In order to upper bound the loss, we consider the precision needed to guarantee that the policy calculated by UCRL is (ε-)optimal. This sufficient precision will of course depend on ε or, in case one wants to compete with an optimal policy, on the minimal difference between ρ* and the average reward of some suboptimal policy,

  Δ := min_{π: ρ(M,π) < ρ*} (ρ* − ρ(M, π)).

It is sufficient that the difference between ρ(M̂_t, π̂_t) and ρ(M, π̂_t) is small in order to guarantee that π̂_t is an (ε-)optimal policy. For if |ρ(M̂_t, π̂_t) − ρ(M, π̂_t)| < ε, then by Corollary 1 with high probability

  ε > |ρ(M̂_t, π̂_t) − ρ(M, π̂_t)| ≥ |ρ*(M) − ρ(M, π̂_t)|,   (7)

so that π̂_t is already an ε-optimal policy on M. For ε = Δ, (7) implies the optimality of π̂_t. Thus, we consider bounds on the deviation of the transition probabilities and rewards for the assumed MDP M̂_t from the true values, such that (7) is implied. This is handled in the subsequent proposition, where we use the notion of the MDP's mixing time, which will play an essential role throughout the analysis.

Definition 2. Given an ergodic Markov chain C, let T_{s,s'} be the first passage time for two states s, s', that is, the time needed to reach s' when starting in s. Furthermore let T_{s,s} be the return time to state s. Let T_C := max_{s,s'∈S} E(T_{s,s'}), and κ_C := max_{s∈S} [ max_{s'≠s} E(T_{s',s}) / (2 E(T_{s,s})) ]. Then the mixing time of a unichain MDP M is T_M := max_π T_{C_π}, where C_π is the Markov chain induced by π on M. Furthermore, we set κ_M := max_π κ_{C_π}.
Our notion of mixing time is different from the notion of ε-return mixing time given in [1, 2], which depends on an additional parameter ε. However, it serves a similar purpose.

Proposition 1. Let p(·,·), p̂(·,·) and r(·), r̂(·) be the transition probabilities and rewards of the MDPs M and M̂ under the policy π̂, respectively. If for all states s, s'

  |r̂(s) − r(s)| < ε_r := ε/2 and |p̂(s, s') − p(s, s')| < ε_p := ε / (2 κ_M |S|²),

then |ρ(M̂, π̂) − ρ(M, π̂)| < ε.

The proposition is an easy consequence of the following result about the difference in the stationary distributions of ergodic Markov chains.

Theorem 1 (Cho, Meyer [16]). Let C, Ĉ be two ergodic Markov chains on the same state space S with transition probabilities p(·,·), p̂(·,·) and stationary distributions μ, μ̂. Then the difference in the distributions μ, μ̂ can be upper bounded by the difference in the transition probabilities as follows:

  max_{s∈S} |μ(s) − μ̂(s)| ≤ κ_C max_{s∈S} Σ_{s'∈S} |p(s, s') − p̂(s, s')|,   (8)

where κ_C is as given in Definition 2.

Proof of Proposition 1. By (8),

  Σ_{s∈S} |μ(s) − μ̂(s)| ≤ |S| κ_M max_{s∈S} Σ_{s'∈S} |p̂(s, s') − p(s, s')| ≤ κ_M |S|² ε_p.

As the rewards are in [0, 1] and Σ_s μ(s) = 1, we have by (1)

  |ρ(M̂, π̂) − ρ(M, π̂)| ≤ Σ_{s∈S} |μ̂(s) − μ(s)| r̂(s) + Σ_{s∈S} |r̂(s) − r(s)| μ(s) < κ_M |S|² ε_p + ε_r = ε.

Since ε_r > ε_p and the confidence intervals for rewards are smaller than for transition probabilities (cf. Lemma 1), in the following we only consider the precision needed for transition probabilities.
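The bound of Theorem 1 can be checked numerically. Below is a small illustration of ours (not from the paper): we perturb a made-up ergodic chain, compute both stationary distributions, and compare the two sides of inequality (8). The condition number κ_C is evaluated from Definition 2, using expected first passage times obtained from a linear system and Kac's formula E(T_{s,s}) = 1/μ(s) for the return times.

import numpy as np

def stationary(P):
    """Stationary distribution of an ergodic chain (left eigenvector)."""
    vals, vecs = np.linalg.eig(P.T)
    v = np.real(vecs[:, np.argmax(np.real(vals))])
    return v / v.sum()

def hitting_times(P, target):
    """E(T_{s', target}) for all s' != target, via (I - Q) h = 1."""
    idx = [i for i in range(len(P)) if i != target]
    Q = P[np.ix_(idx, idx)]
    h = np.linalg.solve(np.eye(len(idx)) - Q, np.ones(len(idx)))
    full = np.zeros(len(P)); full[idx] = h
    return full

def kappa(P):
    """kappa_C from Definition 2; E(T_{s,s}) = 1/mu(s) by Kac's formula."""
    mu = stationary(P)
    return max(
        max(hitting_times(P, s)[sp] for sp in range(len(P)) if sp != s)
        * mu[s] / 2.0
        for s in range(len(P))
    )

P = np.array([[0.8, 0.2, 0.0], [0.1, 0.7, 0.2], [0.2, 0.1, 0.7]])
Phat = P + 0.01 * np.array([[1, -1, 0], [0, 1, -1], [-1, 0, 1]])
mu, muhat = stationary(P), stationary(Phat)
lhs = np.max(np.abs(mu - muhat))
rhs = kappa(P) * np.max(np.abs(P - Phat).sum(axis=1))
print(lhs, "<=", rhs)   # the two sides of Eq. (8)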
3.3 Bounding the Regret

As can be seen from the description of the algorithm, we split the sequence of steps into rounds, where a new round starts whenever the algorithm recalculates its policy. The following facts follow immediately from the form of our confidence intervals and Lemma 1, respectively.

Proposition 2. For halving a confidence interval of a reward or transition probability for some (s, a) ∈ S × A, the number N_t(s, a) of visits in (s, a) has to be at least doubled.

Corollary 2. The number of rounds after T steps cannot exceed |S||A| log₂(T / (|S||A|)).

Proposition 3. If N_t(s, a) ≥ log(4 t^θ |S|²|A|) / (2 ε_p²), then the confidence intervals for (s, a) are smaller than ε_p.

We need to consider three sources of regret: first, by executing a suboptimal policy in a round of length τ, we may lose reward up to τ within this round; second, there may be some loss when changing policies; third, we have to consider the error probabilities with which some of our confidence intervals fail.

3.3.1 Regret due to Suboptimal Rounds

Proposition 3 provides an upper bound on the number of visits needed in each (s, a) in order to guarantee that a newly calculated policy is optimal. This can be used to upper bound the total number of steps in suboptimal rounds. Consider all suboptimal rounds with |p̂_{t_k}(s, a, s') − p(s, a, s')| ≥ ε_p for some s', where a policy π̂_{t_k} with π̂_{t_k}(s) = a is played. Let m(s, a) be the number of these rounds and τ_i(s, a) (i = 1, . . . , m(s, a)) their respective lengths. The mean passage time between any state s'' and s is upper bounded by T_M. Then by Markov's inequality, the probability that it takes more than 2T_M steps to reach s from s'' is smaller than 1/2. Thus we may separate each round i into ⌈τ_i(s, a) / (2T_M)⌉ intervals of length ≤ 2T_M, in each of which the probability of visiting state s is at least 1/2.

Thus we may lower bound the number of visits N_{s,a}(n) in (s, a) within n such intervals by an application of Chernoff-Hoeffding's inequality:

  P{ N_{s,a}(n) ≥ n/2 − sqrt(n log T) } ≥ 1 − 1/T.   (9)

Since by Proposition 3, N_t(s, a) < log(4 T^θ |S|²|A|) / (2 ε_p²), we get

  Σ_{i=1}^{m(s,a)} ⌈τ_i(s, a) / (2T_M)⌉ < c log(4 T^θ |S|²|A|) / ε_p²

with probability 1 − 1/T for a suitable constant c < 11. This gives for the expected regret in these rounds

  E( Σ_{i=1}^{m(s,a)} τ_i(s, a) ) < 2c T_M log(4 T^θ |S|²|A|) / ε_p² + 2 m(s, a) T_M + (1/T) · T.

Applying Corollary 2 and summing up over all (s, a), one sees that the expected regret due to suboptimal rounds cannot exceed

  2c |S||A| T_M log(4 T^θ |S|²|A|) / ε_p² + 2 T_M |S|²|A|² log₂(T / (|S||A|)) + |S||A|.

3.3.2 Loss by Policy Changes

For any policy π̂_t there may be some states from which the expected average reward for the next τ steps is larger than when starting in some other state. This does not play a role if τ → ∞. However, as we are playing our policies only for a finite number of steps before considering a change, we have to take into account that every time we switch policies, we may need a start-up phase to get into such a favorable state. In average, this cannot take more than T_M steps, as this time is sufficient to reach any "good" state from some "bad" state. This is made more precise in the following lemma. We omit a detailed proof.

Lemma 2. For all policies π, all starting states s_0 and all T ≥ 0

  E( Σ_{t=0}^{T−1} r(s_t, π(s_t)) ) ≥ T ρ(π, M) − T_M.

By Corollary 2, the corresponding expected regret after T steps is ≤ |S||A| T_M log₂(T / (|S||A|)).

3.3.3 Regret if Confidence Intervals Fail

Finally, we have to take into account the error probabilities, with which in each round a transition probability or a reward, respectively, is not contained in its confidence interval. According to Lemma 1, the probability that this happens at some step t for a given state-action pair is < t^(−θ)/(2|S||A|) + |S| · t^(−θ)/(2|S|²|A|) = t^(−θ)/(|S||A|). Now let t_1 = 1, t_2, . . . , t_N ≤ T be the steps in which a new round starts. As the regret in each round can be upper bounded by its length, one obtains for the regret caused by failure of confidence intervals

  Σ_{i=1}^{N−1} (t_{i+1} − t_i) t_i^(−θ) / (|S||A|) ≤ Σ_{i=1}^{N−1} c t_i^(1−θ) / (|S||A|) < Σ_{t=1}^{∞} c t^(1−θ) / (|S||A|) < c',

using that t_{i+1} − t_i < c t_i for a suitable constant c = c(|S|, |A|, T_M) and provided that θ > 2.

3.3.4 Putting Everything Together

Summing up over all the sources of regret and replacing for ε_p yields the following theorem, which is a generalization of similar results that were achieved for the multi-armed bandit problem in [8].

Theorem 2. On unichain MDPs, the expected total regret of the UCRL algorithm with respect to an (ε-)optimal policy after T > 1 steps can be upper bounded by

  E(R_T^ε) < const · (|A| T_M κ_M² |S|⁵ / ε²) log T + 3 T_M |S|²|A|² log₂(T / (|S||A|)), and
  E(R_T) < const · (|A| T_M κ_M² |S|⁵ / Δ²) log T + 3 T_M |S|²|A|² log₂(T / (|S||A|)).

4 Remarks and Open Questions on Multichain MDPs

In a multichain MDP a policy π may split up the MDP into ergodic subchains S_i^π. Thus it may happen during the learning phase that one goes wrong and ends up in a part of the MDP that gives suboptimal return but cannot be left under any policy whatsoever. As already observed by Kearns, Singh [1], in this case it seems fair to compete with ρ*(M) := max_π min_{S_i^π} ρ(S_i^π, π). Unfortunately, the original UCRL algorithm may not work very well in this setting, as it is impossible for the algorithm to distinguish between a very low probability for a transition and its impossibility.
Here the "optimism in the face of uncertainty" idea fails, as there is no way to falsify the wrong belief in a possible transition. Obviously, if we knew for each policy which subchains it induces on M (the MDP's ergodic structure), UCRL could choose an MDP M̂_t and a policy π̂_t that maximizes the reward among all plausible MDPs with the given ergodic structure. However, only the empiric ergodic structure (based on the observations so far) is known. As the empiric ergodic structure may not be reliable, one may additionally explore the ergodic structures of all policies. Alas, the number of additional exploration steps will depend on the smallest positive transition probability. If the latter is not known, it seems that logarithmic online regret bounds can no longer be guaranteed. However, we conjecture that for a slightly modified algorithm the logarithmic online regret bounds still hold for communicating MDPs, in which for any two states s, s' there is a suitable policy π such that s is reachable from s' under π (i.e., s, s' are contained in the same subchain S_i^π). As Theorem 1 does not hold for communicating MDPs in general, a proof would need a different analysis.

5 Conclusion and Outlook

Beside the open problems on multichain MDPs, it is an interesting question whether our results also hold when assuming for the mixing time not the slowest policy for reaching any state but the fastest. Another research direction is to consider value function approximation and continuous reinforcement learning problems. For practical purposes, using the variance of the estimates will reduce the width of the upper confidence bounds and will make the exploration even more focused, improving learning speed and regret bounds. In this setting, we have experimental results comparable to those of the MBIE algorithm [10], which clearly outperforms other learning algorithms like R-Max or ε-greedy.

Acknowledgements. This work was supported in part by the Austrian Science Fund FWF (S9104-N04 SP4) and the IST Programme of the European Community, under the PASCAL Network of Excellence, IST-2002-506778. This publication only reflects the authors' views.

References
[1] Michael J. Kearns and Satinder P. Singh. Near-optimal reinforcement learning in polynomial time. Mach. Learn., 49:209-232, 2002.
[2] Ronen I. Brafman and Moshe Tennenholtz. R-max - a general polynomial time algorithm for near-optimal reinforcement learning. J. Mach. Learn. Res., 3:213-231, 2002.
[3] Sham M. Kakade. On the Sample Complexity of Reinforcement Learning. PhD thesis, University College London, 2003.
[4] Alexander L. Strehl and Michael L. Littman. A theoretical analysis of model-based interval estimation. In Proc. 22nd ICML, pages 857-864, 2005.
[5] Alexander L. Strehl, Lihong Li, Eric Wiewiora, John Langford, and Michael L. Littman. PAC model-free reinforcement learning. In Proc. 23rd ICML, pages 881-888, 2006.
[6] Claude-Nicolas Fiechter. Efficient reinforcement learning. In Proc. 7th COLT, pages 88-97. ACM, 1994.
[7] Peter Auer and Ronald Ortner. Online regret bounds for a new reinforcement learning algorithm. In Proc. 1st ACVW, pages 35-42. OCG, 2005.
[8] Peter Auer. Using confidence bounds for exploitation-exploration trade-offs. J. Mach. Learn. Res., 3:397-422, 2002.
[9] Peter Auer, Nicolò Cesa-Bianchi, and Paul Fischer. Finite-time analysis of the multi-armed bandit problem. Mach. Learn., 47:235-256, 2002.
[10] Alexander L. Strehl and Michael L. Littman. An empirical evaluation of interval estimation for Markov decision processes. In Proc. 16th ICTAI, pages 128-135. IEEE Computer Society, 2004.
[11] Leslie P. Kaelbling. Learning in Embedded Systems. MIT Press, 1993.
[12] Eyal Even-Dar, Shie Mannor, and Yishay Mansour. Action elimination and stopping conditions for reinforcement learning. In Proc. 20th ICML, pages 162-169. AAAI Press, 2003.
[13] Apostolos N. Burnetas and Michael N. Katehakis. Optimal adaptive policies for Markov decision processes. Math. Oper. Res., 22(1):222-255, 1997.
[14] Eyal Even-Dar, Sham M. Kakade, and Yishay Mansour. Experts in a Markov decision process. In Proc. 17th NIPS, pages 401-408. MIT Press, 2004.
[15] Martin L. Puterman. Markov Decision Processes. Discrete Stochastic Programming. Wiley, 1994.
[16] Grace E. Cho and Carl D. Meyer. Markov chain sensitivity measured by mean first passage times. Linear Algebra Appl., 316:21-28, 2000.
The Robustness-Performance Tradeoff in Markov Decision Processes

Huan Xu, Shie Mannor
Department of Electrical and Computer Engineering, McGill University
Montreal, Quebec, Canada, H3A 2A7
[email protected], [email protected]

Abstract

Computation of a satisfactory control policy for a Markov decision process when the parameters of the model are not exactly known is a problem encountered in many practical applications. The traditional robust approach is based on a worst-case analysis and may lead to an overly conservative policy. In this paper we consider the tradeoff between nominal performance and the worst case performance over all possible models. Based on parametric linear programming, we propose a method that computes the whole set of Pareto efficient policies in the performance-robustness plane when only the reward parameters are subject to uncertainty. In the more general case when the transition probabilities are also subject to error, we show that the strategy with the "optimal" tradeoff might be non-Markovian and hence is in general not tractable.

1 Introduction

In many decision problems the parameters of the problem are inherently uncertain. This uncertainty, termed parameter uncertainty, can be the result of estimating the parameters from a finite sample or a specification of the parameters that itself includes uncertainty. The standard approach in decision making to circumvent the adverse effect of the parameter uncertainty is to find a solution that performs best under the worst possible parameters. This approach, termed the "robust" approach, has been used in both single stage ([1]) and multi-stage decision problems (e.g., [2]). In robust optimization problems, it is usually assumed that the constraint parameters are uncertain. By requiring the solution to be feasible for all possible parameters within the uncertainty set, Soyster ([1]) solved the column-wise independent uncertainty case, and Ben-Tal and Nemirovski ([3]) solved the row-wise independent case. In robust MDP problems, there may be two different types of parameter uncertainty, namely, reward uncertainty and transition probability uncertainty. Under the assumption that the uncertainty is state-wise independent (an assumption made by all papers to date, to the best of our knowledge), the optimality principle holds and this problem can be decomposed as a series of step-by-step mini-max problems solved by backward induction ([2, 4, 5]).

The above cited results focus on worst-case analysis. This implies that the vector of nominal parameters (the parameters used as an approximation of the true ones regardless of the uncertainty) is not treated in a special way and is just an element of the set of feasible parameters. The objective of the worst-case analysis is to eliminate the possibility of disastrous performance. There are several disadvantages to this approach. First, worst-case analysis may lead to an overly conservative solution, i.e., a solution which provides mediocre performance under all possible parameters. Second, the desirability of the solution highly depends on the precise modeling of the uncertainty set, which is often based on some ad-hoc criterion. Third, it may happen that the nominal parameters are close to the real parameters, so that the performance of the solution under the nominal parameters may provide important information for predicting the performance under the true parameters.
Finally, there is a certain tradeoff relationship between the worst-case performance and the nominal performance; that is, if the decision maker insists on maximizing one criterion, the other criterion may decrease dramatically. On the other hand, relaxing both criteria may lead to a well balanced solution with both satisfactory nominal performance and also reasonable robustness to parameter uncertainty.

In this paper we capture the Robustness-Performance (RP) tradeoff explicitly. We use the worst-case behavior of a solution as the function representing its robustness, and formulate the decision problem as an optimization of both the robustness criterion and the performance under nominal parameters simultaneously. Here, "simultaneously" is achieved by optimizing the weighted sum of the performance criterion and the robustness criterion. To the best of our knowledge, this is the first attempt to address the over-conservativeness of worst-case analysis in robust MDPs. Instead of optimizing the weighted sum of the robustness and performance for some specific weights, we show how to efficiently find the solutions for all possible weights. We prove that the set of these solutions is in fact equivalent to the set of all Pareto efficient solutions in the robustness-performance space. Therefore, we solve the tradeoff problem without choosing a specific tradeoff parameter, and leave the subjective decision of determining the exact tradeoff to the decision maker. Instead of arbitrarily claiming that a certain solution is a good tradeoff, our algorithm computes the whole tradeoff relationship so that the decision maker can choose the most desirable solution according to her preference, which is usually complicated and for which an explicit form is not available. Our approach thus avoids the tuning of tradeoff parameters, where generally no good a-priori method exists. This is opposed to certain relaxations of the worst-case robust optimization approach like [6] (for single stage only) where some explicit tradeoff parameters have to be chosen. Unlike risk-sensitive learning approaches [7, 8, 9], which aim to tune a strategy online, our approach computes a robust strategy offline without trial and error.

The paper is organized as follows. Section 2 is devoted to the RP tradeoff for Linear Programming. In Sections 3 and 4 we discuss the RP tradeoff for MDPs with uncertain rewards and uncertain transition probabilities, respectively. In Section 5 we present a computational example. Some concluding remarks are offered in Section 6.

2 Parametric linear programming and RP tradeoffs in optimization

In this section, we briefly recall Parametric Linear Programming (PLP) [10, 11, 12], and show how it can be used to find the whole set of Pareto efficient solutions for RP tradeoffs in Linear Programming. This serves as the basis for the discussion of RP tradeoffs in MDPs.

2.1 Parametric Linear Programming

A Parametric Linear Program is the following set of infinitely many optimization problems: for all λ ∈ [0, 1],

  Minimize: λ c(1)'x + (1 − λ) c(2)'x
  Subject to: Ax = b, x ≥ 0.   (1)

We call c(1)'x the first objective, and c(2)'x the second objective. We assume that the Linear Program (LP) is feasible and bounded for both objectives. Although there are uncountably many possible λ, Problem (1) can be solved by a simplex-like algorithm. Here, "solve" means that for each λ, we find at least one optimal solution.
An outline of the PLP algorithm is described in Algorithm 1, which is essentially a tableau simplex algorithm in which the entering variable is determined in a specific way. See [10] for a precise description.

Algorithm 1.
1. Find a basic feasible optimal solution for λ = 0. If multiple solutions exist, choose one among those with minimal c(1)'x.
2. Record the current basic feasible solution. Check the reduced costs (i.e., the zero row in the simplex table) of the first objective, denoted c̄_j(1). If none of them is negative, end.
3. Among all columns with negative c̄_j(1), choose the one with the largest ratio |c̄_j(1)/c̄_j(2)| as the entering variable.
4. Pivot the basis, go to 2.

This algorithm is based on the observation that for any λ, there exists an optimal basic feasible solution. Hence, by finding a suitable subset of all vertices of the feasible region, we can solve the PLP. Furthermore, we can find this subset by sequentially pivoting among neighboring extreme points, as the simplex algorithm does. This algorithm terminates after finitely many iterations. It is also known that the optimal value of a PLP is a continuous piecewise linear function of λ. The theoretical computational cost is exponential, although in practice the algorithm works well; this property is shared by all simplex-based algorithms. A detailed discussion of PLP can be found in [10, 11, 12].

2.2 RP tradeoffs in Linear Programming

Consider the following LP:

  NOMINAL PROBLEM:
  Minimize: c'x
  Subject to: Ax ≤ b.   (2)

Here A ∈ R^{n×m}, x ∈ R^m, b ∈ R^n, c ∈ R^m. Suppose that the constraint matrix A is only a guess of the unknown true parameter A_r, which is known to belong to a set A (we call A the uncertainty set). We assume that A is constraint-wise independent and polyhedral for each of the constraints. That is, A = Π_{i=1}^{n} A_i, and for each i, there exists a matrix T(i) and a vector v(i) such that A_i = {a(i)' | T(i) a(i) ≤ v(i)}.

To quantify how a solution x behaves with respect to the parameter uncertainty, we define the following criterion to be minimized as its robustness measure (more accurately, non-robustness measure):

  p(x) := sup_{Â∈A} 1'[Âx − b]⁺ = Σ_{i=1}^{n} sup_{â(i): T(i)â(i) ≤ v(i)} max{ â(i)'x − b_i, 0 }.   (3)

Here [·]⁺ stands for the positive part of a vector, â(i) is the i-th row of the matrix Â, and b_i is the i-th element of b. In words, the function p(x) is the largest possible sum of constraint violations.

Using the weighted sum of the performance and robustness objectives as the minimizing objective, we formulate the explicit tradeoff between robustness and performance as:

  GENERAL PROBLEM: λ ∈ [0, 1]
  Minimize: λ c'x + (1 − λ) p(x)
  Subject to: Ax ≤ b.   (4)

Here A ∈ R^{n×m}, x ∈ R^m, b ∈ R^n, c ∈ R^m. By the duality theorem, for a given x, sup_{â(i): T(i)â(i) ≤ v(i)} â(i)'x equals the optimal value of the following LP in y(i):

  Minimize: v(i)'y(i)
  Subject to: T(i)'y(i) = x, y(i) ≥ 0.

Thus, by adding slack variables, we rewrite GENERAL PROBLEM as the following PLP and solve it using Algorithm 1:

  GENERAL PROBLEM (PLP): λ ∈ [0, 1]
  Minimize: λ c'x + (1 − λ) 1'z
  Subject to: Ax ≤ b,
  T(i)'y(i) = x, v(i)'y(i) − b_i ≤ z_i, z ≥ 0, y(i) ≥ 0; i = 1, 2, · · · , n.   (5)

Here, 1 stands for a vector of ones of length n, z_i is the i-th element of z, and x, y(i), z are the optimization variables.
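As an illustration of problem (5), here is a small sketch of our own (not from the paper) for the special case of interval uncertainty on each row of A, where the worst realization of row i for x ≥ 0 is simply the upper endpoint. Rather than implementing the parametric simplex of Algorithm 1, it approximates the λ-sweep by solving the LP on a grid of λ values with scipy.optimize.linprog; all data are made up.

import numpy as np
from scipy.optimize import linprog

# Made-up nominal data: minimize c'x s.t. Ax <= b, x >= 0, with
# row-wise interval uncertainty: each row of A may move up to D.
A = np.array([[1.0, 2.0], [3.0, 1.0]])
D = np.array([[0.3, 0.3], [0.2, 0.4]])
b = np.array([4.0, 6.0])
c = np.array([-1.0, -1.0])

def tradeoff_solution(lam):
    """One instance of problem (5); decision variables are (x, z)."""
    n, m = A.shape
    obj = np.concatenate([lam * c, (1 - lam) * np.ones(n)])
    # Nominal feasibility Ax <= b, and z_i >= (A + D)_i x - b_i;
    # for x >= 0 the worst realization of row i is (A + D)_i.
    A_ub = np.block([[A, np.zeros((n, n))],
                     [A + D, -np.eye(n)]])
    b_ub = np.concatenate([b, b])
    res = linprog(obj, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None)] * (m + n))
    return res.x[:m]

for lam in np.linspace(0.0, 1.0, 5):   # grid stand-in for the PLP sweep
    x = tradeoff_solution(lam)
    p_x = np.maximum((A + D) @ x - b, 0).sum()   # p(x), Eq. (3)
    print(f"lambda={lam:.2f}  x={np.round(x, 3)}  c'x={c @ x:.3f}  p(x)={p_x:.3f}")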
3 The robustness-performance tradeoff for MDPs with uncertain rewards

A (finite) MDP is defined as a 5-tuple <T, S, A_s, p(·|s, a), r(s, a)> where: T is the (possibly infinite) set of decision stages; S is the state set; A_s is the action set of state s; p(·|s, a) is the transition probability; and r(s, a) is the expected reward of state s with action a ∈ A_s. We use r to denote the vector combining the rewards for all state-action pairs and r_s to denote the vector combining all rewards of state s. Thus, r(s, a) = r_s(a). Both S and A_s are assumed finite. Both p and r are time invariant.

In this section, we consider the case where r is not known exactly. More specifically, we have a nominal parameter r̄(s, a) which is believed to be a reasonably good guess of the true reward. The reward r is known to belong to a bounded set R. We further assume that the uncertainty set R is state-wise independent and a polytope for each state. That is, R = Π_{s∈S} R_s, and for each s ∈ S, there exists a matrix C_s and a vector d_s such that R_s = {r_s | C_s r_s ≥ d_s}. We assume that for different visits of one state, the realization of the reward need not be identical and may take different values within the uncertainty set. The set of admissible control policies for the decision maker is the set of randomized history dependent policies, which we denote by Π_HR. In the following three subsections we discuss different standard reward criteria: cumulative reward with a finite horizon, discounted reward with infinite horizon, and limiting average reward with infinite horizon under a unichain assumption.

3.1 Finite horizon case

In the finite horizon case (T = {1, · · · , N}), we assume without loss of generality that each state belongs to only one stage, which is equivalent to the assumption of non-stationary reward realization, and use S_i to denote the set of states at the i-th stage. We also assume that the first stage consists of only one state s1, and that there are no terminal rewards. We define the following two functions as the performance measure and the robustness measure of a policy π ∈ Π_HR:

  P(π) := E_π { Σ_{i=1}^{N−1} r̄(s_i, a_i) },
  R(π) := min_{r∈R} E_π { Σ_{i=1}^{N−1} r(s_i, a_i) }.   (6)

The minimum is attainable, since R is compact and the total expected reward is a continuous function of r. We say that a strategy π is Pareto efficient if it attains the maximum of P(π) among all strategies that have a certain value of R(π). The following result is straightforward; the proof can be found in the full version of the paper.

Proposition 1.
1. If π* is a Pareto efficient strategy, then there exists a λ ∈ [0, 1] such that π* ∈ argmax_{π∈Π_HR} {λ P(π) + (1 − λ) R(π)}.
2. If π* ∈ argmax_{π∈Π_HR} {λ P(π) + (1 − λ) R(π)} for some λ ∈ (0, 1), then π* is a Pareto efficient strategy.

For 0 ≤ t ≤ N, s ∈ S_t, and λ ∈ [0, 1] define:

  P_t(π, s) := E_π { Σ_{i=t}^{N−1} r̄(s_i, a_i) | s_t = s },
  R_t(π, s) := min_{r∈R} E_π { Σ_{i=t}^{N−1} r(s_i, a_i) | s_t = s },
  c_t^λ(s) := max_{π∈Π_HR} { λ P_t(π, s) + (1 − λ) R_t(π, s) }.   (7)

We set P_N ≡ R_N ≡ c_N ≡ 0, and note that c_1^λ(s1) is the optimal RP tradeoff with weight λ. The following theorem shows that the principle of optimality holds for c. The proof is omitted since it follows similarly to standard backward induction in finite horizon robust decision problems.

Theorem 1. For s ∈ S_t, t < N, let Δ_s be the probability simplex on A_s; then

  c_t^λ(s) = max_{q∈Δ_s} min_{r_s∈R_s} { λ Σ_{a∈A_s} r̄(s, a) q(a) + (1 − λ) Σ_{a∈A_s} r_s(a) q(a) + Σ_{s'∈S_{t+1}} Σ_{a∈A_s} p(s'|s, a) q(a) c_{t+1}^λ(s') }.
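The recursion of Theorem 1 can be evaluated by backward induction. The sketch below is our own illustration for the special case of a box-shaped reward set R_s = [r̄_s − δ_s, r̄_s + δ_s]: there the inner minimum is attained at r_s = r̄_s − δ_s, the maximin becomes separable in the actions, and an optimal q is deterministic, so no LP is needed. (A general polytopic R_s requires the dual LP formulation, which the paper develops next.) The stage structure, rewards and half-widths are made up.

import numpy as np

# Hypothetical 3-stage problem. r_bar[t][s][a]: nominal reward;
# delta[t][s][a]: box half-width; p[t][s][a][s']: transitions to stage t+1.
N = 3
r_bar = [np.array([[1.0, 0.5]]),                 # stage 1: one state s1
         np.array([[0.4, 0.9], [0.2, 0.6]])]     # stage 2: two states
delta = [np.array([[0.8, 0.1]]),
         np.array([[0.1, 0.5], [0.1, 0.1]])]
p = [np.array([[[0.7, 0.3], [0.5, 0.5]]])]       # stage 1 -> stage 2

def tradeoff_value(lam):
    """Backward induction for c_t^lam(s) of Theorem 1, box uncertainty."""
    c_next = np.zeros(r_bar[-1].shape[0])        # c_N = 0
    for t in reversed(range(N - 1)):
        # lam * nominal + (1 - lam) * worst-case reward per (s, a).
        scores = lam * r_bar[t] + (1 - lam) * (r_bar[t] - delta[t])
        if t < N - 2:                            # expected continuation
            scores = scores + p[t] @ c_next
        c_next = scores.max(axis=1)              # deterministic optimal q
    return c_next[0]                             # c_1^lam(s1)

for lam in (0.0, 0.5, 1.0):
    print(lam, tradeoff_value(lam))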
We now consider the maximin problem in each state and show how to find the solutions for all λ in one pass. We also prove that c_t^λ(s) is piecewise linear in λ. Let S_{t+1} = {s1, · · · , sk}. Assume that for all j ∈ {1, · · · , k}, the c_{t+1}^λ(sj) are continuous piecewise linear functions. Thus, we can divide [0, 1] into finitely many (say n) intervals [0, λ1], · · · , [λ_{n−1}, 1] such that in each interval, all c_{t+1} functions are linear. That is, there exist constants l_i^j and m_i^j such that c_{t+1}^λ(sj) = l_i^j λ + m_i^j for λ ∈ [λ_{i−1}, λ_i]. By the duality theorem, we have that c_t^λ(s) equals the optimal value of the following LP in y and q:

  Maximize: (1 − λ) d_s'y + λ r̄_s'q + Σ_{j=1}^{k} Σ_{a∈A_s} p(sj|s, a) q(a) c_{t+1}^λ(sj)   (8)
  Subject to: C_s'y = q, 1'q = 1, q, y ≥ 0.

Observe that the feasible set is the same for all λ. Substituting c_{t+1}^λ(sj) and rearranging, it follows that for λ ∈ [λ_{i−1}, λ_i] the objective function equals

  (1 − λ) { Σ_{a∈A_s} [ Σ_{j=1}^{k} p(sj|s, a) m_i^j ] q(a) + d_s'y } + λ { Σ_{a∈A_s} [ r̄(s, a) + Σ_{j=1}^{k} p(sj|s, a)(l_i^j + m_i^j) ] q(a) }.

Thus, for λ ∈ [λ_{i−1}, λ_i], starting from the optimal solution for λ_{i−1}, we can solve for all λ using Algorithm 1. Furthermore, we need not re-initialize for each interval, since the optimal solution for the end of the i-th interval is also the optimal solution for the beginning of the next interval. It is obvious that the resulting c_t^λ(s) is also continuous and piecewise linear. Thus, since c_N ≡ 0, the assumption of continuous and piecewise linear value functions holds by backward induction.

3.2 Discounted reward infinite horizon case

In this section we address the RP tradeoff for infinite horizon MDPs with a discounted reward criterion. For a fixed λ, the problem is equivalent to a zero-sum game, with the decision maker trying to maximize the weighted sum and Nature trying to minimize it by selecting an adversarial reward realization. A well known result in discounted zero-sum stochastic games states that, even if non-stationary policies are admissible, a Nash equilibrium in which both players choose a stationary policy exists; see Proposition 7.3 in [13].

Given an initial state distribution α(s), it is also a known result [14] that there exists a one-to-one correspondence between the state-action frequencies Σ_{i=1}^{∞} γ^{i−1} E(1_{s_i=s, a_i=a}) for stationary strategies and vectors belonging to the following polytope X:

  Σ_{a∈A_{s'}} x(s', a) − Σ_{s∈S} Σ_{a∈A_s} γ p(s'|s, a) x(s, a) = α(s'), ∀s',
  x(s, a) ≥ 0, ∀s, ∀a ∈ A_s.   (9)

Since it suffices to consider a stationary policy for Nature, the tradeoff problem becomes:

  Maximize: inf_{r∈R} Σ_{s∈S} Σ_{a∈A_s} [ λ r̄(s, a) x(s, a) + (1 − λ) r(s, a) x(s, a) ]
  Subject to: x ∈ X.   (10)

By LP duality, Equation (10) can be rewritten as the following PLP and solved by Algorithm 1:

  Maximize: λ Σ_{s∈S} Σ_{a∈A_s} r̄(s, a) x(s, a) + (1 − λ) Σ_{s∈S} d_s'y_s   (11)
  Subject to: Σ_{a∈A_{s'}} x(s', a) − Σ_{s∈S} Σ_{a∈A_s} γ p(s'|s, a) x(s, a) = α(s'), ∀s',
  C_s'y_s = x_s, ∀s, y_s ≥ 0, ∀s, x(s, a) ≥ 0, ∀s, ∀a.
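To make Eq. (9) concrete, here is a brief sketch (ours; made-up data) that builds the discounted occupation-measure polytope for a small MDP, maximizes the nominal reward over it with scipy.optimize.linprog, and recovers a stationary policy as π(a|s) ∝ x(s, a). It covers only the nominal part of (11); the robustness block (the constraints C_s'y_s = x_s) would be appended in the same way.

import numpy as np
from scipy.optimize import linprog

S, A, gamma = 2, 2, 0.9
p = np.array([[[0.9, 0.1], [0.3, 0.7]],        # p[s, a, s']
              [[0.2, 0.8], [0.6, 0.4]]])
r_bar = np.array([[1.0, 0.2], [0.0, 0.5]])     # nominal rewards
alpha = np.array([0.5, 0.5])                   # initial distribution

# Flow constraints of Eq. (9): for each s',
# sum_a x(s',a) - gamma * sum_{s,a} p(s'|s,a) x(s,a) = alpha(s').
A_eq = np.zeros((S, S * A))
for sp in range(S):
    for s in range(S):
        for a in range(A):
            A_eq[sp, s * A + a] = (s == sp) - gamma * p[s, a, sp]

res = linprog(-r_bar.ravel(), A_eq=A_eq, b_eq=alpha,
              bounds=[(0, None)] * (S * A))
x = res.x.reshape(S, A)
policy = x / x.sum(axis=1, keepdims=True)      # pi(a|s) from frequencies
print(x, policy, sep="\n")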
3.3 Limiting average reward case (unichain)

In the unichain case, the set of limiting average state-action frequency vectors (that is, all limit points of sequences (1/T) Σ_{n=1}^{T} E_π[1_{s_n=s, a_n=a}] for π ∈ Π_HR) is the following polytope X:

  Σ_{a∈A_{s'}} x(s', a) − Σ_{s∈S} Σ_{a∈A_s} p(s'|s, a) x(s, a) = 0, ∀s' ∈ S,
  Σ_{s∈S} Σ_{a∈A_s} x(s, a) = 1,
  x(s, a) ≥ 0, ∀s, ∀a ∈ A_s.   (12)

As before, there exists an optimal maximin stationary policy. By a similar argument as for the discounted case, the tradeoff problem can be converted to the following PLP:

  Maximize: λ Σ_{s∈S} Σ_{a∈A_s} r̄(s, a) x(s, a) + (1 − λ) Σ_{s∈S} d_s'y_s   (13)
  Subject to: Σ_{a∈A_{s'}} x(s', a) − Σ_{s∈S} Σ_{a∈A_s} p(s'|s, a) x(s, a) = 0, ∀s',
  Σ_{s∈S} Σ_{a∈A_s} x(s, a) = 1,
  C_s'y_s = x_s, ∀s, y_s ≥ 0, ∀s, x(s, a) ≥ 0, ∀s, ∀a.

4 The RP tradeoff in MDPs with uncertain transition probabilities

In this section we provide a counterexample which demonstrates that the weighted sum criterion in the most general case, i.e., the uncertain transition probability case, may lead to non-Markovian optimal policies. In the finite horizon MDP shown in Figure 1, S = {s1, s2, s3, s4, s5, t1, t2, t3, t4}; A_{s1} = {a(1, 1)}; A_{s2} = {a(2, 1)}; A_{s3} = {a(3, 1)}; A_{s4} = {a(4, 1)}; and A_{s5} = {a(5, 1), a(5, 2)}. Rewards are only available at the final stage, and are perfectly known: r(t1) = 10, r(t2) = 5, r(t3) = 8, and r(t4) = 4. The nominal transition probabilities are p(s2|s1, a(1, 1)) = 0.5, p(s4|s2, a(2, 1)) = 1, and p(t3|s5, a(5, 2)) = 1. The sets of possible realizations are p(s2|s1, a(1, 1)) ∈ {0.5}, p(s4|s2, a(2, 1)) ∈ [0, 1], and p(t3|s5, a(5, 2)) ∈ [0, 1].

[Figure 1: Example of a non-Markovian best strategy.]

Observe that the worst parameter realization is p(s4|s2, a(2, 1)) = p(t3|s5, a(5, 2)) = 0. We look for the strategy that maximizes the sum of the nominal reward and the worst-case reward (i.e., λ = 0.5). Since multiple actions only exist in state s5, a strategy is determined by the action chosen at s5. Let the probabilities of choosing actions a(5, 1) and a(5, 2) be p and 1 − p, respectively.

Consider the history "s1 → s2". In this case, with the nominal transition probability, this trajectory will reach t1 with a reward of 10, regardless of the choice of p. The worst transition is that action a(2, 1) leads to s5 and action a(5, 2) leads to t4, hence the expected reward is 5p + 4(1 − p). Therefore the optimal p equals 1, i.e., the optimal action is to choose a(5, 1) deterministically.

Consider the history "s1 → s3". In this case, the nominal reward is 5p + 8(1 − p), and the worst-case reward is 5p + 4(1 − p). Thus p = 0 optimizes the weighted sum, i.e., the optimal strategy is to choose a(5, 2). The unique optimal strategy for this example is thus non-Markovian. This non-Markovian property implies a possibility that past actions affect the choice of future actions, and hence could render the problem intractable. The optimal strategy is non-Markovian because we are taking expectations over two different probability measures, hence the smoothing property of conditional expectation cannot be used in finding the optimal strategy.
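The arithmetic behind the counterexample can be checked in a few lines (our sketch; it computes the λ = 0.5 weighted objective for each history and each endpoint choice of p, which suffices because the objective is linear in p):

# Weighted objective 0.5 * nominal + 0.5 * worst-case for the example of
# Section 4, as a function of p = P(choose a(5,1) at s5).
def objective(history, p):
    if history == "s1->s2":
        nominal = 10.0                      # reaches t1 regardless of p
    else:                                   # history "s1->s3"
        nominal = 5 * p + 8 * (1 - p)       # a(5,2) nominally reaches t3
    worst = 5 * p + 4 * (1 - p)             # a(5,2) may be diverted to t4
    return 0.5 * nominal + 0.5 * worst

for history in ("s1->s2", "s1->s3"):
    best_p = max((0.0, 1.0), key=lambda p: objective(history, p))
    print(history, "-> optimal p =", best_p)
# Prints p = 1.0 after s1->s2 but p = 0.0 after s1->s3: non-Markovian.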
5 A computational example

We apply our algorithm to a T-stage machine maintenance problem. Let S := {1, · · · , n} denote the state space for each stage. In state h, the decision maker can choose either to replace the machine, which leads to state 1 deterministically, or to continue running, which with probability p leads to state h + 1. If the machine is in state n, then the decision maker has to replace it. The replacement cost is perfectly known to be c_r, and the nominal running cost in state h is c̄_h. We assume that the realization of the running cost lies in the interval [c̄_h − δ_h, c̄_h + δ_h]. We set c̄_h = √h − 1 and δ_h = 2h/n. The objective is to minimize the total cost, in a risk-averse attitude.

Figure 2(a) shows the tradeoff curve of this MDP. For each solution found, we sample the reward 300 times according to a uniform distribution. We normalize the cost for each simulation, i.e., we divide the cost by the smallest expected nominal cost. Denoting the normalized cost of the i-th simulation for strategy j as s_i(j), we use the following function to compare the solutions:

  v_j(η) = ( (1/300) Σ_{i=1}^{300} |s_i(j)|^η )^{1/η}.

Note that η = 1 gives the mean of the simulation cost, whereas larger η puts a higher penalty on deviations, representing a risk-averse decision maker. Figure 2(b) shows that the solutions that focus on nominal parameters (i.e., λ close to 1) achieve good performance for small η, but worse performance for large η. That is, if the decision maker is risk neutral, then the solutions based on nominal parameters are good. However, these solutions are not robust and are not good choices for risk-averse decision makers. Note that, in this example, the nominal cost is the expected cost for each stage, i.e., the parameters are exactly formulated. Even in such a case, we see that risk-averse decision makers can benefit from considering the RP tradeoff.

[Figure 2: The machine maintenance problem: (a) the RP tradeoff; (b) normalized mean of the simulation for different values of η.]
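As a small illustration of this evaluation metric (ours, with synthetic normalized costs standing in for the 300 simulation runs of two hypothetical strategies):

import numpy as np

rng = np.random.default_rng(0)
# Synthetic normalized costs: strategy A has a lower mean but a wider
# spread; strategy B the opposite.
costs = {"A": rng.uniform(0.8, 1.4, size=300),
         "B": rng.uniform(1.05, 1.25, size=300)}

def v(s, eta):
    """v_j(eta) = (mean of |s_i|^eta)^(1/eta); eta = 1 is the plain mean."""
    return (np.abs(s) ** eta).mean() ** (1.0 / eta)

for eta in (1, 10, 100):
    print(eta, {k: round(v(s, eta), 3) for k, s in costs.items()})
# Larger eta weights the worst runs more heavily: A wins at eta = 1,
# while the risk-averse criterion (large eta) favors B.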
on Machine Learning, pages 162?169. Morgan Kaufmann, San Francisco, CA, 2001. [10] D. Bertsimas and J. N. Tsitsiklis. Introduction to Linear Optimization. Athena Scientific, 1997. [11] M. Ehrgott. Multicriteria Optimization. Springer-Verlag Berlin Heidelberg, 2000. [12] K. G. Murty. Linear Programming. John Wiley & Sons, 1983. [13] D. P. Bertsekas and J. N. Tsitsiklis. Neuro-Dynamic Programming. Athena Scientific, 1996. [14] M. L. Puterman. Markov Decision Processes. John Wiley & Sons, INC, 1994.
Clustering appearance and shape by learning jigsaws
Anitha Kannan, John Winn, Carsten Rother
Microsoft Research Cambridge
[ankannan, jwinn, carrot]@microsoft.com

Abstract
Patch-based appearance models are used in a wide range of computer vision applications. To learn such models it has previously been necessary to specify a suitable set of patch sizes and shapes by hand. In the jigsaw model presented here, the shape, size and appearance of patches are learned automatically from the repeated structures in a set of training images. By learning such irregularly shaped "jigsaw pieces", we are able to discover both the shape and the appearance of object parts without supervision. When applied to face images, for example, the learned jigsaw pieces are surprisingly strongly associated with face parts of different shapes and scales such as eyes, noses, eyebrows and cheeks, to name a few. We conclude that learning the shape of the patch not only improves the accuracy of appearance-based part detection but also allows for shape-based part detection. This enables parts of similar appearance but different shapes to be distinguished; for example, while foreheads and cheeks are both skin colored, they have markedly different shapes.

1 Introduction
Many computer vision tasks require the use of appearance and shape models to represent objects in the scene. The choices for appearance models range from histogram-based representations that throw away spatial information, to template-based representations that try to capture the entire spatial layout of the objects but cope poorly with articulation, deformation or variation in appearance. In the middle of this spectrum lie patch-based models that aim to find the right balance between the two extremes. However, a central problem with existing patch-based models is that there is no way to choose the shape and size of a patch; typically a predefined set of patch sizes and shapes (often rectangles or circles) is used. We believe that natural images provide enough cues to allow patches of varying shape and size to be discovered, corresponding to the shape and size of the object parts present in the images. Indeed, we will show that the patches discovered by the jigsaw model can become strongly associated with semantic object parts. With this motivation, we introduce a generative model for a set of images that learns to extract irregularly shaped and sized patches from a latent image, which are combined to generate each training image. We call this latent image a jigsaw, as it contains all the necessary "jigsaw pieces" that can be used to generate the target image set. We present an inference algorithm for learning the jigsaw and for finding the jigsaw pieces that make up each image. As the jigsaw model is a generative model for an image, it can be readily used as a component in many computer vision applications for both image understanding and image synthesis. These include object recognition, detection, image segmentation and image classification, object synthesis, image de-noising, super-resolution, texture transfer between images and image in-painting. In fact, the jigsaw model is likely to be usable as a direct replacement for a fixed patch model in any existing patch-based system.

2 Related work
The closest work to ours is the epitome model of Jojic et al. [1].
This is a generative model for image patches, or alternatively a model for images if patches that share coordinates in the image are averaged together (although this averaging often leads to a blurry result). Epitomes are learned using a set of fixed-shape patches over a small range of sizes. In contrast, in the jigsaw model, the inference process chooses appropriately shaped and sized pieces from the training images when learning the jigsaw. The difference between these two models is illustrated in section 4. Our work also closely relates to the seminal work of Freeman et al. [2], which proposed a general machinery for inferring underlying scenes from images, with goals such as optical flow estimation and super-resolution. They define a Markov random field over image patches and infer the hidden scene representation using belief propagation. Again, they use a set of fixed-size image patches, hoping to reach a reasonable trade-off between capturing sufficient statistics in each patch and disambiguating different kinds of features. Along these lines, Markov random fields with larger cliques have also been used to capture the statistics of natural images, such as the field of experts model proposed in [3], which represents the field potentials as non-linear functions of linear filters. Again, the underlying linear filters are applied to patches of a fixed size. In the domain of image synthesis, the work of Freeman et al. [2] has inspired many patch-based synthesis algorithms, including super-resolution, texture transfer, image in-painting and photo synthesis. These can be viewed as a data-driven way of sampling from the Markov random field with high-order cliques given by the overlapping patches. The texture synthesis and transfer algorithm of Efros et al. [4] constructs a new image by greedily selecting overlapping patches so that the seam transition is not visible. Whilst this work does allow different patch shapes, it does not learn patch appearance, since it works from a supplied texture image. Recently a similar approach has been proposed in [5] for synthesising a collage image from a given set of input images, although in this case a probabilistic model is defined and optimised. Patch-based models are also widely applied in object recognition research [6, 7, 8, 9, 10]. These models use hand-selected patch shapes (typically rectangles), which can lead to poor results given that different object parts have different sizes and shapes. In fact, the use of fixed patches reduces accuracy when the object part is of different size and shape than the chosen patch; in this case, the patch model has to cope with the variability outside the object part. This effect is particularly evident when the part is at the edge of the object, as the model then has to try and capture the variability of the background. In addition, such models ignore the shape of the object part, which is frequently much more discriminative than appearance alone. The paper is structured as follows: in section 3 we introduce the probabilistic model and describe a method for performing learning and inference in the model. In section 4 we show results for synthetic and real data and present a comparison to the epitome model. Finally, in section 5, we discuss possible extensions to the model.

3 Probabilistic model
This section describes the probabilistic model that we use to learn a jigsaw from a set of training images. We aim to learn a jigsaw such that, given an image set, pieces of the jigsaw image satisfy the following criteria:
- each piece is similar in appearance and shape to several regions of the training images;
- any of the training images can be approximately reconstructed using only pieces from the jigsaw (a piece may be used more than once in a single image);
- pieces are as large as possible for a particular accuracy of reconstruction.

Thus, while allowing the jigsaw pieces to have arbitrary shape, we ensure that such pieces are shared across the entire image set, exhaustively explain the input image set, and are also large enough to be discriminative. By meeting these criteria, we can capture both the appearance and the shape of repeated image structures, for example, eyes, noses and mouths in a set of face images. We define a jigsaw J to be an image such that each pixel z in J has an intensity value $\mu(z)$ and an associated variance $\lambda^{-1}(z)$ (so $\lambda$ is the inverse variance, also called the precision). A set of spatially grouped pixels in J is a jigsaw piece. We can combine many of these pieces to generate images, noting that pixels in the jigsaw can be re-used in multiple pieces.

Figure 1: Graphical model showing how the jigsaw J is used to generate a set of images $I_1 \ldots I_N$ by combining the jigsaw pieces in different ways. Each image has a corresponding offset map L which defines the jigsaw pieces used to generate that image (see text for details). Notice that several jigsaw pieces can overlap and hence share parts of their appearance.

Our probabilistic model is a generative image model which generates an image by joining together pieces of the jigsaw and then adding Gaussian noise of variance given by the jigsaw. For each image I, we have an associated offset map L of the same size which determines the jigsaw pieces used to make that image. This offset map defines a position in the jigsaw for each pixel in the image (more than one image pixel can map to the same jigsaw pixel). Each entry in the offset map is a two-dimensional offset $l_i = (l_x, l_y)$, which maps a 2D point i in the image to a 2D point z in the jigsaw using $z = (i - l_i) \bmod |J|$, where $|J| = (\text{width}, \text{height})$ are the dimensions of the jigsaw. Notice that if two adjacent pixels in the image have the same offset label, then they map to adjacent pixels in the jigsaw. Figure 1 provides a schematic view of the overall probabilistic model, as it is used to generate a set of N face images. Given this mapping and the jigsaw, the probability distribution of an image is assumed to be independent for each pixel and is given by

$$P(I \mid J, L) = \prod_i \mathcal{N}\big(I(i);\ \mu(i - l_i),\ \lambda(i - l_i)^{-1}\big) \quad (1)$$

where the product is over image pixel positions and both subtractions are modulo $|J|$. We want the images to consist of coherent pieces of the jigsaw, and so we define a Markov random field on the offset map to encourage neighboring pixels to have the same offsets,

$$P(L) = \frac{1}{Z} \exp\Big( -\sum_{(i,j) \in E} \psi(l_i, l_j) \Big) \quad (2)$$

where E is the set of edges in a 4-connected grid. The interaction potential $\psi$ defines a Potts model on the offsets:

$$\psi(l_i, l_j) = \gamma\, \delta(l_i \neq l_j) \quad (3)$$

where $\gamma$ is a parameter which influences the typical size of the learned jigsaw pieces. Currently, $\gamma$ is set to give the largest pieces whilst maintaining reasonable quality when the image is reconstructed from the jigsaw.
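To make the generative side of the model concrete, the following is a minimal NumPy sketch of the image likelihood (1) and the Potts offset prior (2)-(3). It is our own illustration rather than the authors' code: it assumes a grayscale image, and the array names `mu`, `lam` and `offsets` (holding $\mu(z)$, $\lambda(z)$ and the offset map L) are hypothetical.

```python
import numpy as np

def image_log_likelihood(image, mu, lam, offsets):
    """Log P(I | J, L) from Eq. (1): an independent Gaussian per pixel with
    mean mu(i - l_i) and precision lam(i - l_i), offsets taken modulo |J|."""
    H, W = image.shape
    jh, jw = mu.shape
    rows, cols = np.mgrid[0:H, 0:W]
    zr = (rows - offsets[..., 0]) % jh   # jigsaw row hit by each image pixel
    zc = (cols - offsets[..., 1]) % jw   # jigsaw column
    m, p = mu[zr, zc], lam[zr, zc]
    return np.sum(0.5 * np.log(p / (2.0 * np.pi)) - 0.5 * p * (image - m) ** 2)

def potts_energy(offsets, gamma=5.0):
    """Negative log P(L), up to log Z, from Eqs. (2)-(3): a penalty gamma for
    every 4-connected neighbour pair whose offsets differ."""
    dv = np.any(offsets[1:, :] != offsets[:-1, :], axis=-1)
    dh = np.any(offsets[:, 1:] != offsets[:, :-1], axis=-1)
    return gamma * (dv.sum() + dh.sum())
```

Note that reconstructing an image from the jigsaw is just the gather `mu[zr, zc]` above, which is why a learned jigsaw doubles as a compact generative summary of the image set.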
When learning the jigsaw, it is possible for regions of the jigsaw to be unused, that is, to have no image pixels mapped to them. To allow for this case, we define a Normal-Gamma prior on $\mu$ and $\lambda$ for each jigsaw pixel z:

$$P(J) = \prod_z \mathcal{N}\big(\mu(z);\ \mu_0,\ (\beta \lambda(z))^{-1}\big)\, \mathrm{Gamma}\big(\lambda(z);\ a, b\big). \quad (4)$$

This prior means that the behaviour of the model is well defined for unused regions. For our experiments, we fix the hyperparameters $\mu_0$ to .5, $\beta$ to 1, b to three times the inverse data variance and a to the square of b. The local interaction strength $\gamma$ is set to 5 per channel.

Inference and learning: The model defines the joint probability distribution on a jigsaw J, a set of images $I_1 \ldots I_N$, and their offset maps $L_1 \ldots L_N$ to be

$$P\big(J, \{I, L\}_1^N\big) = P(J) \prod_{n=1}^N P(I_n \mid J, L_n)\, P(L_n). \quad (5)$$

When learning a jigsaw, the image set $I_1 \ldots I_N$ is known and we aim to achieve MAP learning of the remaining variables. In other words, our goal is to find the jigsaw J and offset maps $L_1 \ldots L_N$ that maximise the joint probability (5). We achieve this in an iterative manner. First, the jigsaw is initialised by setting the precisions $\lambda$ to the expected value under the prior, b/a, and the means $\mu$ to Gaussian noise with the same mean and variance as the data. Given this initialisation, the offset maps are updated for each image by applying the alpha-expansion graph-cut algorithm of [11] (note that our energy is submodular, also known as regular). Whilst this process will not necessarily find the most probable offset map, it is guaranteed to find at least a strong local minimum, such that no single expansion move can increase (5). Given the inferred offset maps, the jigsaw J that maximises $P(J, \{I, L\}_1^N)$ can be found analytically. This is achieved for a jigsaw pixel z by setting the optimal mean $\mu^*$ and precision $\lambda^*$ using

$$\mu^* = \frac{\beta \mu_0 + \sum_{x \in X(z)} I(x)}{\beta + |X(z)|} \quad (6)$$

$$\lambda^{*\,-1} = \frac{b + \beta \mu_0^2 - (\beta + |X(z)|)\,(\mu^*)^2 + \sum_{x \in X(z)} I(x)^2}{a + |X(z)|} \quad (7)$$

where X(z) is the set of image pixels that are mapped to the jigsaw pixel z across all images. We iterate between finding the offset maps holding the jigsaw fixed, and updating the jigsaw using the recently updated offset maps. When inference has converged, we apply a clustering step to determine the jigsaw pieces (in future we plan to extend the model so that this clustering arises directly during learning). Regions of the image are placed in clusters according to the degree of overlap they have in the jigsaw. The degree of overlap is measured as the ratio of the intersection to the union of the two regions of the jigsaw that the image regions map to. This has the effect of clustering image regions by both appearance and shape. Each cluster then corresponds to a region of the jigsaw with an (approximately) consistent shape that explains a large number of image regions.
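The closed-form jigsaw update of Eqs. (6)-(7) is easy to sketch. The following hedged NumPy version (again our own illustration; the alpha-expansion step for the offset maps would come from an external graph-cut library and is not shown) accumulates the sufficient statistics $|X(z)|$, $\sum I(x)$ and $\sum I(x)^2$ for each jigsaw pixel and applies the Normal-Gamma posterior-mode formulas.

```python
import numpy as np

def update_jigsaw(images, offset_maps, jig_shape, a, b, mu0=0.5, beta=1.0):
    """MAP update of the jigsaw (Eqs. 6-7) given fixed offset maps.
    images: list of (H, W) grayscale arrays; offset_maps: matching (H, W, 2)."""
    jh, jw = jig_shape
    n = np.zeros(jh * jw)    # |X(z)|: image pixels mapped to each jigsaw pixel
    s1 = np.zeros(jh * jw)   # sum of I(x) over X(z)
    s2 = np.zeros(jh * jw)   # sum of I(x)^2 over X(z)
    for img, off in zip(images, offset_maps):
        H, W = img.shape
        rows, cols = np.mgrid[0:H, 0:W]
        flat = (((rows - off[..., 0]) % jh) * jw + (cols - off[..., 1]) % jw).ravel()
        n += np.bincount(flat, minlength=jh * jw)
        s1 += np.bincount(flat, weights=img.ravel(), minlength=jh * jw)
        s2 += np.bincount(flat, weights=img.ravel() ** 2, minlength=jh * jw)
    n, s1, s2 = (x.reshape(jh, jw) for x in (n, s1, s2))
    mu = (beta * mu0 + s1) / (beta + n)                                    # Eq. (6)
    inv_lam = (b + beta * mu0 ** 2 - (beta + n) * mu ** 2 + s2) / (a + n)  # Eq. (7)
    return mu, 1.0 / inv_lam
```

A quick sanity check on this sketch: for an unused jigsaw pixel ($|X(z)| = 0$) it returns $\mu^* = \mu_0$ and $\lambda^{*-1} = b/a$, the prior values, matching the statement above that the model stays well defined for unused regions.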
4 Results
A toy example: In this experiment, we applied our model to the hand-crafted 150 x 150 RGB image shown in Fig. 2a. This image was constructed by placing four distinct objects (star, triangle, square and circle) at random positions on a black background image, with the pixels from the more recently placed object replacing the previously drawn pixels. Hence, we can see a substantial amount of occlusion of parts of these objects. Using this image as the only input, we would like our model to automatically infer the appearances and shapes of the objects present in the image. Existing patch-based models are not well-suited to analyzing this image for two reasons: first, there is no clear way to choose the appropriate patch shapes and sizes, and secondly, even if such a choice is known, it is difficult for these existing methods (such as the epitome [1]) to learn the shape, as they cannot allow for occlusion boundaries without having an explicit occlusion model. For instance, in [1], a separate shape epitome is learned in conjunction with the appearance epitome so that image patches can be explained as a two-layered composition of appearance patches using the shape patch. However, this type of image is difficult to model with a small number of layers due to the large number of objects present. In contrast, our model can infer any number of overlapping objects, without any explicit modelling of layers or depth. This is because our learning algorithm has the freedom to appropriately adjust a patch's shape and size to explain only a portion of an object, without explicitly having to represent a global layer ordering. Moreover, we have the potential to infer the relative depth ordering of neighboring patches by treating rare transitions as occlusions.

Figure 2: Toy example: (a) The input image. (b) Input image with segmentation boundaries superimposed. Red boundary lines have been drawn on the edge of neighboring pixels that have differing offsets. This segmentation illustrates the different shaped jigsaw pieces found when learning the jigsaw shown in (c)-(d). (c) Jigsaw mean, with the four most-used jigsaw pieces outlined in white. (d) The jigsaw variance summed across the RGB channels; white is high, black is low.

Fig. 2b-d shows the results of learning a jigsaw of this toy image. In Fig. 2b, we show how the image decomposes into jigsaw pieces. When two neighboring pixels have different labels, they map to non-neighboring locations in the jigsaw. With this understanding, we can look at the change in the labels of adjacent pixels and plot such a change as a red line. Hence, each region bounded by the red lines indicates a region from the input image being mapped to the jigsaw. From Fig. 2b, we can see that the model has discovered well-defined parts (in this example, objects) present in the image. This is further illustrated in the 36 x 36 learned jigsaw whose mean and variance are shown in Fig. 2c,d. The learned jigsaw has captured the shapes and appearances of the four objects and a black region for modelling the background. Under our Bayesian model, pixels in the jigsaw that have never been used in explaining the observation are set to $\mu_0$, which we have fixed to .5 (gray). We can obtain jigsaw pieces by applying the clustering step outlined in Section 3. In Fig. 2c, we also show the four most-used jigsaw pieces thus obtained by outlining them in white.

Comparison to epitome model: In this section, we compare the jigsaw model with the epitome model [1], as applied to the dog image in Fig. 3a. We learned a 32 x 32 epitome (Fig. 3d) using all the possible 7 x 7 patches from the input image. We then learned a 32 x 32 jigsaw (Fig. 3c) such that the average patch area was 49 pixels, the same as in the epitome model. This was achieved by modifying the compatibility parameter $\gamma$.

Figure 3: Comparison between jigsaw and epitome.
(a) The input image. (b) The segmentation of the image given by the jigsaw model. (c,d) The means of the learned jigsaw and epitome models. (e) Reconstruction of the image using the jigsaw (mean squared error 0.0537). (f) Reconstruction from the epitome where each image pixel is reconstructed using only one fixed-size patch (mean squared error 0.0711). (g) Reconstruction from the epitome where each image pixel is the average of 49 patches (mean squared error 0.0541). While this reconstruction has similar mean squared error to the jigsaw reconstruction, it is more blurry and less visually pleasing.

Fig. 3b shows the segmentation of the image after learning, with patch boundaries overlaid in red. We can see that the pieces correspond to meaningful regions such as flowers, and also that patch boundaries tend to follow object boundaries. Comparing Figs. 3c & d, we find that the jigsaw is much less blurred than the epitome and also doesn't have the epitome's artificial "block" structure. Instead, the boundaries between different textures are placed to allocate the appropriate amount of jigsaw space to each texture; for example, entire flowers are represented as one coherent region. Epitome models can use multi-resolution learning to reduce, but not eliminate, block artifacts. However, whilst this technique can also be applied to jigsaw learning, it has not been found to be necessary in order to obtain a good solution. In Fig. 3e-g, we compare reconstructions of the input image from the learned jigsaw and epitome models. Since the jigsaw is a generative model for an image, we can reconstruct the image by mapping pixel colors from the jigsaw according to the offset map. When reconstructing from the epitome, we can choose either to use one patch per pixel, or to average a number of patches per pixel. The first approach is most comparable to the jigsaw reconstruction, as it requires only one offset per pixel. However, we find that, as shown in Fig. 3f, the reconstruction is very blocky. When we reconstruct each pixel from the 49 overlapping patches (Fig. 3g), we find that the reconstruction is overly blurry compared to the jigsaw, despite having a similar mean squared reconstruction error. In addition, this method requires 49 parameters per pixel rather than one, and hence is a significantly less compact representation of the image. The reconstruction from the jigsaw is noticeably less blurry and is more visually pleasing, as there is no averaging in the generative process and patch boundaries tend to occur at actual object boundaries.

Modelling face images: We next applied the jigsaw model to a set of 100 face images from the Olivetti database at AT&T, consisting of 10 different images of 10 people. Each of these grayscale images is of size 64 x 64 pixels. We set the jigsaw size to 128 x 128 pixels, so that the jigsaw has only 1/25 of the area of the input images combined. Figure 4a shows the inferred segmentation of the images into different shaped and sized pieces (each row contains the images of one person). When the faces depict the same person with similar pose, the resulting segmentations for these images are typically similar, showing that similar jigsaw pieces are being used to explain each image. This can be seen, for instance, from the first row of images in that figure. Figure 4b shows the mean of the learned jigsaw, which can be seen to contain a number of face "elements" such as eyes, noses etc. To obtain the jigsaw pieces, we applied the clustering step outlined in Section 3.
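The paper does not spell out the grouping algorithm for this post-hoc clustering step, so the sketch below uses a simple greedy single-link scheme of our own devising: each image region is represented by the set of jigsaw pixels it maps to, overlap is the stated intersection-over-union, and the 0.5 threshold is a hypothetical choice.

```python
def overlap(region_a, region_b):
    """Intersection-over-union of two regions, each a set of (row, col)
    jigsaw pixel coordinates."""
    return len(region_a & region_b) / len(region_a | region_b)

def cluster_regions(regions, threshold=0.5):
    """Greedily group image regions whose jigsaw footprints overlap enough;
    each resulting cluster corresponds to one jigsaw piece."""
    clusters = []
    for reg in regions:
        for cl in clusters:
            if any(overlap(reg, member) >= threshold for member in cl):
                cl.append(reg)
                break
        else:
            clusters.append([reg])
    return clusters
```

Because the regions are compared in jigsaw coordinates rather than image coordinates, this grouping is sensitive to both the footprint's shape and its appearance, which is exactly why the resulting clusters track face parts rather than generic skin-colored patches.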
The obtained clusters are shown in Figure 5 (left), which also shows the sharing of these jigsaw pieces.

Figure 4: Face images: (a) A set of 100 images, each row containing ten different images of the same person, with the segmentation given by the jigsaw model shown in red. (b) Jigsaw learned from these face images; see Figure 5 for clustering results.

With the jigsaw pieces known, we can now retrieve the regions from the image set that correspond to each jigsaw piece. In Figure 5 (right), we show a random selection of image regions corresponding to several of the most common jigsaw pieces (shown color-coded). What is surprising is that a particular jigsaw piece becomes very strongly associated with a particular face part (far more so than when clustering by appearance alone). Thus, by learning the shape of each jigsaw piece, our model has effectively identified small and large face parts of widely different shapes and aspect ratios.

Figure 5: Left: The learned face jigsaw of Fig. 4 with overlaid white outlines showing different overlapping jigsaw pieces. For clarity, pieces used five or fewer times are not shown. Areas of the jigsaw not used by the remaining pieces have been blacked out. Seven of the most frequently used jigsaw pieces are shown colored. Right: Unsupervised part learning. For each color-coded jigsaw piece in the left image, a column shows randomly chosen images from the image set for which that piece was selected. Notice how these pieces are very strongly associated with different face parts: the model has achieved unsupervised discovery of two different nose shapes, eyes, eyebrows, cheeks etc., despite their widely different shapes and sizes.

We can also see from that figure that certain jigsaw pieces are conserved across different people, for example, the nose piece shown in the first column of that figure.

5 Discussion
We have presented a generative jigsaw model which is capable of learning the shape, size and appearance of repeated regions in a set of images. We have also shown that, for a set of face images, the learned jigsaw pieces are strongly associated with particular face parts. Currently, we apply a post-hoc clustering step to learn the jigsaw pieces. This process can be incorporated into the model by extending the pixel offset to include a cluster label and learning the region of jigsaw used by each cluster. We are investigating how best to achieve this. While we chose a Gaussian as the model for pixel appearance, alternative models can be used, such as histograms, whilst retaining the ability to achieve translation-invariant clustering. Indeed, by using appearance models of other forms, we believe that our model could be used to find repeated structures in other domains such as audio and biology, as well as in images. Other transformations, such as rotation, scaling and flips, can be incorporated in the model with cost increasing linearly with the number of transformations. We can also extend the model to allow the jigsaw pieces to undergo deformation by favoring neighboring offsets that are similar as well as being identical, using a scheme similar to that of [12]. A practical issue with learning jigsaws is the computational requirement. Every iteration of learning involves solving as many binary graph cuts as there are pixels in the jigsaw. For instance, for the toy example, it took about 30 minutes to learn a 36 x 36 jigsaw from a 150 x 150 image.
We have since developed a significantly faster inference algorithm based on sparse belief propagation. This speed-up allows the model to be applied to larger image sets and to learn larger jigsaws. Currently, our model does not explicitly account for multiple sources of appearance variability, such as illumination. This means that the same object under different illuminations, for example, will be modelled by different parts of the jigsaw. To account for this, we are investigating factored variants of the jigsaw which separate out different latent causes of appearance variability. Despite this limitation, however, we are already achieving very promising results when using the jigsaw for image synthesis, motion segmentation and object recognition.

Acknowledgments: We acknowledge helpful discussions with Nebojsa Jojic, and thank the reviewers for their valuable feedback.

References
[1] N. Jojic, B. Frey, and A. Kannan. Epitomic analysis of appearance and shape. In ICCV, 2003.
[2] W. Freeman, E. Pasztor, and O. Carmichael. Learning low-level vision. IJCV, 40(1), 2000.
[3] S. Roth and M. J. Black. Fields of experts: A framework for learning image priors. In Proceedings of IEEE CVPR, 2005.
[4] A. Efros and W. Freeman. Image quilting for texture synthesis and transfer. In ACM Transactions on Graphics (SIGGRAPH), 2001.
[5] C. Rother, S. Kumar, V. Kolmogorov, and A. Blake. Digital tapestry. In Proc. Conf. Computer Vision and Pattern Recognition, 2005.
[6] R. Fergus, P. Perona, and A. Zisserman. Object class recognition by unsupervised scale-invariant learning. In CVPR, volume 2, pages 264-271, June 2003.
[7] B. Leibe and B. Schiele. Interleaved object categorization and segmentation. In BMVC, 2003.
[8] E. Borenstein, E. Sharon, and S. Ullman. Combining top-down and bottom-up segmentation. In Proceedings IEEE Workshop on Perceptual Organization in Computer Vision, CVPR 2004, 2004.
[9] E. Borenstein and S. Ullman. Class-specific, top-down segmentation. In Proceedings of ECCV, 2003.
[10] D. Huttenlocher, D. Crandall, and P. Felzenszwalb. Spatial priors for part-based recognition using statistical models. In Proceedings of IEEE CVPR, 2005.
[11] Y. Boykov, O. Veksler, and R. Zabih. Fast approximate energy minimization via graph cuts. PAMI, 23(11), 2001.
[12] J. Winn and J. Shotton. The layout consistent random field for recognizing and segmenting partially occluded objects. In Proceedings of IEEE CVPR, 2006.
Theory and Dynamics of Perceptual Bistability
Paul R. Schrater (http://www.schrater.org)
Departments of Psychology and Computer Sci. & Eng., University of Minnesota, Minneapolis, MN 55455
[email protected]
Rashmi Sundareswara
Department of Computer Sci. & Eng., University of Minnesota
[email protected]

Abstract
Perceptual Bistability refers to the phenomenon of spontaneously switching between two or more interpretations of an image under continuous viewing. Although switching behavior is increasingly well characterized, the origins remain elusive. We propose that perceptual switching naturally arises from the brain's search for best interpretations while performing Bayesian inference. In particular, we propose that the brain explores a posterior distribution over image interpretations at a rapid time scale via a sampling-like process and updates its interpretation when a sampled interpretation is better than the discounted value of its current interpretation. We formalize the theory, explicitly derive switching rate distributions and discuss qualitative properties of the theory, including the effect of changes in the posterior distribution on switching rates. Finally, predictions of the theory are shown to be consistent with measured changes in human switching dynamics to Necker cube stimuli induced by context.

1 Introduction
Our visual system is remarkably good at producing consistent, crisp percepts of the world around us, in the process hiding interpretation uncertainty. Perceptual bistability is one of the few circumstances where ambiguity in visual processing is exposed to conscious awareness. Spontaneous switching of perceptual states frequently occurs during continuous viewing of an ambiguous image, and when a new interpretation of a previously stable stimulus is revealed (as in the sax/girl in Figure 1a), spontaneous switching begins to occur [?]. Moreover, although perceptual switching can be modulated by conscious effort [?, ?], it cannot be completely controlled.

Figure 1: Examples of ambiguous figures: (a) can be interpreted as a woman's face or a saxophone player. (b) can be interpreted as a cube viewed from two different viewpoints.

Stimuli that produce bistability are characterized by having several distinct interpretations that are in some sense equally plausible. Given the successes of Bayesian inference as a model of perception (for instance [?, ?, ?]), these observations suggest that bistability is intimately connected with making perceptual decisions in the presence of a multi-modal posterior distribution, as previously noted by several authors [?, ?]. However, typical Bayesian models of perceptual inference have no dynamics, and probabilistic inference per se provides no reason for spontaneous switching, raising the possibility that switching stems from idiosyncrasies in the brain's implementation of probabilistic inference rather than from general principles. In fact, most explanations of bistability have historically been rooted in proposals about the nature of neural processing of visual stimuli, involving low-level visual processes like retinal adaptation and neural fatigue [?, ?, ?]. However, the abundance of behavioral and brain imaging data showing high-level influences on switching (like intentional control, which can produce 3-fold changes in alternation rate [?]) has revised current views toward neural hypotheses involving combinations of both sensory and higher-order cortical processing [?].
The goal of this paper is to provide a simple explanation for the origins of bistability based on general principles that can potentially handle both top-down and bottom-up effects.

2 Basic theory
The basic ideas that constitute our theory are simple and partly form standard assumptions about perceptual processing. The core assumptions are:
1. Perception performs Bayesian inference by exploring and updating the posterior distribution across time by a kind of sampling process (e.g. [?]).
2. Conscious percepts result from a decision process that picks interpretations by finding sample interpretations with the highest posterior probability (possibly weighted by the cost of making errors).
3. The results of these decisions and their associated posterior probabilities are stored in memory until a better interpretation is sampled.
4. The posterior probability associated with the interpretation in memory decays with time.
The intuition behind the model is that most percepts of objects in a scene are built up across a series of fixations. When an object previously fixated is eccentrically viewed or occluded, the brain should store the previous interpretation in memory until better data comes along or the memory becomes too old to be trusted. Finally, the interpretation space required for direct Bayesian inference is too large for even simple images, but sampling schemes may provide a simple way to perform approximate inference. The theory provides a natural interface for interpreting both high-level and low-level effects on bistability, because any event that has an impact on the relative heights or positions of the modes in the posterior can potentially influence durations. For example, patterns of eye fixations have long been known to influence the dominant percept [?]. Because eye movement events create sudden changes in image information, it is natural that they should be associated with changes in the dominant mode. Similarly, control of information via selective attention and changes in decision thresholds offer concrete loci for intentional effects on bistability.

3 Analysis
To analyze the proposed theory, we need to develop temporal distributions for the maxima of a multimodal posterior based on a sampling process, and describe circumstances under which a current sample will produce an interpretation better than the one in memory. We proceed as follows. First we develop a general approximation to multi-modal posterior distributions that can vary over time, and analyze the probability that a sample from the posterior is close to maximal. We then describe how the samples close to the max interact with a sample in memory with decay. A tractable approximation to a multi-modal distribution can be formed using a mixture of uni-modal distributions centered at each maximum:

$$P(\theta_t \mid D_{0:t}) = \frac{P(D_t \mid \theta_t)\, P(\theta_t \mid D_{0:t-\Delta t})}{P(D_t \mid D_{0:t-\Delta t})} \approx \sum_{i=1}^{\#\mathrm{maxima}} p_i(D_t \mid \theta_t;\ \theta^*_{t,i})\, P_i(\theta \mid D_{0:t-\Delta t};\ \theta^*_{t,i}) \quad (1)$$

where $\theta_t$ is the vector of unknown parameters (e.g. shape for the Necker cube) at time t, $\theta^*_{t,i}$ is the location of the maximum of the ith mode, $D_t$ is the most recent data, $D_{0:t-\Delta t}$ is the data history, and $P_i(\theta \mid D_{0:t-\Delta t}; \theta^*_{t,i})$ is the predictive distribution (prior) for the current data based on recent experience.¹ Near the maxima, the negative log of the uni-modal distributions can be expanded into a second-order Taylor series:

$$-L_i(\theta_t \mid D_t) \approx \frac{d_i^2}{2} + k_i \quad (2)$$

$$d_i^2 = (\theta_t - \theta^*_{t,i})^T\, I_i(\theta^*_{t,i} \mid D_{0:t})\, (\theta_t - \theta^*_{t,i}), \qquad k_i = \tfrac{1}{2}\log(|I_i^{-1}|) + c_i \quad (3)$$
where $I_i(\theta^*_{t,i} \mid D_{0:t}) = -\frac{\partial^2 \log P(\theta_t \mid D_{0:t})}{\partial\theta\,\partial\theta^T}\big|_{\theta^*_{t,i}}$ is the observed information matrix and $c_i = -\log P(\theta^*_{t,i} \mid D_{0:t})$ represents the effect of the predictive prior on the posterior height at the ith mode. Thus, samples from a posterior mode will be approximately $\chi^2$ distributed near the maximum, with effective degrees of freedom n given by the number of significant eigenvalues of $I_i^{-1}$. Essentially, n encodes the effective degrees of freedom in interpretation space.²

3.1 Distribution of transition times
We assume that the perceptual interpretation is selected by a decision process that updates the interpretation in memory $m^*(t)$ whenever the posterior probability of the most recent sample exceeds both a decision threshold and the discounted probability of the sample in memory. Given these assumptions, we can approximate the probability distribution for update events. Assuming the sampling forms a locally stationary process $d_i^2(t)$, update events involving entry into mode i are first passage times $T_i$ of $d_i^2(t)$ below both the minimum of the current memory sample $\rho_t$ and the decision threshold $\chi$:

$$T_i(\chi, \rho_t) = \min\{t : \zeta^i_t \le \min\{\chi, \rho_t\}\}$$

where $\zeta^i_t = d_i^2(t) + k_i$, time t is the duration since the last update event, and $\rho_t = -\log P(m^*(t) \mid D_{0:t})$ is the negative log posterior of the sample in memory at time t. Let $M^i_t = \inf_{0 \le s \le t} \zeta^i_s$. The probability of waiting at least t for an update event is related to the minima of the process by

$$P(T_i(\chi, \rho_t) < t) = P(M^i_t < \min\{\chi, \rho_t\}). \quad (4)$$

This probability can be expressed as

$$P(M^i_t < \min\{\chi, \rho_t\}) = \int_0^t p(\zeta^i_\tau < \rho_\tau)\, P(\rho_\tau < \chi)\, P(i \mid \tau)\, d\tau + \int_0^t p(\zeta^i_\tau < \chi)\, \big(1 - P(\rho_\tau < \chi)\big)\, P(i \mid \tau)\, d\tau$$

where $P(i \mid t) = P(\theta_t \in S_i)$ denotes the probability that a sample drawn between times 0 and t is in the support $S_i$ of the ith mode. To generate tractable expressions from this equation, we make the following assumptions.

Memory distribution: Assume that the memory decay process is slow relative to the sampling events, and that the decay process can be modeled as a random walk in interpretation space, $m^*(t) = m^*(0) + \sum_{0 \le \tau_i \le t} \delta m(\tau_i)$, where the $\tau_i$ are sample times and the $\delta m$ are small disturbances with zero mean and variance $\epsilon$ that we assume to be small. Because variances add, the average effect on the distance $\rho_t$ is a linear increase: $\rho_t = \rho_0 + \epsilon\nu t$, where $\nu$ is the sampling rate. These disturbances could represent changes in the locations of the maxima of the posterior due to the incorporation of new data, neural noise, or even discounting (note that linearly increasing $\rho_t$ corresponds to exponential or multiplicative discounting in probability).

¹ Because time is critical to our arguments, we assume that the posterior is updated across time (and hence new data) using a process that resembles Bayesian updating.
² For the Necker cube, the interpretation space can be thought of as the depths of the vertices. A strong prior assumption that world angles between vertices are close to 90 deg produces two dominant modes in the posterior that correspond to the typical interpretations. Within a mode, the brain must still decide whether the vertices conform exactly to a cube. Thus for the Necker cube, n might be as high as 8 (one depth value per vertex) or as low as 1 (all vertices fixed once the front corner depth is determined).

To understand the behavior of this memory process, notice that every $m^*(0)$ must be within distance $\chi$ of the maximum of the posterior for an update to occur. Due to the properties of extrema of distributions of $\chi^2$ random variables, an $m^*(0)$ will be (in expectation) a characteristic distance $\Delta_m(\chi)$ below $\chi$, and for t > 0 it drifts with linear dynamics.³
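Before continuing the derivation, here is a minimal simulation, under our own illustrative parameter choices, of the decision process just defined: at each sample a mode i is drawn with frequency P(i), a chi-square draw with n degrees of freedom gives $\zeta = d^2 + k_i$, the memory cost $\rho$ drifts upward, and a memory update occurs whenever $\zeta \le \min\{\chi, \rho\}$. All numerical values (n, the $k_i$, the mode frequencies, $\chi$, and the drift rate) are hypothetical placeholders.

```python
import numpy as np

def simulate_updates(n=8, k=(0.0, 0.5), p_mode=(0.6, 0.4), chi=12.0,
                     drift=0.005, n_steps=200_000, seed=0):
    """Simulate sampling plus memory decay; returns the (time, mode) of each
    memory update event."""
    rng = np.random.default_rng(seed)
    rho = chi                      # memory cost starts at the decision threshold
    events = []
    for t in range(n_steps):
        i = rng.choice(len(p_mode), p=p_mode)   # which mode the sample falls in
        zeta = rng.chisquare(n) + k[i]          # zeta_t^i = d_i^2(t) + k_i
        if zeta <= min(chi, rho):
            events.append((t, i))
            rho = zeta             # store the better interpretation in memory
        else:
            rho += drift           # discounting: memory's -log posterior rises
    return events

# Perceptual switch durations are the gaps between updates that change mode:
ev = simulate_updates()
switch_durations = [t2 - t1 for (t1, m1), (t2, m2) in zip(ev, ev[1:]) if m1 != m2]
```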
This drift suggests the approximation $p(\rho_t < \chi) \approx \delta(\chi_m + \epsilon\nu t - \rho_t)$, where $\chi_m = \chi - \Delta_m(\chi)$, which can be formally justified because $p(\rho_t < \chi)$ will be highly peaked with respect to the distribution of the sampling process $p(\zeta^i_\tau)$. Finally, assuming slow drift means $(1 - P(\rho_t < \chi)) \approx 0$ on the time scale on which transitions occur.⁴ Under these assumptions, the expression above reduces to

$$P(M^i_t < \min\{\chi, \rho_t\}) = \int_0^t p(\zeta^i_\tau < \rho_\tau)\, \delta(\chi_m + \epsilon\nu\tau - \rho_\tau)\, P(i \mid \tau)\, d\tau \quad (5)$$

$$\approx P(M^i_t < \chi_m + \epsilon\nu t)\, P(i) \quad (6)$$

where $P(i)$ is the average frequency of sampling from the ith mode.

Extrema of the posterior sampling process: If the sampling process has no long-range temporal dependence, then under mild assumptions the distribution of extrema converges in distribution⁵ to one of three characteristic forms that depend only on the domain of the random variable [?]. For $\chi^2$ samples, the distribution of minima converges to

$$P(M^i_t \le b) = 1 - \exp(-c N b^{a-1})$$

where N is the number of samples, $c(n) = \frac{2^{-a+1}}{\Gamma(a)}$, and $a(n) = \frac{n}{2}$. Set $N = \nu t$ and let $\nu = 1$ for convenience, where $\nu$ is the effective sampling rate; the waiting-time distribution can then be written as

$$P(T_i < t) = P(M^i_t \le \min\{\chi, \rho_t\}) \approx P(M^i_t < \chi_m + \epsilon t)\, P(i) \quad (7)$$

$$= \Big(1 - \exp\big(-c\, t\, (\chi_m + \epsilon t)^{a-1}\big)\Big)\, P(i). \quad (8)$$

The probability distribution shows a range of behavior depending on the values of $a = n/2$ and $\Delta_m(\chi)$. In particular, for n > 4 and $\Delta_m(\chi)$ relatively small, the distribution has a gamma-like behavior, where new memory update transitions are suppressed near recent transitions. For n = 2, or for $\Delta_m(\chi)$ large, the above equation reduces to an exponential. This behavior shows the effect of the decision threshold, as without a decision threshold the asymptotic behavior of simple sampling schemes will generate approximately exponentially distributed update event times, as a consequence of extreme value theory. Finally, for n = 1 and small $\Delta_m(\chi)$, the distribution becomes Cauchy-like with extremely long tails. See Figure 2 for example distributions. Note that the time scale of events can be arbitrarily controlled by appropriately selecting $\nu$ (which controls the time scale of the sampling process) and $\epsilon$ (which controls the time scale of the memory decay process).

Effects of posterior parameters on update events: The memory update distributions are affected primarily by two factors: the log posterior heights and their difference $\Delta k_{ij} = k_i - k_j$, and the effective number of degrees of freedom per mode, n.

Effect of $k_i$, $\Delta k_{ij}$: The variable $\Delta k_{ij}$ has possible effects both on the probability that a mode is sampled and on the temporal distributions. When the modes are strongly peaked (and the sampling procedure is unbiased), $\log P(i) \approx -\Delta k_{ij}$ up to a constant. Secondly, $\Delta k_{ij}$ effectively sets different thresholds for each mode, because memory update events occur when $\zeta^i_t = d_i^2(t) + k_i \le \min\{\rho_t, \chi\}$. Increasing the effective threshold for mode i makes updates of type i more frequent, and should drive the temporal dynamics of the dominant mode toward exponential. Finally, if the posterior becomes more peaked while the threshold remains fixed, the update rates should increase and the temporal distributions will move toward exponential. If we assume increased viewing time makes the posterior more peaked, then our model predicts the common finding of increased transition rates with viewing duration.

³ In the simulations, $\Delta_m$ is chosen as the expected value of the set of events below the threshold $\chi$.
⁴ Conversely, fast drift in the limit means $P(\rho_t < \chi) \to 0$, which results in transitions entirely determined by the minima of the sampling process and $\chi$.
⁵ This corresponds to the limit assertion $\sup_b |P(M_t \le b) - \exp(-\psi(b,t)\,t)| \to 0$ as $t \to \infty$.
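For comparison against such simulations, the closed form (8) is straightforward to evaluate. The sketch below (our own notation, with the sampling rate $\nu$ set to 1 and the $c(n)$ normalisation quoted above; the numeric values of $\chi_m$ and $\epsilon$ are placeholders) produces the theoretical survival curves P(T > t) for several values of n.

```python
import math
import numpy as np

def p_update_by(t, n, chi_m, eps, p_i=1.0):
    """Approximate CDF of update times from Eq. (8), with a = n/2."""
    a = n / 2.0
    c = 2.0 ** (1.0 - a) / math.gamma(a)        # c(n) as quoted in the text
    t = np.asarray(t, dtype=float)
    return (1.0 - np.exp(-c * t * (chi_m + eps * t) ** (a - 1.0))) * p_i

# Survival curves P(T > t) for different effective degrees of freedom:
ts = np.linspace(0.0, 4000.0, 400)
curves = {n: 1.0 - p_update_by(ts, n, chi_m=2.0, eps=0.001) for n in (2, 4, 8)}
```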
Figure 2: Examples of cumulative distribution functions of memory update times, plotted as the probability that the time to the next update exceeds t, P(T > t), against time t (in number of samples), for effective degrees of freedom n = 8, 4, 2. Solid curves are generated by simulating the sampling-with-decision process described in the text. Dashed lines represent theoretical curves based on the approximation in equation (8), showing the quality of the approximation.

Effect of n: One of the surprising aspects of the theory above is the strong dependence on the effective number of degrees of freedom. The theory makes a strong prediction that stimuli that have more interpretation degrees of freedom will have longer durations between transitions, which appears to be qualitatively true across both rivalry and bistability experiments [?, ?] (of course, depending on how you interpret the number of degrees of freedom).

Relating theory to behavioral data via an induced semi-Markov renewal process: Assuming that memory update events involving transitions to the same mode are not perceptually accessible, only update events that switch modes are potentially measurable. However, the process described above fits the description of a generator for a semi-Markov renewal process. A semi-Markov renewal process involves Markov transitions between discrete states i and j determined by a matrix with entries $P_{ij}$, coupled with random durations spent in each state sampled from time distributions $F_{ij}(t)$. The product of these distributions, $Q_{ij}(t) = P_{ij} F_{ij}(t)$, is the generator of the process, which describes the conditional probability of a first transition between states $i \to j$ in time less than t, given that first entry into state i occurs at time t = 0. In the theory above, $F_{ij}(t) = F_{ii}(t) = P(T_i < t)$, while $P_{ij} = P_{jj} = P(j)$.⁶ The main reason for introducing the notion of a renewal process is that it can be used to express the relationship between the theoretical distributions and observable quantities. The most commonly collected data are times between transitions and (possibly contingent) percept frequencies. Here we present results found in Ross [?]. Let the state s(t) = i refer to when the memory process is in the support of mode i: $m^*(t) \in S_i$ at time t. The distribution of first transition times from state s = i can be expressed formally as a cumulative probability of first transition:

$$G_{ij}(t) = P(N_j(t) > 0 \mid s(0) = i) = P(T_j < t \mid s(0) = i)$$

where $N_j(t)$ is the number of transitions into state j in time $\le t$, and $T_j$ is the time until the first memory update of type j. For two-state processes, only $G_{01}(t)$ and $G_{10}(t)$ are measurable. Let P(0) denote the probability of sampling from mode 0. The relationship between the generating process and the distribution of first transitions is given by

$$G_{01}(t) = \int_0^t G_{01}(t - \tau)\, dQ_{00}(\tau) + Q_{01}(t) \quad (9)$$

$$G_{01}(t) = P(0) \int_0^t G_{01}(t - \tau)\, \frac{dP(T_0 < \tau)}{d\tau}\, d\tau + P(1)\, P(T_0 < t) \quad (10)$$

which appears only to be solvable numerically for the general form of our memory update transition functions; however, for the case in which $P(T_0 < t)$ is exponential, $G_{01}(t)$ is as well.

⁶ The independence relations are a consequence of an assumption of independence in the sampling procedure, and relaxing that assumption can produce state contingencies in $Q_{ij}(t)$. Therefore, we do not consider this to be a prediction of the theory. For example, mild temporal dependence (e.g. MCMC-like sampling with large steps) can create contingencies in the frequency of sampling from the ith mode that will produce a non-independent transition matrix $P_{ij} = P(\theta_t \in S_i \mid \theta_{t-\Delta t} \in S_j)$.
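Equation (10) can be solved on a grid by forward substitution, treating the convolution as a running discrete sum. The sketch below is one simple discretisation of our own devising; `cdf_T0` is any callable returning $P(T_0 < t)$, and $P(1) = 1 - P(0)$ for the two-state case.

```python
import numpy as np

def g01(t_grid, cdf_T0, p0):
    """Numerically solve Eq. (10):
    G01(t) = P(0) * int_0^t G01(t - s) dF0(s) + P(1) * F0(t), F0 = P(T0 < t)."""
    F0 = cdf_T0(t_grid)
    dF0 = np.diff(F0, prepend=0.0)              # increments of the duration CDF
    G = np.zeros_like(t_grid, dtype=float)
    for k in range(len(t_grid)):
        conv = np.dot(G[k::-1], dF0[:k + 1])    # sum_j G[k-j] * dF0[j]
        G[k] = p0 * conv + (1.0 - p0) * F0[k]
    return G
```

With an exponential `cdf_T0` this should reproduce the stated exponential form of $G_{01}$, which makes a useful sanity check for the discretisation.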
Moreover, for gamma-like distributions, the convolution integral tends to increase the shape parameter, which means that gamma parameter estimates produced by fitting transition durations will overestimate the amount of "memory" in the process.⁷ Finally, note the limiting behavior as $P(0) \to 0$: $G_{01}(t) = P(T_0 < t)$, so that direct measurement of the temporal distributions is possible, but only for the (almost) suppressed perceptual state. Similar relationships exist for survival probabilities, defined as $S_{ij}(t) = P(s(t) = j \mid s(0) = i)$.

4 Experiments
In this section we investigate simple qualitative predictions of the theory: that biasing perception toward one of the interpretations will produce a coupled set of changes in both percept frequencies and durations, under the assumption that perceptual biases result from differences in posterior heights. To bias perception of a bistable stimulus, we had observers view a Necker cube flanked with "fields of cubes" that are perceptually unambiguous and match one of the two percepts (see Figure 3). Subjects are typically biased toward seeing the Necker cube in the "looking down" state (65-70% response rates), and the context stimuli shown in Figure 3a have little effect on Necker cube reversals. We found that the "looking up" context boosts "looking up" response rates from 30% to 55%.

4.1 Methods
Subjects' perceptual states while viewing the stimuli in Figure 3 were collected using the methods described in [?]. Eye movement effects [?] were controlled by having observers focus on a tiny sphere in the center of the Necker cube, and attention was controlled using catch trials. Base rates for reversals were established for each observer (18 total) in a training phase. Each observer viewed 100 randomly generated context stimuli, and each stimulus was viewed long enough to acquire 10 responses (taking 10-12 sec on average). For ease of notation, we represent the "Looking down" condition as state 0 and the "Looking Up" condition as state 1.

Figure 3: Examples of the two context conditions: (a) an instance of the "Looking down" context with the Necker cube in the middle; (b) an instance of the "Looking up" context with the Necker cube in the middle.

4.2 Results
We measured the effect of context on estimates of perceptual switching rates $R_i = P(s(t) = i)$, first transition durations $G_{ij}$, and survival probabilities $P_{ii} = P(s(t) = i \mid s(0) = i)$ by counting the number of events of each type. Additionally, we fit a semi-Markov renewal process $Q_{ij}(t) = P_{ij} F_{ij}(t)$ to the data using a sampling-based procedure. The procedure is too complex to describe fully in this paper, so a brief description follows. For ease of sampling, the $F_{ij}(t)$ were gamma distributions with separate parameters for each of the four conditionals {00, 01, 10, 11}, resulting in 10 parameters overall. The process was fit by iteratively choosing parameter values for $Q_{ij}(t)$, simulating response data, and measuring the mismatch between the simulated and human $G_{ij}$ and $P_{ii}$ distributions.

⁷ Gamma shape parameters are frequently interpreted as the number of events in some abstract Poisson process that must occur before transition.
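A sketch of the simulation step inside such a fitting loop, under assumed parameter values: a two-state semi-Markov renewal process with gamma dwell times, from which empirical $G_{ij}$ and survival curves can be tabulated and compared against the human data. The specific numbers below are placeholders, not the fitted values.

```python
import numpy as np

def simulate_semi_markov(P, means, variances, t_max=600.0, seed=0):
    """Simulate a two-state semi-Markov renewal process Q_ij(t) = P_ij F_ij(t)
    with gamma dwell-time distributions; returns the visited (entry_time, state)."""
    rng = np.random.default_rng(seed)
    shape = means ** 2 / variances       # gamma shape k = m^2 / v
    scale = variances / means            # gamma scale theta = v / m
    t, state, path = 0.0, 0, []
    while t < t_max:
        path.append((t, state))
        nxt = rng.choice(2, p=P[state])                       # Markov transition
        t += rng.gamma(shape[state, nxt], scale[state, nxt])  # dwell duration
        state = nxt
    return path

# Placeholder parameters: transition matrix and gamma means/variances per (i, j).
P = np.array([[0.55, 0.45], [0.40, 0.60]])
means = np.array([[2.5, 3.0], [2.0, 2.8]])
variances = np.array([[2.0, 2.5], [1.5, 2.2]])
trajectory = simulate_semi_markov(P, means, variances)
```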
and Fig.?? for the contexts ?Looking Down? and ?Looking Up? respectively. The figures also show the maximum likelihood fitted gamma functions. Testable predictions generated by simulating the memory process described above were verified, including changes in mean durations of about 2sec, coupling of the duration distributions, and an increase in the underlying renewal process shape parameters when the percepts are closer to equally probable. Shown "Down" context P(1st transition|S(O)=?Up?) Probability P( 1st transition|S(O)="Down") 1 1 0.9 0.9 0.8 0.8 0.7 0.7 0.6 0.6 0.5 0.5 0.4 0.4 P(Surv. |S(0)="Down") 0.3 0.2 Human data Max. Likelihood fit 0.1 0 P(Survival |S(0)="Up") 0 5 10 0.3 0.2 Human data Max. Likelihood fit 0.1 0 0 5 Time 10 Time Figure 4: Data pooled across subjects for the ?Looking Down? context condition. (a) Prob. of first transition and the survival probability of the ?Looking down? percept. (b)Prob. of first transition and conditional survival probability of the ?Looking Up? percept. A semi-Markov renewal process with transition paramters Pij , gamma means mij and gamma variances vij was fit to all the data via max. likelihood. The best fit curves are superimposed on the data. Shown "Up" context Probability 1 P( 1st transition|S(O)="Down") 1 0.9 0.9 0.8 0.8 0.7 0.7 0.6 0.6 0.5 0.5 0.4 0.4 0.3 P(Survival |S(0)="Down") 0.2 P(Survival |S(0)="Up") 0.3 0.2 Human data Max. Likelihood fit 0.1 0 P( 1st transition|S(O)="Up" ) 0 5 10 Human data Max. Likelihood fit 0.1 0 0 5 10 Figure 5: Same as figure ??, but for the ?Looking Up? context condition. 5 Discussion/Conclusions Although [?] also presents a theory for average transitions times in bistability based on random processes and a multi-modal posterior distribution, their theory is fundamentally different as it derives switching events from tunneling probabilities that arise from input noise. Moreover, their theory predicts increasing transition times as the posterior becomes increasingly peaked, exactly opposite our predictions. In conclusion, we have presented a novel theory of perceptual bistability based on simple assumptions about how the brain makes perceptual decisions. In addition, results from a simple experiment show that manipulations which change the dominance of a percept produce coupled changes in the probability of transition events as predicted by theory. However, we do not regard the experiment as a strong test of the theory. We believe the strength of the theory is that it can make a large set of qualitative predictions about the distribution of transition events by coupling transition times to simple properties of the posterior distribution. Our theory suggests that the basic descriptive model sufficient to capture perceptual bistability is a semi-Markov renewal process, which we showed could successfully simulate the temporal dynamics of human data for the Necker cube. References [1] Aldous, D .(1989) Probability approximations via the Poisson clumping heuristic. Applied Math. Sci, 77. Springer-Verlag, New York. [2] Bialek, W., DeWeese, M. (1995) Random Switching and Optimal Processing in the Perception of Ambiguous Signals. Physics Review Letters 74(15) 3077-80. [3] Brascamp, J. W., van Ee, R., Pestman, W. R., & van den Berg, A. V. (2005). Distributions of alternation rates in various forms of bistable perception. J. of Vision 5(4), 287-298. [4] Einhauser, W., Martin, K. A., & Konig, P. (2004). Are switches in perception of the Necker cube related to eye position? Eur J Neuroscience 20(10), 2811-2818. 
[5] Freeman, W. T. (1994) The generic viewpoint assumption in a framework for visual perception. Nature 368, April 1994.
[6] von Grunau, M. W., Wiggin, S., & Reed, M. (1984) The local character of perspective organization. Perception and Psychophysics 35(4), 319-324.
[7] Kersten, D., Mamassian, P., & Yuille, A. (2004) Object perception as Bayesian inference. Annual Review of Psychology 55, 271-304.
[8] Lee, T. S. & Mumford, D. (2003) Hierarchical Bayesian inference in the visual cortex. Journal of the Optical Society of America A 20(7).
[9] Leopold, D. & Logothetis, N. (1999) Multistable phenomena: changing views in perception. Trends in Cognitive Sciences 3(7), 254-264.
[10] Long, G., Toppino, T., & Mondin, G. (1992) Prime time: fatigue and set effects in the perception of reversible figures. Perception and Psychophysics 52(6), 609-616.
[11] Mamassian, P. & Goutcher, R. (2005) Temporal dynamics in bistable perception. Journal of Vision 5, 361-375.
[12] Rock, I. & Mitchener, K. (1992) Further evidence of the failure of reversal of ambiguous figures by uninformed subjects. Perception 21, 39-45.
[13] Ross, S. M. (1970) Applied Probability Models with Optimization Applications. Holden-Day.
[14] Stocker, A. & Simoncelli, E. (2006) Noise characteristics and prior expectations in human visual speed perception. Nature Neuroscience 9(4), 578-585.
[15] Toppino, T. C. (2003) Reversible-figure perception: mechanisms of intentional control. Perception and Psychophysics 65(8), 1285-1295.
[16] Toppino, T. C. & Long, G. M. (1987) Selective adaptation with reversible figures: don't change that channel. Perception and Psychophysics 42(1), 37-48.
[17] van Ee, R., Adams, W. J., & Mamassian, P. (2003) Bayesian modeling of cue interaction: bi-stability in stereoscopic slant perception. Journal of the Optical Society of America A 20, 1398-1406.
[18] van Ee, R., van Dam, L. C. J., & Brouwer, G. J. (2005) Dynamics of perceptual bi-stability for stereoscopic slant rivalry. Vision Research 45, 29-40.
Kernels on Structured Objects Through Nested Histograms

Marco Cuturi, Institute of Statistical Mathematics, Minami-Azabu 4-6-7, Minato-ku, Tokyo, Japan.
Kenji Fukumizu, Institute of Statistical Mathematics, Minami-Azabu 4-6-7, Minato-ku, Tokyo, Japan.

Abstract

We propose a family of kernels for structured objects which is based on the bag-of-components paradigm. However, rather than decomposing each complex object into the single histogram of its components, we use for each object a family of nested histograms, where each histogram in this hierarchy describes the object seen from an increasingly granular perspective. We use this hierarchy of histograms to define elementary kernels which can detect coarse and fine similarities between the objects. Through an efficient averaging trick, we compute a mixture of such specific kernels to produce a final kernel value which efficiently weights local and global matches. We report experimental results on an image retrieval task which show that this mixture is an effective template procedure to be used with kernels on histograms.

1 Introduction

Kernel methods have been shown to be competitive with other techniques in classification or regression tasks where the input data lie in a vector space. Arguably, this success rests on two factors: first, the good ability of kernel algorithms, such as the support vector machine, to generalize and provide a sparse formulation for the underlying learning problem; second, the capacity of nonlinear kernels, such as the polynomial and Gaussian kernels, to quantify meaningful similarities between vectors, notably non-linear correlations between their components. Using kernel machines with non-vectorial data (e.g., in bioinformatics, image and text analysis, or signal processing) requires more arbitrary choices, both to represent the objects in a malleable form and to choose suitable kernels on these representations. The challenge of using kernel methods on real-world data has thus recently fostered many proposals for kernels on complex objects, notably strings, trees, images, or graphs, to cite a few. In common practice, most of these objects can be regarded as structured aggregates of smaller components, and the coarsest approach to studying such aggregates is to consider them directly as bags of components. In the field of kernel methods, such a representation has not only been widely adopted (Haussler, 1999; Joachims, 2002; Schölkopf et al., 2004), but it has also spurred the proposal of kernels better suited to the geometry of the underlying histograms (Kondor & Jebara, 2003; Lafferty & Lebanon, 2005; Hein & Bousquet, 2005; Cuturi et al., 2005). However, one of the drawbacks of the bag-of-components representation is that it implicitly assumes that each component sampled in the object has been generated independently from an identical distribution. While this viewpoint may translate into adequate properties for some learning tasks, such as translation or rotation invariance when using histograms of colors to manipulate images (Chapelle et al., 1999), it may appear too restrictive when such a strong invariance is simply too coarse to be of practical use. A possible way to cope with this limitation is to expand the size of the components' space artificially, either by considering families of larger components which take into account more contextual information, or by considering histograms which index both the components and their possible locations in the object (Rätsch & Sonnenburg, 2004).
As one would expect, these histograms are usually sparse and need to be regularized using ad hoc rules and prior knowledge (Leslie et al., 2003) before being directly compared using kernels on histograms. For sequential data, other state-of-the-art methods compute an optimal alignment between the sequences based on elementary operations such as substitutions, deletions, and insertions of components. Such alignment scores may yield positive definite (p.d.) kernels if particular care is taken to adapt them (Vert et al., 2004), and they have shown very competitive performance. However, their computational cost can be prohibitive when dealing with large datasets, and they can only be applied to sequential data.

Figure 1: From the bag-of-components representation to a set of nested bags, using a set of labels. [Diagram omitted: an object first summarized as a single bag indexed by root labels t1, t2, then as nested bags indexed by refinements t2.1, t2.2.]

Following these contributions, we propose in this paper new families of kernels which can be easily tuned to detect both coarse and fine similarities between the objects, in a range spanning from kernels which only consider coarse histograms to kernels which only detect strict local matches. To size such types of similarities between two objects, we elaborate on the elementary bag-of-components perspective and consider instead families of nested histograms (indexed by a set of hierarchical labels to be defined) to describe each object. In this framework, the root label corresponds to the global representation introduced before, while longer labels represent a specific condition under which the components have been sampled. We then define kernels that take into account mixtures of similarities, spanning from detailed resolutions which only compare the smallest bags to the coarsest one. This trade-off between fine and coarse perspectives sets up an averaging framework for defining kernels, which we introduce formally in Section 2. This theoretical framework would not be tractable without an efficient factorization, detailed in Section 3, which yields kernel computations that grow linearly in time and space with respect to the number of labels. We then provide experimental results in Section 4 on an image retrieval task, showing that the methodology improves the performance of kernel-based state-of-the-art techniques in this field with a low extra computational cost.

2 Kernels Defined through Hierarchies of Histograms

In the kernel literature, structured objects are usually represented as histograms of components, e.g., images as histograms of colors and/or features, texts as bags of words, and sequences as histograms of letters or n-grams. The obvious drawback of this representation is that it usually loses all the contextual information which may be useful to characterize each sampled component in the original object. One may instead create families of histograms, indexed by specific sampling conditions:

- In image analysis, create color or feature histograms following a prior partition of the image into predefined patches, as in (Grauman & Darrell, 2005). Another possibility would be to define families of histograms, all for the same image, which consider increasingly granular discretizations of the color space.
- In sequence analysis, extract local histograms which may correspond to predefined regions of the original sequence, as in (Matsuda et al., 2005). A different option would be to associate to each histogram a context of arbitrary length, e.g., by considering the 26 histograms of letters sampled just after the letters {A, B, ..., Z}, or the 26 x 26 histograms of letters after contexts {AA, AB, ..., ZZ}.
- In text analysis, use histograms of words found after grammatical categories of increasing complexity, such as verbs, nouns, articles, or adverbs.
- For synchronous time series (e.g., financial time series or gene expression profiles), define a reference series (e.g., an index or a specific gene) and decompose each of the other series into histograms of values conditioned on the value of the reference series.

We write $L$ for an arbitrary index set used to label such specific histograms. Structured objects are thus represented as a family $\mu$ of $\mathcal{M}^L(\mathcal{X}) \stackrel{\mathrm{def}}{=} (\mathcal{M}_+^b(\mathcal{X}))^L$, that is, $\mu = \{\mu_t\}_{t \in L}$, where for each $t \in L$, $\mu_t$ is a bounded measure of $\mathcal{M}_+^b(\mathcal{X})$. We write $|\mu|$ for $\sum_{t \in L} |\mu_t|$.

2.1 Local Similarities Between Measures

To compare two objects under the light of any sampling condition $t$, that is, to compare their respective decompositions as measures $\mu_t$ and $\mu'_t$, we make use of an arbitrary p.d. kernel $k$ on $\mathcal{M}_+^b(\mathcal{X})$, which we refer to as the base kernel throughout the paper. For interpretation purposes only, we will assume in the following sections that $k$ is an infinitely divisible kernel which can be written as $k = e^{-\frac{1}{\sigma}\psi}$, $\sigma > 0$, where $\psi$ is a negative definite (Berg et al., 1984) kernel on $\mathcal{M}_+^b(\mathcal{X})$, or, equivalently, $-\psi$ is a conditionally p.d. kernel. Note also that $k$ has to be p.d. not only on probability measures, but on any bounded measure. For two elements $\mu, \mu'$ of $\mathcal{M}^L(\mathcal{X})$ and a given element $t \in L$, the kernel

$$k_t(\mu, \mu') \stackrel{\mathrm{def}}{=} k(\mu_t, \mu'_t)$$

quantifies the similarity of $\mu$ and $\mu'$ by measuring how similarly their components were observed with respect to label $t$. For two different labels $s$ and $t$ of $L$, $k_s$ and $k_t$ can be combined through polynomial combinations with positive coefficients to yield new kernels, notably their sum $k_s + k_t$ or their product $k_s k_t$. This is particularly adequate if some complementarity is assumed between $s$ and $t$, so that their combination can provide new insights for a given learning task. If, on the contrary, these labels are assumed to be similar, then they can be regarded as a grouped label $\{s\} \cup \{t\}$, resulting in the kernel

$$k_{\{s\}\cup\{t\}}(\mu, \mu') \stackrel{\mathrm{def}}{=} k(\mu_s + \mu_t, \mu'_s + \mu'_t),$$

which measures the similarity of $\mu$ and $\mu'$ under both the $s$ and $t$ labels. Let us give an intuition for this definition by considering two texts $A, B$ built up with words from a dictionary $D$. As an alternative to the global histograms of words $\mu^A$ and $\mu^B$ of $\mathcal{M}_+^b(D)$, one may consider, for instance, $\mu^A_{\mathrm{can}}, \mu^A_{\mathrm{may}}$ and $\mu^B_{\mathrm{can}}, \mu^B_{\mathrm{may}}$, the respective histograms of words that follow the words "can" and "may" in texts $A$ and $B$. If one considers that "can" and "may" are different words, then the following kernel quantifies the similarity of $A$ and $B$ taking advantage of this difference:

$$k_{\{\mathrm{can}\},\{\mathrm{may}\}}(A, B) = k(\mu^A_{\mathrm{can}}, \mu^B_{\mathrm{can}}) \cdot k(\mu^A_{\mathrm{may}}, \mu^B_{\mathrm{may}}).$$

If, on the contrary, one decides that "can" and "may" are equivalent, an adequate kernel would first merge the histograms and then compare them:

$$k_{\{\mathrm{can},\mathrm{may}\}}(A, B) = k(\mu^A_{\mathrm{can}} + \mu^A_{\mathrm{may}},\ \mu^B_{\mathrm{can}} + \mu^B_{\mathrm{may}}).$$

The previous formula can be naturally extended to define kernels indexed on a set $T \subseteq L$ of grouped labels, through

$$k_T(\mu, \mu') \stackrel{\mathrm{def}}{=} k(\mu_T, \mu'_T), \quad \text{where} \quad \mu_T \stackrel{\mathrm{def}}{=} \sum_{t \in T} \mu_t \quad \text{and} \quad \mu'_T \stackrel{\mathrm{def}}{=} \sum_{t \in T} \mu'_t.$$
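As a concrete illustration of $k_T$, here is a minimal sketch that merges histograms over a group of labels and applies a base kernel of the form $k = e^{-\psi/\sigma}$, instantiated with the $\psi_{H1}$ metric introduced in Section 3.3; the toy histograms and the value of sigma are placeholders of our own.

```python
import numpy as np

def psi_h1(mu, nu):
    """Hellinger-type H1 metric between two nonnegative histograms."""
    return np.sum(np.abs(np.sqrt(mu) - np.sqrt(nu)))

def k_T(mu, nu, T, sigma=1.0):
    """Kernel on measures merged over a group of labels T:
    k_T(mu, nu) = k(sum_{t in T} mu_t, sum_{t in T} nu_t), k = exp(-psi/sigma)."""
    mu_T = sum(mu[t] for t in T)
    nu_T = sum(nu[t] for t in T)
    return np.exp(-psi_h1(mu_T, nu_T) / sigma)

# Two objects described by histograms indexed by labels 'can' and 'may'.
mu = {"can": np.array([3., 1., 0.]), "may": np.array([0., 2., 2.])}
nu = {"can": np.array([2., 2., 0.]), "may": np.array([1., 1., 2.])}

print(k_T(mu, nu, {"can"}) * k_T(mu, nu, {"may"}))   # labels kept distinct
print(k_T(mu, nu, {"can", "may"}))                   # labels merged first
```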
2.2 Resolution Specific Kernels

Having defined a family of kernels $\{k_T, T \subseteq L\}$ which can detect conditional similarities between two elements of $\mathcal{M}^L(\mathcal{X})$ given a subset $T$ of $L$, we define in this section different ways to combine them to obtain a kernel which takes all of the histograms into account. Let $P$ be a finite partition of $L$, that is, a finite family $P = (T_1, \ldots, T_n)$ of subsets of $L$ such that $T_i \cap T_j = \emptyset$ if $1 \le i < j \le n$ and $\bigcup_{i=1}^{n} T_i = L$. We write $\mathcal{P}(L)$ for the set of all partitions of $L$. Consider now the kernel defined by a partition $P$ as

$$k_P(\mu, \mu') \stackrel{\mathrm{def}}{=} \prod_{i=1}^{n} k_{T_i}(\mu, \mu'). \qquad (1)$$

The kernel $k_P$ quantifies the similarity between two objects by detecting their joint similarity under all possible labels of $L$, assuming a priori that certain labels can be grouped together, following the subsets $T_i$ enumerated in the partition $P$. Note that there is some arbitrariness in this definition, since a simple multiplication of base kernels $k_{T_i}$ is used to define $k_P$ rather than any other polynomial combination. We follow in that sense the convolution kernels (Haussler, 1999) approach, and indeed, for each partition $P$, $k_P$ can be regarded as a convolution kernel. More precisely, the multiplicative structure of Equation (1) quantifies how similar two objects are given a partition $P$, in a way that requires the objects to be similar according to all subsets $T_i$. If the base kernel $k$ can be written as $k = e^{-\frac{1}{\sigma}\psi}$, where $\psi$ is a negative definite kernel, then $k_P$ can be expressed as the exponential of minus

$$\psi_P(\mu, \mu') \stackrel{\mathrm{def}}{=} \sum_{i=1}^{n} \psi_{T_i}(\mu, \mu') = \sum_{i=1}^{n} \psi(\mu_{T_i}, \mu'_{T_i}),$$

a quantity which penalizes local differences between the decompositions of $\mu$ and $\mu'$ over $L$, as opposed to the coarsest approach where $P = \{L\}$ and only $\psi(\sum_t \mu_t, \sum_t \mu'_t)$ is considered.

Figure 2: A useful set of labels $L$ for images which would focus on pixel localization can be represented by a grid, such as the 8 x 8 one represented above. In this case $\mathcal{P}_3$ corresponds to the $4^3$ windows presented in the left image, $\mathcal{P}_2$ to the 16 larger squares obtained when grouping 4 small windows, $\mathcal{P}_1$ to the image divided into 4 equal parts, and $\mathcal{P}_0$ is simply the whole image. Any partition $P$ of the image which complies with the hierarchy $\mathcal{P}_0^3$ in the example above can in turn be used to represent an image as a family of sub-probability measures, which reduces, in the case of two-color images, to binary histograms, as illustrated in the right-most image. For two images, these respective histograms can be directly compared through the kernel $k_P$.

As illustrated in Figure 2, where images are summarized through histograms indexed by patches, a partition of $L$ reflects a given belief on how patches may or may not be associated or split to focus on local dissimilarities. Hence, all partitions contained in the set $\mathcal{P}(L)$ of all possible partitions^1 are not likely to be equally meaningful, given that some labels may admit a natural form of grouping. If the index is built to highlight differences in locations, one would naturally favor mergers between neighboring indexes. If one uses a Markovian analysis, that is, considers histograms of components conditioned by contexts, a natural way to group contexts would be according to their semantic or grammatical content for text analysis, or according to their suffix for sequence analysis. Such meaningful partitions can be intuitively obtained when a hierarchical structure which groups elements of $L$ together is known a priori. A hierarchy on $L$, such as the triadic hierarchy shown in Figure 3, is a family $(\mathcal{P}_d)_{d=0}^{D} = \{\mathcal{P}_0 = \{L\}, \ldots, \mathcal{P}_D = \{\{t\},\ t \in L\}\}$ of partitions of $L$.
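Before formalizing the hierarchy further, here is a small sketch of how the nested histograms of Figure 2 can be materialized for an image by pooling patch histograms up a quad-tree; the 8 x 8 grid and 512-color quantization mirror the examples in the text, while the function names and random data are illustrative.

```python
import numpy as np

def patch_histograms(img, grid=8, n_colors=512):
    """Color histograms of img over a grid x grid partition (finest labels P_D).
    img: (H, W) array of integer color indices in [0, n_colors)."""
    H, W = img.shape
    hists = {}
    for r in range(grid):
        for c in range(grid):
            patch = img[r * H // grid:(r + 1) * H // grid,
                        c * W // grid:(c + 1) * W // grid]
            hists[(r, c)] = np.bincount(patch.ravel(), minlength=n_colors)
    return hists

def pool_quadtree(hists, grid):
    """Merge sibling histograms four at a time to obtain the coarser
    resolutions P_{d-1}, ..., P_0 of the hierarchy in Figure 2."""
    levels = [hists]
    g = grid
    while g > 1:
        g //= 2
        coarse = {(r, c): sum(levels[-1][(2 * r + dr, 2 * c + dc)]
                              for dr in (0, 1) for dc in (0, 1))
                  for r in range(g) for c in range(g)}
        levels.append(coarse)
    return levels  # levels[0] = finest grid, levels[-1] = global histogram

img = np.random.default_rng(1).integers(0, 512, size=(256, 384))
levels = pool_quadtree(patch_histograms(img), grid=8)
print([len(l) for l in levels])  # [64, 16, 4, 1]
```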
To provide hierarchical information, the family $(\mathcal{P}_d)_{d=0}^{D}$ is such that any subset present in a partition $\mathcal{P}_d$ is strictly included in a (unique, by definition of a partition) subset from the coarser partition $\mathcal{P}_{d-1}$. This is equivalent to stating that each subset $T$ in a partition $\mathcal{P}_d$ is divided in $\mathcal{P}_{d+1}$ into a partition of $T$ which is not $T$ itself. We write $s(T)$ for this partition (e.g., in Figure 3, $s(1) = \{1_1, \ldots, 1_9\}$) and name its elements the siblings of $T$. Consider now the subset $\mathbb{P}_D \subseteq \mathcal{P}(L)$ of all partitions of $L$ obtained by using only sets contained in the collection $\mathcal{P}_0^D \stackrel{\mathrm{def}}{=} \bigcup_{d=0}^{D} \mathcal{P}_d$, namely $\mathbb{P}_D \stackrel{\mathrm{def}}{=} \{P \in \mathcal{P}(L)\ \text{s.t.}\ \forall T \in P,\ T \in \mathcal{P}_0^D\}$. The set $\mathbb{P}_D$ contains both the coarsest and the finest resolutions, respectively $\mathcal{P}_0$ and $\mathcal{P}_D$, but also all variable resolutions built from sets enumerated in $\mathcal{P}_0^D$, as can be seen, for instance, in the third image of Figure 2.

^1 $\mathcal{P}(L)$ is quite a big space: if $L$ is a finite set of cardinality $r$, the cardinality of the set of its partitions is known as the Bell number of order $r$, with $B_r = \frac{1}{e}\sum_{u=1}^{\infty} \frac{u^r}{u!} \underset{r \to \infty}{\sim} e^{r \ln r}$.

Figure 3: A hierarchy generated by two successive triadic partitions. [Diagram omitted: the root set ($\mathcal{P}_0$) is split into sets 1, ..., 9 ($\mathcal{P}_1$), each of which is split again into nine siblings, e.g., $s(1) = \{1_1, \ldots, 1_9\}$ ($\mathcal{P}_2$).]

2.3 Averaging Resolution Specific Kernels

Each partition $P$ contained in $\mathbb{P}_D$ provides a resolution at which to compare two objects, which generates a large family of kernels $k_P$ as $P$ spans $\mathbb{P}_D$. Some partitions are likely to be better suited for certain tasks, which may call for an efficient estimation scheme to select an optimal partition for a given task. This would be similar in spirit to estimating a maximum a posteriori model for the data and using it to compare the objects. We take in this section a different direction, with a more Bayesian flavor, by considering an averaging of such kernels based on a prior on the set of partitions. In practice, this averaging favours objects which share similarities under a large collection of resolutions, and may also be interpreted as a Bayesian averaging of convolution kernels (Haussler, 1999).

Definition 1. Let $L$ be an index set endowed with a hierarchy $(\mathcal{P}_d)_{d=0}^{D}$, $\pi$ a prior measure on the corresponding set of partitions $\mathbb{P}_D$, and $k$ a base kernel on $\mathcal{M}_+^b(\mathcal{X}) \times \mathcal{M}_+^b(\mathcal{X})$. The averaged kernel $k_\pi$ on $\mathcal{M}^L(\mathcal{X}) \times \mathcal{M}^L(\mathcal{X})$ is defined as

$$k_\pi(\mu, \mu') = \sum_{P \in \mathbb{P}_D} \pi(P)\, k_P(\mu, \mu'). \qquad (2)$$

As can be observed in Equation (2), the kernel automatically detects, within the range of all partitions, the ones which provide a good match between the compared objects, and increases the resulting similarity score accordingly. Note also that, in an image-analysis context, the pyramid-matching kernel proposed in (Grauman & Darrell, 2005) only considers the original partitions of the hierarchy $(\mathcal{P}_d)_{d=0}^{D}$, while Equation (2) considers all possible partitions of $\mathbb{P}_D$. This can be carried out with little cost if an adequate set of priors $\pi$ is selected, as seen below.

3 Kernel Computation

We provide in this section hierarchies $(\mathcal{P}_d)_{d=0}^{D}$ and priors $\pi$ for which the computation of $k_\pi$ is both meaningful and tractable, yielding a computational time to calculate $k_\pi$ which is loosely upper bounded by $D \times \mathrm{card}\,L \times c(k)$, where $c(k)$ is the time required to compute the base kernel.

3.1 Partitions Generated by Branching Processes

All partitions $P$ of $\mathbb{P}_D$ can be generated through the following rule, starting from the initial root partition $P := \mathcal{P}_0 = \{L\}$. For each set $T$ of $P$:
1. either leave the set as it is in $P$, with probability $1 - \varepsilon_T$;
2. or replace it by its siblings in $s(T)$, with probability $\varepsilon_T$, and reapply this rule to each sibling unless it belongs to the finest partition $\mathcal{P}_D$.
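This generative rule translates directly into a short recursion; the following sketch draws partitions from the branching-process prior over a toy binary hierarchy, with a single global value for all the $\varepsilon_T$ and a tuple encoding of sets as our own simplifications.

```python
import random

def sample_partition(T, siblings, eps, finest):
    """Draw a partition of T from the branching-process prior:
    keep T with probability 1 - eps, otherwise split T into its siblings
    s(T) and recurse (finest sets are always kept)."""
    if T in finest or random.random() >= eps:
        return [T]
    parts = []
    for U in siblings[T]:
        parts.extend(sample_partition(U, siblings, eps, finest))
    return parts

# A depth-2 binary hierarchy over the labels {a, b, c, d}.
siblings = {("a", "b", "c", "d"): [("a", "b"), ("c", "d")],
            ("a", "b"): [("a",), ("b",)],
            ("c", "d"): [("c",), ("d",)]}
finest = {("a",), ("b",), ("c",), ("d",)}
random.seed(0)
print(sample_partition(("a", "b", "c", "d"), siblings, eps=0.5, finest=finest))
```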
The resulting prior on $\mathbb{P}_D$ depends on the overall coarseness of the considered partitions, and can be tuned through the parameters $\varepsilon_T$ to adaptively favor coarse or fine partitions. For a partition $P \in \mathbb{P}_D$,

$$\pi(P) = \prod_{T \in P} (1 - \varepsilon_T) \cdot \prod_{T \in \tilde{P}} \varepsilon_T,$$

where the set $\tilde{P} = \{T \in \mathcal{P}_0^D\ \text{s.t.}\ \exists V \in P,\ V \subsetneq T\}$ gathers all sets belonging to coarser resolutions than $P$, and can be regarded as the set of all ancestors in $\mathcal{P}_0^D$ of the sets enumerated in $P$.

3.2 Factorization of $k_\pi$

The branching-process prior can be used to factorize the formula in Equation (2):

Proposition 2. For two elements $\mu, \mu'$ of $\mathcal{M}^L(\mathcal{X})$, define, for $T$ spanning recursively all sets contained in $\mathcal{P}_D, \mathcal{P}_{D-1}, \ldots, \mathcal{P}_0$, the quantity $K_T$ below; then $k_\pi(\mu, \mu') = K_L$:

$$K_T = (1 - \varepsilon_T)\, k_T(\mu, \mu') + \varepsilon_T \prod_{U \in s(T)} K_U.$$

Proof. The proof follows from a factorization which uses the branching-process prior underlying the tree generation, and can be derived from the proof of (Catoni, 2004, Proposition 5.2).

[Diagram omitted: a node $T$ with siblings $t_1, t_2, t_3$, illustrating the update $K_T = (1 - \varepsilon_T)\, k(\mu_T, \mu'_T) + \varepsilon_T \prod_i K_{t_i}$, with $\mu_T = \sum_i \mu_{t_i}$ and $\mu'_T = \sum_i \mu'_{t_i}$.] The diagram underlines the importance of incorporating into each node $K_T$ a weighted product of the sibling kernel evaluations $K_U$: the update rule for the computation of $k_\pi$ takes the branching-process prior into account by weighting the kernel $k_T$ with all values $K_{t_i}$ obtained for the finer resolutions $t_i$ in $s(T)$.

If the hierarchy of $L$ is such that the cardinality of $s(T)$ is fixed to a constant $\alpha$ for every set $T$ (typically $\alpha = 4$ for images in the case described in Figure 2), then the computation of $k_\pi$ is upper bounded by $(\alpha^{D+1} - 1)\, c(k)$. This complexity is also upper bounded by the total number of components considered in the compared objects, as in (Cuturi & Vert, 2005) for instance.

3.3 Choosing the Base Kernel

Any kernel on $\mathcal{M}_+^b(\mathcal{X})$ can be used to comply with the terms of Definition 1 and apply an averaging scheme to families of measures. We also note that an even more general formulation can be obtained by using a different kernel $k_t$ for each label $t$ of $L$, without altering the overall applicability of the factorization above. However, we only consider in this discussion a unique choice of $k$ for all $t \in L$. First, note that kernels such as the information diffusion kernel (Lafferty & Lebanon, 2005) and variance-based kernels (Kondor & Jebara, 2003; Cuturi et al., 2005) may not work in this setting, since they are not p.d., nor sometimes even defined, on the whole of $\mathcal{M}_+^b(\mathcal{X})$. The most adequate geometry of $\mathcal{M}_+^b(\mathcal{X})$, following the denormalization scheme proposed in (Amari & Nagaoka, 2001, p. 47), may arguably be derived from the Riemannian embedding $\theta \mapsto \sqrt{\theta}$, where the Euclidean distance between two measures in this representation equals the geodesic distance between $\theta$ and $\theta'$ in $\mathcal{M}_+^b(\mathcal{X})$ endowed with the Fisher metric, as expressed in $\psi_{H2}$ below. More generally, one can consider the whole family of kernels for bounded measures described in (Hein & Bousquet, 2005) to choose the base kernel $k$, namely the family of Hilbertian metrics $\psi$ such that $k = e^{-\frac{1}{\sigma}\psi}$. We thus use in our experiments the Jensen divergence, the $\chi^2$ distance, the total variation, and two variations of the Hellinger distance:

$$\psi_{JD}(\theta, \theta') = h\!\left(\frac{\theta + \theta'}{2}\right) - \frac{h(\theta) + h(\theta')}{2}, \qquad \psi_{\chi^2}(\theta, \theta') = \sum_i \frac{(\theta_i - \theta'_i)^2}{\theta_i + \theta'_i},$$

$$\psi_{TV}(\theta, \theta') = \sum_i |\theta_i - \theta'_i|, \qquad \psi_{H2}(\theta, \theta') = \sum_i \left|\sqrt{\theta_i} - \sqrt{\theta'_i}\right|^2, \qquad \psi_{H1}(\theta, \theta') = \sum_i \left|\sqrt{\theta_i} - \sqrt{\theta'_i}\right|.$$
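Putting Proposition 2 together with one of these base kernels gives the complete procedure. Below is a minimal sketch of the bottom-up recursion computing $k_\pi(\mu, \mu') = K_L$, instantiated with the $\psi_{H1}$ metric and a uniform epsilon; the toy hierarchy and histograms are placeholders of our own.

```python
import numpy as np

def k_base(mu, nu, sigma=1.0):
    """Infinitely divisible base kernel k = exp(-psi_H1 / sigma)."""
    return np.exp(-np.sum(np.abs(np.sqrt(mu) - np.sqrt(nu))) / sigma)

def K(T, mu, nu, siblings, eps, sigma=1.0):
    """Proposition 2: K_T = (1 - eps) k(mu_T, nu_T) + eps * prod_U K_U,
    so that k_pi(mu, nu) = K_L in a single bottom-up pass.
    mu, nu map finest labels to histograms; mu_T sums the histograms over T."""
    mu_T = sum(mu[t] for t in T)
    nu_T = sum(nu[t] for t in T)
    local = k_base(mu_T, nu_T, sigma)
    if T not in siblings:                # finest resolution: nothing to split
        return local
    prod = 1.0
    for U in siblings[T]:
        prod *= K(U, mu, nu, siblings, eps, sigma)
    return (1.0 - eps) * local + eps * prod

siblings = {("a", "b"): [("a",), ("b",)]}   # a one-level toy hierarchy
mu = {"a": np.array([2., 0.]), "b": np.array([0., 3.])}
nu = {"a": np.array([1., 1.]), "b": np.array([1., 2.])}
print(K(("a", "b"), mu, nu, siblings, eps=0.5))
```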
4 Experiments in Image Retrieval

We present in this section experiments inspired by the image retrieval task first considered in (Chapelle et al., 1999) and reused in (Hein & Bousquet, 2005). Our dataset was also extracted from the Corel Stock database and includes 12 families of labeled images, each class containing 100 color images of 256 x 384 pixels. The families depict images of bears, African specialty animals, monkeys, cougars, fireworks, mountains, office interiors, bonsais, sunsets, clouds, apes, and rocks and gems. The database is randomly split into balanced sets of 800 training images and 400 test images. The task consists in classifying the test images with the rule learned by training 12 one-versus-all SVMs on the training fold. Note that previous work conducted in (Chapelle et al., 1999) illustrates the competitiveness of SVMs in this context over other algorithms such as nearest neighbors. Our results are averaged over 3 random splits, using the Spider toolbox.

Figure 4: Misclassification rate on the Corel experiment, using the Hellinger $H1$ distance between histograms coupled with one-vs-all SVM classification ($C = 100$), as a function of $\sigma$ and $\varepsilon$. $\frac{1}{\sigma}$ is taken in $\{2^{-12}, \ldots, 2^{2}\}$ while $\varepsilon$ spans $\{0, 0.1, \ldots, 0.9, 1\}$. $\varepsilon$ controls the granularity of the averaging kernel, ranging from the coarsest perspective ($\varepsilon = 0$), when only the global histogram is used, to the finest one ($\varepsilon = 1$), when only the finest histograms are considered. Dark values represent error rates greater than or equal to 24%. The central values are roughly 14.5%, while the best values obtained in the columns $\varepsilon = 0$ and $\varepsilon = 1$ are 18.4% and 17.3%, respectively. [Heat map with axes $\log_2(\frac{1}{\sigma}) \in [-12, 2]$ and $\varepsilon \in [0, 1]$; color scale from 0.13 to 0.23.]

We used 9 bits for the color of each pixel, reducing the RGB color space to $8^3 = 512$ colors from the original set of $256^3 = 16{,}777{,}216$, and we defined centered grids of $4$, $4^2 = 16$, and $4^3 = 64$ local patches. We provide results for each of the 5 considered kernels and for each considered depth $D$ ranging from 1 to 3. Figure 5 presents $15 = 5 \times 3$ plots, where each plot displays the misclassification rate as a function of the width parameter $\frac{1}{\sigma}$ and the branching-process prior $\varepsilon$ set over all nodes of the tree. The constant $C$ is set to 100, but other choices for $C$ (1000 and 10) gave comparable plots, although a bit different in shape. By considering values of $\varepsilon$ ranging from 0 to 1, we aim to give a sketch of the robustness of the averaging approach: the SVMs seem to perform better when $0 < \varepsilon < 1$ for a large span of $\sigma$ values. For a better understanding of these plots, the reader may refer to Figure 4, which focuses on $\psi_{H1}$ and $D = 2$, noting that the color scales used in Figures 4 and 5 are the same. Finally, the Gaussian kernel was also tested, but its very poor performance (with error rate above 22% for all parameters) illustrates once more that the Gaussian kernel is usually a poor choice for comparing histograms directly.

5 Discussion

The computation of averaged kernels can be performed almost as fast as that of kernels which rely only on fine resolutions, which, along with their robustness and improved performance, might advocate their use, notably as an extension of kernels based on arbitrary partitions (Grauman & Darrell, 2005; Matsuda et al., 2005). Principled ways of estimating, in a semi-supervised setting, both $\sigma$ and $\varepsilon$, or preferably localized priors $\sigma_T$ and $\varepsilon_T$, $T \in \mathcal{P}_0^D$, might give them an additional edge.
This is a topic of current research; for the moment, we suggest setting these parameters through cross-validation, while $H1$ seems to be a reasonable choice to define the base kernel. Our approach is related to the Multiple Kernel Learning framework (Lanckriet et al., 2004), although we do not aim here at learning linear combinations of the kernels $k_T$, but rather start from a hierarchical belief on them to propose an algebraic combination.

Acknowledgments: This research was supported by the Function and Induction Research Project, Transdisciplinary Research Integration Center, Research Organization of Information and Systems.

Figure 5: Error-rate results for different kernels and depths, displayed in the same way as in Figure 4, using the same color scale across experiments. [Grid of 15 heat maps: rows $D = 1, 2, 3$; columns $\psi_{H1}$, $\psi_{H2}$, $\psi_{TV}$, $\psi_{\chi^2}$, $\psi_{JD}$.]

References

Amari, S.-I., & Nagaoka, H. (2001). Methods of Information Geometry. AMS, vol. 191.
Berg, C., Christensen, J. P. R., & Ressel, P. (1984). Harmonic Analysis on Semigroups. No. 100 in Graduate Texts in Mathematics. Springer-Verlag.
Catoni, O. (2004). Statistical Learning Theory and Stochastic Optimization. No. 1851 in Lecture Notes in Mathematics. Springer-Verlag.
Chapelle, O., Haffner, P., & Vapnik, V. (1999). SVMs for histogram based image classification. IEEE Transactions on Neural Networks, 10, 1055.
Cuturi, M., Fukumizu, K., & Vert, J.-P. (2005). Semigroup kernels on measures. JMLR, 6, 1169-1198.
Cuturi, M., & Vert, J.-P. (2005). The context-tree kernel for strings. Neural Networks, 18, 1111-1123.
Grauman, K., & Darrell, T. (2005). The pyramid match kernel: discriminative classification with sets of image features. ICCV (pp. 1458-1465). IEEE Computer Society.
Haussler, D. (1999). Convolution kernels on discrete structures (Technical Report CRL-99-10). UC Santa Cruz.
Hein, M., & Bousquet, O. (2005). Hilbertian metrics and positive definite kernels on probability measures. Proceedings of AISTATS.
Joachims, T. (2002). Learning to Classify Text Using Support Vector Machines: Methods, Theory, and Algorithms. Kluwer Academic Publishers.
Kondor, R., & Jebara, T. (2003). A kernel between sets of vectors. Proc. of ICML'03 (pp. 361-368).
Lafferty, J., & Lebanon, G. (2005). Diffusion kernels on statistical manifolds. JMLR, 6, 129-163.
Lanckriet, G., Cristianini, N., Bartlett, P., El Ghaoui, L., & Jordan, M. (2004). Learning the kernel matrix with semidefinite programming. JMLR, 5, 27-72.
Leslie, C., Eskin, E., Weston, J., & Noble, W. S. (2003). Mismatch string kernels for SVM protein classification. NIPS 15. MIT Press.
Matsuda, A., Vert, J.-P., Saigo, H., Ueda, N., Toh, H., & Akutsu, T. (2005). A novel representation of protein sequences for prediction of subcellular location using support vector machines. Protein Sci., 14, 2804-2813.
Rätsch, G., & Sonnenburg, S. (2004). Accurate splice site prediction for Caenorhabditis elegans. In Kernel Methods in Computational Biology (pp. 277-298). MIT Press.
Schölkopf, B., Tsuda, K., & Vert, J.-P. (2004). Kernel Methods in Computational Biology. MIT Press.
Vert, J.-P., Saigo, H., & Akutsu, T. (2004). Local alignment kernels for protein sequences. In B. Schölkopf, K. Tsuda, and J.-P. Vert (Eds.), Kernel Methods in Computational Biology. MIT Press.
Inferring Network Structure from Co-Occurrences

Michael G. Rabbat, Electrical and Computer Eng., University of Wisconsin, Madison, WI 53706, [email protected]
Mário A. T. Figueiredo, Instituto de Telecomunicações, Instituto Superior Técnico, Lisboa, Portugal, [email protected]
Robert D. Nowak, Electrical and Computer Eng., University of Wisconsin, Madison, WI 53706, [email protected]

Abstract

We consider the problem of inferring the structure of a network from co-occurrence data: observations that indicate which nodes occur in a signaling pathway but do not directly reveal node order within the pathway. This problem is motivated by network inference problems arising in computational biology and communication systems, in which it is difficult or impossible to obtain precise time-ordering information. Without order information, every permutation of the activated nodes leads to a different feasible solution, resulting in combinatorial explosion of the feasible set. However, physical principles underlying most networked systems suggest that not all feasible solutions are equally likely. Intuitively, nodes that co-occur more frequently are probably more closely connected. Building on this intuition, we model path co-occurrences as randomly shuffled samples of a random walk on the network. We derive a computationally efficient network inference algorithm and, via novel concentration inequalities for importance sampling estimators, prove that a polynomial-complexity Monte Carlo version of the algorithm converges with high probability.

1 Introduction

The study of complex networked systems is an emerging field impacting nearly every area of engineering and science, including the important domains of biology, cognitive science, sociology, and telecommunications. Inferring the structure of signaling networks from experimental data precedes any such analysis and is thus a basic and fundamental task. Measurements which directly reveal network structure are often beyond experimental capabilities or are excessively expensive. This paper addresses the problem of inferring the structure of a network from co-occurrence data: observations which indicate the nodes that are activated in each of a set of signaling pathways, but do not directly reveal the order of nodes within each pathway. Co-occurrence observations arise naturally in a number of interesting contexts, including biological and communication networks, and networks of neuronal colonies.

Biological signal transduction networks describe fundamental cell functions and responses to environmental stress [1]. Although it is possible to test for individual, localized interactions between gene pairs, this approach (called genetic epistasis analysis) is expensive and time-consuming. High-throughput measurement techniques such as microarrays have successfully been used to identify the components of different signal transduction pathways [2]. However, microarray data only reflects order information at a very coarse, unreliable level. Developing computational techniques for inferring pathway orders is a largely unexplored research area [3]. A similar problem has been studied in telecommunication networks [4]. In this context, each path corresponds to a transmission between an origin and a destination. The origin and destination are observed, in addition to the activated switches/routers carrying the transmission through the network.
However, due to the geographically distributed nature of the measurement infrastructure and the rapidity with which transmissions are completed, it is not possible to obtain precise ordering information. Another exciting potential application arises in neuroimaging [5, 6]. Functional magnetic resonance imaging provides images of brain activity with high spatial resolution but relatively poor temporal resolution. Treating distinct brain regions as nodes in a functional brain network that co-activate when a subject performs different tasks may lead to a similar network inference problem.

Given a collection of co-occurrences, a feasible network (consistent with the observations) is easily obtained by assigning an order to the elements of each co-occurrence, thereby specifying a path through the hypothesized network. Since any arbitrary order of each co-occurrence leads to a feasible network, the number of feasible solutions is proportional to the number of permutations of all the co-occurrence observations. Consequently, we are faced with combinatorial explosion of the feasible set, and without additional assumptions or side information there is no reason to prefer one particular feasible network over the others. See the supplementary document [7] for further discussion.

Despite the apparent intractability of the problem, physical principles governing most networks suggest that not all feasible solutions are equally plausible. Intuitively, nodes that co-occur more frequently are more likely to be connected in the underlying network. This intuition has been used as a stepping stone by recent approaches proposed in the context of telecommunications [4] and in learning networks of collaborators [8]. However, because of their heuristic nature, these approaches do not produce easily interpreted results and do not readily lend themselves to analysis or to the incorporation of side information. In this paper, we model co-occurrences as randomly permuted samples of a random walk on the underlying network. The random permutation accounts for the lack of observed order. We refer to this process as the shuffled Markov model. In this framework, network inference amounts to maximum likelihood estimation of the parameters governing the random walk (initial state distribution and transition matrix). Direct maximization is intractable due to the highly non-convex log-likelihood function and the exponential feasible set arising from simultaneously considering all permutations of all co-occurrences. Instead, we derive a computationally efficient EM algorithm, treating the random permutations as hidden variables. In this framework the likelihood factorizes with respect to each pathway/observation, so that the computational complexity of the EM algorithm is determined by the E-step, which is only exponential in the length of the longest path. In order to handle networks with long paths, we propose a Monte Carlo E-step based on a simple, linear-complexity importance sampling scheme. Whereas the exact E-step has computational complexity which is exponential in the path length, we prove that a polynomial number of importance samples suffices to retain desirable convergence properties of the EM algorithm with high probability. In this sense, our Monte Carlo EM algorithm breaks the curse of dimensionality using randomness.

It is worth noting that the approach described here differs considerably from that of learning the structure of a directed graphical model or Bayesian network [9, 10]. The aim of graphical modelling is to find a graph corresponding to a factorization of a high-dimensional distribution which predicts the observations well. These probabilistic models do not directly reflect physical structures, and applying such an approach to co-occurrences would ignore physical constraints inherent to the observations: co-occurring vertices must lie along a path in the network.

2 Model Formulation and EM Algorithm

2.1 The Shuffled Markov Model

We model a network as a directed graph $G = (V, E)$, where $V = \{1, \ldots, |V|\}$ is the vertex (node) set and $E \subseteq V^2$ is the set of edges (direct connections between vertices). An observation, $\mathbf{y} \subseteq V$, is a subset of vertices co-activated when a particular stimulus is applied to the network (e.g., the collection of signaling proteins activated in response to an environmental stress). Given a set of $T$ observations, $\mathcal{Y} = \{\mathbf{y}^{(1)}, \ldots, \mathbf{y}^{(T)}\}$, each corresponding to a path, where $\mathbf{y}^{(m)} = \{y^{(m)}_1, \ldots, y^{(m)}_{N_m}\}$, we say that a graph $(V, E)$ is feasible w.r.t. $\mathcal{Y}$ if for each $\mathbf{y}^{(m)} \in \mathcal{Y}$ there is an ordered path $\mathbf{z}^{(m)} = (z^{(m)}_1, \ldots, z^{(m)}_{N_m})$ and a permutation $\tau^{(m)} = (\tau^{(m)}_1, \ldots, \tau^{(m)}_{N_m})$ such that $z^{(m)}_t = y^{(m)}_{\tau_t}$ and $(z^{(m)}_{t-1}, z^{(m)}_t) \in E$ for $t = 2, \ldots, N_m$. The (unobserved) ordered paths, $\mathcal{Z} = \{\mathbf{z}^{(1)}, \ldots, \mathbf{z}^{(T)}\}$, are modeled as $T$ independent samples of a first-order Markov chain with state set $V$. The Markov chain is parameterized by the initial state distribution $\pi$ and the (stochastic) transition matrix $A$. We assume that the support of the transition matrix is determined by the adjacency structure of the graph, i.e., $A_{i,j} > 0 \Leftrightarrow (i, j) \in E$. Each observation $\mathbf{y}^{(m)}$ results from shuffling the elements of $\mathbf{z}^{(m)}$ via an unobserved permutation $\tau^{(m)}$ drawn uniformly from $S_{N_m}$ (the set of all permutations of $N_m$ objects); i.e., $z^{(m)}_t = y^{(m)}_{\tau_t}$ for $t = 1, \ldots, N_m$. All the $\tau^{(m)}$ are assumed mutually independent and independent of all the $\mathbf{z}^{(m)}$.
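A minimal generator for this shuffled Markov model is sketched below; the three-vertex cycle, path lengths, and function name are illustrative choices of ours, not part of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_shuffled_paths(A, pi, lengths):
    """Draw T independent Markov paths z^(m) (initial distribution pi,
    transition matrix A) and return their uniformly shuffled versions y^(m)."""
    paths = []
    for N in lengths:
        z = [rng.choice(len(pi), p=pi)]
        for _ in range(N - 1):
            z.append(rng.choice(len(pi), p=A[z[-1]]))
        paths.append([int(v) for v in rng.permutation(z)])  # hide the order
    return paths

A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [1.0, 0.0, 0.0]])   # a 3-cycle: the only feasible paths follow it
pi = np.array([1.0, 0.0, 0.0])
print(sample_shuffled_paths(A, pi, lengths=[3, 3]))
```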
The aim of graphical modelling is to find a graph corresponding to a factorization of a high-dimensional distribution which predicts the observations well. These probabilistic models do not directly reflect physical structures, and applying such an approach to co-occurrences would ignore physical constraints inherent to the observations: co-occurring vertices must lie along a path in the network. 2 2.1 Model Formulation and EM Algorithm The Shuffled Markov Model We model a network as a directed graph G = (V, E), where V = {1, . . . , |V |} is the vertex (node) set and E ? V 2 is the set of edges (direct connections between vertices). An observation, y ? V , is a subset of vertices co-activated when a particular stimulus is applied to the network (e.g., collection of signaling proteins activated in response to an environmental stress). Given a set of T (m) (m) observations, Y = {y(1), . . . , y(T ) }, each corresponding to a path, where y(m) = {y1 , . . . , yNm }, we say that a graph (V, E) is feasible w.r.t. Y if for each y(m) ? Y there is an ordered path (m) (m) (m) (m) (m) (m) z(m) = (z1 , . . . , zNm ) and a permutation ? (m) = (?1 , . . . , ?Nm ) such that zt = y (m) , and ?t (zt?1 , zt ) ? E, for t = 2, ..., Nm . The (unobserved) ordered paths, Z = {z(1) , ..., z(T ) }, are modelled as T independent samples of a first-order Markov chain with state set V . The Markov chain is parameterized by the initial state distribution ? and the (stochastic) transition matrix A. We assume that the support of the transition matrix is determined by the adjacency structure of the graph; i.e., Ai,j > 0 ? (i, j) ? E. Each observation y(m) results from shuffling the elements of z(m) via an unobserved permutation ? (m) , (m) (m) drawn uniformly from SNm (the set of all permutations of Nm objects); i.e., zt = y (m) , for ?t t = 1, . . . , Nm . All the ? (m) are assumed mutually independent and independent of all the z(m) . Under this model, the log-likelihood of the set of observations Y is ? ? ? ? T X X ?log ? log P [Y|A, ?] = P [y(m) |? , A, ?]? ? log(Nm !)? . (1) m=1 ? ?SNm QN where P [y|? , A, ?] = ?y?1 t=2 Ay?t?1 ,y?t , and network inference consists in computing the maximum likelihood (ML) estimates (AML , ? ML ) = arg maxA,? log P [Y|A, ?]. With the ML estimates in hand, we may determine the most likely permutation for each y(m) and obtain a feasible reconstruction from the ordered paths. In general, log P [Y|A, ?] is a non-concave function of (A, ?), so finding (AML , ? ML ) is not easy. Next, we derive an EM algorithm for this purpose, by treating the permutations as missing data. 2.2 EM Algorithm (m) (m) (m) Let w(m) = (w1 , ..., wNm ) be a binary representation of z(m) , defined by wt (m) (m) ..., wt,|V | ) ? {0, 1}|V | , with (wt,i (m) = 1) ? (zt (m) = (wt,1 , = i); let W = {w(1) , ..., w(T ) }. Let X = {x(1) , . . . , x(T ) } be the binary representation for Y, defined in a similar way: x(m) = (m) (m) (m) (m) (m) (m) (m) (x1 , ..., xNm ), where xt = (xt,1 , ..., xt,|V | ) ? {0, 1}|V | , with (xt,i = 1) ? (yt = i). Finally, let R = {r(1) , . . . , r(T ) } be the collection of permutation matrices corresponding to (m) (m) = t0 ). With this notation in place, the comT = {? (1) , . . . , ? (T ) }; i.e., (rt,t0 = 1) ? (?t plete log-likelihood can be written as log P [X , R|A, ?] = log P [X |R, A, ?] + log P [R], where log P [X |R, A, ?] = T X log P [x(m) |r(m) , A, ?] 
Computing $\{\bar{r}^{(m)}_{1,t'}\}$ and $\{\bar{\omega}^{(m)}_{t',t''}\}$ exactly requires $O(N_m!)$ operations. For large $N_m$ this is a heavy load; in Section 3 we describe a sampling approach for computing approximations to $\bar{r}_{1,t'}$ and $\bar{\omega}_{t',t''}$.

Maximization of $Q(A, \pi; A^k, \pi^k)$ w.r.t. $A$ and $\pi$, under the normalization constraints, leads to the M-step:

$$A^{k+1}_{i,j} = \frac{\sum_{m=1}^{T} \sum_{t',t''=1}^{N_m} \bar{\omega}^{(m)}_{t',t''}\, x^{(m)}_{t'',i}\, x^{(m)}_{t',j}}{\sum_{j=1}^{|V|} \sum_{m=1}^{T} \sum_{t',t''=1}^{N_m} \bar{\omega}^{(m)}_{t',t''}\, x^{(m)}_{t'',i}\, x^{(m)}_{t',j}} \quad \text{and} \quad \pi^{k+1}_i = \frac{\sum_{m=1}^{T} \sum_{t'=1}^{N_m} \bar{r}^{(m)}_{1,t'}\, x^{(m)}_{t',i}}{\sum_{i=1}^{|V|} \sum_{m=1}^{T} \sum_{t'=1}^{N_m} \bar{r}^{(m)}_{1,t'}\, x^{(m)}_{t',i}}. \qquad (5)$$

Standard convergence results for the EM algorithm due to Boyles and Wu [11, 12] guarantee that the sequence $\{(A^k, \pi^k)\}$ converges monotonically to a local maximum of the likelihood.
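The corresponding M-step (5) then aggregates these expectations into normalized transition counts; the sketch below is consistent with the E-step snippet above, and the small-constant guard against empty rows is our own addition.

```python
import numpy as np

def m_step(ys, r_bars, w_bars, n_vertices):
    """M-step of Eq. (5): re-estimate (A, pi) from the E-step expectations.
    ys: list of co-occurrences; r_bars[m][t'], w_bars[m][t', t''] as returned
    by exact_e_step for each observation."""
    A_num = np.zeros((n_vertices, n_vertices))
    pi_num = np.zeros(n_vertices)
    for y, r_bar, w_bar in zip(ys, r_bars, w_bars):
        for t1, i in enumerate(y):            # i = y_{t''}: source vertex
            for t0, j in enumerate(y):        # j = y_{t'}: destination vertex
                A_num[i, j] += w_bar[t0, t1]
            pi_num[i] += r_bar[t1]
    A = A_num / np.maximum(A_num.sum(axis=1, keepdims=True), 1e-12)
    pi = pi_num / pi_num.sum()
    return A, pi

# Usage with the exact E-step above: per observation compute
# r_bar, w_bar = exact_e_step(y, A, pi), then A, pi = m_step(...), and iterate.
```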
2.3 Handling Known Endpoints

In some applications, (one or both of) the endpoints of each path are known and only the internal nodes are shuffled. For example, in telecommunications problems, the origin and destination of each transmission are known, but not the network connectivity. In estimating biological signal transduction pathways, a physical stimulus (e.g., hypotonic shock) causes a sequence of protein interactions, resulting in another observable physical response (e.g., a change in cell wall structure); in this case, the stimulus and response act as fixed endpoints, and the goal is to infer the order of the sequence of protein interactions. Knowledge of the endpoints of each path imposes the constraints $r_{1,1}^{(m)} = 1$ and $r_{N_m,N_m}^{(m)} = 1$. Under the first constraint, estimates of the initial state probabilities are simply given by $\pi_i = \frac{1}{T} \sum_{m=1}^{T} x_{1,i}^{(m)}$. Thus, EM only needs to be used to estimate $A$. In this setup, the E-step has a similar form as (4) but with sums over $r$ replaced by sums over permutation matrices satisfying $r_{1,1} = 1$ and $r_{N,N} = 1$. The M-step update for $A^{k+1}$ remains unchanged.

3 Large Scale Inference via Importance Sampling

For long paths, the combinatorial nature of the exact E-step (summing over all permutations of each sequence in (3) and (4)) may render exact computation intractable. This section presents a Monte Carlo importance sampling (see, e.g., [13]) version of the E-step, along with finite sample bounds guaranteeing that a polynomial complexity Monte Carlo EM algorithm retains desirable convergence properties of the EM algorithm; i.e., monotonic convergence to a local maximum.

3.1 Monte Carlo E-Step by Importance Sampling

To lighten notation in this section we drop the superscripts from $(A^k, \pi^k)$, using simply $(A, \pi)$ for the current parameter estimates. Moreover, since the statistics $\bar{\tau}_{t',t''}^{(m)}$ and $\bar{r}_{1,t'}^{(m)}$ depend only on the $m$th co-activation observation, $y^{(m)}$, we focus on a particular length-$N$ path observation $y = (y_1, y_2, \dots, y_N)$ and drop the superscript $(m)$.

A naïve Monte Carlo approximation would be based on random permutations sampled from the uniform distribution on $S_N$. However, the reason we resort to approximation techniques in the first place is that $S_N$ is large, but typically only a small fraction of its elements have non-negligible posterior probability, $P[\tau \mid y, A, \pi]$. Although we would ideally sample directly from the posterior, this would require determining its value for all $N!$ permutations. Instead, we propose the following sequential scheme for sampling a permutation using the current parameter estimates, $(A, \pi)$. To ensure the same element is not sampled twice we introduce a vector of binary flags, $f = (f_1, f_2, \dots, f_{|V|}) \in \{0,1\}^{|V|}$. Given a probability distribution $p = (p_1, p_2, \dots, p_{|V|})$ on the vertex set, $V$, denote by $p|_f$ the restriction of $p$ to those elements $i \in V$ for which $f_i = 1$; i.e.,

  $(p|_f)_i = \dfrac{p_i f_i}{\sum_{j=1}^{|V|} p_j f_j}$, for $i = 1, 2, \dots, |V|$.   (6)

Our sampling scheme proceeds as follows (a concrete code sketch is given at the end of this section):

Step 1: Initialize $f$ so that $f_i = 1$ if $y_t = i$ for some $t = 1, \dots, N$, and $f_i = 0$ otherwise. Sample an element $v$ from $V$ according to the distribution $\pi|_f$ on $V$. Find $t$ such that $y_t = v$. Set $\tau_1 = t$. Set $f_v = 0$ to prevent $y_t$ from being sampled again (ensuring $\tau$ is a permutation). Set $i = 2$.

Step 2: Let $A_v$ denote the $v$th row of the transition matrix. Sample an element $v'$ from $V$ according to the distribution $A_v|_f$ on $V$. Find $t$ such that $y_t = v'$. Set $\tau_i = t$. Set $f_{v'} = 0$.

Step 3: While $i < N$, update $v \leftarrow v'$ and $i \leftarrow i + 1$ and repeat Step 2; otherwise, stop.

Repeating this sampling procedure $L$ times yields a collection of iid permutations $\tau^1, \tau^2, \dots, \tau^L$, where the superscript now identifies the sample number; the corresponding permutation matrices are $r^1, r^2, \dots, r^L$. Samples generated according to the scheme described above are drawn from a distribution $R[\tau \mid x, A, \pi]$ on $S_N$ which is different from the posterior $P[\tau \mid x, A, \pi]$. Importance sample estimates correct for this disparity and are given by the expressions

  $\hat{r}_{1,t'} = \dfrac{\sum_{\ell=1}^{L} u_\ell\, r_{1,t'}^{\ell}}{\sum_{\ell=1}^{L} u_\ell}$  and  $\hat{\tau}_{t',t''} = \dfrac{\sum_{\ell=1}^{L} u_\ell \sum_{t=2}^{N} r_{t,t'}^{\ell}\, r_{t-1,t''}^{\ell}}{\sum_{\ell=1}^{L} u_\ell}$,   (7)

where the correction factor (or weight) for sample $r^\ell$ is given by
  $u_\ell = \dfrac{P[r^\ell \mid x, A, \pi]}{R[r^\ell \mid x, A, \pi]} = \dfrac{P[\tau^\ell \mid y, A, \pi]}{R[\tau^\ell \mid y, A, \pi]} = \prod_{t=2}^{N} \sum_{t'=t}^{N} A_{y_{\tau^\ell_{t-1}},\, y_{\tau^\ell_{t'}}}$.   (8)

A detailed derivation of the exact form of the induced distribution, $R$, and the correction factor, $u_\ell$, based on the sequential nature of the sampling scheme, along with further discussion and comparison with alternative sampling schemes, can be found in the supplementary document [7]. In fact, the terms in the product (8) are readily available as a byproduct of Step 2 (the denominator of $A_v|_f$).

3.2 Monotonicity and Convergence

Standard EM convergence results directly apply when the exact E-step is used [11, 12]. Let $\theta^k = (A^k, \pi^k)$. By choosing $\theta^{k+1}$ according to (5) we have $\theta^{k+1} = \arg\max_\theta Q(\theta; \theta^k)$, and the monotonicity property, $Q(\theta^{k+1}; \theta^k) \ge Q(\theta^k; \theta^k)$, is satisfied. Together with the fact that the marginal log-likelihood (1) is continuous in $\theta$ and bounded above, the monotonicity property guarantees that the exact EM iterates converge monotonically to a local maximum of $\log P[\mathcal{Y} \mid \theta]$.

When the Monte Carlo E-step is used, we no longer have monotonicity since now the M-step solves $\hat{\theta}^{k+1} = \arg\max_\theta \hat{Q}(\theta; \hat{\theta}^k)$, where $\hat{Q}$ is defined analogously to $Q$ but with $\bar{\tau}_{t',t''}^{(m)}$ and $\bar{r}_{1,t'}^{(m)}$ replaced by $\hat{\tau}_{t',t''}^{(m)}$ and $\hat{r}_{1,t'}^{(m)}$; for monotonicity we need $Q(\hat{\theta}^{k+1}; \hat{\theta}^k) \ge Q(\hat{\theta}^k; \hat{\theta}^k)$. To assure that the Monte Carlo EM algorithm (MCEM) converges, the number of importance samples, $L$, must be chosen carefully so that $\hat{Q}$ approximates $Q$ well enough; otherwise the MCEM may be swamped with error.

Recently, Caffo et al. [14] have proposed a method, based on central limit theorem-like arguments, for automatically adapting the number of Monte Carlo samples used at each EM iteration. They guarantee what we refer to as an $(\epsilon, \delta)$-probably approximately monotonic (PAM) update, stating that $Q(\hat{\theta}^{k+1}; \hat{\theta}^k) - Q(\hat{\theta}^k; \hat{\theta}^k) \ge -\epsilon$, with probability at least $1 - \delta$.

Rather than resorting to asymptotic approximations, we take advantage of the specific form of $Q$ in our problem to obtain the finite-sample PAM result below. Because $\hat{Q}(\hat{\theta}^{k+1}; \hat{\theta}^k)$ involves terms $\log \hat{A}_{i,j}^k$ and $\log \hat{\pi}_i^k$, in practice we bound $\hat{A}_{i,j}^k$ away from zero to ensure that $\hat{Q}$ does not blow up. Specifically, we assume a small positive constant $\phi_{\min}$ so that $\hat{A}_{i,j}^k \ge \phi_{\min}$ and $\hat{\pi}_i^k \ge \phi_{\min}$.

Theorem 1. Let $\epsilon, \delta > 0$ be given. There exist finite constants $b_m > 0$, independent of $N_m$, so that if

  $L_m = \dfrac{2\, b_m^2\, T^2\, N_m^4\, |\log \phi_{\min}|^2}{\epsilon^2} \log\Big( \dfrac{2 N_m^2}{1 - (1 - \delta)^{1/T}} \Big)$   (9)

importance samples are used for the $m$th observation, then $Q(\hat{\theta}^{k+1}; \hat{\theta}^k) - Q(\hat{\theta}^k; \hat{\theta}^k) \ge -\epsilon$, with probability greater than $1 - \delta$.

The proof involves two key steps. First, we derive finite sample concentration-style bounds for the importance sample estimates showing, e.g., that $\hat{\tau}_{t',t''}^{(m)}$ converges to $\bar{\tau}_{t',t''}^{(m)}$ at a rate which is exponential in the number of importance samples used. These bounds are based on rather novel concentration inequalities for importance sampling estimators, which may be of interest in their own right (see the supplementary document [7] for details). Then, accounting for the explicit form of $Q$ in our problem, the result follows from application of the union bound and the assumptions that $\hat{A}_{i,j}^k, \hat{\pi}_i^k \ge \phi_{\min}$. In fact, by making a slightly stronger assumption it can be shown that the MCEM update is probably monotonic (i.e., $(0, \delta)$-PAM, not approximately monotonic) if $L'_m$ importance samples are used for the $m$th observation, where $L'_m$ also depends polynomially on $N_m$ and $T$. See the supplementary document [7] for further discussion and for the full proof of Theorem 1.

Recall that exact E-step computation requires $N_m!$ operations for the $m$th observation (enumerating all permutations). The bound above stipulates that the number of importance samples required for a PAM update is on the order of $N_m^4 \log N_m^2$. Generating one importance sample using the sequential procedure described above requires $N_m$ operations. In contrast to the (exponential complexity) exact EM algorithm, this clearly demonstrates that the MCEM converges with high probability while only having polynomial computational complexity, and, in this sense, the MCEM meaningfully breaks the curse of dimensionality by using randomness to preserve the monotonic convergence property.
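As promised in Section 3.1, here is a sketch (ours; names are illustrative) of the sequential sampling scheme together with the weight of (8). Each factor of the weight is exactly the normalizer already computed when restricting $A_v$ to the still-available vertices; the Step 1 normalizer is common to all samples and cancels in the self-normalized estimates (7). The sketch assumes strictly positive $A$ and $\pi$ (the $\phi_{\min}$ assumption above).

    import numpy as np

    def sample_permutation(y, A, pi, rng):
        """Draw one permutation tau of observation y via Steps 1-3,
        returning tau and its importance weight u from Eq. (8)."""
        avail = list(range(len(y)))               # positions of y not yet placed
        # Step 1: first path element, pi restricted to the observed vertices.
        p = np.array([pi[y[t]] for t in avail])
        j = rng.choice(len(avail), p=p / p.sum()) # normalizer common to all samples
        tau, u = [avail.pop(j)], 1.0
        # Steps 2-3: extend the path with restricted transition rows.
        while avail:
            v = y[tau[-1]]
            w = np.array([A[v, y[t]] for t in avail])
            norm = w.sum()                        # denominator of A_v|f: one factor of u
            u *= norm
            j = rng.choice(len(avail), p=w / norm)
            tau.append(avail.pop(j))
        return tau, u

    def importance_e_step(y, A, pi, L, rng):
        """Self-normalized estimates r_hat, tau_hat of Eq. (7) from L samples."""
        N = len(y)
        r_hat, tau_hat, usum = np.zeros(N), np.zeros((N, N)), 0.0
        for _ in range(L):
            tau, u = sample_permutation(y, A, pi, rng)
            r_hat[tau[0]] += u
            for t in range(1, N):
                tau_hat[tau[t], tau[t - 1]] += u
            usum += u
        return r_hat / usum, tau_hat / usum

For example, rng = np.random.default_rng(0); r_hat, tau_hat = importance_e_step(y, A, pi, L=500, rng=rng) gives drop-in replacements for the exact statistics in the M-step (5).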
4 Experimental Results

The performance of our algorithm for network inference from co-occurrences (NICO, pronounced "nee-koh") has been evaluated on both simulated data and on a biological data set. In these experiments, network structure is inferred by first executing the EM algorithm to infer the parameters $(A, \pi)$ of a Markov chain. Then, inserting edges in the inferred graph based on the most likely order of each path according to $(A, \pi)$ ensures the resulting graph is feasible with respect to the observations. Because the EM algorithm is only guaranteed to converge to a local maximum, we rerun the algorithm from multiple random initializations and choose the most likely of these solutions.

To gauge the performance of our algorithm we use the edge symmetric difference error: the total number of false positives (edges in the inferred network which do not exist in the true network) plus the number of false negatives (edges in the true network not appearing in the inferred network); a small sketch of this count appears after Figure 1.

We simulate co-occurrence observations in the following fashion. A random graph on 50 vertices is sampled. Disjoint sets of vertices are randomly chosen as path origins and destinations, paths are generated between each origin-destination pair using the shortest path algorithm with either unit weight per edge ("shortest path") or a random weight on each edge ("random routing"), and then co-occurrence observations are formed from each path. We keep the number of origins fixed at 5 and vary the number of destinations between 5 and 40 to see how the number of observations affects performance. NICO performance is compared against the frequency method (FM) described in [4].

Figure 1 plots the edge error for synthetic data generated using (a) shortest path routing, and (b) random routing. Each curve is the average performance over 100 different network and path realizations.

[Figure 1: Edge symmetric differences between inferred networks and the network one would obtain using co-occurrence measurements arranged in the correct order, as the number of destinations varies from 5 to 40; panel (a) shortest path routes, panel (b) random routes. Curves: Freq. Method (Sparsest), Freq. Method (Best), NICO (ML). Performance is averaged over 100 different network realizations. For each configuration 10 NICO and FM solutions are obtained via different initializations. We then choose the NICO solution yielding the largest likelihood, and compare with both the sparsest (fewest edges) and clairvoyant best (lowest error) FM solution.]
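For reference, the evaluation metric defined above is straightforward to compute on edge sets; the following sketch (ours, with illustrative names) counts it directly.

    def edge_symmetric_difference(true_edges, inferred_edges):
        """False positives (inferred but not true) plus false negatives
        (true but not inferred); both arguments are sets of (i, j) pairs."""
        true_edges, inferred_edges = set(true_edges), set(inferred_edges)
        false_pos = len(inferred_edges - true_edges)
        false_neg = len(true_edges - inferred_edges)
        return false_pos + false_neg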
For each network/path realization, the EM algorithm is executed with 10 random initializations. Exact E-step calculation is used for observations with $N_m \le 12$, and importance sampling is used for longer paths. The longest observation in our data has $N_m = 19$. The FM uses simple pairwise frequencies of co-occurrence to assign an order independently to each path observation. Of the 10 NICO solutions (different random initializations), we use the one based on parameter estimates yielding the highest likelihood score, which also always gives the best performance. Because it is a heuristic, the FM does not provide a similar mechanism for ranking solutions from different initializations. We plot FM performance for two schemes: one based on choosing the sparsest FM solution (the one with the fewest edges), and one based on clairvoyantly choosing the FM solution with lowest error. NICO consistently outperforms even the clairvoyant best FM solution.

Our method has also been applied to infer the stress-activated protein kinase (SAPK)/Jun N-terminal kinase (JNK) and NF-κB signal transduction pathways¹ (biological networks). The clustering procedure described in [2] is applied to microarray data in order to identify 18 co-occurrences arising from different environmental stresses or growth factors (path source) and terminating in the production of SAPK/JNK or NF-κB proteins. The reconstructed network (combined SAPK/JNK and NF-κB signal transduction pathways) is depicted in Figure 2. This structure agrees with the signalling pathways identified using traditional experimental techniques which test individually for each possible edge (e.g., "MAPK" and "NF-κB Signaling" on http://www.cellsignal.com).

5 Conclusion

This paper describes a probabilistic model and statistical inference procedure for inferring network structure from incomplete "co-occurrence" measurements. Co-occurrences are modelled as samples of a first-order Markov chain subjected to a random permutation. We describe exact and Monte Carlo EM algorithms for calculating maximum likelihood estimates of the Markov chain parameters (initial state distribution and transition matrix), treating the random permutations as hidden variables. Standard results for the EM algorithm guarantee convergence to a local maximum. Although our exact EM algorithm has exponential computational complexity, we provide finite-sample bounds guaranteeing convergence of the Monte Carlo EM variant to a local maximum with high probability and with only polynomial complexity. Our algorithm is easily extended to compute maximum a posteriori estimates, applying a Dirichlet prior to the initial state distribution and to each row of the Markov transition matrix.

¹ NF-κB proteins control genes regulating a broad range of biological processes including innate and adaptive immunity, inflammation and B cell development. The NF-κB pathway is a collection of paths activated by various environmental stresses and growth factors, and terminating in the production of NF-κB.

[Figure 2: Inferred topology of the combined SAPK/JNK and NF-κB signal transduction pathways; node labels include, e.g., UV, FAS, TNF, IL1, dsRNA, TRAF6, TAK1, IKK, MEKK, MKK, JNK and NF-κB. Co-occurrences are obtained from gene expression data via the clustering algorithm described in [2], and then the network is inferred using NICO.]

Acknowledgments

The authors of this paper would like to thank D. Zhu and A.O.
Hero for providing the data and collaborating on the biological network experiment reported in Section 4. This work was supported in part by the Portuguese Foundation for Science and Technology grant POSC/EEA-SRI/61924/2004, the Directorate of National Intelligence, and National Science Foundation grants CCF-0353079 and CCR-0350213.

References

[1] E. Klipp, R. Herwig, A. Kowald, C. Wierling, and H. Lehrach. Systems Biology in Practice: Concepts, Implementation and Application. John Wiley & Sons, 2005.
[2] D. Zhu, A. O. Hero, H. Cheng, R. Khanna, and A. Swaroop. Network constrained clustering for gene microarray data. Bioinformatics, 21(21):4014–4020, 2005.
[3] Y. Liu and H. Zhao. A computational approach for ordering signal transduction pathway components from genomics and proteomics data. BMC Bioinformatics, 5(158), October 2004.
[4] M. G. Rabbat, J. R. Treichler, S. L. Wood, and M. G. Larimore. Understanding the topology of a telephone network via internally-sensed network tomography. In Proc. IEEE International Conference on Acoustics, Speech, and Signal Processing, 2005.
[5] O. Sporns and G. Tononi. Classes of network connectivity and dynamics. Complexity, 7(1):28–38, 2002.
[6] O. Sporns, D. R. Chialvo, M. Kaiser, and C. C. Hilgetag. Organization, development and function of complex brain networks. Trends in Cognitive Science, 8(9), 2004.
[7] M. G. Rabbat, M. A. T. Figueiredo, and R. D. Nowak. Supplement to "Inferring network structure from co-occurrences". Technical report, University of Wisconsin-Madison, October 2006.
[8] J. Kubica, A. Moore, D. Cohn, and J. Schneider. cGraph: A fast graph-based method for link analysis and queries. In Proc. IJCAI Text-Mining and Link-Analysis Workshop, Acapulco, Mexico, August 2003.
[9] D. Heckerman, D. Geiger, and D. Chickering. Learning Bayesian networks: The combination of knowledge and statistical data. Machine Learning, 20:197–243, 1995.
[10] N. Friedman and D. Koller. Being Bayesian about Bayesian network structure: A Bayesian approach to structure discovery in Bayesian networks. Machine Learning, 50(1–2):95–125, 2003.
[11] R. A. Boyles. On the convergence of the EM algorithm. J. Royal Statistical Society B, 45(1):47–50, 1983.
[12] C. F. J. Wu. On the convergence properties of the EM algorithm. Ann. of Statistics, 11(1):95–103, 1983.
[13] C. Robert and G. Casella. Monte Carlo Statistical Methods. Springer Verlag, New York, 1999.
[14] B. S. Caffo, W. Jank, and G. L. Jones. Ascent-based Monte Carlo EM. J. Royal Statistical Society B, 67(2):235–252, 2005.
Tighter PAC-Bayes Bounds

Amiran Ambroladze (Dep. of Mathematics, Lund University/LTH, Box 118, S-221 00 Lund, SWEDEN), Emilio Parrado-Hernández (Dep. of Signal Processing and Communications, University Carlos III of Madrid, Leganés, 28911, SPAIN), John Shawe-Taylor (Dep. of Computer Science, University College London, Gower Street, London WC1E 6BT, UK)
[email protected] [email protected] [email protected]

Abstract

This paper proposes a PAC-Bayes bound to measure the performance of Support Vector Machine (SVM) classifiers. The bound is based on learning a prior over the distribution of classifiers with a part of the training samples. Experimental work shows that this bound is tighter than the original PAC-Bayes bound, resulting in an enhancement of the predictive capabilities of the PAC-Bayes bound. In addition, it is shown that using this bound to estimate the hyperparameters of the classifier compares favourably with cross-validation in terms of accuracy of the model, while saving considerable computational effort.

1 Introduction

Support vector machines (SVMs) implement linear classifiers in a high-dimensional feature space using the kernel trick to enable a dual representation and efficient computation. The danger of overfitting in such high-dimensional spaces is countered by maximising the margin of the classifier on the training examples. For this reason there has been considerable interest in bounds on the generalisation in terms of the margin. Early bounds relied on covering number computations [7], while later bounds have considered Rademacher complexity. The tightest bounds for practical applications appear to be the PAC-Bayes bounds [4, 5]. In particular, the form given in [3] is especially attractive for margin classifiers such as SVMs. PAC-Bayesian bounds are also present in other machine learning models such as Gaussian processes [6].

The aim of this paper is to consider a refinement of the PAC-Bayes approach and investigate whether it can improve on the original PAC-Bayes bound while upholding its capability to deliver reliable model selection. The standard PAC-Bayes bound uses a Gaussian prior centred at the origin in weight space. The key to the new bound is to use part of the training set to compute a more informative prior and then compute the bound on the remainder of the examples relative to this prior. The bounds are tested experimentally in several classification tasks, including model selection, on common benchmark datasets.

The rest of the document is organised as follows. Section 2 briefly reviews the PAC-Bayes bound for SVMs obtained in [3]. The new bound obtained by refining the prior is presented in Section 3. The experimental work, included in Section 4, compares the tightness of the new bound with that of the original one and assesses its usability for model selection. Finally, the main conclusions of this work are outlined in Section 5.

2 PAC-Bayes Bound

This section is devoted to a brief review of the PAC-Bayes Bound Theorem of [3]. Let us consider a distribution $D$ of patterns $x$ lying in a certain input space $\mathcal{X}$, with their corresponding output labels $y$, $y \in \{-1, 1\}$. In addition, let us also consider a distribution $Q$ over the classifiers $c$. For every classifier $c$, the following two error measures are defined:

Definition (True error). The true error $c_D$ of a classifier $c$ is defined as the probability of misclassifying a pair pattern-label $(x, y)$ selected at random from $D$:
  $c_D \equiv \Pr_{(x,y) \sim D}\big(c(x) \neq y\big)$

Definition (Empirical error). The empirical error $\hat{c}_S$ of a classifier $c$ on a sample $S$ of size $m$ is defined as the rate of errors on the set $S$:

  $\hat{c}_S \equiv \Pr_{(x,y) \sim S}\big(c(x) \neq y\big) = \frac{1}{m} \sum_{i=1}^{m} I\big(c(x_i) \neq y_i\big)$,

where $I(\cdot)$ is a function equal to 1 if the argument is true and equal to 0 if the argument is false.

Now we can define two error measures on the distribution of classifiers: the true error, $Q_D \equiv E_{c \sim Q}\, c_D$, as the probability of misclassifying an instance $x$ chosen from $D$ with a classifier $c$ chosen according to $Q$; and the empirical error, $\hat{Q}_S \equiv E_{c \sim Q}\, \hat{c}_S$, as the probability of a classifier $c$ chosen according to $Q$ misclassifying an instance $x$ chosen from the sample $S$. For these two quantities we can derive the PAC-Bayes bound on the true error of the distribution of classifiers:

Theorem 2.1 (PAC-Bayes Bound). For all prior distributions $P(c)$ over the classifiers $c$, and for any $\delta \in (0, 1]$,

  $\Pr_{S \sim D^m}\Big( \forall Q(c):\ \mathrm{KL}(\hat{Q}_S \| Q_D) \le \dfrac{\mathrm{KL}(Q(c) \| P(c)) + \ln\frac{m+1}{\delta}}{m} \Big) \ge 1 - \delta$,

where KL denotes the Kullback-Leibler divergence, which for Bernoulli variables with parameters $q$ and $p$ reads $\mathrm{KL}(q \| p) = q \ln\frac{q}{p} + (1-q) \ln\frac{1-q}{1-p}$, and $\mathrm{KL}(Q(c) \| P(c)) = E_{c \sim Q} \ln\frac{Q(c)}{P(c)}$.

The proof of the theorem can be found in [3]. This bound can be particularised for the case of linear classifiers in the following way. The $m$ training patterns define a linear classifier that can be represented by the following equation¹:

  $c(x) = \mathrm{sign}\big(w^{\mathsf{T}} \phi(x)\big)$   (1)

where $\phi(x)$ is a nonlinear projection to a certain feature space where the linear classification actually takes place, and $w$ is a vector from that feature space that determines the separating plane. For any vector $w$ we can define a stochastic classifier in the following way: we choose the distribution $Q = Q(w, \mu)$ to be a spherical Gaussian with identity covariance matrix centred on the direction given by $w$, at a distance $\mu$ from the origin. Moreover, we can choose the prior $P(c)$ to be a spherical Gaussian with identity covariance matrix centred on the origin. Then, for classifiers of the form in equation (1), performance can be bounded by:

¹ We are considering here unbiased classifiers, i.e., with $b = 0$.

Corollary 2.2 (PAC-Bayes Bound for margin classifiers [3]). For all distributions $D$, for all classifiers given by $w$ and $\mu > 0$, for all $\delta \in (0, 1]$, we have

  $\Pr\Big( \mathrm{KL}(\hat{Q}_S(w, \mu) \| Q_D(w, \mu)) \le \dfrac{\frac{\mu^2}{2} + \ln\frac{m+1}{\delta}}{m} \Big) \ge 1 - \delta$.

It can be shown (see [3]) that

  $\hat{Q}_S(w, \mu) = E_m\big[\tilde{F}(\mu\, \gamma(x, y))\big]$   (2)

where $E_m$ is the average over the $m$ training examples, $\gamma(x, y)$ is the normalised margin of the training patterns,

  $\gamma(x, y) = \dfrac{y\, w^{\mathsf{T}} \phi(x)}{\|\phi(x)\| \|w\|}$,   (3)

and $\tilde{F} = 1 - F$, where $F$ is the cumulative normal distribution,

  $F(x) = \int_{-\infty}^{x} \frac{1}{\sqrt{2\pi}}\, e^{-t^2/2}\, dt$.   (4)

Note that the SVM is a thresholded linear classifier of the form (1), computed by means of the kernel trick [2]. The generalisation error of such a classifier can be bounded by at most twice the true (stochastic) error $Q_D(w, \mu)$ in Corollary 2.2 (see [4]):

  $\Pr_{(x,y) \sim D}\big( \mathrm{sign}(w^{\mathsf{T}} \phi(x)) \neq y \big) \le 2\, Q_D(w, \mu)$ for all $\mu$.
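It is worth seeing how the bound of Corollary 2.2 is evaluated in practice. The sketch below (ours; names are illustrative) computes the stochastic training error $\hat{Q}_S$ from the normalised margins via (2)-(4) and then inverts the binary KL divergence numerically; the bisection is a standard device, not something prescribed by the paper.

    import numpy as np
    from scipy.stats import norm

    def stochastic_error(margins, mu):
        """Q_S(w, mu) of Eq. (2): mean of F_tilde(mu * gamma) over the sample,
        where margins holds the normalised margins gamma(x, y) of Eq. (3)."""
        return np.mean(norm.sf(mu * np.asarray(margins)))   # sf = 1 - cdf

    def kl_binary(q, p):
        """KL divergence between Bernoulli(q) and Bernoulli(p)."""
        eps = 1e-12
        q, p = min(max(q, eps), 1 - eps), min(max(p, eps), 1 - eps)
        return q * np.log(q / p) + (1 - q) * np.log((1 - q) / (1 - p))

    def pac_bayes_bound(margins, mu, m, delta=0.05):
        """Largest p >= Q_S with KL(Q_S || p) <= (mu^2/2 + ln((m+1)/delta))/m,
        found by bisection: the bound on Q_D from Corollary 2.2."""
        q_s = stochastic_error(margins, mu)
        rhs = (0.5 * mu ** 2 + np.log((m + 1) / delta)) / m
        lo, hi = q_s, 1.0
        for _ in range(100):                 # KL is monotone in p on [q_s, 1)
            mid = 0.5 * (lo + hi)
            if kl_binary(q_s, mid) > rhs:
                hi = mid
            else:
                lo = mid
        return lo

Scanning a grid of values of $\mu$ and keeping the tightest value mirrors the linear search over $\mu$ described in Section 3.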
3 Choosing a prior for the PAC-Bayes Bound

Our first contribution is motivated by the fact that the PAC-Bayes bound allows us to choose the prior distribution, $P(c)$. In the standard application of the bound this is chosen to be a Gaussian centred at the origin. We now consider learning a different prior based on training an SVM on a subset $R$ of the training set comprising $r$ training patterns and labels. In the experiments this is taken as a random subset, but for simplicity of the presentation we will assume these to be the last $r$ examples $\{x_k, y_k\}_{k=m-r+1}^{m}$ in the description below.

With these $r$ examples we can determine an SVM classifier, $w_r$, and form a prior $P(w|w_r)$ consisting of a Gaussian distribution with identity covariance matrix centred on $w_r$. The introduction of this prior $P(w|w_r)$ in Theorem 2.1 results in the following new bound.

Corollary 3.1 (Single Prior based PAC-Bayes Bound for margin classifiers). Let us consider a prior on the distribution of classifiers consisting of a spherical Gaussian with identity covariance centred along the direction given by $w_r$, at a distance $\eta$ from the origin. Then, for all distributions $D$, for all classifiers $w_m$ and $\mu > 0$, for all $\delta \in (0, 1]$, we have

  $\Pr_{S \sim D}\Big( \mathrm{KL}(\hat{Q}_{S \setminus R}(w_m, \mu) \| Q_D(w_m, \mu)) \le \dfrac{\frac{\|\eta w_r - \mu w_m\|^2}{2} + \ln\frac{m-r+1}{\delta}}{m - r} \Big) \ge 1 - \delta$

where $\hat{Q}_{S \setminus R}$ is a stochastic measure of the error of the classifier on the $m - r$ samples not used to learn the prior. This stochastic error is computed as indicated in equation (2), averaged over $S \setminus R$.

Proof. Since we separate $r$ instances to learn the prior, the actual size of the training set to which we apply the bound is $m - r$. In addition, the stochastic error must be computed only on the instances not used to learn the prior, i.e., the subset $S \setminus R$. The KL divergence between prior and posterior is computed as follows:

  $\mathrm{KL}(Q(w) \| P(w)) = E_{w \sim Q} \ln \dfrac{Q(w)}{P(w)}$
  $= E_{w \sim Q} \ln \dfrac{\exp\big(-\frac{1}{2}(w - \mu w_m)^{\mathsf{T}}(w - \mu w_m)\big)}{\exp\big(-\frac{1}{2}(w - \eta w_r)^{\mathsf{T}}(w - \eta w_r)\big)}$
  $= E_{w \sim Q} \big[ -\tfrac{1}{2}(w - \mu w_m)^{\mathsf{T}}(w - \mu w_m) + \tfrac{1}{2}(w - \eta w_r)^{\mathsf{T}}(w - \eta w_r) \big]$
  $= E_{w \sim Q} \big[ \mu w_m^{\mathsf{T}} w - \tfrac{1}{2}\mu^2 w_m^{\mathsf{T}} w_m - \eta w_r^{\mathsf{T}} w + \tfrac{1}{2}\eta^2 w_r^{\mathsf{T}} w_r \big]$.

Taking expectations using $E_{w \sim Q}\, w = \mu w_m$ we arrive at

  $\mathrm{KL}(Q(w) \| P(w)) = \tfrac{1}{2} \|\mu w_m - \eta w_r\|^2$.

Intuitively, if the selection of the prior is appropriate, the bound can be tighter than the one given in Corollary 2.2 when applied to the SVM weight vector trained on the whole training set. It is perhaps worth stressing that the bound holds for all $w_m$ and so can be applied to the SVM trained on the whole set. This might at first appear as "cheating", but the critical point is that the bound is evaluated on the set $S \setminus R$, which is not involved in generating the prior. The experimental work illustrates how in fact this bound can be tighter than the standard PAC-Bayes bound. Moreover, the selection of the prior may be further refined in exchange for a very small increase in the penalty term. This can be achieved with the application of the following result.

Theorem 3.2 (Bound for several priors). Let $\{P_j(c)\}_{j=1}^{J}$ be a set of possible priors that can be selected with positive weights $\{\xi_j\}_{j=1}^{J}$ so that $\sum_{j=1}^{J} \xi_j = 1$. Then, for all priors $P(c) \in \{P_j(c)\}_{j=1}^{J}$, for all posterior distributions $Q(c)$, for all $\delta \in (0, 1]$,

  $\Pr_{S \sim D^m}\Big( \forall Q(c), \forall j:\ \mathrm{KL}(\hat{Q}_S \| Q_D) \le \dfrac{\mathrm{KL}(Q(c) \| P_j(c)) + \ln\frac{m+1}{\delta} + \ln\frac{1}{\xi_j}}{m} \Big) \ge 1 - \delta$.

Proof. The bound in Theorem 2.1 can be particularised for a certain $P_j(c)$ with associated weight $\xi_j$ and with confidence $\delta \xi_j$:

  $\Pr_{S \sim D^m}\Big( \exists Q(c):\ \mathrm{KL}(\hat{Q}_S \| Q_D) > \dfrac{\mathrm{KL}(Q(c) \| P_j(c)) + \ln\frac{m+1}{\delta \xi_j}}{m} \Big) < \delta \xi_j$.

Now let us combine the bounds for all the priors $\{P_j(c)\}_{j=1}^{J}$ with the union operation (we use the fact that $P(a \cup b) \le P(a) + P(b)$):

  $\Pr_{S \sim D^m}\Big( \exists Q(c), \exists P(c) \in \{P_j(c)\}_{j=1}^{J}:\ \mathrm{KL}(\hat{Q}_S \| Q_D) > \dfrac{\mathrm{KL}(Q(c) \| P_j(c)) + \ln\frac{m+1}{\delta} + \ln\frac{1}{\xi_j}}{m} \Big) < \delta$.   (5)

Finally, let us take the negation of (5) to arrive at the final result.

This result can also be particularised for the case of SVM classifiers. The set of priors is constructed by allocating Gaussian distributions with identity covariance matrix along the direction given by $w_r$, at distances $\{\eta_j\}_{j=1}^{J}$ from the origin, where $\{\eta_j\}_{j=1}^{J}$ are real numbers.
In such a case, we obtain:

Corollary 3.3 (Multiple Prior PAC-Bayes Bound for linear classifiers). Let us consider a set $\{P_j(w|w_r, \eta_j)\}_{j=1}^{J}$ of prior distributions of classifiers consisting of spherical Gaussian distributions with identity covariance matrix centred on $\eta_j w_r$, where $\{\eta_j\}_{j=1}^{J}$ are real numbers. Then, for all distributions $D$, for all classifiers $w$, for all $\mu > 0$, for all $\delta \in (0, 1]$, we have

  $\Pr_{S \sim D}\Big( \mathrm{KL}(\hat{Q}_{S \setminus R}(w, \mu) \| Q_D(w, \mu)) \le \dfrac{\frac{\|\eta_j w_r - \mu w\|^2}{2} + \ln\frac{m-r+1}{\delta} + \ln J}{m - r} \Big) \ge 1 - \delta$.

Proof. The proof is straightforward, substituting $\xi_j = \frac{1}{J}$ for all $j$ in Theorem 3.2 and computing the KL divergence between prior and posterior as in the proof of Corollary 3.1.

Note that the $\{\eta_j\}_{j=1}^{J}$ must be chosen before we actually compute the posterior. However, the bound holds for all $\mu$. Therefore, a linear search can be implemented for the value of $\mu$ that leads to the tightest bound. In the case of several priors, the search is repeated for every prior and the reported value of the bound is the tightest. In Section 4 we present experimental results comparing this new bound to the standard PAC-Bayes bound and using it to guide model selection.

4 Experiments

The tightness of the new bound is evaluated in a model selection and classification task using some UCI [1] datasets (see their description in terms of number of instances, input dimension and number of positive/negative examples in Table 1).

Table 1: Description of the datasets: for every set we give the number of patterns, the number of input variables and the number of positive/negative examples.

  Problem  | # samples | input dim. | Pos/Neg
  Wdbc     | 569       | 30         | 357 / 212
  Image    | 2310      | 18         | 1320 / 990
  Waveform | 5000      | 21         | 1647 / 3353
  Ringnorm | 7400      | 20         | 3664 / 3736

For every dataset, we obtain 50 different training/test set partitions with 80% of the samples forming the training set and the remaining 20% forming the test set. With each of the partitions we learn an SVM classifier with Gaussian RBF kernel, preceded by a model selection. The model selection consists in the determination of an optimal pair of hyperparameters $(C, \sigma)$: $C$ is the SVM trade-off between the maximisation of the margin and the minimisation of the hinge loss of the training samples, while $\sigma$ is the width of the Gaussian kernel. The best pair is sought in a $15 \times 15$ grid of parameters, where $C \in \{0.02, 0.05, 0.1, 0.2, 0.5, 1, 2, 5, 10, 20, 50, 100, 200, 500, 1000\}$ and $\sigma \in \{\frac{1}{8}\sqrt{d}, \frac{1}{7}\sqrt{d}, \frac{1}{6}\sqrt{d}, \frac{1}{5}\sqrt{d}, \frac{1}{4}\sqrt{d}, \frac{1}{3}\sqrt{d}, \frac{1}{2}\sqrt{d}, \sqrt{d}, 2\sqrt{d}, 3\sqrt{d}, 4\sqrt{d}, 5\sqrt{d}, 6\sqrt{d}, 7\sqrt{d}, 8\sqrt{d}\}$, where $d$ is the input space dimension. For completeness, this model selection is guided by the PAC-Bayes bound: we select the model corresponding to the pair that yields the lowest value of $Q_D$ in the bound.

Table 2 shows the value of the PAC-Bayes bound averaged over the 50 training/test partitions. For every partition we use the minimum value of the bound resulting from all the pairs $(C, \sigma)$ of the grid. Note that this procedure is computationally less costly than the commonly used $N$-fold cross-validation model selection, since it saves the training of $N$ classifiers (one for each fold) for each parameter combination.

Table 2: Averaged PAC-Bayes bound and test error rate obtained by the model that yielded the lowest bound in each of the 50 training/test partitions.

  Problem  | PAC-Bayes Bound | Test error rate
  Wdbc     | 0.334 ± 0.005   | 0.073 ± 0.021
  Image    | 0.254 ± 0.003   | 0.074 ± 0.014
  Waveform | 0.198 ± 0.002   | 0.089 ± 0.008
  Ringnorm | 0.212 ± 0.002   | 0.026 ± 0.005
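To make Corollaries 3.1 and 3.3 concrete, the sketch below (ours; it reuses stochastic_error and kl_binary from the earlier snippet) evaluates the prior-based bound with $J$ scalings of the prior direction and the linear search over $\mu$. For simplicity it assumes explicit weight vectors (e.g., a linear kernel) and that $\eta$ and $\mu$ scale unit directions; with kernels the norm would be computed via inner products of the dual expansions.

    import numpy as np

    def prior_pac_bayes_bound(margins_heldout, w, w_r, etas, mus, m, r, delta=0.05):
        """Tightest value of the bound in Corollary 3.3 over the J prior
        scalings etas and a grid of posterior lengths mus. margins_heldout
        holds the normalised margins of w on the held-out set S \\ R."""
        J, n = len(etas), m - r                  # n: effective sample size
        wn = w / np.linalg.norm(w)               # unit directions (assumption)
        wrn = w_r / np.linalg.norm(w_r)
        best = 1.0
        for mu in mus:
            q_s = stochastic_error(margins_heldout, mu)   # Eq. (2) on S \ R
            for eta in etas:
                kl_qp = 0.5 * np.linalg.norm(eta * wrn - mu * wn) ** 2
                rhs = (kl_qp + np.log((n + 1) / delta) + np.log(J)) / n
                lo, hi = q_s, 1.0
                for _ in range(100):             # invert binary KL by bisection
                    mid = 0.5 * (lo + hi)
                    if kl_binary(q_s, mid) > rhs:
                        hi = mid
                    else:
                        lo = mid
                best = min(best, lo)
        return best

The $\ln J$ term is the price of scanning the prior scalings; the search over $\mu$ is free because the bound holds uniformly in $\mu$.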
We repeated this experiment using the Prior PAC-Bayes bound with different configurations for learning the prior distribution of classifiers. These configurations are defined by variations in the percentage of training patterns separated to compute the prior and in the number of scalings of the magnitude of that prior. The scalings represent different lengths $\eta$ of $\|w_r\|$, equally spaced between $\eta = 1$ and $\eta = 100$. To summarize, for every training/test partition and for every pair (% patterns, # of scalings) we look at the pair $(C, \sigma)$ that yields the smallest value of $Q_D$. In this case, using the Prior PAC-Bayes bound for model selection increases the computational burden over the plain PAC-Bayes one only by the training of one classifier (the one used to learn the prior), compared to the extra $N$ classifiers needed by $N$-fold cross-validation.

Table 3 displays both the average value and the sample standard deviation over the 50 realisations. It seems that ten scalings of the prior are enough to obtain tighter bounds, since the use of 100 or 500 scalings does not improve the best results. With respect to the percentage of training instances left out to learn the prior, something close to 50% of the training set works well in the considered problems. It is worth mentioning that we treat each position in the table as a separate experiment.

Wisconsin Database of Breast Cancer (PAC-Bayes Bound = 0.334 ± 0.005)

  Scalings | 10%           | 20%           | 30%           | 40%           | 50%
  1        | 0.341 ± 0.006 | 0.351 ± 0.007 | 0.364 ± 0.009 | 0.379 ± 0.011 | 0.398 ± 0.013
  10       | 0.337 ± 0.010 | 0.323 ± 0.012 | 0.314 ± 0.012 | 0.310 ± 0.013 | 0.306 ± 0.018
  100      | 0.319 ± 0.007 | 0.315 ± 0.010 | 0.313 ± 0.011 | 0.315 ± 0.013 | 0.315 ± 0.017
  500      | 0.324 ± 0.007 | 0.320 ± 0.009 | 0.319 ± 0.011 | 0.321 ± 0.013 | 0.322 ± 0.017

Image Segmentation (PAC-Bayes Bound = 0.254 ± 0.003)

  Scalings | 10%           | 20%           | 30%           | 40%           | 50%
  1        | 0.255 ± 0.003 | 0.262 ± 0.005 | 0.274 ± 0.003 | 0.284 ± 0.005 | 0.300 ± 0.008
  10       | 0.215 ± 0.004 | 0.203 ± 0.006 | 0.200 ± 0.005 | 0.188 ± 0.007 | 0.184 ± 0.010
  100      | 0.217 ± 0.004 | 0.203 ± 0.007 | 0.196 ± 0.005 | 0.187 ± 0.007 | 0.186 ± 0.009
  500      | 0.218 ± 0.004 | 0.204 ± 0.007 | 0.198 ± 0.005 | 0.189 ± 0.007 | 0.188 ± 0.009

Waveform (PAC-Bayes Bound = 0.198 ± 0.002)

  Scalings | 10%           | 20%           | 30%           | 40%           | 50%
  1        | 0.197 ± 0.003 | 0.201 ± 0.003 | 0.207 ± 0.003 | 0.214 ± 0.004 | 0.222 ± 0.005
  10       | 0.161 ± 0.004 | 0.156 ± 0.004 | 0.153 ± 0.004 | 0.150 ± 0.005 | 0.151 ± 0.005
  100      | 0.161 ± 0.004 | 0.155 ± 0.004 | 0.153 ± 0.004 | 0.152 ± 0.005 | 0.153 ± 0.005
  500      | 0.162 ± 0.004 | 0.157 ± 0.004 | 0.155 ± 0.004 | 0.154 ± 0.005 | 0.155 ± 0.005

Ringnorm (PAC-Bayes Bound = 0.212 ± 0.002)

  Scalings | 10%           | 20%           | 30%           | 40%           | 50%
  1        | 0.216 ± 0.001 | 0.225 ± 0.002 | 0.236 ± 0.002 | 0.249 ± 0.004 | 0.265 ± 0.002
  10       | 0.172 ± 0.068 | 0.140 ± 0.047 | 0.126 ± 0.037 | 0.116 ± 0.030 | 0.109 ± 0.024
  100      | 0.173 ± 0.068 | 0.139 ± 0.047 | 0.126 ± 0.037 | 0.117 ± 0.030 | 0.110 ± 0.024
  500      | 0.173 ± 0.068 | 0.140 ± 0.047 | 0.127 ± 0.037 | 0.117 ± 0.030 | 0.110 ± 0.024

Table 3: Averaged Prior PAC-Bayes bound for different settings of the percentage of training instances reserved to compute the prior (columns) and of the number of scalings of the normalised prior (rows).

However, one could have included the tuning of the pair (% patterns, # of scalings) in the model selection.
This would have involved a further application of the union bound with the 20 entries of the table for each problem, at the cost of adding an extra $\ln(20)/m$ (0.0053 for Wdbc and less for the other datasets) in the right-hand side of Theorem 3.2. We decided to fix the number of scalings and the amount of training patterns used to compute the prior, since trying all of the different options would augment the computational burden of the model selection.

In order to evaluate the predictive capabilities of the Prior PAC-Bayes bound as a means to select models with low test error rate, Table 4 displays the averaged test error corresponding to the models selected in the previous experiment (note that in this case the computational burden involved in determining the model is increased by the training of the SVM that learns the prior $w_r$). Table 5 displays the test error rate obtained by SVMs with their hyperparameters tuned on the above-mentioned grid by means of ten-fold cross-validation, which serves as a baseline method for comparison purposes. According to the values shown in the tables, the Prior PAC-Bayes bound achieves tighter predictions of the generalization error of the randomized classifier in almost all cases. Notice how the length of the prior is not so critical in comparison with its direction, the goodness of the latter relying on the subset of samples left out for the purpose of learning the prior classifier. Moreover, it has to be remarked that this tightening of the bound does not appear to cost anything in the ability to select a good model (such a case would mean predicting more accurately a larger error rate, but our bound accurately predicts the same error rate as the PAC-Bayes bound).

Wisconsin Database of Breast Cancer (PAC-Bayes test error = 0.073 ± 0.021)

  Scalings | 10%           | 20%           | 30%           | 40%           | 50%
  1        | 0.076 ± 0.020 | 0.076 ± 0.021 | 0.076 ± 0.021 | 0.076 ± 0.021 | 0.076 ± 0.021
  10       | 0.075 ± 0.021 | 0.076 ± 0.021 | 0.075 ± 0.021 | 0.074 ± 0.021 | 0.072 ± 0.021
  100      | 0.076 ± 0.021 | 0.076 ± 0.021 | 0.074 ± 0.021 | 0.074 ± 0.020 | 0.072 ± 0.021
  500      | 0.076 ± 0.020 | 0.076 ± 0.021 | 0.074 ± 0.020 | 0.073 ± 0.020 | 0.072 ± 0.021

Image Segmentation (PAC-Bayes test error = 0.074 ± 0.014)

  Scalings | 10%           | 20%           | 30%           | 40%           | 50%
  1        | 0.078 ± 0.011 | 0.078 ± 0.011 | 0.078 ± 0.011 | 0.083 ± 0.019 | 0.100 ± 0.019
  10       | 0.064 ± 0.011 | 0.066 ± 0.011 | 0.063 ± 0.014 | 0.054 ± 0.010 | 0.056 ± 0.011
  100      | 0.064 ± 0.011 | 0.063 ± 0.011 | 0.061 ± 0.011 | 0.059 ± 0.011 | 0.057 ± 0.012
  500      | 0.064 ± 0.011 | 0.063 ± 0.011 | 0.061 ± 0.011 | 0.059 ± 0.011 | 0.057 ± 0.012

Waveform (PAC-Bayes test error = 0.089 ± 0.008)

  Scalings | 10%           | 20%           | 30%           | 40%           | 50%
  1        | 0.089 ± 0.008 | 0.089 ± 0.008 | 0.090 ± 0.009 | 0.091 ± 0.009 | 0.091 ± 0.009
  10       | 0.089 ± 0.008 | 0.089 ± 0.008 | 0.089 ± 0.008 | 0.089 ± 0.008 | 0.089 ± 0.009
  100      | 0.089 ± 0.008 | 0.089 ± 0.008 | 0.089 ± 0.008 | 0.089 ± 0.008 | 0.089 ± 0.009
  500      | 0.089 ± 0.008 | 0.089 ± 0.008 | 0.089 ± 0.008 | 0.089 ± 0.008 | 0.089 ± 0.009

Ringnorm (PAC-Bayes test error = 0.026 ± 0.005)

  Scalings | 10%           | 20%           | 30%           | 40%           | 50%
  1        | 0.025 ± 0.004 | 0.030 ± 0.007 | 0.038 ± 0.005 | 0.036 ± 0.007 | 0.038 ± 0.005
  10       | 0.020 ± 0.007 | 0.021 ± 0.007 | 0.021 ± 0.007 | 0.025 ± 0.008 | 0.026 ± 0.008
  100      | 0.020 ± 0.007 | 0.021 ± 0.007 | 0.021 ± 0.007 | 0.025 ± 0.008 | 0.026 ± 0.008
  500      | 0.020 ± 0.007 | 0.021 ± 0.007 | 0.021 ± 0.007 | 0.025 ± 0.008 | 0.025 ± 0.005
Table 4: Averaged test error rate corresponding to the model determined by the bound for the different settings of Table 3.

Table 5: Averaged test error rate. For every partition we select the test error rate corresponding to the model reporting the smallest cross-validation error.

  Problem  | Cross-validation error rate | Test error rate
  Wdbc     | 0.060 ± 0.006               | 0.072 ± 0.024
  Image    | 0.022 ± 0.002               | 0.024 ± 0.008
  Waveform | 0.079 ± 0.011               | 0.085 ± 0.009
  Ringnorm | 0.015 ± 0.001               | 0.017 ± 0.004

However, the comparison with Table 5 points out that the PAC-Bayes bound is not as accurate as ten-fold cross-validation when it comes to selecting a model that yields a low test error rate. Nevertheless, in two out of the four problems (Waveform and Wdbc) the bound provided a model as good as the one found by cross-validation, added to the fact that in Ringnorm the error bars overlap. We conclude the discussion by pointing out that the cross-validation error rate cannot be used directly as a prediction of the expected test error rate in the sense of worst-case performance. Of course the values of the cross-validation error rate and the test error rate are close, but it is difficult to predict how close they are going to be.

5 Conclusions and ongoing research

In this paper we have presented a version of the PAC-Bayes bound for linear classifiers that introduces the learning of the prior distribution over the classifiers. This prior distribution is a Gaussian with identity covariance matrix. The mean weight vector is learnt in the following way: its direction is determined from a separate subset of the training examples, while its length has to be chosen from an a priori fixed set of lengths. The experimental work shows that this new version of the bound achieves tighter predictions of the generalization error of the stochastic classifier, compared to the original PAC-Bayes bound predictions. Moreover, if the model selection is driven by the bound, the Prior PAC-Bayes bound does not degrade the quality of the model selected by the original bound. It has to be said that in some of our experiments the model selected by the bounds was as accurate, in terms of test error rate on a separate test set, as the ones selected by ten-fold cross-validation. This fact is remarkable since including the model selection in the training of the classifier roughly multiplies the computational burden of training by ten when using ten-fold cross-validation, but only roughly by two when using the Prior PAC-Bayes bound. Of course the original PAC-Bayes bound provides a cheaper model selection, but its predictions about the generalization capabilities are more pessimistic.

The amount of training patterns used to learn the prior seems to be a key aspect in the goodness of this prior and thus in the tightness of the bound. Therefore, ongoing research includes methods to systematically determine an amount of patterns that provides suitable priors. Another line of research explores the use of these bounds to reinforce different properties of the design of classifiers, such as sparsity. Finally, a deeper study of which dataset structures cause differences between the performances of cross-validation and bound-driven model selection is also being carried out.

Acknowledgments

This work has been supported by the IST Programme of the European Community under the PASCAL Network of Excellence IST2002-506788. E. P-H. acknowledges support from Spain CICYT grant TEC2005-04264/TCM.

References

[1] C. L. Blake and C. J. Merz.
UCI Repository of machine learning databases. University of California, Irvine, Dept. of Information and Computer Sciences, [http://www.ics.uci.edu/~mlearn/MLRepository.html], 1998.
[2] B. E. Boser, I. Guyon, and V. Vapnik. A training algorithm for optimal margin classifiers. In Computational Learning Theory, pages 144–152, 1992.
[3] J. Langford. Tutorial on practical prediction theory for classification. Journal of Machine Learning Research, 6(Mar):273–306, 2005.
[4] J. Langford and J. Shawe-Taylor. PAC-Bayes & margins. In Advances in Neural Information Processing Systems, volume 14, Cambridge, MA, 2002. MIT Press.
[5] D. McAllester. PAC-Bayesian stochastic model selection. Machine Learning, 51(1):5–21, 2003.
[6] M. Seeger. PAC-Bayesian generalization error bounds for Gaussian process classification. Journal of Machine Learning Research, 3:233–269, 2002.
[7] J. Shawe-Taylor, P. L. Bartlett, R. C. Williamson, and M. Anthony. Structural risk minimization over data-dependent hierarchies. IEEE Trans. Information Theory, 44(5):1926–1940, 1998.
An Efficient Method for Gradient-Based Adaptation of Hyperparameters in SVM Models

S. Sathiya Keerthi (Yahoo! Research, 3333 Empire Avenue, Burbank, CA 91504), Vikas Sindhwani (Department of Computer Science, University of Chicago, Chicago, IL 60637), Olivier Chapelle (MPI for Biological Cybernetics, Spemannstraße 38, 72076 Tübingen)
[email protected] [email protected] [email protected]

Abstract

We consider the task of tuning hyperparameters in SVM models based on minimizing a smooth performance validation function, e.g., smoothed k-fold cross-validation error, using non-linear optimization techniques. The key computation in this approach is that of the gradient of the validation function with respect to hyperparameters. We show that for large-scale problems involving a wide choice of kernel-based models and validation functions, this computation can be very efficiently done; often within just a fraction of the training time. Empirical results show that a near-optimal set of hyperparameters can be identified by our approach with very few training rounds and gradient computations.

1 Introduction

Consider the general SVM classifier model in which, given $n$ training examples $\{(x_i, y_i)\}_{i=1}^{n}$, the primal problem consists of solving the following problem:

  $\min_{(w,b)}\ \frac{1}{2}\|w\|^2 + C \sum_{i=1}^{n} l(o_i, y_i)$   (1)

where $l$ denotes a loss function over labels $y_i \in \{+1, -1\}$ and the outputs $o_i$ on the training set. The machine's output $o$ for any example $x$ is given as $o = w \cdot \phi(x) - b = \sum_{j=1}^{n} \alpha_j y_j k(x, x_j) - b$, where the $\alpha_j$ are the dual variables, $b$ is the threshold parameter and, as usual, computations involving $\phi$ are handled using the kernel function, $k(x, z) = \phi(x) \cdot \phi(z)$. For example, the Gaussian kernel is given by

  $k(x, z) = \exp(-\gamma \|x - z\|^2)$   (2)

The regularization parameter $C$ and kernel parameters such as $\gamma$ comprise the vector $h$ of hyperparameters in the model. $h$ is usually chosen by optimizing a validation measure (such as the k-fold cross-validation error) on a grid of values (e.g., a uniform grid in the $(\log C, \log \gamma)$ space). Such a grid search is usually expensive. In particular, when $n$ is large, this search is so time-consuming that one usually resorts to either default hyperparameter values or crude search strategies. The problem becomes more acute when there are more than two hyperparameters. For example, for feature weighting/selection purposes one may wish to use the following ARD-Gaussian kernel:

  $k(x, z) = \exp\Big(-\sum_t \gamma^t \|x^t - z^t\|^2\Big)$   (3)

where $\gamma^t$ is the weight on the $t$th feature. In such cases, a grid-based search is ruled out.
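As a small illustration of the hyperparameters involved, the following sketch (ours) implements the Gaussian kernel (2) and its ARD variant (3); with the ARD kernel the hyperparameter vector $h$ grows to one weight per feature plus $C$.

    import numpy as np

    def gaussian_kernel(X, Z, gamma):
        """k(x, z) = exp(-gamma * ||x - z||^2) for all pairs of rows of X and Z."""
        sq = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * sq)

    def ard_gaussian_kernel(X, Z, gammas):
        """k(x, z) = exp(-sum_t gamma_t * (x_t - z_t)^2), one weight per feature."""
        sq = (gammas[None, None, :] * (X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
        return np.exp(-sq)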
A major disadvantage of the LOO-based method is that it requires the expensive computation and storage of the inverse of a kernel sub-matrix corresponding to the support vectors. It is worth noting that, even if, on some large-scale problems, the support vector set is of a manageable size at the optimal hyperparameters, the corresponding set can be large when the hyperparameter vector is away from the optimum; on many problems, such a far-off region of the hyperparameter space is usually traversed during the adaptation process!

We highlight the contributions of this paper. (1) We consider differentiable versions of validation-set-based objective functions for model selection (such as k-fold error) and give an efficient method for computing the gradient of such a function with respect to $h$. Our method does not require the computation of the inverse of a large kernel sub-matrix. Instead, it only needs a single linear system of equations to be solved, which can be done either by decomposition or conjugate-gradient techniques. In essence, the cost of computing the gradient with respect to $h$ is about the same as, and usually much less than, the cost of solving (1) for a given $h$. (2) Our method is applicable to a wide range of validation objective functions and SVM models that may involve many hyperparameters. For example, a variety of loss functions can be used together with multiclass classification, regression, structured output or semi-supervised SVM algorithms. (3) Large-scale empirical results show that with BFGS optimization, trying just about 10-20 hyperparameter points leads to the determination of optimal hyperparameters. Moreover, even as compared to a fine grid search, the gradient procedure provides a more precise placement of hyperparameters, leading to better generalization performance. The benefit in efficiency over the grid approach is evident even with just two hyperparameters. We also show the usefulness of our method for tuning more than two hyperparameters when optimizing validation functions such as the F measure and weighted error rate. This is particularly useful for imbalanced problems.

This paper is organized as follows. In section 2, we discuss the general class of SVM models to which our method can be applied. In section 3, we describe our framework and provide the details of the gradient computation for general validation functions. In section 4, we discuss how to develop differentiable versions of several common performance validation functions. Empirical results are presented in section 5, and we conclude in section 6. Due to space limitations, several details have been omitted but can be found in the technical report (Keerthi et al. (2006)).

2 SVM Classification Models

In this section, we discuss the assumptions required for our method to be applicable. Consider SVM classification models of the form in (1). We assume that the kernel function $k$ is a continuously differentiable function of $h$. Three commonly used SVM loss functions are: (1) hinge loss; (2) squared hinge loss; and (3) squared loss. In each of these cases, the solution of (1) is obtained by computing the vector $\alpha$ that solves a dual problem. The solution usually leads to a linear system relating $\alpha$ and $b$:
$$P \begin{pmatrix} \alpha \\ b \end{pmatrix} = q \qquad (4)$$
where $P$ and $q$ are, in general, functions of $h$. We make the following assumption: locally around $h$ (at which we are interested in calculating the gradient of the validation function to be defined soon), $P$ and $q$ are continuously differentiable functions of $h$.
We write down $P$ and $q$ for the hinge loss function and discuss the validity of the above assumption; the details for the other loss functions are similar.

Hinge loss. $l(o_i, y_i) = \max\{0, 1 - y_i o_i\}$. After the solution of (1), the training set indices get partitioned into three sets: $I_0 = \{i : \alpha_i = 0\}$, $I_c = \{i : \alpha_i = C\}$ and $I_u = \{i : 0 < \alpha_i < C\}$. Let $\alpha_0, \alpha_c, \alpha_u, y_c, y_u, e_c, e_u, Q_{uc}, Q_{uu}$ etc. be appropriately defined vectors and matrices, where $Q_{ij} = y_i y_j k(x_i, x_j)$ and $e$ denotes a vector of ones. Then (4) is given by
$$\alpha_0 = 0, \qquad \alpha_c = C e_c, \qquad \begin{pmatrix} Q_{uu} & -y_u \\ -y_u^\top & 0 \end{pmatrix} \begin{pmatrix} \alpha_u \\ b \end{pmatrix} = \begin{pmatrix} e_u - Q_{uc}\alpha_c \\ y_c^\top \alpha_c \end{pmatrix} \qquad (5)$$
If the partitions $I_0$, $I_c$ and $I_u$ do not change locally around a given $h$, then the above assumption holds. Generically, this happens for almost all $h$.

The modified Huber loss function can also be used, though the derivation of (4) for it is more complex than for the three loss functions mentioned above. Recently, weighted hinge loss with asymmetric margins (Grandvalet et al., 2005) has been explored for treating imbalanced problems.

Weighted hinge loss. $l(o_i, y_i) = C_i \max\{0, m_i - y_i o_i\}$, where $C_i = C_+$, $m_i = m_+$ if $y_i = 1$ and $C_i = C_-$, $m_i = m_-$ if $y_i = -1$. Because $C_+$ and $C_-$ are present, the hyperparameter $C$ in (1) can be omitted. The SVM model with weighted hinge loss has four extra hyperparameters, $C_+$, $C_-$, $m_+$ and $m_-$, apart from the kernel hyperparameters. Our methods allow the possibility of efficiently tuning all these parameters together with the kernel parameters.

The method described in this paper is not special to classification models only. It extends to a wide class of kernel methods for which the optimality conditions for minimizing a training objective function can be expressed as a linear system (4) in a continuously differentiable manner.¹ These include many models for multiclass classification, regression, structured output and semi-supervised learning (see Keerthi et al. (2006)).

¹In fact, the main ideas easily extend to the case where the optimality conditions form a non-linear system in $(\alpha, b)$ (e.g., in kernel logistic regression).
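To make (5) concrete, here is a small NumPy sketch of our own (not the authors' code; the numerical tolerance and the helper names are assumptions) that partitions the training indices by the values of the dual variables and assembles $P$ and $q$ for the hinge loss:

```python
import numpy as np

def partition_indices(alpha, C, tol=1e-8):
    """Index sets I0 (alpha = 0), Ic (alpha = C), Iu (0 < alpha < C)."""
    I0 = np.where(alpha <= tol)[0]
    Ic = np.where(alpha >= C - tol)[0]
    Iu = np.where((alpha > tol) & (alpha < C - tol))[0]
    return I0, Ic, Iu

def hinge_loss_system(Q, y, alpha, C):
    """Assemble P and q of eq. (5), with Q[i, j] = y_i y_j k(x_i, x_j)."""
    I0, Ic, Iu = partition_indices(alpha, C)
    alpha_c = C * np.ones(len(Ic))
    P = np.block([[Q[np.ix_(Iu, Iu)], -y[Iu, None]],
                  [-y[None, Iu],      np.zeros((1, 1))]])
    q = np.concatenate([np.ones(len(Iu)) - Q[np.ix_(Iu, Ic)] @ alpha_c,
                        [y[Ic] @ alpha_c]])
    return P, q, (I0, Ic, Iu)
```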
3 The gradient of a validation function

Suppose that for the purpose of hyperparameter tuning, we are given a validation scheme involving a small number of (training set, validation set) partitions, such as: (1) using a single validation set, (2) k-fold cross validation, or (3) averaging over k randomly chosen (training set, validation set) partitions. Our method applies to any of these three schemes. To keep the notation simple, we explain the ideas only for scheme (1) and expand on the other schemes towards the end of this section. Note that throughout the hyperparameter optimization process, the training-validation splits are fixed.

Let $\{\tilde{x}_l, \tilde{y}_l\}_{l=1}^{\tilde{n}}$ denote the validation set, and let $\tilde{K}_{li} = k(\tilde{x}_l, x_i)$ denote a kernel evaluation between an element of the validation set and an element of the training set. The output on the $l$-th validation example is $\tilde{o}_l = \sum_i \alpha_i y_i \tilde{K}_{li} - b$, which, for convenience, we will rewrite as
$$\tilde{o}_l = \psi_l^\top \theta \qquad (6)$$
where $\theta$ is a vector containing $\alpha$ and $b$, and $\psi_l$ is a vector containing $y_i \tilde{K}_{li}$, $i = 1, \ldots, n$, with $-1$ as the last element (corresponding to $b$). Let us suppose that the model selection problem is formulated as a non-linear optimization problem:
$$h^\star = \arg\min_h f(\tilde{o}_1, \ldots, \tilde{o}_{\tilde{n}}) \qquad (7)$$
where $f$ is a differentiable validation function of the outputs $\tilde{o}_l$, which implicitly depend on $h$. In the next section, we will outline the construction of such functions for criteria like error rate, F measure, etc. We now discuss the computation of $\nabla_h f$. Let $\eta$ denote a generic parameter in $h$ and let us denote the partial derivative of some quantity, say $v$, with respect to $\eta$ as $\dot{v}$. Before writing down expressions for $\dot{f}$, let us discuss how to get $\dot{\theta}$. Differentiating (4) with respect to $\eta$ gives
$$P\dot{\theta} + \dot{P}\theta = \dot{q} \quad\Rightarrow\quad \dot{\theta} = P^{-1}(\dot{q} - \dot{P}\theta) \qquad (8)$$
Now let us write down $\dot{f}$:
$$\dot{f} = \sum_{l=1}^{\tilde{n}} (\partial f / \partial \tilde{o}_l)\, \dot{\tilde{o}}_l \qquad (9)$$
where $\dot{\tilde{o}}_l$ is obtained by differentiating (6):
$$\dot{\tilde{o}}_l = \psi_l^\top \dot{\theta} + \dot{\psi}_l^\top \theta \qquad (10)$$
The computation of $\dot{\theta}$ in (8) is the most expensive step, mainly because it requires $P^{-1}$. Note that, for hinge loss, $P^{-1}$ can be computed in a somewhat cheaper way: only a matrix of the dimension of $I_u$ needs to be inverted. Even then, in large-scale problems the dimension of the matrix to be inverted can become so large that even storing it may be a problem; and even when large storage is possible, the inverse can be very expensive. Most times, the effective rank of $P$ is much smaller than its dimension. Thus, instead of computing $P^{-1}$ in (8), we can solve
$$P\dot{\theta} = \dot{q} - \dot{P}\theta \qquad (11)$$
for $\dot{\theta}$ approximately using decomposition methods or iterative methods such as conjugate gradients. This can improve efficiency as well as take care of memory issues by storing $P$ only partially and computing the remaining parts of $P$ as and when needed.

Since the right-hand-side vector $(\dot{q} - \dot{P}\theta)$ in (11) changes for each different $\eta$ with respect to which we are differentiating, we need to solve (11) for each element of $h$. If the number of elements of $h$ is not small (say, we want to use (3) with the MNIST dataset, which has more than 700 features), then even with (11) the computations can remain very expensive. We now give a simple trick showing that, if the gradient calculations are re-organized, obtaining the solution of just a single linear system suffices for computing the full gradient of $f$ with respect to all elements of $h$. Let us denote the coefficient of $\dot{\tilde{o}}_l$ in the expression for $\dot{f}$ in (9) by $\lambda_l$, i.e.,
$$\lambda_l = \partial f / \partial \tilde{o}_l \qquad (12)$$
Using (10) and plugging the expression for $\dot{\theta}$ from (8) into (9) gives
$$\dot{f} = \sum_l \lambda_l \dot{\tilde{o}}_l = \sum_l \lambda_l \big( \psi_l^\top P^{-1}(\dot{q} - \dot{P}\theta) + \dot{\psi}_l^\top \theta \big) = d^\top(\dot{q} - \dot{P}\theta) + \Big( \sum_l \lambda_l \dot{\psi}_l \Big)^{\top} \theta \qquad (13)$$
where $d$ is the solution of
$$P^\top d = \sum_l \lambda_l \psi_l \qquad (14)$$
The beauty of the reorganization in (13) is that $d$ is the same for all variables $\eta$ in $h$ with respect to which the differentiation is being done. Thus (14) needs to be solved only once. In concurrent work, Seeger (2006) has used a similar idea for kernel logistic regression. As a word of caution, note that $P$ may not be symmetric; see, e.g., the $P$ arising from (5) for the hinge loss case. Also, the parts corresponding to zero components should be omitted from the calculations, and the special structure of $P$ should be exploited; e.g., for hinge loss, when computing $\dot{P}\theta$ the parts of $\dot{P}$ corresponding to $\alpha_0$ (see (5)) can be ignored. The linear system (14) can be efficiently solved using conjugate gradient techniques.

The sequence of steps for the computation of the full gradient of $f$ with respect to $h$ is as follows. First compute $\lambda_l$ from (12); for various choices of validation function, we outline this computation in the next section. Then solve (14) for $d$. Then, for each $\eta$, use (13) to get all the derivatives of $f$. The computation of $\dot{P}\theta$ has to be performed for each hyperparameter separately; in problems with many hyperparameters, this is the most expensive part of the gradient computation. Note that in some cases, e.g., $\eta = C$, $\dot{P}\theta$ is immediately obtained. For $\eta = \gamma$ or $\gamma_t$, when using (2) or (3), one can cache pairwise distance computations while computing the kernel matrix.
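The reorganization in (13)-(14) translates almost line for line into code. The sketch below is our own illustration (not the authors' implementation); the argument conventions are assumptions: the rows of Psi are the vectors $\psi_l$, and dP, dq, dPsi are lists holding the derivatives of $P$, $q$ and $\psi$ with respect to each component of $h$.

```python
import numpy as np

def validation_gradient(P, theta, Psi, lam, dP, dq, dPsi):
    """Full gradient of f w.r.t. all hyperparameters via eqs. (13)-(14).

    Psi : (n_val, dim) array whose l-th row is psi_l
    lam : (n_val,) array of lambda_l = df/d(o_l)
    dP, dq, dPsi : one entry per hyperparameter eta, holding
                   dP/d(eta), dq/d(eta), dPsi/d(eta)
    """
    d = np.linalg.solve(P.T, Psi.T @ lam)   # eq. (14): solved only once
    grads = []
    for dP_k, dq_k, dPsi_k in zip(dP, dq, dPsi):
        # eq. (13): d^T (q_dot - P_dot theta) + (sum_l lam_l psi_dot_l)^T theta
        grads.append(d @ (dq_k - dP_k @ theta) + (dPsi_k.T @ lam) @ theta)
    return np.array(grads)
```

For large problems, the dense solve would be replaced by a conjugate-gradient solver that touches $P$ only through matrix-vector products, in line with the discussion above.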
We have found (see section 5) that the cost of computing the gradient of $f$ with respect to $h$ is usually much less than the cost of solving (1) and then obtaining $f$.

We can also employ the above ideas in a validation scheme where one uses k training-validation splits (e.g., in k-fold cross-validation). In this case, for each partition one obtains the linear system (4), the corresponding validation outputs (6) and the linear system in (14). The gradient is simply computed by summing over the $k$ partitions, i.e., $\dot{f} = \sum_{j=1}^{k} \dot{f}^{(j)}$, where $\dot{f}^{(j)}$ is given by (13) using the quantities $P$, $q$, $d$, etc. associated with the $j$-th partition.

The model selection problem (7) may now be solved using, e.g., quasi-Newton methods such as BFGS, which only require the function value and the gradient at a hyperparameter setting. In particular, reaching the minimizer of $f$ too closely is not important. In our implementations we terminate the optimization iterations when the following loose termination criterion is met: $|f(h_{k+1}) - f(h_k)| \le 10^{-3}|f(h_k)|$, where $h_{k+1}$ and $h_k$ are consecutive iterates in the optimization process. A general concern with descent methods is the presence of local minima. In section 5, we make some encouraging empirical observations in this regard: local minima problems did not occur for the $(C, \gamma)$ tuning task, and for several other tasks, starting points that work surprisingly well could be easily obtained.

4 Smooth validation functions

We consider validation functions that are general functions of the confusion matrix, of the form $f(tp, fp)$, where $tp$ is the number of true positives and $fp$ is the number of false positives. Let $u(z)$ denote the unit step function, which is 0 when $z < 0$ and 1 otherwise. Denote $\tilde{u}_l = u(\tilde{y}_l \tilde{o}_l)$, which evaluates to 1 if the $l$-th example is correctly classified and 0 otherwise. Then $tp$ and $fp$ can be written as $tp = \sum_{l : \tilde{y}_l = +1} \tilde{u}_l$ and $fp = \sum_{l : \tilde{y}_l = -1} (1 - \tilde{u}_l)$. Let $\tilde{n}_+$ and $\tilde{n}_-$ be the numbers of validation examples in the positive and negative classes.

The most commonly used validation function is the error rate. The error rate ($er$) is simply the fraction of incorrect predictions, i.e., $er = (\tilde{n}_+ - tp + fp)/\tilde{n}$. For classification problems with imbalanced classes it is usual to consider either a weighted error rate or a function of precision and recall such as the F measure. The weighted error rate ($wer$) is given by $wer = (\tilde{n}_+ - tp + \delta\, fp)/(\tilde{n}_+ + \delta\, \tilde{n}_-)$, where $\delta$ is the ratio of the cost of misclassifying the negative class to that of the positive class. The F measure ($F$) is the harmonic mean of precision and recall: $F = 2\,tp/(\tilde{n}_+ + tp + fp)$. Alternatively, one may want to maximize precision under a recall constraint, maximize the area under the ROC curve, or maximize the precision-recall breakeven point; see Keerthi et al. (2006) for a discussion of how to treat these cases.

It is common practice to evaluate measures like precision, recall and the F measure while varying a threshold on the real-valued classifier output; i.e., at any given threshold $\beta_0$, $tp$ and $fp$ can be redefined in terms of
$$\tilde{u}_l = u\big(\tilde{y}_l(\tilde{o}_l - \beta_0)\big) \qquad (15)$$
For imbalanced problems one may wish to maximize a score such as the F measure over all values of $\beta_0$. In such cases, it is appropriate to incorporate $\beta_0$ as an additional hyperparameter that needs to be tuned.
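In code, these count-based measures follow directly from the definitions. The sketch below is our own NumPy illustration with hypothetical argument names; it is not taken from the paper.

```python
import numpy as np

def confusion_counts(o_val, y_val, beta0=0.0):
    """tp and fp from the hard decisions u(y_l (o_l - beta0)) of eq. (15)."""
    correct = y_val * (o_val - beta0) > 0
    tp = int(np.sum(correct[y_val == +1]))
    fp = int(np.sum(~correct[y_val == -1]))
    return tp, fp

def error_rate(tp, fp, n_pos, n_neg):
    return (n_pos - tp + fp) / float(n_pos + n_neg)

def weighted_error_rate(tp, fp, n_pos, n_neg, delta):
    return (n_pos - tp + delta * fp) / (n_pos + delta * n_neg)

def f_measure(tp, fp, n_pos):
    return 2.0 * tp / (n_pos + tp + fp)
```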
Such bias-shifting is also particularly useful as a compensation mechanism for the mismatch between the training objective function and the validation function; one often uses an SVM as the underlying classifier even though it is not explicitly trained to minimize the validation function that the practitioner truly cares about. In section 5, we make some empirical observations related to this point.

The validation functions discussed above are based on discrete counts. In order to use gradient-based methods, smooth functions of $h$ are needed. To develop smooth versions of validation functions, we define $\tilde{s}_l$, a sigmoidal approximation to $\tilde{u}_l$ in (15), of the following form:
$$\tilde{s}_l = 1 \big/ \big[ 1 + \exp\big(-\beta_1 \tilde{y}_l (\tilde{o}_l - \beta_0)\big) \big] \qquad (16)$$
where $\beta_1 > 0$ is a sigmoidal scale factor. In general, $\beta_0$ and $\beta_1$ may be functions of the validation outputs. (As discussed above, one may alternatively wish to treat $\beta_0$ as an additional hyperparameter.) The scale factor $\beta_1$ influences how closely $\tilde{s}_l$ approximates the step function $\tilde{u}_l$ and hence controls the degree of smoothness of the sigmoidal approximation. As the hyperparameter space is probed, the magnitude of the outputs can vary quite a bit; $\beta_1$ takes the scale of the outputs into account. Below we discuss various methods to set $\beta_0$ and $\beta_1$. We build a differentiable version of a validation function by simply replacing $\tilde{u}_l$ with $\tilde{s}_l$; thus $f = f(\tilde{s}_1, \ldots, \tilde{s}_{\tilde{n}})$. The value of $\lambda_l$ in (12) is then given by
$$\lambda_l = \frac{\partial f}{\partial \tilde{s}_l}\frac{\partial \tilde{s}_l}{\partial \tilde{o}_l} + \Big(\sum_r \frac{\partial f}{\partial \tilde{s}_r}\frac{\partial \tilde{s}_r}{\partial \beta_0}\Big)\frac{\partial \beta_0}{\partial \tilde{o}_l} + \Big(\sum_r \frac{\partial f}{\partial \tilde{s}_r}\frac{\partial \tilde{s}_r}{\partial \beta_1}\Big)\frac{\partial \beta_1}{\partial \tilde{o}_l} \qquad (17)$$

[Figure 1: Performance contours for IJCNN with 2000 training points, plotted on the $(\log C, \log \gamma)$ plane for the smooth validation error rate ($er$) and for the test error rate. The sequence of points generated by Grad is marked (the best point in red); the point chosen by Grid is also marked in red.]

The partial derivatives of $\tilde{s}_l$ with respect to $\tilde{o}_l$, $\beta_0$ and $\beta_1$ can be easily derived from (16), and $\partial f/\partial \tilde{s}_l = (\partial f/\partial tp)(\partial tp/\partial \tilde{s}_l) + (\partial f/\partial fp)(\partial fp/\partial \tilde{s}_l)$. We now discuss three methods for computing the sigmoidal parameters $\beta_0, \beta_1$. For each of these methods, the partial derivatives of $\beta_0, \beta_1$ with respect to $\tilde{o}_l$ can be obtained (Keerthi et al. (2006)) and used for computing (17).

Direct method. Here, we simply set $\beta_0 = 0$ and $\beta_1 = t/\sigma$, where $\sigma$ denotes the standard deviation of the outputs $\{\tilde{o}_l\}$ and $t$ is a constant which is heuristically set to some fixed value in order to approximate the step function well. In our implementation we use $t = 10$.

Hyperparameter bias method. Here, we treat $\beta_0$ as a hyperparameter and set $\beta_1$ as above.

Minimization method. In this method, we obtain $\beta_0, \beta_1$ by performing a sigmoidal fit based on the unconstrained minimization of some smooth criterion $N$, i.e., $(\beta_0, \beta_1) = \arg\min_{\mathbb{R}^2} N$. A natural choice of $N$ is based on Platt's method (Platt (1999)), where $\tilde{s}_l$ is interpreted as the posterior probability that the class of the $l$-th validation example is $\tilde{y}_l$, and we take $N$ to be the negative log-likelihood: $N = N_{nll} = -\sum_l \log \tilde{s}_l$. Sigmoidal fitting based on $N_{nll}$ was also previously proposed in Chapelle et al. (2002). The probabilistic error rate $per = \sum_l (1 - \tilde{s}_l)/\tilde{n}$ and $f = N_{nll}$ are suitable validation functions which go well with the choice $N = N_{nll}$.
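As an illustration of the direct method, the following sketch (our own, not the authors' code) computes the smoothed error rate and the corresponding $\lambda_l$; for simplicity it ignores the weak dependence of $\beta_1$ on the outputs through $\sigma$, which the full treatment in Keerthi et al. (2006) would include.

```python
import numpy as np

def smooth_error_rate(o_val, y_val, t=10.0):
    """Smoothed error rate using eq. (16) with the direct method:
    beta0 = 0, beta1 = t / std(outputs)."""
    beta1 = t / np.std(o_val)
    s = 1.0 / (1.0 + np.exp(-beta1 * y_val * o_val))  # s_l ~ prob. of correct
    f = np.mean(1.0 - s)                              # smooth er
    # lambda_l = df/d(o_l), holding beta1 fixed (an approximation here)
    lam = -(beta1 / len(o_val)) * s * (1.0 - s) * y_val
    return f, lam
```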
5 Empirical Results

We demonstrate the effectiveness of our method on several binary classification problems. The SVM model with hinge loss was used, and SVM training was done using the SMO algorithm. Five-fold cross validation was used to form the validation functions.

Four datasets were used: Adult, IJCNN, Vehicle and Splice. The first three were taken from http://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/ and Splice was taken from http://ida.first.fraunhofer.de/~raetsch/. The numbers of examples/features in these datasets are: Adult: 32561/123; IJCNN: 141691/22; Vehicle: 98528/100; and Splice: 3175/60. For each dataset, training sets of different sizes were chosen in a class-wise stratified fashion; the remaining examples formed the test set. The Gaussian kernel (2) and the ARD-Gaussian kernel (3) were used. For $(C, \gamma)$ tuning with the Gaussian kernel, we also tried the popular Grid method over a 15 × 15 grid of values. For $(C, \gamma)$ tuning with the gradient method, the starting point $C = \gamma = 1$ was used.

Comparison of validation functions. Figure 1 shows the contours of the smoothed validation error rate and the actual test error rate for the IJCNN dataset with 2000 training examples on the $(\log C, \log \gamma)$ plane. Grid and Grad respectively denote the grid and the gradient methods applied to the $(C, \gamma)$ tuning task. We used $f = er$ smoothed with the direct method for Grad. It can be seen that the contours are quite similar. We also generated corresponding contours (omitted) for $f = per$ and $f = N_{nll}$ (see the end of section 4) and found that the validation $er$ with the direct method better represents the test error rate. Figure 1 also shows that the gradient method very quickly plunges into the high-performance region of the $(C, \gamma)$ space.

Comparison of Grid and Grad methods. For various training set sizes of IJCNN, Table 1 compares the speed and generalization performance of Grid and Grad. Clearly Grad is much more efficient than Grid, and the speed improvement is seen even at small training set sizes. Although the efficiency of Grid can be improved in certain ways (say, by performing a crude search followed by a refined search, or by avoiding unnecessary exploration of difficult regions in the hyperparameter space), Grad determines the optimal hyperparameters more precisely. Table 2 compares Grid and Grad on the Adult and Vehicle datasets for various training sizes. Though the generalization performances of the two methods are close, Grid is much slower.

Table 1: Comparison of Grid, Grad & Grad-ARD on IJCNN & Splice. nf = number of hyperparameter vectors tried (for Grid, nf = 225). cpu = CPU time in minutes. erate = % test error rate.

IJCNN
  ntrg     Grid cpu   Grid erate | Grad nf   Grad cpu   Grad erate | Grad-ARD nf   Grad-ARD cpu   Grad-ARD erate
  2000     10.03      2.95       | 11        4.58       2.87       | 28            5.63           2.65
  4000     38.77      2.42       | 12        11.40      2.42       | 13            8.40           2.14
  8000     218.92     1.76       | 14        68.58      1.77       | 17            38.58          1.50
  16000    1130.37    1.24       | 12        127.03     1.26       | 20            154.03         1.08
  32000    5331.15    0.91       | 9         382.20     0.91       | 7             269.16         0.82

Splice
  2000     11.42      9.19       | 13        7.57       8.17       | 37            35.04          3.49

Table 2: Comparison of Grad & Grid methods on Adult & Vehicle. Definitions of nf, cpu & erate are as in Table 1. For Vehicle and ntrg = 16000, Grid was discontinued after 5 days of computation.

Adult
  ntrg     Grad nf   Grad cpu   Grad erate | Grid cpu   Grid erate
  2000     9         3.62       16.21      | 8.66       16.14
  4000     16        15.98      15.64      | 37.53      15.95
  8000     10        52.17      15.69      | 306.25     15.59
  16000    6         256.40     15.40      | 3667.90    15.37

Vehicle
  ntrg     Grad nf   Grad cpu   Grad erate | Grid cpu   Grid erate
  2000     7         2.50       13.58      | 15.25      13.84
  4000     5         8.60       13.29      | 135.28     13.30
  8000     9         83.10      12.84      | 1458.12    12.82
  16000    6         360.88     12.58      | (discontinued)

Feature weighting experiments. To study the effectiveness of our gradient-based approach when many hyperparameters are present, we use the ARD-Gaussian kernel in (3) and tune $C$ together with all the $\gamma_t$'s. As before, we used $f = er$ smoothed with the direct method.
The solution for the Gaussian kernel was seeded as the starting point for this optimization. Results are reported in Table 1 as Grad-ARD, where cpu denotes the extra time for this optimization. We see that Grad-ARD achieves significant improvements in generalization performance over Grad without increasing the computational cost by much, even though a large number of hyperparameters are being tuned.

Maximizing the F measure by threshold adjustment. In section 4 we mentioned the possible value of threshold adjustment when the validation/test function of interest is a quantity different from the error rate. We now illustrate this on the Adult dataset, with the F measure. The size of the training set is 2000, and the Gaussian kernel (2) was used. We implemented two methods: (1) we set $\beta_0 = 0$ and tuned only $C$ and $\gamma$; (2) we tuned the three hyperparameters $C$, $\gamma$ and $\beta_0$. We ran the methods on ten different random training set/test set splits. Without $\beta_0$, the mean (standard deviation) of the F measure values on 5-fold cross validation and on the test set were 0.6385 (0.0062) and 0.6363 (0.0081). With $\beta_0$, the corresponding values improved: 0.6635 (0.0095) and 0.6641 (0.0044). Clearly, the use of $\beta_0$ yields a very significant improvement on the F measure. The ability to easily include the threshold as an extra hyperparameter is a very useful advantage of our method.

Optimizing weighted error rate in imbalanced problems. In imbalanced problems where the proportion of examples in the positive class is small, one usually minimizes the weighted error rate $wer$ (see section 4) with a small value of $\delta$. One can think of four possible methods in which, apart from the kernel parameter $\gamma$ and the threshold $\beta_0$ (we used the hyperparameter bias method for smoothing), we include other parameters by considering sub-cases of the weighted hinge loss model (see section 2): (1) Usual SVM: set $m_+ = m_- = 1$, $C_+ = C$, $C_- = C$ and tune $C$. (2) Set $m_+ = m_- = 1$, $C_+ = C$, $C_- = \delta C$ and tune $C$. (3) Set $m_+ = m_- = 1$ and tune $C_+$ and $C_-$, treating them as independent parameters. (4) Use the full weighted hinge loss model and tune $C_+$, $C_-$, $m_+$ and $m_-$.

To compare the performance of these methods we took the IJCNN dataset, randomly choosing 2000 training examples and keeping the remaining examples as the test set. Ten such random splits were tried, and we take $\delta = 0.01$. The top half of Table 3 reports the weighted error rates associated with validation and test. The weighted hinge loss model performs best.

Table 3: Mean (standard deviation) of weighted ($\delta = 0.01$) error rate values on the IJCNN dataset.

With $\beta_0$
               C+ = C, C- = C    C+ = C, C- = δC   C+, C- tuned      Full weighted hinge
  Validation   0.0571 (0.0183)   0.0419 (0.0060)   0.0549 (0.0098)   0.0357 (0.0063)
  Test         0.0638 (0.0160)   0.0490 (0.0104)   0.0571 (0.0136)   0.0461 (0.0078)

Without $\beta_0$
  Validation   0.1953 (0.0557)   0.1051 (0.0164)   0.0897 (0.0154)   0.0364 (0.0061)
  Test         0.1861 (0.0540)   0.1008 (0.0607)   0.0969 (0.0502)   0.0469 (0.0076)

The presence of the threshold parameter $\beta_0$ is important for the first three methods. The bottom half of Table 3 gives the performance statistics of the methods when the threshold is not tuned. Interestingly, for the weighted hinge loss method, tuning the threshold has little effect; Grandvalet et al. (2005) also make the observation that this method appropriately sets the threshold on its own.

Cost break-up. In the gradient-based solution process, each step of the optimization requires the evaluation of $f$ and $\nabla_h f$.
In doing this, there are three steps that take up the bulk of the computational cost: (1) training using the SMO algorithm; (2) the solution of the linear system in (14); and (3) the remaining computations associated with the gradient, of which the computation of $\dot{P}\theta$ in (13) is the major part. We studied the relative break-up of the costs for the IJCNN dataset (training set sizes ranging from 2000 to 32000), for solution by the Grad and Grad-ARD methods. On average, the cost of solution by SMO forms 85 to 95% of the total computational time. Thus, the gradient computation is very cheap. We also found that the $\dot{P}\theta$ cost of Grad-ARD does not become large in spite of the fact that 23 hyperparameters are tuned there. This is mainly due to the efficient reuse of terms in the ARD-Gaussian calculations that we mentioned earlier.

6 Conclusion

The main contribution of this paper is a fast method of computing the gradient of a validation function with respect to hyperparameters for a range of SVM models; together with a nonlinear optimization technique, it can be used to efficiently determine the optimal values of many hyperparameters. Even in models with just two hyperparameters, our approach is faster and offers a more precise hyperparameter placement than the Grid approach. Our approach is of particularly great value for large-scale problems. The ability to tune many hyperparameters easily should be used with care. On a text classification problem involving many thousands of features, we placed an independent weight on each feature and optimized all these weights (together with $C$), only to find severe overfitting taking place. So, for a given problem it is important to choose the set of hyperparameters carefully, in accordance with the richness of the training set.

References

S. S. Keerthi, V. Sindhwani and O. Chapelle. An efficient method for gradient-based adaptation of hyperparameters in SVM models. Technical Report, 2006.
O. Chapelle, V. Vapnik, O. Bousquet and S. Mukherjee. Choosing multiple parameters for support vector machines. Machine Learning, 46:131-159, 2002.
Y. Grandvalet, J. Mariéthoz and S. Bengio. A probabilistic interpretation of SVMs with an application to unbalanced classification. NIPS, 2005.
J. Platt. Probabilities for support vector machines. In Advances in Large Margin Classifiers. MIT Press, Cambridge, Massachusetts, 1999.
M. Seeger. Cross validation optimization for structured Hessian kernel methods. Technical Report, MPI for Biological Cybernetics, Tübingen, Germany, May 2006.
Cholinergic Modulation May Enhance Cortical Associative Memory Function

Michael E. Hasselmo*, Brooke P. Anderson† and James M. Bower
Computation and Neural Systems, Caltech, Pasadena, CA 91125
*e-mail: [email protected]  †e-mail: [email protected]

Abstract

Combining neuropharmacological experiments with computational modeling, we have shown that cholinergic modulation may enhance associative memory function in piriform (olfactory) cortex. We have shown that the acetylcholine analogue carbachol selectively suppresses synaptic transmission between cells within piriform cortex, while leaving input connections unaffected. When tested in a computational model of piriform cortex, this selective suppression, applied during learning, enhances associative memory performance.

1 INTRODUCTION

A wide range of behavioral studies support a role for the neurotransmitter acetylcholine in memory function (Kopelman, 1986; Hagan and Morris, 1989). However, the role of acetylcholine in memory function has not been linked to the specific neuropharmacological effects of this transmitter within cerebral cortical networks. For several years, we have explored cerebral cortical associative memory function using the piriform cortex as a model system (Wilson and Bower, 1988; Bower, 1990; Hasselmo et al., 1991). The anatomical structure of piriform cortex (represented schematically in figure 1) shows the essential features of more abstract associative matrix memory models (Haberly and Bower, 1989).¹ Afferent fibers in layer 1a provide widely distributed input, while intrinsic fibers in layer 1b provide extensive excitatory connections between cells within the cortex. Computational models of piriform cortex demonstrate a theoretical capacity for associative memory function (Wilson and Bower, 1988; Bower, 1990; Hasselmo et al., 1991). Recently, we have investigated differences in the physiological properties of the afferent and intrinsic fiber systems, using modeling to test how these differences affect memory function. In the experiments described below, we found a selective cholinergic suppression of intrinsic fiber synaptic transmission. When tested in a simplified model of piriform cortex, this modulation enhances associative memory performance.

¹For descriptions of standard associative memory models, see for example (Anderson et al., 1977; Kohonen et al., 1977).

[Figure 1: Schematic representation of piriform cortex, showing afferent input $A_i$ (afferent fiber synapses, layer 1a), intrinsic connections $B_{ij}$ (intrinsic fiber synapses, layer 1b), lateral inhibition $H_{ij}$ (via interneurons), neuron activation $a_i(t)$, and neuron output $g(a_i(t))$.]

2 EXPERIMENTS

To study differences in the effect of acetylcholine on the afferent and intrinsic fiber systems, we applied the pharmacological agent carbachol (a chemical analogue of acetylcholine) to a brain slice preparation of piriform cortex while monitoring changes in the strength of synaptic transmission associated with each fiber system. In these experiments, both extracellular and intracellular recordings demonstrated clear differences in the effects of carbachol on synaptic transmission (Hasselmo and Bower, 1991). The results in figure 2 show that synaptic potentials evoked by activating intrinsic fibers in layer 1b were strongly suppressed in the presence of
The results in figure 2 show that synaptic potentials evoked by activating intrinsic fibers in layer 1b were strongly suppressed in the presence of 1 For descriptions of standard associative memory models, see for example (Anderson et al., 1977; Kohonen et al., 1977). 47 48 Hasselmo, Anderson, and Bower 100llM carbachol, while at the same concentration, synaptic potentials evoked by stimulation of afferent fibers in layer la showed almost no change. 1x 10? o M Carbachol Control Washout 3 2 V V V LAYER lA 1 V LAYER 1B O.SmV 3 2 V V.5mv 20m. I Figure 2: Synaptic potentials recorded in layer la and layer Ib before, during, and after perfusion with the acetylcholine analogue carbachol. Carbachol selectively suppresses layer Ib (intrinsic fiber) synaptic transmission. These experiments demonstrate that there is a substantial difference in the neurochemical modulation of synapses associated with the afferent and intrinsic fiber systems within piriform cortex. Cholinergic agents selectively suppress intrinsic fiber synaptic transmission without affecting afferent fiber synaptic transmission. While interesting in purely pharmacological terms, these differential effects are even more intriguing when considered in the context of our computational models of memory function in this region. 3 MODELING To investigate the effects of cholinergic suppression of intrinsic fiber synaptic transmission on associative memory function, we developed a simplified model of the piriform cortex. This simplified model is shown schematically in figure 1. At each time step, a neuron was picked at random, and its activation was updated as N ai(t + 1) = Ai(t) + 2)(1- C)Bij - Hij]g(aj(t?. j=1 where N = the number of neurons; t = time E {O, 1,2, ... }; c = a parameter representing the amount of acetylcholine present. c E [0,1]; ai = the activation or membrane potential of neuron i; g(ai) = the output or firing frequency of neuron i given ai; Ai the input to neuron i, representing the afferent input from the olfactory bulb; Bij = the weight matrix or the synaptic strength from neuron j to neuron i; and Hij = the inhibition matrix or the amount that neuron j inhibits neuron i. To account for the local nature of inhibition in the piriform cortex, Hij = 0 = Cholinergic Modulation May Enhance Cortical Associative Memory Function for Ii - il > r and Hi; = h for Ii - il < r, where r is the inhibition radius. The function g(ai) was set to 0 if at < (Ja, where (Ja = a firing threshold; otherwise, it was set to "Yatanh(ai - (Ja), where "Ya = a firing gain. The weights were updated every N time steps according to the following hebbian learning rule. Bij = !(Wij) AWij = Wij(t + N) - Wij(t) = (1- c)"Yl(ai - (Jl)g(aj) The function !(.) is a saturating function, similar to g( .), used so that the weights could not become negative or grow arbitrarily large (representing a restriction on how effective synapses could become). "Yl is a parameter that adjusts learning speed, and (Jl is a learning threshold. The weights were updated every N time steps to account for the different time scales between synapse modification and neuron settling. 3.1 TRAINING OF THE MODEL During learning, the model was presented with various vectors (taken to represent odors) at the input (Ai(t?. The network was then allowed to run and the weights to adapt. The procedure for creating the set of vectors {Am 1m E {I, ... 
3.1 TRAINING OF THE MODEL

During learning, the model was presented with various vectors (taken to represent odors) at the input $A_i(t)$. The network was then allowed to run and the weights to adapt. The procedure for creating the set of vectors $\{A^m \mid m \in \{1, \ldots, M\}\}$ was: set $A_i^m = \max\{0, G(\mu, \sigma)\}$, where $G$ is a Gaussian with mean $\mu$ and standard deviation $\sigma$, and normalize the whole vector so that $\|A^m\|^2 = N(\sigma^2 + \mu^2)$. Here $M$ is the number of memories or odors presented to the network during training, and $A_i^m$ is the input to neuron $i$ while odor $m$ is present. During learning, in the asynchronous update equation, $A_i(t) = A_i^1$ for $T$ time steps, then $A_i(t) = A_i^2$ for the next $T$ time steps, and so on; i.e., the various odors were presented cyclically.

3.2 PERFORMANCE MEASURE FOR THE MODEL

The piriform cortex gets inputs from the olfactory bulb and sends outputs to other areas of the brain. Assuming that during recall the network receives noisy versions of the learned input patterns (or odors), we presume the piriform cortex performs a useful service if it reduces the chance of error in deciding which odor is present at the input. One way to quantify this is with the minimum probability of classification error, $P_e$, from the field of pattern recognition.² For the case of 2 odors corrupted by Gaussian noise, $P_e$ is the area underneath the intersection of the Gaussians. For spherically symmetric Gaussians with mean vectors $\mu_1$ and $\mu_2$ and identical standard deviations $\sigma$,
$$P_e = \frac{1}{\sqrt{\pi}} \int_{d/(2\sqrt{2}\sigma)}^{\infty} e^{-u^2}\, du, \qquad d = \|\mu_1 - \mu_2\|.$$
Thus, the important parameter is the amount of overlap as quantified by $d/\sigma$: the larger $d/\sigma$, the lower the overlap and $P_e$.

For more than 2 odors, and for non-Gaussian noise or non-spherically-symmetric Gaussian noise, the equation for $P_e$ becomes less tractable. But in keeping with the above calculation, an analogue of $d/\sigma$ was developed as follows: $\sigma_i^2$ was set to $\langle \|x - \mu_i\|^2 \rangle$, and then $\beta$ was defined as
$$\beta = \sum_{i<j} \frac{\|\mu_i - \mu_j\|}{\tfrac{1}{2}(\sigma_i + \sigma_j)}, \qquad i, j \in \{1, \ldots, M\}.$$
Here, $\beta$ is the analogue of $d/\sigma$ in the previous paragraph and is similar to an average of $d/\sigma$ over all odor pairs. For the model, if $\beta$ is larger for the output vectors than for the input vectors, there is less overlap in the outputs, classification of the outputs is easier than classification of the inputs, and the model is serving a useful purpose. Thus, we use $p = \beta_{\mathrm{out}}/\beta_{\mathrm{in}}$ as the performance measure.

3.3 TESTING THE MODEL

The model was designed to show whether the presence of acetylcholine has any influence on learning performance. To that end, the model was allowed to learn for a time with various levels of acetylcholine present, and then acetylcholine was turned off and the model was tested. For testing, weight adaptation was turned off, acetylcholine influence was turned off ($c = 0$), noisy versions of the various odors presented during learning were presented at the input, and the network was allowed to settle. From these noisy input/output pairs, the $\sigma$'s could be estimated, $\beta_{\mathrm{in}}$ and $\beta_{\mathrm{out}}$ could be calculated, and finally $p$ could be calculated. Then, the state of the network could either be reset (for a new learning run) or be set back to what it was before the test (so that learning could continue as if uninterrupted).

²See, for example, (Duda and Hart, 1973).
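The overlap measure $\beta$ is likewise only a few lines of NumPy. This is our own sketch (not the authors' code); vectors_by_odor is a hypothetical list holding one array of noisy samples per odor.

```python
import numpy as np

def beta_measure(vectors_by_odor):
    """beta = sum_{i<j} ||mu_i - mu_j|| / (0.5 (sigma_i + sigma_j)),
    computed from noisy samples grouped by odor; larger beta means
    less overlap between the odor representations."""
    mus = [np.mean(v, axis=0) for v in vectors_by_odor]
    sigmas = [np.sqrt(np.mean(np.sum((v - m) ** 2, axis=1)))
              for v, m in zip(vectors_by_odor, mus)]
    M = len(mus)
    beta = 0.0
    for i in range(M):
        for j in range(i + 1, M):
            beta += (np.linalg.norm(mus[i] - mus[j])
                     / (0.5 * (sigmas[i] + sigmas[j])))
    return beta
```

The reported performance is then p = beta_measure(outputs) / beta_measure(inputs).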
3.4 RESULTS OF TESTING

A typical example of a test run is shown in figure 3. There, $c$ was varied from 0 (no acetylcholine) to 0.9 (a large concentration), and the various other parameters were: $N = 10$, $M = 10$, $r = 2$, $h = 0.3$, $\gamma_a = 1$, $\theta_a = 1$, $\gamma_l = 10^{-3}$, $\theta_l = 1$, and $T = 10$. In the figure, large dark rectangles represent larger values of $p$; small or non-existent rectangles represent values of $p \le 1$.

Notice that, for a fixed amount of acetylcholine, the model's performance rises and then falls over time. Ideally, the performance should rise and then flatten out, as further learning should not degrade performance. The weight adaptation equation used in the model was not optimized to preclude overlearning (where all of the weights being reinforced have saturated to the largest allowed value). In principle, the function $f(\cdot)$ could be used for this, perhaps in conjunction with a weight decay term. This was not of great concern, since the peak performance is what indicates whether or not acetylcholine has a useful effect. Also, the more acetylcholine present, the longer the learning took. This is reasonable since, before saturation, $\Delta W \propto (1-c)$.

Figure 4 shows the maximum average performance for various values of acetylcholine. Averages were calculated by doing many tests like the one above. This is useful since the odor inputs and the individual tests are stochastic in nature. Obviously, the larger values of acetylcholine enhance performance.
[Figure 3: Sample test run, with time $t$ (0 to 4.5 × 10⁵) on the horizontal axis and acetylcholine level $c$ (0 to 0.9) on the vertical axis. Larger black rectangles indicate better performance.]

[Figure 4: Maximum average performance versus acetylcholine concentration $c$ (0.0 to 1.0). Acetylcholine increases the performance level attained.]

4 CONCLUSION

The results from the model show that suppression of connections between cells within the piriform cortex during learning enhances performance during recall. Thus, acetylcholine released in the cortex during learning may enhance associative memory function. These results may explain some of the behavioral evidence for the role of acetylcholine in memory function, and predict that acetylcholine may be released in cortical structures preferentially during learning. Further biological experiments are necessary to confirm this prediction.

Acknowledgements

This work was supported by ONR contracts N00014-88-K-0513 and N00014-87-K-0377 and NIH postdoctoral training grant NS07251.

References

J.A. Anderson, J.W. Silverstein, S.A. Ritz and R.S. Jones (1977) Distinctive features, categorical perception, and probability learning: Some applications of a neural model. Psychol. Rev. 84: 413-451.
J.M. Bower (1990) Reverse engineering the nervous system: An anatomical, physiological and computer based approach. In S. Zornetzer, J. Davis and C. Lau (eds.), An Introduction to Neural and Electronic Networks. San Diego: Academic Press.
R. Duda and P. Hart (1973) Pattern Classification and Scene Analysis. New York: Wiley.
L.B. Haberly and J.M. Bower (1989) Olfactory cortex: Model circuit for study of associative memory? Trends Neurosci. 12: 258-264.
J.J. Hagan and R.G.M. Morris (1989) The cholinergic hypothesis of memory: A review of animal experiments. In L.L. Iversen, S.D. Iversen and S.H. Snyder (eds.) Handbook of Psychopharmacology Vol. 20. New York: Plenum Press.
M.E. Hasselmo, M.A. Wilson, B.P. Anderson and J.M. Bower (1991) Associative memory function in piriform (olfactory) cortex: Computational modeling and neuropharmacology. In: Cold Spring Harbor Symposium on Quantitative Biology: The Brain. Cold Spring Harbor: Cold Spring Harbor Laboratory.
M.E. Hasselmo and J.M. Bower (1991) Cholinergic suppression specific to intrinsic not afferent fiber synapses in piriform (olfactory) cortex. J. Neurophysiol., in press.
T. Kohonen, P. Lehtio, J. Rovamo, J. Hyvarinen, K. Bry and L. Vainio (1977) A principle of neural associative memory. Neurosci. 2: 1065-1076.
M.D. Kopelman (1986) The cholinergic neurotransmitter system in human memory and dementia: A review. Quart. J. Exp. Psychol.
38A: 535-573.
M.A. Wilson and J.M. Bower (1988) A computer simulation of olfactory cortex with functional implications for storage and retrieval of olfactory information. In D. Anderson (ed.) Neural Information Processing Systems. AIP Press: New York.
Graph Laplacian Regularization for Large-Scale Semidefinite Programming

Fei Sha, Computer Science Division, UC Berkeley, CA 94720 ([email protected])
Kilian Q. Weinberger, Dept of Computer and Information Science, U of Pennsylvania, Philadelphia, PA 19104 ([email protected])
Qihui Zhu, Dept of Computer and Information Science, U of Pennsylvania, Philadelphia, PA 19104 ([email protected])
Lawrence K. Saul, Dept of Computer Science and Engineering, UC San Diego, La Jolla, CA 92093 ([email protected])

Abstract

In many areas of science and engineering, the problem arises of how to discover low dimensional representations of high dimensional data. Recently, a number of researchers have converged on common solutions to this problem using methods from convex optimization. In particular, many results have been obtained by constructing semidefinite programs (SDPs) with low rank solutions. While the rank of matrix variables in SDPs cannot be directly constrained, it has been observed that low rank solutions emerge naturally by computing high variance or maximal trace solutions that respect local distance constraints. In this paper, we show how to solve very large problems of this type by a matrix factorization that leads to much smaller SDPs than those previously studied. The matrix factorization is derived by expanding the solution of the original problem in terms of the bottom eigenvectors of a graph Laplacian. The smaller SDPs obtained from this matrix factorization yield very good approximations to solutions of the original problem. Moreover, these approximations can be further refined by conjugate gradient descent. We illustrate the approach on localization in large scale sensor networks, where optimizations involving tens of thousands of nodes can be solved in just a few minutes. In the end we discuss extensions of the approach to more general problems.

1 Introduction

In many areas of science and engineering, the problem arises of how to discover low dimensional representations of high dimensional data. Typically, this high dimensional data is represented in the form of large graphs or matrices. Such data arises in many applications, including manifold learning [12], robot navigation [3], protein clustering [6], and sensor localization [1]. In all these applications, the challenge is to compute low dimensional representations that are consistent with observed measurements of local proximity. For example, in robot path mapping, the robot's locations must be inferred from the high dimensional description of its state in terms of sensorimotor input. In this setting, we expect similar state descriptions to map to similar locations. Likewise, in sensor networks, the locations of individual nodes must be inferred from the estimated distances between nearby sensors. Again, the challenge is to find a planar representation of the sensors that preserves local distances.

In general, it is possible to formulate these problems as simple optimizations over the low dimensional representations $\vec{x}_i$ of individual instances (e.g., robot states, sensor nodes).
Two difficulties arise, however, from this approach. First, only low rank solutions for the inner product matrices X yield low dimensional representations for the vectors $\vec{x}_i$. Rank constraints, however, are non-convex; thus SDPs and other convex relaxations are not guaranteed to yield the desired low dimensional solutions. Second, the resulting SDPs do not scale very well to large problems. Despite the theoretical guarantees that follow from convexity, it remains prohibitively expensive to solve SDPs over matrices with (say) tens of thousands of rows and similarly large numbers of constraints.

For the first problem of "rank regularization", an apparent solution has emerged from recent work in manifold learning [12] and nonlinear dimensionality reduction [14]. This work has shown that while the rank of solutions from SDPs cannot be directly constrained, low rank solutions often emerge naturally by computing maximal trace solutions that respect local distance constraints. Maximizing the trace of the inner product matrix X has the effect of maximizing the variance of the low dimensional representation $\{\vec{x}_i\}$. This idea was originally introduced as "semidefinite embedding" [12, 14], then later described as "maximum variance unfolding" [9] (and yet later as "kernel regularization" [6, 7]). Here, we adopt the name maximum variance unfolding (MVU), which seems to be currently accepted [13, 15] as best capturing the underlying intuition.

This paper addresses the second problem mentioned above: how to solve very large problems in MVU. We show how to solve such problems by approximately factorizing the large $n \times n$ matrix X as $X \approx QYQ^\top$, where Q is a pre-computed $n \times m$ rectangular matrix with $m \ll n$. The factorization leaves only the much smaller $m \times m$ matrix Y to be optimized with respect to local distance constraints. With this factorization, and by collecting constraints using the Schur complement lemma, we show how to rewrite the original optimization over the large matrix X as a simple SDP involving the smaller matrix Y. This SDP can be solved very quickly, yielding an accurate approximation to the solution of the original problem. Moreover, if desirable, this solution can be further refined [1] by (non-convex) conjugate gradient descent in the vectors $\{\vec{x}_i\}$.

The main contribution of this paper is the matrix factorization that makes it possible to solve large problems in MVU. Where does the factorization come from? Either implicitly or explicitly, all problems of this sort specify a graph whose nodes represent the vectors $\{\vec{x}_i\}$ and whose edges represent local distance constraints. The matrix factorization is obtained by expanding the low dimensional representation of these nodes (e.g., sensor locations) in terms of the $m \ll n$ bottom (smoothest) eigenvectors of the graph Laplacian. Due to the local distance constraints, one expects the low dimensional representation of these nodes to vary smoothly as one traverses edges in the graph. The presumption of smoothness justifies the partial orthogonal expansion in terms of the bottom eigenvectors of the graph Laplacian [5]. Similar ideas have been widely applied in graph-based approaches to semi-supervised learning [4].
Matrix factorizations of this type have also been previously studied for manifold learning; in [11, 15], though, the local distance constraints were not properly formulated to permit the large-scale applications considered here, while in [8], the approximation was not considered in conjunction with a variance-maximizing term to favor low dimensional representations. The approach in this paper applies generally to any setting in which low dimensional representations are derived from an SDP that maximizes variance subject to local distance constraints. For concreteness, we illustrate the approach on the problem of localization in large scale sensor networks, as recently described by [1]. Here, we are able to solve optimizations involving tens of thousands of nodes in just a few minutes. Similar applications to the SDPs that arise in manifold learning [12], robot path mapping [3], and protein clustering [6, 7] present no conceptual difficulty.

This paper is organized as follows. Section 2 reviews the problem of localization in large scale sensor networks and its formulation by [1] as an SDP that maximizes variance subject to local distance constraints. Section 3 shows how we solve large problems of this form: by approximating the inner product matrix of sensor locations as the product of smaller matrices, by solving the smaller SDP that results from this approximation, and by refining the solution from this smaller SDP using local search. Section 4 presents our experimental results on several simulated networks. Finally, section 5 concludes by discussing further opportunities for research.

2 Sensor localization via maximum variance unfolding

The problem of sensor localization is best illustrated by example; see Fig. 1. Imagine that sensors are located in major cities throughout the continental US, and that nearby sensors can estimate their distances to one another (e.g., via radio transmitters). From only this local information, the problem of sensor localization is to compute the individual sensor locations and to identify the whole network topology. In purely mathematical terms, the problem can be viewed as computing a low rank embedding in two or three dimensional Euclidean space subject to local distance constraints.

Figure 1: Sensors distributed over US cities. Distances are estimated between nearby cities within a fixed radius.

We assume there are n sensors distributed in the plane and formulate the problem as an optimization over their planar coordinates $\vec{x}_1, \ldots, \vec{x}_n \in \mathbb{R}^2$. (Sensor localization in three dimensional space can be solved in a similar way.) We define a neighbor relation $i \sim j$ if the ith and jth sensors are sufficiently close to estimate their pairwise distance via limited-range radio transmission. From such (noisy) estimates of local pairwise distances $\{d_{ij}\}$, the problem of sensor localization is to infer the planar coordinates $\{\vec{x}_i\}$. Work on this problem has typically focused on minimizing the sum-of-squares loss function [1] that penalizes large deviations from the estimated distances:

$$\min_{\vec{x}_1,\ldots,\vec{x}_n} \; \sum_{i \sim j} \left( \|\vec{x}_i - \vec{x}_j\|^2 - d_{ij}^2 \right)^2 \qquad (1)$$

In some applications, the locations of a few sensors are also known in advance. For simplicity, in this work we consider the scenario where no such "anchor points" are available as prior knowledge, and the goal is simply to position the sensors up to a global rotation, reflection, and translation. Thus, to the above optimization, without loss of generality we can add the centering constraint:

$$\Big\| \sum_i \vec{x}_i \Big\|^2 = 0. \qquad (2)$$
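As a concrete illustration of the objective in eqs. (1)-(2), here is a minimal Python sketch; it is ours, not code from the paper, and the three-sensor layout and function names are illustrative assumptions.

```python
import numpy as np

def localization_loss(X, pairs, d):
    """Sum-of-squares loss of eq. (1): X is an (n, 2) array of planar
    coordinates, pairs is a list of neighbor index pairs (i, j), and d
    holds the corresponding (noisy) distance estimates d_ij."""
    loss = 0.0
    for (i, j), dij in zip(pairs, d):
        diff = X[i] - X[j]
        loss += (diff @ diff - dij**2) ** 2
    return loss

# Toy example: three sensors, all pairs observed without noise.
X_true = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
pairs = [(0, 1), (0, 2), (1, 2)]
d = [np.linalg.norm(X_true[i] - X_true[j]) for i, j in pairs]
X_centered = X_true - X_true.mean(axis=0)   # enforce eq. (2)
print(localization_loss(X_centered, pairs, d))  # 0.0 at the true layout
```

Because eq. (1) depends only on pairwise differences, any translation of a solution is also a solution; the centering constraint of eq. (2) removes exactly this degeneracy.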
It is straightforward to extend our approach to incorporate anchor points, which generally leads to even better solutions. In this case, the centering constraint is not needed.

The optimization in eq. (1) is not convex; hence, it is likely to be trapped by local minima. By relaxing the constraint that the sensor locations $\vec{x}_i$ lie in the $\mathbb{R}^2$ plane, we obtain a convex optimization that is much more tractable [1]. This is done by rewriting the optimization in eqs. (1-2) in terms of the elements of the inner product matrix $X_{ij} = \vec{x}_i \cdot \vec{x}_j$. In this way, we obtain:

$$\text{Minimize:} \;\; \sum_{i \sim j} \left( X_{ii} - 2X_{ij} + X_{jj} - d_{ij}^2 \right)^2 \quad \text{subject to:} \;\; \text{(i)} \; \sum_{ij} X_{ij} = 0 \;\; \text{and} \;\; \text{(ii)} \; X \succeq 0. \qquad (3)$$

The first constraint centers the sensors on the origin, as in eq. (2), while the second constraint specifies that X is positive semidefinite, which is necessary to interpret it as an inner product matrix in Euclidean space. In this case, the vectors $\{\vec{x}_i\}$ are determined (up to rotation) by singular value decomposition. The convex relaxation of the optimization in eqs. (1-2) drops the constraint that the vectors $\vec{x}_i$ lie in the $\mathbb{R}^2$ plane. Instead, the vectors will more generally lie in a subspace of dimensionality equal to the rank of the solution X. To obtain planar coordinates, one can project these vectors into their two dimensional subspace of maximum variance, obtained from the top two eigenvectors of X. Unfortunately, if the rank of X is high, this projection loses information. As the error of the projection grows with the rank of X, we would like to enforce that X has low rank. However, the rank of a matrix is not a convex function of its elements; thus it cannot be directly constrained as part of a convex optimization.

Mindful of this problem, the approach to sensor localization in [1] borrows an idea from recent work in unsupervised learning [12, 14]. Very simply, an extra term is added to the loss function that favors solutions with high variance, or equivalently, solutions with high trace. (The trace is proportional to the variance assuming that the sensors are centered on the origin, since $\mathrm{tr}(X) = \sum_i \|\vec{x}_i\|^2$.) The extra variance term in the loss function favors low rank solutions; intuitively, it is based on the observation that a flat piece of paper has greater diameter than a crumpled one. Following this intuition, we consider the following optimization:

$$\text{Maximize:} \;\; \mathrm{tr}(X) - \nu \sum_{i \sim j} \left( X_{ii} - 2X_{ij} + X_{jj} - d_{ij}^2 \right)^2 \quad \text{subject to:} \;\; \text{(i)} \; \sum_{ij} X_{ij} = 0 \;\; \text{and} \;\; \text{(ii)} \; X \succeq 0. \qquad (4)$$

The parameter $\nu > 0$ balances the trade-off between maximizing variance and preserving local distances. This general framework for trading off global variance versus local rigidity has come to be known as maximum variance unfolding (MVU) [9, 15, 13]. As demonstrated in [1, 9, 6, 14], these types of optimizations can be written as semidefinite programs (SDPs) [10]. Many general-purpose solvers for SDPs exist in the public domain (e.g., [2]), but even for systems with sparse constraints, they do not scale very well to large problems. Thus, for small networks, this approach to sensor localization is viable, but for large networks ($n \sim 10^4$), exact solutions are prohibitively expensive. This leads us to consider the methods in the next section.
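Before turning to large-scale methods, the projection step just described (recovering planar coordinates from the top two eigenvectors of X) can be made concrete. This sketch is ours, not the authors'; the Gram matrix is built from known points purely so the recovery can be checked.

```python
import numpy as np

# Illustrative Gram matrix: built from known planar points so the
# recovered coordinates can be checked against the originals.
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
pts -= pts.mean(axis=0)          # centering, as in constraint (i)
X = pts @ pts.T                  # inner product matrix, X >= 0 by construction

# Project onto the two dimensional subspace of maximum variance:
# scale the top two eigenvectors of X by the square roots of their eigenvalues.
evals, evecs = np.linalg.eigh(X)             # eigenvalues in ascending order
top2 = evecs[:, -2:] * np.sqrt(evals[-2:])   # (n, 2) recovered coordinates

# The recovery is exact up to rotation/reflection; check via a distance.
orig_d = np.linalg.norm(pts[0] - pts[3])
rec_d = np.linalg.norm(top2[0] - top2[3])
print(np.isclose(orig_d, rec_d))  # True
```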
3 Large-scale maximum variance unfolding

Most SDP solvers are based on interior-point methods whose time-complexity scales cubically in the matrix size and number of constraints [2]. To solve large problems in MVU, even approximately, we must therefore reduce them to SDPs over small matrices with small numbers of constraints.

3.1 Matrix factorization

To obtain an optimization involving smaller matrices, we appeal to ideas in spectral graph theory [5]. The sensor network defines a connected graph whose edges represent local pairwise connectivity. Whenever two nodes share an edge in this graph, we expect the locations of these nodes to be relatively similar. We can view the location of the sensors as a function that is defined over the nodes of this graph. Because the edges represent local distance constraints, we expect this function to vary smoothly as we traverse edges in the graph.

The idea of graph regularization in this context is best understood by analogy. If a smooth function is defined on a bounded interval of $\mathbb{R}^1$, then from real analysis, we know that it can be well approximated by a low order Fourier series. A similar type of low order approximation exists if a smooth function is defined over the nodes of a graph. This low-order approximation on graphs will enable us to simplify the SDPs for MVU, just as low-order Fourier expansions have been used to regularize many problems in statistical estimation.

Function approximations on graphs are most naturally derived from the eigenvectors of the graph Laplacian [5]. For unweighted graphs, the graph Laplacian L computes the quadratic form

$$f^\top L f = \sum_{i \sim j} (f_i - f_j)^2 \qquad (5)$$

on functions $f \in \mathbb{R}^n$ defined over the nodes of the graph. The eigenvectors of L provide a set of basis functions over the nodes of the graph, ordered by smoothness. Thus, smooth functions f can be well approximated by linear combinations of the bottom eigenvectors of L.

Expanding the sensor locations $\vec{x}_i$ in terms of these eigenvectors yields a compact factorization for the inner product matrix X. Suppose that $\vec{x}_i \approx \sum_{\alpha=1}^{m} Q_{i\alpha} \vec{y}_\alpha$, where the columns of the $n \times m$ rectangular matrix Q store the m bottom eigenvectors of the graph Laplacian (excluding the uniform eigenvector with zero eigenvalue). Note that in this approximation, the matrix Q can be cheaply precomputed from the unweighted connectivity graph of the sensor network, while the vectors $\vec{y}_\alpha$ play the role of unknowns that depend in a complicated way on the local distance estimates $d_{ij}$. Let Y denote the $m \times m$ inner product matrix of these vectors, with elements $Y_{\alpha\beta} = \vec{y}_\alpha \cdot \vec{y}_\beta$. From the low-order approximation to the sensor locations, we obtain the matrix factorization:

$$X \approx QYQ^\top. \qquad (6)$$

Eq. (6) approximates the inner product matrix X as the product of much smaller matrices. Using this approximation for localization in large scale networks, we can solve an optimization for the much smaller $m \times m$ matrix Y, as opposed to the original $n \times n$ matrix X. The optimization for the matrix Y is obtained by substituting eq. (6) wherever the matrix X appears in eq. (4). Some simplifications occur due to the structure of the matrix Q. Because the columns of Q store mutually orthogonal eigenvectors, it follows that $\mathrm{tr}(QYQ^\top) = \mathrm{tr}(Y)$. Because we do not include the uniform eigenvector in Q, it follows that $QYQ^\top$ automatically satisfies the centering constraint, which can therefore be dropped. Finally, it is sufficient to constrain $Y \succeq 0$, which implies that $QYQ^\top \succeq 0$. With these simplifications, we obtain the following optimization:

$$\text{Maximize:} \;\; \mathrm{tr}(Y) - \nu \sum_{i \sim j} \left[ (QYQ^\top)_{ii} - 2(QYQ^\top)_{ij} + (QYQ^\top)_{jj} - d_{ij}^2 \right]^2 \quad \text{subject to:} \;\; Y \succeq 0 \qquad (7)$$

Eq. (6) can alternately be viewed as a form of regularization, as it constrains neighboring sensors to have nearby locations even when the estimated local distances $d_{ij}$ suggest otherwise (e.g., due to noise). Similar forms of graph regularization have been widely used in semi-supervised learning [4].
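How Q might be precomputed is easy to illustrate; the sketch below is ours, not code from the paper. For clarity it uses a dense eigendecomposition on a toy chain graph; for networks with tens of thousands of nodes a sparse eigensolver would be used instead.

```python
import numpy as np

def bottom_eigenvectors(pairs, n, m):
    """Build the unweighted graph Laplacian L = D - W from neighbor pairs
    and return the m bottom eigenvectors, excluding the constant eigenvector
    with zero eigenvalue (assumes a connected graph)."""
    W = np.zeros((n, n))
    for i, j in pairs:
        W[i, j] = W[j, i] = 1.0
    L = np.diag(W.sum(axis=1)) - W
    evals, evecs = np.linalg.eigh(L)   # eigenvalues in ascending order
    return evecs[:, 1:m+1]             # columns 1..m: smoothest non-constant modes

# Toy chain graph on 6 nodes; Q is the n-by-m expansion basis of eq. (6).
pairs = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5)]
Q = bottom_eigenvectors(pairs, n=6, m=3)
print(Q.shape)  # (6, 3)
```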
3.2 Formulation as SDP

As noted earlier, our strategy for solving large problems in MVU depends on casting the required optimizations as SDPs over small matrices with few constraints. The matrix factorization in eq. (6) leads to an optimization over the $m \times m$ matrix Y, as opposed to the $n \times n$ matrix X. In this section, we show how to cast this optimization as a correspondingly small SDP. This requires us to reformulate the quadratic optimization over $Y \succeq 0$ in eq. (7) in terms of a linear objective function with linear or positive semidefinite constraints.

We start by noting that the objective function in eq. (7) is a quadratic function of the elements of the matrix Y. Let $\mathcal{Y} \in \mathbb{R}^{m^2}$ denote the vector obtained by concatenating all the columns of Y. With this notation, the objective function (up to an additive constant) takes the form

$$b^\top \mathcal{Y} - \mathcal{Y}^\top A \mathcal{Y}, \qquad (8)$$

where $A \in \mathbb{R}^{m^2 \times m^2}$ is the positive semidefinite matrix that collects all the quadratic coefficients in the objective function and $b \in \mathbb{R}^{m^2}$ is the vector that collects all the linear coefficients. Note that the trace term in the objective function, $\mathrm{tr}(Y)$, is absorbed by the vector b.

With the above notation, we can write the optimization in eq. (7) as an SDP in standard form. As in [8], this is done in two steps. First, we introduce a dummy variable $\ell$ that serves as a lower bound on the quadratic piece of the objective function in eq. (8). Next, we express this bound as a linear matrix inequality via the Schur complement lemma. Combining these steps, we obtain the SDP:

$$\text{Maximize:} \;\; b^\top \mathcal{Y} - \ell \quad \text{subject to:} \;\; \text{(i)} \; Y \succeq 0 \;\; \text{and} \;\; \text{(ii)} \; \begin{pmatrix} I & A^{\frac{1}{2}} \mathcal{Y} \\ (A^{\frac{1}{2}} \mathcal{Y})^\top & \ell \end{pmatrix} \succeq 0. \qquad (9)$$

In the second constraint of this SDP, we have used I to denote the $m^2 \times m^2$ identity matrix and $A^{\frac{1}{2}}$ to denote the matrix square root. Thus, via the Schur lemma, this constraint expresses the lower bound $\ell \geq \mathcal{Y}^\top A \mathcal{Y}$, and the SDP is seen to be equivalent to the optimization in eqs. (7-8).

The SDP in eq. (9) represents a drastic reduction in complexity from the optimization in eq. (7). The only variables of the SDP are the $m(m+1)/2$ elements of Y and the unknown scalar $\ell$. The only constraints are the positive semidefinite constraint on Y and the linear matrix inequality of size $m^2 \times m^2$. Note that the complexity of this SDP does not depend on the number of nodes or edges in the network. As a result, this approach scales very well to large problems in sensor localization.

In the above formulation, it is worth noting the important role played by quadratic penalties. The use of the Schur lemma in eq. (9) was conditioned on the quadratic form of the objective function in eq. (7). Previous work on MVU has enforced the distance constraints as strict equalities [12], as one-sided inequalities [9, 11], and as soft constraints with linear penalties [14]. Expressed as SDPs, these earlier formulations of MVU involved as many constraints as edges in the underlying graph, even with the matrix factorization in eq. (6). Thus, the speed-ups obtained here over previous approaches are not merely due to graph regularization, but more precisely to its use in conjunction with quadratic penalties, all of which can be collected in a single linear matrix inequality via the Schur lemma.
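To convey the shape of the reduced problem, here is a schematic rendering using the CVXPY modeling package; this is our choice of tooling, not the paper's (the paper cites general-purpose solvers such as CSDP [2]). The coefficients A and b are random stand-ins for those of eq. (8) — in the real problem they are assembled from Q and the distance estimates — and we let the modeling layer perform the Schur-complement reduction that eq. (9) spells out by hand.

```python
import numpy as np
import cvxpy as cp

m = 4
rng = np.random.default_rng(0)

# Stand-ins for the coefficients of eq. (8).
M = rng.standard_normal((m**2, m**2))
A = M @ M.T                      # positive semidefinite quadratic term
b = rng.standard_normal(m**2)

Y = cp.Variable((m, m), PSD=True)          # constraint (i): Y >= 0
y = cp.vec(Y)                              # stacked columns of Y
objective = cp.Maximize(b @ y - cp.quad_form(y, A))
problem = cp.Problem(objective)
problem.solve()
print(problem.status, problem.value)
```

Written either way, the key point is the one the section makes: the variable count is O(m^2), independent of the number of sensors n.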
3.3 Gradient-based improvement

While the matrix factorization in eq. (6) leads to much more tractable optimizations, it only provides an approximation to the global minimum of the original loss function in eq. (1). As suggested in [1], we can refine the approximation from eq. (9) by using it as an initial starting point for gradient descent in eq. (1). In general, gradient descent on non-convex functions can converge to undesirable local minima. In this setting, however, the solution of the SDP in eq. (9) provides a highly accurate initialization. Though no theoretical guarantees can be made, in practice we have observed that this initialization often lies in the basin of attraction of the true global minimum.

Our most robust results were obtained by a two-step process. First, starting from the m-dimensional solution of eq. (9), we used conjugate gradient methods to maximize the objective function in eq. (4). Though this objective function is written in terms of the inner product matrix X, the hill-climbing in this step was performed in terms of the vectors $\vec{x}_i \in \mathbb{R}^m$. While not always necessary, this first step was mainly helpful for localization in sensor networks with irregular (and particularly non-convex) boundaries. It seems generally difficult to represent such boundaries in terms of the bottom eigenvectors of the graph Laplacian. Next, we projected the results of this first step into the $\mathbb{R}^2$ plane and used conjugate gradient methods to minimize the loss function in eq. (1). This second step helps to correct patches of the network where either the graph regularization leads to oversmoothing and/or the rank constraint is not well modeled by MVU.
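The refinement step is straightforward to sketch with off-the-shelf tools; the following illustration on synthetic data is ours, not the authors' code, and the SDP initialization is replaced by a perturbed ground truth purely to keep the sketch short.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n = 30
X_true = rng.uniform(-0.5, 0.5, size=(n, 2))
pairs = [(i, j) for i in range(n) for j in range(i + 1, n)
         if np.linalg.norm(X_true[i] - X_true[j]) < 0.35]
d2 = np.array([np.sum((X_true[i] - X_true[j])**2) for i, j in pairs])
I, J = np.array([p[0] for p in pairs]), np.array([p[1] for p in pairs])

def loss(x):
    X = x.reshape(n, 2)
    r = np.sum((X[I] - X[J])**2, axis=1) - d2   # residuals of eq. (1)
    return np.sum(r**2)

def grad(x):
    X = x.reshape(n, 2)
    diff = X[I] - X[J]
    r = np.sum(diff**2, axis=1) - d2
    g = np.zeros_like(X)
    np.add.at(g, I, 4 * r[:, None] * diff)      # d(loss)/d(x_i)
    np.add.at(g, J, -4 * r[:, None] * diff)     # d(loss)/d(x_j)
    return g.ravel()

x0 = (X_true + 0.05 * rng.standard_normal((n, 2))).ravel()  # stand-in for the SDP solution
res = minimize(loss, x0, jac=grad, method='CG')
print(res.fun)   # small: the local distances are (nearly) reproduced
```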
4 Results

We evaluated our algorithm on two simulated sensor networks of different size and topology. We did not assume any prior knowledge of sensor locations (e.g., from anchor points). We added white noise to each local distance measurement with a standard deviation of 10% of the true local distance.

Figure 2: Sensor locations inferred for the n = 1055 largest cities in the continental US. On average, each sensor estimated local distances to 18 neighbors, with measurements corrupted by 10% Gaussian noise; see text. Left: sensor locations obtained by solving the SDP in eq. (9) using the m = 10 bottom eigenvectors of the graph Laplacian (computation time 4s). Despite the obvious distortion, the solution provides a good initial starting point for gradient-based improvement. Right: sensor locations after post-processing by conjugate gradient descent (additional computation time 3s).

The first simulated network, shown in Fig. 1, placed nodes at scaled locations of the n = 1055 largest cities in the continental US. Each node estimated the local distance to up to 18 other nodes within a radius of size r = 0.09. The SDP in eq. (9) was solved using the m = 10 bottom eigenvectors of the graph Laplacian. Fig. 2 shows the solution from this SDP (on the left), as well as the final result after gradient-based improvement (on the right), as described in section 3.3. From the figure, it can be seen that the solution of the SDP recovers the general topology of the network but tends to clump nodes together, especially near the boundaries. After gradient-based improvement, however, the inferred locations differ very little from the true locations. The construction and solution of the SDP required 4s of total computation time on a 2.4 GHz Pentium 4 desktop computer, while the post-processing by conjugate gradient descent took an additional 3s.

Figure 3: Results on a simulated network with n = 20000 uniformly distributed nodes inside a centered unit square. See text for details.

The second simulated network, shown in Fig. 3, placed nodes at n = 20000 uniformly sampled points inside the unit square. The nodes were then centered on the origin. Each node estimated the local distance to up to 20 other nodes within a radius of size r = 0.06. The SDP in eq. (9) was solved using the m = 10 bottom eigenvectors of the graph Laplacian. The computation time to construct and solve the SDP was 19s. The follow-up conjugate gradient optimization required 52s for 100 line searches. Fig. 3 illustrates the absolute positional errors of the sensor locations computed in three different ways: the solution from the SDP in eq. (8), the refined solution obtained by conjugate gradient descent, and the "baseline" solution obtained by conjugate gradient descent from a random initialization. For these plots, the sensors were colored so that the ground truth positioning reveals the word CONVEX in the foreground with a radial color gradient in the background. The refined solution in the third panel is seen to yield highly accurate results. (Note: the representations in the second and fourth panels were scaled by factors of 0.50 and 1028, respectively, to have the same size as the others.)

Figure 4: Left: the value of the loss function in eq. (1) from the solution of the SDP in eq. (8). Right: the computation time to solve the SDP. Both are plotted versus the number of eigenvectors, m, in the matrix factorization.

We also evaluated the effect of the number of eigenvectors, m, used in the SDP. (We focused on the role of m, noting that previous studies [1, 7] have thoroughly investigated the role of parameters such as the weight constant $\nu$, the sensor radius r, and the noise level.) For the simulated network with nodes at US cities, Fig. 4 plots the value of the loss function in eq. (1) obtained from the solution of eq. (8) as a function of m. It also plots the computation time required to create and solve the SDP. The figure shows that more eigenvectors lead to better solutions, but at the expense of increased computation time. In our experience, there is a "sweet spot" around $m \approx 10$ that best manages this tradeoff. Here, the SDP can typically be solved in seconds while still providing a sufficiently accurate initialization for rapid convergence of subsequent gradient-based methods.

Finally, though not reported here due to space constraints, we also tested our approach on various data sets in manifold learning from [12]. Our approach generally reduced previous computation times of minutes or hours to seconds with no noticeable loss of accuracy.

5 Discussion

In this paper, we have proposed an approach for solving large-scale problems in MVU. The approach makes use of a matrix factorization computed from the bottom eigenvectors of the graph Laplacian. The factorization yields accurate approximate solutions which can be further refined by local search. The power of the approach was illustrated by simulated results on sensor localization. The networks in section 4 have far more nodes and edges than could be analyzed by previously formulated SDPs for these types of problems [1, 3, 6, 14].
Beyond the problem of sensor localization, our approach applies quite generally to other settings where low dimensional representations are inferred from local distance constraints. Thus we are hopeful that the ideas in this paper will find further use in areas such as robotic path mapping [3], protein clustering [6, 7], and manifold learning [12].

Acknowledgments

This work was supported by NSF Award 0238323.

References

[1] P. Biswas, T.-C. Liang, K.-C. Toh, T.-C. Wang, and Y. Ye. Semidefinite programming approaches for sensor network localization with noisy distance measurements. IEEE Transactions on Automation Science and Engineering, 3(4):360-371, 2006.
[2] B. Borchers. CSDP, a C library for semidefinite programming. Optimization Methods and Software, 11(1):613-623, 1999.
[3] M. Bowling, A. Ghodsi, and D. Wilkinson. Action respecting embedding. In Proceedings of the Twenty Second International Conference on Machine Learning (ICML-05), pages 65-72, Bonn, Germany, 2005.
[4] O. Chapelle, B. Schölkopf, and A. Zien, editors. Semi-Supervised Learning. MIT Press, Cambridge, MA, 2006.
[5] F. R. K. Chung. Spectral Graph Theory. American Mathematical Society, 1997.
[6] F. Lu, S. Keles, S. Wright, and G. Wahba. Framework for kernel regularization with application to protein clustering. Proceedings of the National Academy of Sciences, 102:12332-12337, 2005.
[7] F. Lu, Y. Lin, and G. Wahba. Robust manifold unfolding with kernel regularization. Technical Report 1108, Department of Statistics, University of Wisconsin-Madison, 2005.
[8] F. Sha and L. K. Saul. Analysis and extension of spectral methods for nonlinear dimensionality reduction. In Proceedings of the Twenty Second International Conference on Machine Learning (ICML-05), pages 785-792, Bonn, Germany, 2005.
[9] J. Sun, S. Boyd, L. Xiao, and P. Diaconis. The fastest mixing Markov process on a graph and a connection to a maximum variance unfolding problem. SIAM Review, 48(4):681-699, 2006.
[10] L. Vandenberghe and S. P. Boyd. Semidefinite programming. SIAM Review, 38(1):49-95, March 1996.
[11] K. Q. Weinberger, B. D. Packer, and L. K. Saul. Nonlinear dimensionality reduction by semidefinite programming and kernel matrix factorization. In Z. Ghahramani and R. Cowell, editors, Proceedings of the Tenth International Workshop on Artificial Intelligence and Statistics (AISTATS-05), pages 381-388, Barbados, West Indies, 2005.
[12] K. Q. Weinberger and L. K. Saul. Unsupervised learning of image manifolds by semidefinite programming. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR-04), volume 2, pages 988-995, Washington D.C., 2004. Extended version in International Journal of Computer Vision, 70(1):77-90, 2006.
[13] K. Q. Weinberger and L. K. Saul. An introduction to nonlinear dimensionality reduction by maximum variance unfolding. In Proceedings of the Twenty First National Conference on Artificial Intelligence (AAAI-06), Cambridge, MA, 2006.
[14] K. Q. Weinberger, F. Sha, and L. K. Saul. Learning a kernel matrix for nonlinear dimensionality reduction. In Proceedings of the Twenty First International Conference on Machine Learning (ICML-04), pages 839-846, Banff, Canada, 2004.
[15] L. Xiao, J. Sun, and S. Boyd. A duality view of spectral methods for dimensionality reduction. In Proceedings of the Twenty Third International Conference on Machine Learning (ICML-06), pages 1041-1048, Pittsburgh, PA, 2006.
A Probabilistic Algorithm Integrating Source Localization and Noise Suppression of MEG and EEG Data

Johanna M. Zumer, Biomagnetic Imaging Lab, Department of Radiology, Joint Graduate Group in Bioengineering, University of California, San Francisco, San Francisco, CA 94143-0628, [email protected]
Hagai T. Attias, Golden Metallic, Inc., San Francisco, CA, [email protected]
Kensuke Sekihara, Dept. of Systems Design and Engineering, Tokyo Metropolitan University, Tokyo, 191-0065 Japan, [email protected]
Srikantan S. Nagarajan, Biomagnetic Imaging Lab, Department of Radiology, Joint Graduate Group in Bioengineering, University of California, San Francisco, San Francisco, CA 94143-0628, [email protected]

Abstract

We have developed a novel algorithm for integrating source localization and noise suppression based on a probabilistic graphical model of stimulus-evoked MEG/EEG data. Our algorithm localizes multiple dipoles while suppressing noise sources, with computational complexity equivalent to a single dipole scan, and is therefore more efficient than traditional multidipole fitting procedures. In simulation, the algorithm can accurately localize and estimate the time course of several simultaneously-active dipoles, with rotating or fixed orientation, at noise levels typical for averaged MEG data. Furthermore, the algorithm is superior to beamforming techniques, which we show to be an approximation to our graphical model, in the estimation of temporally correlated sources. Success of this algorithm in localizing auditory cortex in a tumor patient and in localizing an epileptic spike source is also demonstrated.

1 Introduction

Mapping functional brain activity is an important problem in basic neuroscience research as well as in clinical use. Clinically, such brain mapping procedures are useful to guide neurosurgical planning, navigation, and tumor and epileptic spike removal, as well as to guide the surgeon as to which areas of the brain are still relevant for cognitive and motor function in each patient. Many non-invasive techniques have emerged for functional brain mapping, such as functional magnetic resonance imaging (fMRI) and electromagnetic source imaging (ESI). Although fMRI is the most popular method for functional brain imaging with high spatial resolution, it suffers from poor temporal resolution, since it measures blood oxygenation level signals with fluctuations on the order of seconds. However, dynamic neuronal activity has fluctuations on the sub-millisecond time-scale that can only be directly measured with electromagnetic source imaging (ESI). ESI refers to imaging of neuronal activity using magnetoencephalography (MEG) and electroencephalography (EEG) data. MEG refers to measurement of tiny magnetic fields surrounding the head, and EEG refers to measurement of voltage potentials using an electrode array placed on the scalp. The past decade has shown rapid development of whole-head MEG/EEG sensor arrays and of algorithms for reconstruction of brain source activity from MEG and EEG data. Source localization algorithms, which can be broadly classified as parametric or tomographic, make assumptions to overcome the ill-posed inverse problem. Parametric methods, including equivalent current dipole (ECD) fitting techniques, assume knowledge about the number of sources and their approximate locations. A single dipolar source can be localized well, but ECD techniques poorly describe multiple sources or sources with large spatial extent.
Alternatively, tomographic methods reconstruct an estimate of source activity at every grid point across the whole brain. Of the many tomographic algorithms, the adaptive beamformer has been shown to have the best spatial resolution and zero localization bias [1, 2].

All existing methods for brain source localization are hampered by the many types of noise present in MEG/EEG data. The magnitude of the stimulus-evoked neural sources is on the order of the noise on a single trial, so typically 50-200 trials must be averaged in order to distinguish the sources above noise. This can be time-consuming, and it is difficult for a subject or patient to hold still or pay attention through the duration of the experiment. Gaussian thermal noise is present at the sensors themselves. Background room interference, such as from powerlines and electronic equipment, can be problematic. Biological noise such as heartbeat, eyeblink, or other muscle artifact can also be present. Ongoing brain activity itself, including the drowsy-state alpha (~10 Hz) rhythm, can drown out evoked brain sources. Finally, most localization algorithms have difficulty in separating neural sources of interest that have temporally overlapping activity.

Noise in MEG and EEG data is typically reduced by a variety of preprocessing algorithms before the data are used by source localization algorithms. Simple forms of preprocessing include filtering out frequency bands not containing a brain signal of interest. Additionally and more recently, ICA algorithms have been used to remove artefactual components, such as eyeblinks. More sophisticated techniques have also recently been developed using graphical models for preprocessing prior to source localization [3, 4].

This paper presents a probabilistic modeling framework for MEG/EEG source localization that is robust to interference and noise. The framework uses a probabilistic hidden variable model that describes the observed sensor data in terms of activity from unobserved brain and interference sources. The unobserved source activities and model parameters are inferred from the data by a Variational-Bayes Expectation-Maximization algorithm. The algorithm then creates a spatiotemporal image of brain activity by scanning the brain, inferring the model parameters and variables from sensor data, and using them to compute the likelihood of a dipole at each grid location in the brain. We also show that an established source localization method, the minimum variance adaptive beamformer (MVAB), is an approximation of our framework.

2 Probabilistic model integrating source localization and noise suppression

This section describes the generative model for the data. We assume that the MEG/EEG data have been collected such that stimulus onset or some other experimental marker indicates the "zero" time point. Ongoing brain activity, biological noise, background environmental noise, and sensor noise are present in both pre-stimulus and post-stimulus periods; however, the evoked neural sources of interest are only present in the post-stimulus time period. We therefore assume that the sensor data can be described as coming from four types of sources: (1) an evoked source at a particular voxel (grid point), (2) all other evoked sources not at that voxel, (3) all background noise sources with spatial covariance at the sensors (including brain, biological, or environmental sources), and (4) sensor noise.
We first infer the model describing source types (3) and (4) from the pre-stimulus data, then fix certain quantities (described in section 2.2) and infer the full model describing the remaining source types (1) and (2) from the post-stimulus data (described in section 2.1). After inference of the model, a map of the source activity is created, as well as a map of the likelihood of activity across voxels.

Let $y_n$ denote the $K \times 1$ vector of sensor data for time point n, where K is the number of sensors (typically 200). Time ranges over $n = -N_{pre}, \ldots, 0, \ldots, N_{post}-1$, where $N_{pre}$ ($N_{post}$) indicates the number of time samples in the pre-(post-)stimulus period. The generative model for data $y_n$ is

$$y_n = \begin{cases} B u_n + v_n & n = -N_{pre}, \ldots, -1 \\ F^r s_n^r + A^r x_n^r + B u_n + v_n & n = 0, \ldots, N_{post}-1 \end{cases} \qquad (1)$$

Figure 1: (Left) Graphical model for the proposed algorithm. Variables are inside the dotted box, parameters outside the dotted box. Values in circles are unknown and learned from the model; values in squares are known. (Right) Representation of the factors influencing the data recorded at the sensors. In orange, a post-stimulus source at the voxel of interest, focused on by the lead field F. In red, other post-stimulus sources not at that particular voxel. In green, all background sources, including ongoing brain activity, eyeblinks, heartbeat, and electrical noise. In blue, thermal noise present in each sensor.

The $K \times 3$ forward lead field matrix $F^r$ represents the physical (and linear) relationship between a dipole source at voxel r, for each orientation, and its influence on sensor $k = 1, \ldots, K$ [5]. The lead field $F^r$ is calculated from the geometry of the source location relative to the sensor locations, as well as from the conducting medium in which the source lies: the human head is most commonly approximated as a single-shell spherical volume conductor. The source activity $s_n^r$ is a $3 \times 1$ vector of dipole strength in each of the three orientations at time n for the voxel r. The $K \times L$ matrix A and the $L \times 1$ vector $x_n$ represent the post-stimulus mixing matrix and evoked non-localized factors, respectively, corresponding to source type (2) discussed above. The $K \times M$ matrix B and the $M \times 1$ vector $u_n$ represent the background mixing matrix and background factors, respectively. The $K \times 1$ vector $v_n$ represents the sensor-level noise. All quantities depend on r in the post-stimulus period except for B, $u_n$ and $\lambda$ (the sensor precision), which will be learned from the pre-stimulus data and fixed while the other quantities are learned for each voxel. Note, however, that the posterior update for $\bar{u}_n$ does depend on the voxel r. The graphical model is shown in Fig. 1. This generative model becomes a probabilistic model when we specify prior distributions, as described in the next two sections.
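To fix intuition for eq. (1), here is a toy forward simulation of the generative model. This sketch is ours; the dimensions and random mixing matrices are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
K, L, M, N_pre, N_post = 20, 2, 3, 50, 100   # sensors, factor counts, time points

F = rng.standard_normal((K, 3))     # lead field for one voxel (3 orientations)
A = rng.standard_normal((K, L))     # interference mixing (post-stimulus only)
B = rng.standard_normal((K, M))     # background mixing (always on)
noise_sd = 0.1

def sensor_data(n_samples, evoked):
    u = rng.standard_normal((M, n_samples))            # background factors
    v = noise_sd * rng.standard_normal((K, n_samples)) # sensor noise
    y = B @ u + v
    if evoked:
        s = rng.standard_normal((3, n_samples))        # dipole time course
        x = rng.standard_normal((L, n_samples))        # evoked interference
        y += F @ s + A @ x
    return y

y_pre = sensor_data(N_pre, evoked=False)    # source types (3) and (4) only
y_post = sensor_data(N_post, evoked=True)   # all four source types
print(y_pre.shape, y_post.shape)            # (20, 50) (20, 100)
```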
2.1 Localization of evoked sources learned from post-stimulus data

In the stimulus-evoked paradigm, the source strength at each voxel is learned from the post-stimulus data. The background mixing matrix B and sensor noise precision $\lambda$ are fixed, after having been learned from the pre-stimulus data, as described in section 2.2. We assume those quantities remain constant through the post-stimulus period and are independent of source location.

We assume Gaussian prior distributions on the source factors and interference factors. We further make the assumption that the signals are independent and identically distributed (i.i.d.) across time. The source factors have prior precision given by the $3 \times 3$ matrix $\Phi$, which relates to the strength of the dipole in each of the 3 orientations. All Normal distributions specified in this paper are defined by their mean and precision (inverse covariance).

$$p(s) = \prod_n p(s_n); \quad p(s_n) = \prod_{j=1}^{3} p(s_{jn}) = \mathcal{N}(0, \Phi) \qquad (2)$$

The interference and background factors are assumed to have identity precision. To complete the specification of this model, we need to specify prior distributions on the model parameters. We use a conjugate prior for the interference mixing matrix A, where $\alpha_j$ is a hyperparameter over the jth column of A and $\lambda_i$ is the precision of the ith sensor. The hyperparameter $\alpha$ (a diagonal matrix) provides a robust mechanism for automatic model order selection, so that the optimal size of A is inferred from the data through $\alpha$.

$$p(x) = \prod_n p(x_n); \;\; p(x_n) = \prod_{j=1}^{L} p(x_{jn}) = \mathcal{N}(0, I); \qquad p(u) = \prod_n p(u_n); \;\; p(u_n) = \prod_{j=1}^{M} p(u_{jn}) = \mathcal{N}(0, I) \qquad (3)$$

$$p(A) = \prod_{ij} \mathcal{N}(A_{ij} \mid 0, \lambda_i \alpha_j)$$

We now specify the full model:

$$p(y \mid s, x, u, A, B) = \prod_n p(y_n \mid s_n, x_n, u_n, A, B); \quad p(y_n \mid s_n, x_n, u_n, A, B, \lambda) = \mathcal{N}(y_n \mid F s_n + A x_n + B u_n, \lambda) \qquad (4)$$

Exact inference on this model is intractable using the joint posterior over the interference factors and interference mixing matrix; thus the following variational-Bayesian approximation for the posteriors is used:

$$p(s, x, A \mid y) \approx q(s, x, A \mid y) = q(s, x \mid y)\, q(A \mid y) \qquad (5)$$

We learn the hidden variables and parameters from the post-stimulus data, iterating through each voxel in the brain, using a variational-Bayesian Expectation-Maximization (EM) algorithm. All variables, parameters and hyperparameters are hidden and are learned from the data. In place of maximizing $\log p(y)$, which would be mathematically intractable, we maximize a lower bound to $\log p(y)$ defined by $\mathcal{F}$ in the following equation:

$$\mathcal{F} = \int dx\, ds\, dA\; q(s, x, A \mid y) \left[ \log p(y, s, x, A) - \log q(s, x, A \mid y) \right] = \log p(y) - \mathrm{KL}\!\left[ q(s, x, A \mid y) \,\|\, p(s, x, A \mid y) \right] \qquad (6)$$

where KL(q||p) is the Kullback-Leibler divergence between distributions q and p. $\mathcal{F}$ is equal to $\log p(y)$ when the approximation in eq. (5) is exact, making the KL distance zero. We use a variational-Bayesian EM algorithm which alternately maximizes the function $\mathcal{F}$ with respect to the posteriors $q(s, x \mid y)$ and $q(A \mid y)$. In the E-step, $\mathcal{F}$ is maximized w.r.t. $q(s, x \mid y)$, keeping $q(A \mid y)$ constant, and the sufficient statistics of the hidden variables are computed. In the M-step, $\mathcal{F}$ is maximized w.r.t. $q(A \mid y)$, keeping $q(s, x \mid y)$ constant, and the MAP estimates of the parameters and hyperparameters are computed.

In the E-step, the posterior distribution of the stacked factors given the data is computed:

$$q(x'_n \mid y_n) = \mathcal{N}(\bar{x}'_n, \Gamma); \quad \bar{x}'_n = \Gamma^{-1} \bar{A}'^\top \lambda\, y_n; \quad \Gamma = \bar{A}'^\top \lambda \bar{A}' + K \Psi' + I' \qquad (7)$$

where we define:

$$\bar{x}'_n = \begin{pmatrix} \bar{s}_n \\ \bar{x}_n \\ \bar{u}_n \end{pmatrix}; \quad \bar{A}' = \begin{pmatrix} F & \bar{A} & \bar{B} \end{pmatrix}; \quad I' = \begin{pmatrix} \Phi & 0 & 0 \\ 0 & I & 0 \\ 0 & 0 & I \end{pmatrix}; \quad \Psi' = \begin{pmatrix} 0 & 0 & 0 \\ 0 & \Psi_{AA} & 0 \\ 0 & 0 & 0 \end{pmatrix} \qquad (8)$$

In the M-step, we maximize the function $\mathcal{F}$ w.r.t. $q(A \mid y)$ holding $q(s, x \mid y)$ fixed. We update the posterior distribution of the interference mixing matrix A, including its precision $\Psi_{AA}$. Note that the lead field F is fixed based on the geometry of the sensors relative to the head, and $\bar{B}$ was learned and fixed from the pre-stimulus data. The sensor noise precision $\lambda$ is also kept fixed from the pre-stimulus period. The MAP values of the hyperparameter $\alpha$ and source factor precision $\Phi$ are learned here from the post-stimulus data.

$$\bar{A} = (R_{yx} - F R_{sx} - \bar{B} R_{ux}) \Psi_{AA}; \quad \Psi_{AA} = (R_{xx} + \alpha)^{-1}; \quad \Phi^{-1} = \frac{1}{N} R_{ss}; \quad \alpha^{-1} = \mathrm{diag}\!\left( \frac{1}{K} \left( \bar{A}^\top \lambda \bar{A} + \Psi_{AA} \right) \right) \qquad (9)$$

The matrices, such as $R_{yx}$, represent the posterior covariance between the two subscripted quantities; explicit definitions are omitted for space.
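Under the reconstruction above, one E-step pass is a few lines of linear algebra. The following schematic numpy sketch is ours, with illustrative dimensions; it omits the updates for A, alpha, and lambda and is not the authors' implementation.

```python
import numpy as np
from scipy.linalg import block_diag

rng = np.random.default_rng(0)
K, L, M, N = 20, 2, 3, 100                # sensors, factor counts, time points
Y = rng.standard_normal((K, N))           # toy post-stimulus sensor data
F = rng.standard_normal((K, 3))           # lead field at the scanned voxel
A_bar = rng.standard_normal((K, L))       # current estimate of A
B_bar = rng.standard_normal((K, M))       # fixed from the pre-stimulus fit
lam = 10.0 * np.eye(K)                    # fixed sensor noise precision
Phi = np.eye(3)                           # source factor precision
Psi_AA = 0.01 * np.eye(L)                 # posterior precision term for A

# E-step, eqs. (7)-(8): joint posterior over the stacked factors (s_n, x_n, u_n).
A_prime = np.hstack([F, A_bar, B_bar])                    # K x (3+L+M)
I_prime = block_diag(Phi, np.eye(L), np.eye(M))
Psi_prime = block_diag(np.zeros((3, 3)), Psi_AA, np.zeros((M, M)))
Gamma = A_prime.T @ lam @ A_prime + K * Psi_prime + I_prime
X_bar = np.linalg.solve(Gamma, A_prime.T @ lam @ Y)       # column n is x-bar'_n
s_bar = X_bar[:3]                                         # posterior dipole time course

# One M-step quantity, eq. (9), ignoring the posterior covariance correction:
Phi_new = np.linalg.inv(s_bar @ s_bar.T / N)
print(s_bar.shape, Phi_new.shape)   # (3, 100) (3, 3)
```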
In each iteration of EM, the marginal likelihood is increased. The variational likelihood function (the lower bound on the exact marginal likelihood) is given as follows:

$$\mathcal{L}_r = \frac{N}{2} \log \frac{|\lambda|\,|\Phi^r|}{|\Gamma^r|} \;-\; \frac{1}{2} \sum_{n=1}^{N} \left( y_n^\top \lambda\, y_n - \bar{x}'^{\,r\top}_n \Gamma^r \bar{x}'^{\,r}_n \right) \;+\; \frac{K}{2} \log |\Psi^r|\,|\alpha^r| \qquad (10)$$

This likelihood function is dependent on the source voxel r, and thus a map of the likelihood across the brain can be displayed. Furthermore, we can also plot an image of the source power estimates and the time course of activity at each voxel.

We note that the computational complexity of the proposed algorithm is on the order O(KLNS), roughly equivalent to a single dipole scan, which is of order O(N(K^2 + S)), where S represents roughly several thousand voxels. Both are much smaller than the complexity of a multi-dipole scan, which is of order O(N S^P), where P is the number of dipoles. We further note that the number of hidden variables to be estimated is less than the number of data points observed, thus not posing significant problems for estimation accuracy.

2.2 Separation of background sources learned from pre-stimulus data

Figure 2: Localization error (mm) of the proposed algorithm versus the MVAB as a function of SNIR (dB), for simulated interference and for real brain noise. See text for details.

We learn the background mixing matrix and sensor noise precision from the pre-stimulus data using a variational-Bayes factor analysis model. We assume Gaussian prior distributions on the background factors and sensor noise, with zero mean and identity precision; we assume a flat prior on the sensor precision. We again use a conjugate prior for the background mixing matrix B, where $\beta_j$ is a hyperparameter, similar to the expression for the interference mixing matrix. All variables, parameters and hyperparameters are hidden and are learned from the pre-stimulus data. We make the variational-Bayesian approximation for the background mixing matrix and background factors $p(u, B \mid y) \approx q(u, B \mid y) = q(u \mid y)\, q(B \mid y)$.

In the E-step, we maximize the function $\mathcal{F}$ w.r.t. $q(u \mid y)$ holding $q(B \mid y)$ fixed. We update the posterior distribution of the factors:

$$q(u \mid y) = \prod_n q(u_n \mid y_n); \quad q(u_n \mid y_n) = \mathcal{N}(\bar{u}_n, \Gamma); \quad \bar{u}_n = \Gamma^{-1} \bar{B}^\top \lambda\, y_n; \quad \Gamma = \bar{B}^\top \lambda \bar{B} + K\Psi + I$$

In the M-step, we compute the full posterior distribution of the background mixing matrix B, including its precision matrix $\Psi$, and the MAP estimates of the noise precision $\lambda$ and the hyperparameter $\beta$. We assume the noise precision is diagonal.

$$\bar{B} = R_{yu} \Psi; \quad \Psi = (R_{uu} + \beta)^{-1}; \quad \beta^{-1} = \mathrm{diag}\!\left( \frac{1}{K} \left( \bar{B}^\top \lambda \bar{B} + \Psi \right) \right); \quad \lambda^{-1} = \frac{1}{N} \mathrm{diag}\!\left( R_{yy} - \bar{B} R_{yu}^\top \right) \qquad (11)$$

2.3 Relationship to minimum-variance adaptive beamforming

Minimum variance adaptive beamforming (MVAB) is one of the best performing source localization techniques. MVAB estimates the dipole source time series by $\hat{s}_n = W_{MVAB}\, y_n$, where $W_{MVAB} = (F^\top R_{yy}^{-1} F)^{-1} F^\top R_{yy}^{-1}$ and $R_{yy}$ is the measured data covariance matrix. Thus, MVAB also has computational complexity equivalent to a single-dipole scan, on the order O(K^2 + S). MVAB attempts to suppress interference, but recent studies have shown the MVAB is ineffective in cancellation of interference from other brain sources, especially if there are many such sources. In this section, we derive that MVAB is an approximation to inference on our model.
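The MVAB estimator just defined is simple to state in code; this small numpy sketch (ours, on synthetic single-dipole data) applies it before we turn to the derivation.

```python
import numpy as np

rng = np.random.default_rng(0)
K, N = 20, 500
F = rng.standard_normal((K, 3))            # lead field at the scanned voxel
s = np.sin(np.linspace(0, 8 * np.pi, N)) * np.array([[1.0], [0.5], [0.0]])
Y = F @ s + 0.5 * rng.standard_normal((K, N))   # sensor data with noise

Ryy = Y @ Y.T / N                          # measured data covariance
Ryy_inv = np.linalg.inv(Ryy)
W = np.linalg.solve(F.T @ Ryy_inv @ F, F.T @ Ryy_inv)   # W_MVAB, 3 x K
s_hat = W @ Y                              # estimated dipole time series
print(np.corrcoef(s[0], s_hat[0])[0, 1])   # high correlation with the truth
```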
Figure 3: Example of the algorithm and MVAB for the correlated source simulation. See text for details.

We start by rewriting eq. (1) as $y_n = F s_n + z_n$, where $z_n$ is termed the total noise and is given by $z_n = A x_n + B u_n + v_n$. It has mean zero and precision matrix $\Lambda = (AA^\top + BB^\top + \lambda^{-1})^{-1}$. Assuming we have estimated the model parameters A, B, $\lambda$, $\Phi$, the MAP estimate of the dipole source time series is $\hat{s}_n = W y_n$, where $W = \Sigma^{-1} F^\top \Lambda$ and $\Sigma = F^\top \Lambda F + \Phi$. It can be shown that this expression is equivalent to eq. (8).

In the infinite data limit, the data covariance satisfies $R_{yy} = F \Phi^{-1} F^\top + \Lambda^{-1}$. Its inverse is found, using the matrix inversion lemma, to be $R_{yy}^{-1} = \Lambda - \Lambda F \Sigma^{-1} F^\top \Lambda$. Hence, we obtain

$$F^\top R_{yy}^{-1} = (I - F^\top \Lambda F \Sigma^{-1}) F^\top \Lambda = \Phi \Sigma^{-1} F^\top \Lambda \qquad (12)$$

where the last step used the expression for $\Sigma$. Next, we approximate $\Sigma \approx F^\top \Lambda F$. We then use eq. (12) to obtain:

$$W \approx (F^\top \Lambda F)^{-1} F^\top \Lambda = (F^\top \Lambda F)^{-1} \Sigma \Phi^{-1} \cdot \Phi \Sigma^{-1} F^\top \Lambda = (F^\top R_{yy}^{-1} F)^{-1} F^\top R_{yy}^{-1} = W_{MVAB}$$

3 Results

3.1 Simulations

The proposed method was tested in a variety of realistic source configurations reconstructed on a 5 mm voxel grid. A single-shell spherical volume conductor model was used to calculate the forward lead field [5]. Simulated datasets were constructed by placing Gaussian-damped sinusoidal time courses at specific locations inside a voxel grid based on realistic head geometry. Sources were assumed to be present in the post-stimulus period with 437 samples, along with a pre-stimulus period of 263 samples.

In the "noise-alone (NA)" cases, Gaussian noise only was added to all time points at the sensors. In the "interference (IN)" cases, Gaussian noise time courses occurring in both pre- and post-stimulus periods, representing simulated "ongoing" activity, were placed at 50 random locations throughout the brain voxel grid, and their activity was projected onto the sensors and added to both the sensor noise and source activity. Finally, in the "real (RE)" cases, 700 samples of real MEG sensor data were averaged over 100 trials collected while a human subject was alert but not performing tasks or receiving stimuli. This real background data thus includes real sensor noise plus real "ongoing" brain activity that could interfere with evoked sources and adds spatial correlation to the sensor data. We varied the Signal-to-Noise Ratio (SNR) and the corresponding Signal to Noise-plus-Interference Ratio (SNIR). SNIR is calculated from the ratio of the sensor data resulting from sources only to sensor data from noise plus interference.

Figure 4: Performance of the algorithm relative to beamforming for simulated datasets: correlation of the estimated with the true source time course versus SNIR (dB), for the NA, IN, and RE cases, with correlated and uncorrelated sources. See text for details.

Figure 5: Algorithm applied to an auditory MEG dataset in a patient with a temporal lobe tumor.

The first performance figure (Fig. 2) shows the localization error of the proposed method relative to the MVAB. For this data, a single dipole was placed randomly within the voxel grid space.
The largest peak in the likelihood map was found and the distance from this point to the true source was recorded. Each datapoint is an average of 20 realizations of the source configuration, with error bars showing standard error. This simulation was performed for a variety of SNIRs and for all three cases of noise described above. The results from NA were omitted since both the proposed method and MVAB performed perfectly (zero error). This figure clearly shows that the error in localization is smaller for the proposed method (black) than for MVAB (green).

The next set of simulations examines the proposed method's ability to estimate the source time course $s_n$. Three sources were placed in the brain voxel grid. The locations of these sources were fixed, but the orientation and time courses were allowed to vary across realizations of the simulations. In half the cases, two of the three sources were forced to be perfectly correlated in time (a scenario where the MVAB is known to fail), while the time course of the third source was random relative to the other two. An example of the likelihood map and estimated time courses is shown in Fig. 3. The likelihood map from the proposed method (on the left) has peaks near all three sources, including the two that were perfectly correlated (depicted by squares). However, the MVAB (middle plot) largely misses the source on the left. On the right plot, the estimated time courses from the proposed method (dashes) and MVAB (dots) are plotted relative to the true time course (solid). The top and middle plots correspond to the (square) correlated sources. While both methods estimate the time courses well, MVAB underestimates the overall strength of the source on the top plot, and exhibits extra noise in the pre-stimulus period for the middle plot.

The performance of the proposed model on the same set of simulations of correlated sources, compared to beamforming, is shown in Fig. 4. This figure shows the correlation of the estimated with the true time course, for the three cases of NA, IN, and RE, and for both correlated and uncorrelated sources, as a function of SNIR. The proposed method consistently outperforms the MVAB whether the simulated sources are highly correlated with each other (dashed lines) or uncorrelated (solid), and especially in the RE case. Each datapoint represents an average of 10 realizations of the simulation, with standard errors on the order of 0.05 (not shown).

3.2 Real data

Stimulus-evoked data were collected in a 275-channel CTF System MEG device from a patient with a temporal lobe tumor near auditory cortex. The stimulus was a noise burst presented binaurally in 120 trials. A large peak is typically seen around 100 ms after presentation of an auditory stimulus, termed the M100 peak. Figure 5 shows the results of the proposed method applied to this dataset. On the right, the likelihood map shows a spatial peak in auditory cortex near the tumor. At that peak voxel, the time course was extracted and plotted on the left, showing the clear M100 peak. This information can be useful to the neurosurgical team for guiding the location of the surgical lesion and for providing knowledge of the patient's auditory processing abilities.

We next tested the proposed method on its ability to localize interictal spikes obtained from a patient with epilepsy. No sensory stimuli were presented to this patient in this dataset, which was collected in the same MEG device described above.
A Registered EEG/Evoked Potential Technologist marked segments of the continuously collected dataset which contained spontaneous spikes, as well as segments that clearly contained no spikes. One segment of data with a spike marked at 400 ms was used here as the "post-stimulus" period, and a separate, spike-free segment of equal length was used as the "pre-stimulus" period. Figure 6 shows the proposed method's performance on this dataset. The top left subplot shows the raw sensor data for the segment containing the marked spike. The bottom left shows the location of the equivalent-current dipole (ECD) fit to several spikes from this patient; this location from the ECD fit would normally be used clinically. The middle bottom figure shows the likelihood map from the proposed model; the peak is in clear agreement with the standard ECD localization. The middle top figure shows the time course estimated at the likelihood spatial peak.

Figure 6: Performance of the algorithm applied to data from an epileptic patient: raw sensor data (magnetic field, RMS = 311.2 fT), ECD fit, likelihood map, and estimated source time courses (normalized intensity versus time in ms). See text for details.

The spike at 400 ms is clearly seen; this cleaned waveform could be of use to the clinician in analyzing peak shape. Finally, the top right plot shows a source time course from a randomly selected location far from the epileptic spike source (shown with cross-hairs on the bottom right plot), in order to show the low noise level and the lack of cross-talk onto source estimates elsewhere.

4 Extensions

We have described a novel probabilistic algorithm which performs source localization while remaining robust to interference, and we have demonstrated its superior performance over a standard method in a variety of simulations and real datasets. The model takes advantage of knowledge of when sources of interest are not occurring (such as in the pre-stimulus period of an evoked response paradigm). The model currently assumes averaged data from an evoked response paradigm, but it could be extended to examine variations from the average in individual trials, involving only a few extra parameters to estimate. Furthermore, the model could be extended to take advantage of temporal smoothness in the data as well as frequency content. Additionally, spatial smoothness or spatial priors from other modalities, such as structural or functional MRI, could be incorporated. Finally, one is not limited to s_n in a single voxel; the above formulation holds for any P arbitrarily chosen dipole components, no matter which voxels they belong to, and for any value of P. Of course, as P increases the inferred value of the source precision Ψ becomes less accurate, and one might choose to restrict it to a diagonal or block-diagonal form.

References

[1] K. Sekihara, M. Sahani, and S. S. Nagarajan, "Localization bias and spatial resolution of adaptive and non-adaptive spatial filters for MEG source reconstruction," NeuroImage, vol. 25, pp. 1056–1067, 2005.

[2] K. Sekihara, S. S. Nagarajan, D. Poeppel, and A. Marantz, "Performance of an MEG adaptive-beamformer technique in the presence of correlated neural activities: Effects on signal intensity and time-course estimates," IEEE Trans. Biomed. Eng., vol. 49, pp. 1534–1546, 2002.
[3] S. S. Nagarajan, H. T. Attias, K. E. Hild, and K. Sekihara, "A graphical model for estimating stimulus-evoked brain responses from magnetoencephalography data with large background brain activity," NeuroImage, vol. 30, pp. 400–416, 2006.

[4] S. S. Nagarajan, H. T. Attias, K. E. Hild, and K. Sekihara, "Stimulus evoked independent factor analysis of MEG data with large background activity," in Adv. Neur. Info. Proc. Sys., 2005.

[5] J. Sarvas, "Basic mathematical and electromagnetic concepts of the biomagnetic inverse problem," Phys. Med. Biol., vol. 32, pp. 11–22, 1987.
Combining causal and similarity-based reasoning

Charles Kemp, Patrick Shafto, Allison Berke & Joshua B. Tenenbaum
Department of Brain and Cognitive Sciences, MIT, Cambridge, MA 02139
{ckemp,shafto,berke,jbt}@mit.edu

Abstract

Everyday inductive reasoning draws on many kinds of knowledge, including knowledge about relationships between properties and knowledge about relationships between objects. Previous accounts of inductive reasoning generally focus on just one kind of knowledge: models of causal reasoning often focus on relationships between properties, and models of similarity-based reasoning often focus on similarity relationships between objects. We present a Bayesian model of inductive reasoning that incorporates both kinds of knowledge, and show that it accounts well for human inferences about the properties of biological species.

1 Introduction

Will that berry taste good? Is that table strong enough to sit on? Predicting whether an object has an unobserved property is among the most basic of all inductive problems. Many kinds of knowledge appear to be relevant: different researchers emphasize the role of causal knowledge, similarity, category judgments, associations, analogical mappings, scripts, and intuitive theories, and each of these approaches accounts for an important subset of everyday inferences. Taken in isolation, however, each of these approaches is fundamentally limited. Humans draw on multiple kinds of knowledge and integrate them flexibly when required, and eventually our models should attempt to match this ability [1]. As an initial step towards this goal, we present a model of inductive reasoning that is sensitive both to causal relationships between properties and to similarity relationships between objects.

The inductive problem we consider can be formalized as the problem of filling in missing entries in an object-property matrix (Figure 1). Previous accounts of inductive reasoning generally address some version of this problem. Models of causal reasoning [2] usually focus on relationships between properties (Figure 1a): if animal A has wings, for instance, it is likely that animal A can fly. Similarity-based models [3, 4, 5] usually focus on relationships between objects (Figure 1b): if a duck carries gene X, a goose is probably more likely than a pig to carry the same gene. Previous models, however, cannot account for inferences that rely on both similarity and causality: if a duck carries gene X and gene X causes enzyme Y to be expressed, it is likely that a goose expresses enzyme Y (Figure 1c). We develop a unifying model that handles inferences like this, and that subsumes previous probabilistic approaches to causal reasoning [2] and similarity-based reasoning [5, 6].

Our formal framework overcomes some serious limitations of the two approaches it subsumes. Approaches that rely on causal graphical models typically assume that the feature vectors of any two objects (any two rows of the matrix in Figure 1a) are conditionally independent given a causal network over the features. Suppose, for example, that the rows of the matrix correspond to people, and that the causal network states that smoking leads to lung cancer with probability 0.3. Suppose that Tim, Tom and Zach are smokers, that Tim and Tom are identical twins, and that Tim has lung cancer. The assumption of conditional independence implies that Tom and Zach are equally likely to suffer from lung cancer, a conclusion that seems unsatisfactory.
The assumption is false because of variables that are unknown but causally relevant: variables capturing unknown biological and environmental factors that mediate the relationship between smoking and disease. Dealing with these unknown variables is difficult, but we suggest that knowledge about similarity between objects can help. Since Tim is more similar to Tom than to Zach, our model correctly predicts that Tom is more likely to have lung cancer than Zach.

Figure 1: (a) Models of causal reasoning generally assume that the rows of an object-feature matrix are conditionally independent given a causal structure over the features. These models are often used to make predictions about unobserved features of novel objects. (b) Models of similarity-based reasoning generally assume that the columns of the matrix are conditionally independent given a similarity structure over the objects. These models are often used to make predictions about novel features. (c) We develop a generative model for object-feature matrices that incorporates causal relationships between features and similarity relationships between objects. The model uses both kinds of information to make predictions about matrices with missing entries.

Previous models of similarity-based reasoning [5, 6] also suffer from a restrictive assumption of conditional independence. This time the assumption states that features (columns of the matrix in Figure 1b) are conditionally independent given information about the similarity between objects. Empirical tests of similarity-based models often attempt to satisfy this assumption by using blank properties: subjects, for example, might be told that coyotes have property P, and asked to judge the probability that foxes have property P [3]. To a first approximation, inferences in tasks like this conform to judgments of similarity: subjects conclude, for example, that foxes are more likely to have property P than mice, since coyotes are more similar to foxes than to mice. People, however, find it natural to reason about properties that are linked to familiar properties, and that therefore violate the assumption of conditional independence. Suppose, for instance, you learn that desert foxes have skin that is resistant to sunburn. It now seems that desert rats are more likely to share this property than arctic foxes, even though desert foxes are more similar in general to arctic foxes than to desert rats. Our model captures inferences like this by incorporating causal relationships between properties: in this case, having sunburn-resistant skin is linked to the property of living in the desert.

Limiting assumptions of conditional independence can be avoided by specifying a joint distribution on an entire object-property matrix. Our model uses a distribution that is sensitive both to causal relationships between properties and to similarity relationships between objects. We know of no previous models that attempt to combine causality and similarity, and one set of experiments has been taken to suggest that people find it difficult to combine these sources of information [7]. After introducing our model, we present two experiments designed to test it. The results suggest that people are able to combine causality with similarity, and that our model accounts well for this capacity.
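To make the three-smokers intuition concrete, the following enumeration (our own illustration with made-up numbers, not an analysis from the paper) couples smoking and cancer through a hidden susceptibility variable h that identical twins share but unrelated smokers do not:

import itertools

# P(h): prior probability of being susceptible; P(cancer | smoker, h).
# All probabilities here are invented purely for illustration.
p_h = 0.3
p_c = {True: 0.9, False: 0.05}

def p_other_has_cancer(shared):
    # P(other smoker has cancer | Tim has cancer), with h shared (twin)
    # or drawn independently (unrelated smoker).
    num = den = 0.0
    for h_tim, h_other in itertools.product([True, False], repeat=2):
        if shared and h_tim != h_other:
            continue                              # twins share h exactly
        w = p_h if h_tim else 1.0 - p_h
        if not shared:
            w *= p_h if h_other else 1.0 - p_h    # Zach's h is independent
        w *= p_c[h_tim]                           # condition on Tim's cancer
        num += w * p_c[h_other]
        den += w
    return num / den

print(p_other_has_cancer(shared=True))    # Tom  (twin):      ~0.80
print(p_other_has_cancer(shared=False))   # Zach (unrelated): ~0.31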
2 A generative model for object-feature matrices

Consider first a probabilistic approach to similarity-based reasoning. Assume that S_o is an object structure: a graphical model that captures relationships between a known set of objects (Figure 1b). Suppose, for instance, that the objects include a mouse, a rat, a squirrel and a sheep (o1 through o4). S_o can be viewed as a graphical model that captures phylogenetic relationships, or as a formalization of the intuitive similarity between these animals. Given some feature of interest, the feature values for all objects can be collected into an object vector v_o, and S_o specifies a distribution P(v_o) on these vectors. We work with the case where (S_o, θ) is a tree-structured graphical model of the sort previously used by methods for Bayesian phylogenetics [8] and by cognitive models of property induction [5, 6]. The objects lie at the leaves of the tree, and we assume that object vectors are binary vectors generated by a mutation process over the tree. This process has a parameter, θ, that represents the base rate of a novel feature: the expected proportion of objects with that feature. For instance, if θ is low, the model (S_o, θ) will predict that a novel feature will probably not be found in any of the animals; but if the feature does occur in exactly two of the animals, the mouse and the rat are a more likely pair than the mouse and the sheep. The mutation process can be formalized as a continuous-time Markov process with two states (off and on) and with infinitesimal matrix

    Q = [  -θ        θ
          1 - θ   -(1 - θ) ]

We can generate object vectors from this model by imagining a binary feature spreading out over the tree from root to leaves. The feature is on at the root with probability θ, and the feature may switch states at any point along any branch. The parameter θ determines how easy it is to move between the on state and the off state. If θ is high, it will be easy for the Markov process to enter the on state, and difficult for it to leave once it is there.

Consider now a probabilistic approach to causal reasoning. Assume that S_f is a feature structure: a graphical model that captures relationships between a known set of features (Figure 1a). The features, for instance, may correspond to enzymes, and S_f may capture the causal relationships between these enzymes. One possible structure states that enzyme f1 is involved in the production of enzyme f2, which is in turn involved in the production of enzyme f3. The feature values for any given object can be collected into a feature vector v_f, and S_f specifies a distribution P(v_f) on these vectors.

Suppose now that we are interested in a model that combines the knowledge represented by S_f and S_o (Figure 1c). Given that the mouse expresses enzyme f1, for instance, a combined model should predict that rats are more likely than squirrels to express enzyme f2. Formally, we seek a distribution P(M), where M is an object-feature matrix, and P(M) is sensitive both to the relationships between features and to the relationships between animals. Given this distribution, Bayesian inference can be used to make predictions about the missing entries in a partially observed matrix. If the features in S_f happen to be independent (Figure 1b), we can assume that column i of the matrix is generated by (S_o, θ_i), where θ_i is the base rate of f_i. Consider then the case where S_f captures causal relationships between the features (Figure 1c). These causal relationships will typically depend on several hidden variables.
Causal relationships between enzymes, for instance, are likely to depend on other biological variables, and the causal link between smoking and lung cancer is mediated by many genetic and environmental variables. Often little is known about these hidden variables, but to a first approximation we can assume that they respect the similarity structure S_o. In Figure 1c, for example, the unknown variables that mediate the relationship between f1 and f2 are more likely to take the same value in o1 and o2 than in o1 and o4.

We formalize these intuitions by converting a probabilistic model S_f (Figure 2a) into an equivalent model S_f^D (Figure 2b) that uses a deterministic combination of independent random events. These random events will include hidden but causally relevant variables. In Figure 2b, for example, the model S_f^D indicates that the effect e is deterministically present if the cause c is present and the transmission mechanism t is active, or if there is a background cause b that activates e. The model S_f^D is equivalent to S_f in the sense that both models induce the same distribution over the variables that appear in S_f. In general there will be many models S_f^D that meet this condition, and there are algorithms which convert S_f into one of these models [9]. For some applications it might be desirable to integrate over all of these models, but here we attempt to choose the simplest: the model S_f^D with the fewest variables. Given a commitment to a specific deterministic model, we assume that the root variables in S_f^D are independently generated over S_o. More precisely, suppose that the base rate of the ith variable in S_f^D is θ_i.

Figure 2: (a) A graphical model S_f that captures a probabilistic relationship between a cause c and an effect e. (b) A deterministic model S_f^D that induces the same joint distribution over c and e. Here t indicates whether the mechanism of causal transmission between c and e is active, and b indicates whether e is true owing to a background cause independent of c. All of the root variables (c, t and b) are independent, and the remaining variables (e) are deterministically specified once the root variables are fixed. (c) A graphical model created by combining S_f^D with a tree-structured representation of the similarity between three objects. The root variables in S_f^D (c, t, and b) are independently generated over the tree. Note that the arrows on the edges of the combined model have been suppressed.

The distribution P(M) we seek must meet two conditions (note that each candidate matrix M now has a column for each variable in S_f^D). First, the marginal distribution on each row must match the distribution specified by S_f^D. Second, if f_i is a root variable in S_f^D, the marginal distribution on column i must match the distribution specified by (S_o, θ_i). There is precisely one distribution P(M) that satisfies these conditions, and we can represent it using a graphical model that we call the combined model. Suppose that there are n objects in S_o. To create the combined model, we first introduce n copies of S_f^D. For each root variable i in S_f^D, we then connect all copies of variable i according to the structure of S_o (Figure 2c).
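As a sanity check on the decomposition step above, the following sketch (our code, using the parameters that appear in Figure 2: t active with probability 0.8, background cause with probability 0.1) verifies by enumeration that the deterministic model S_f^D reproduces the conditional distribution P(e | c) of the original model S_f:

import itertools

# e = (c AND t) OR b, with independent t ~ Bernoulli(0.8), b ~ Bernoulli(0.1).
p_t, p_b = 0.8, 0.1

for c in (0, 1):
    p_e1 = 0.0
    for t, b in itertools.product((0, 1), repeat=2):
        w = (p_t if t else 1 - p_t) * (p_b if b else 1 - p_b)
        e = int((c and t) or b)        # deterministic combination of root events
        p_e1 += w * e
    print(f"P(e=1 | c={c}) = {p_e1:.2f}")
# Prints P(e=1|c=0) = 0.10 and P(e=1|c=1) = 0.82, i.e. the CPD of S_f.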
The resulting graph provides the topology of the combined model, and the conditional probability distributions (CPDs) are inherited from S_o and S_f. Each node that belongs to the ith copy of S_o inherits a CPD from (S_o, θ_i), and all remaining nodes inherit a (deterministic) CPD from S_f. Now that the distribution P(M) is represented as a graphical model, standard inference techniques can be used to compute the missing entries in a partially observed matrix M. All results in this paper were computed using the implementation of the junction tree algorithm included in the Bayes Net Toolbox [10].

3 Experiments

When making inductive inferences, a rational agent should exploit all of the information available, including causal relationships between features and similarity relationships between objects. Whether humans are able to meet this normative standard is not clear, and almost certainly varies from task to task. On one hand, there are motivating examples like the case of the three smokers, where it seems natural to think about causal relationships and similarity relationships at the same time. On the other hand, Rehder [7] argues that causal information tends to overwhelm similarity information, and supports this conclusion with data from several tasks involving artificial categories. To help resolve these competing views, we designed several tasks in which subjects were required to simultaneously reason about causal relationships between enzymes and similarity relationships between animals.

3.1 Experiment 1

Materials and Methods. 16 adults participated in this experiment. Subjects were asked to reason about the presence of enzymes in a set of four animals: a mouse, a rat, a sheep, and a squirrel. Each subject was trained on two causal structures, each of which involved three enzymes. Pseudo-biological names like "dexotase" were used in the experiment, but here we will call the enzymes f1, f2 and f3. In the chain condition, subjects were told that f3 is known to be produced by several pathways, and that the most common pathway begins with f1, which stimulates production of f2, which in turn leads to the production of f3. In the common-effect condition, subjects were told that f3 is known to be produced by several pathways, and that one of the most common pathways involves f1 and the other involves f2.

Figure 3: Experiment 1: Behavioral data (column 1) and predictions for three models (combined, causal, and similarity), shown as z-scores for each animal (mouse, rat, squirrel, sheep) and each enzyme (f1, f2, f3). (a) Results for the chain condition. Known test results are marked with arrows: in task 1, subjects were told only that the mouse had tested positive for f1, and in task 2 they were told in addition that the rat had tested negative for f2. Error bars represent the standard error of the mean. (b) Results for the common-effect condition.

To reinforce each causal structure, subjects were shown 20 cards representing animals from twenty different mammal species (names of the species were not supplied). The card for each animal represented whether that animal had tested positive for each of the three enzymes. The cards were chosen to be representative of the distribution captured by a causal network with known structure (chain or common-effect) and known parameterization.
In the chain condition, for example, the network was a noisy-OR network with the form of a chain, where the leak probabilities were set to 0.4 (f1) or 0.3 (f2 and f3), and the probability that each causal link was active was set to 0.7. After subjects had studied the cards for as long as they liked, the cards were removed and subjects were asked several questions about the enzymes (e.g. "you learn about a new mammal: how likely is it that the mammal produces f3?"). The questions in this training phase were intended to encourage subjects to reflect on the causal relationships between the enzymes.

In both conditions, subjects were told that they would be testing the four animals (mouse, rat, sheep and squirrel) for each of the three enzymes. Each condition included two tasks. In the chain condition, subjects were told that the mouse had tested positive for f1, and asked to predict the outcome of each remaining test (Figure 1c). Subjects were then told in addition that the rat had tested negative for f2, and again asked to predict the outcome of each remaining test. Note that this second task requires subjects to integrate causal reasoning with similarity-based reasoning: causal reasoning predicts that the mouse has f2, and similarity-based reasoning predicts that it does not. In the common-effect condition, subjects were told that the mouse had tested positive for f3, then told in addition that the rat had tested negative for f2. Ratings were provided on a scale from 0 (very likely to test negative) to 100 (very likely to test positive).

Results. Subjects used the 100-point scale very differently: in task 1 of each condition, some subjects chose numbers between 80 and 100, and others chose numbers between 0 and 100. We therefore converted each set of ratings to z-scores. Average z-scores are shown in the first column of Figure 3, and the remaining columns show predictions for several models. In each case, model predictions have been converted from probabilities to z-scores to allow a direct comparison with the human data. Our combined model uses a tree over the four animals and a causal network over the features. We used the tree shown in Figure 1b, where objects o1 through o4 correspond to the mouse, the rat, the squirrel and the sheep. The tree component of our model has one free parameter: the total path length of the tree. The smaller the path length, the more likely that all four animals have the same feature values; the greater the path length, the more likely that distant animals in the tree (e.g. the mouse and the sheep) will have different feature values. All results reported here use the same value of this parameter: the value that maximizes the average correlation achieved by our model across Experiments 1 and 2. The causal component of our model includes no free parameters, since we used the parameters of the network that generated the cards shown to subjects during the training phase.

Comparing the first two columns of Figure 3, we see that our combined model accounts well for the human data. Columns 3 and 4 of Figure 3 show model predictions when we remove the similarity component (column 3) or the causal component (column 4) from our combined model. The model that uses the causal network alone is described by [2], among others, and the model that uses the tree alone is described by [6]. Both of these models miss qualitative trends evident in the human data.
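For reference, the following sketch (our reconstruction from the parameters quoted above, not the authors' code) implements the noisy-OR chain f1 -> f2 -> f3 used to generate the training cards, and computes the predictions a purely causal model makes after observing f1 = 1; by construction, these predictions are the same for every animal:

import itertools

leak = {1: 0.4, 2: 0.3, 3: 0.3}   # leak (background) probability of each enzyme
link = 0.7                        # probability that each causal link is active

def p_on(feature, parent):
    # Noisy-OR: the feature is on via its leak or via an active parent link.
    return 1 - (1 - leak[feature]) * (1 - link * parent)

def joint(f1, f2, f3):
    p1 = leak[1] if f1 else 1 - leak[1]
    p2 = p_on(2, f1) if f2 else 1 - p_on(2, f1)
    p3 = p_on(3, f2) if f3 else 1 - p_on(3, f2)
    return p1 * p2 * p3

den = sum(joint(1, f2, f3) for f2, f3 in itertools.product((0, 1), repeat=2))
p_f2 = sum(joint(1, 1, f3) for f3 in (0, 1)) / den   # P(f2=1 | f1=1) = 0.79
p_f3 = sum(joint(1, f2, 1) for f2 in (0, 1)) / den   # P(f3=1 | f1=1) ~ 0.69
print(p_f2, p_f3)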
In task 1 of each condition, the causal model makes identical predictions about the rat, the squirrel and the sheep: in task 1 of the chain condition, for example, it cannot use the similarity between the mouse and the rat to predict that the rat is also likely to test positive for f1. In task 1 of each condition the similarity model predicts that the unobserved features (f2 and f3 for the chain condition, and f1 and f2 for the common-effect condition) are distributed identically across the four animals. In task 1 of the chain condition, for example, the similarity model does not predict that the mouse is more likely than the sheep to test positive for f2 and f3. The limitations of the causal and similarity models suggest that some combination of causality and similarity is necessary to account for our data. There are likely to be approaches other than our combined model that account well for our data, but we suggest that accurate predictions will only be achieved when the causal network and the similarity information are tightly integrated. Simply averaging the predictions of the causal model and the similarity model will not suffice: in task 1 of the chain condition, for example, both of these models predict that the rat and the sheep are equally likely to test positive for f2, and computing an average across these models will result in the same prediction.

3.2 Experiment 2

Our working hypothesis is that similarity and causality should be combined in most contexts. An alternative hypothesis, the root-variables hypothesis, was suggested to us by Bob Rehder; it states that similarity relationships are used only if some of the root variables in a causal structure S_f are unobserved. For instance, similarity might have influenced inferences in the chain condition of Experiment 1 only because the root variable f1 was never observed for all four animals. The root-variables hypothesis should be correct in cases where all root variables in the true causal structure are known. In Figure 2c, for instance, similarity no longer plays a role once the root variables are observed, since the remaining variables are deterministically specified. We are interested, however, in cases where S_f may not contain all of the causally relevant variables, and where similarity can help to make predictions about the effects of unobserved variables. Consider, for example, the case of the three smokers, where S_f states that smoking causes lung cancer. Even though the root variable is observed for Tim, Tom and Zach (all three are smokers), we still believe that Tom is more likely to suffer from lung cancer than Zach after discovering that Tim has lung cancer. The case of the three smokers therefore provides intuitive evidence against the root-variables hypothesis, and we designed a related experiment to explore this hypothesis empirically.

Materials and Methods. Experiment 2 was similar to Experiment 1, except that the common-effect condition was replaced by a common-cause condition. In the first task for each condition, subjects were told only that the mouse had tested positive for f1.
In the second task, subjects were told in addition that the rat, the squirrel and the sheep had tested positive for f1, and that the mouse had tested negative for f2. Note that in the second task, values for the root variable (f1) were provided for all animals. 18 adults participated in this experiment.

Figure 4: Experiment 2: Behavioral data and predictions for three models (combined, causal, and similarity), shown as z-scores for each animal and each enzyme. In task 2 of each condition, the root variable in the causal network (f1) is observed for all four animals.

Results. Figure 4 shows mean z-scores for the subjects and for the four models described previously. The judgments for the first task in each condition replicate the finding from Experiment 1 that subjects combine causality and similarity when just one of the 12 animal-feature pairs is observed. The results for the second task rule out the root-variables hypothesis. In the chain condition, for example, the causal model predicts that the rat and the sheep are equally likely to test positive for f2. Subjects predict that the rat is less likely than the sheep to test positive for f2, and our combined model accounts for this prediction.

4 Discussion

We developed a model of inductive reasoning that is sensitive to causal relationships between features and to similarity relationships between objects, and demonstrated in two experiments that it provides a good account of human reasoning. Our model makes three contributions. First, it provides an integrated view of two inductive problems, causal reasoning and similarity-based reasoning, that are usually considered separately. Second, unlike previous accounts of causal reasoning, it acknowledges the importance of unknown but causally relevant variables, and uses similarity to constrain inferences about the effects of these variables. Third, unlike previous models of similarity-based reasoning, our model can handle novel properties that are causally linked to known properties.

For expository convenience we have emphasized the distinction between causality and similarity, but the notion of similarity needed by our approach will often have a causal interpretation. A tree-structured taxonomy, for example, is a simple representation of the causal process that generated biological species: the process of evolution. Our combined model can therefore be seen as a causal model that takes both relationships between features and evolutionary relationships between species into account. More generally, our framework can be seen as a method for building sophisticated causal models, and our experiments suggest that these kinds of models will be needed to account for the complexity and subtlety of human causal reasoning. Other researchers have proposed strategies for combining probabilistic models [11], and some of these methods may account well for our data. In particular, the product of experts approach [12] should lead to predictions that are qualitatively similar to the predictions of our combined model. Unlike our approach, a product of experts model is not a directed graphical model, and does not support predictions about interventions. Neither of our experiments explored inferences about interventions, but an adequate causal model should be able to handle inferences of this sort.
Causal knowledge and similarity are just two of the many varieties of knowledge that support inductive reasoning. Any single form of knowledge is a worthy topic of study, but everyday inferences often draw upon multiple kinds of knowledge. We have not provided a recipe for combining arbitrary forms of knowledge, but our work illustrates two general themes that may apply quite broadly. First, different generative models may capture different aspects of human knowledge, but all of these models use a common language: the language of probability. Probabilistic models are modular, and can be composed in many different ways to build integrated models of inductive reasoning. Second, the stochastic component of most generative models is in part an expression of ignorance. Using one model (e.g. a similarity model) to constrain the stochastic component of another model (e.g. a causal network) may be a relatively general method for combining probabilistic knowledge representations.

Although we have focused on human reasoning, integrated models of induction are needed in many scientific fields. Our combined model may find applications in computational biology: predicting whether an organism expresses a certain gene, for example, should rely on phylogenetic relationships between organisms and causal relationships between genes. Related models have already been explored: Engelhardt et al. [13] develop an approach to protein function prediction that combines phylogenetic relationships between proteins with relationships between protein functions, and several authors have explored models that combine phylogenies with hidden Markov models. Combining two models is only a small step towards a fully integrated approach, but probability theory provides a lingua franca for combining many different representations of the world.

Acknowledgments

We thank Bob Rehder and Brian Milch for valuable discussions. This work was supported in part by AFOSR MURI contract FA9550-05-1-0321, the William Asbjornsen Albert memorial fellowship (CK) and the Paul E. Newton Chair (JBT).

References

[1] A. Newell. Unified theories of cognition. Harvard University Press, Cambridge, MA, 1989.
[2] B. Rehder and R. Burnett. Feature inference and the causal structure of categories. Cognitive Science, 50:264–314, 2005.
[3] D. N. Osherson, E. E. Smith, O. Wilkie, A. Lopez, and E. Shafir. Category-based induction. Psychological Review, 97(2):185–200, 1990.
[4] S. A. Sloman. Feature-based induction. Cognitive Psychology, 25:231–280, 1993.
[5] C. Kemp and J. B. Tenenbaum. Theory-based induction. In Proceedings of the Twenty-Fifth Annual Conference of the Cognitive Science Society, pages 658–663. Lawrence Erlbaum Associates, 2003.
[6] C. Kemp, T. L. Griffiths, S. Stromsten, and J. B. Tenenbaum. Semi-supervised learning with trees. In Advances in Neural Information Processing Systems 16. MIT Press, Cambridge, MA, 2004.
[7] B. Rehder. When similarity and causality compete in category-based property generalization. Memory and Cognition, 34(1):3–16, 2006.
[8] J. P. Huelsenbeck and F. Ronquist. MRBAYES: Bayesian inference of phylogenetic trees. Bioinformatics, 17(8):754–755, 2001.
[9] D. Poole. Probabilistic Horn abduction and Bayesian networks. Artificial Intelligence, 64(1):81–129, 1993.
[10] K. Murphy. The Bayes Net Toolbox for MATLAB. Computing Science and Statistics, 33:1786–1789, 2001.
[11] C. Genest and J. V. Zidek. Combining probability distributions: a critique and an annotated bibliography. Statistical Science, 1(2):114–135, 1986.
[12] G. E. Hinton. Modelling high-dimensional data by combining simple experts. In Proceedings of the 17th National Conference on Artificial Intelligence. AAAI Press, 2000.
[13] B. E. Engelhardt, M. I. Jordan, and S. E. Brenner. A graphical model for predicting protein molecular function. In Proceedings of the 23rd International Conference on Machine Learning, 2006.
Detecting Humans via Their Pose

Alessandro Bissacco
Computer Science Department, University of California, Los Angeles
Los Angeles, CA 90095
[email protected]

Ming-Hsuan Yang
Honda Research Institute
800 California Street, Mountain View, CA 94041
[email protected]

Stefano Soatto
Computer Science Department, University of California, Los Angeles
Los Angeles, CA 90095
[email protected]

Abstract

We consider the problem of detecting humans and classifying their pose from a single image. Specifically, our goal is to devise a statistical model that simultaneously answers two questions: 1) is there a human in the image? and, if so, 2) what is a low-dimensional representation of her pose? We investigate models that can be learned in an unsupervised manner on unlabeled images of human poses, and that provide information that can be used to match the pose of a new image to the ones present in the training set. Starting from a set of descriptors recently proposed for human detection, we apply the Latent Dirichlet Allocation framework to model the statistics of these features, and use the resulting model to answer the above questions. We show how our model can efficiently describe the space of images of humans with their pose, by providing an effective representation of poses for tasks such as classification and matching, while performing remarkably well in human/non-human decision problems, thus enabling its use for human detection. We validate the model with extensive quantitative experiments and comparisons with other approaches on human detection and pose matching.

1 Introduction

Human detection and localization from a single image is an active area of research that has witnessed a surge of interest in recent years [9, 18, 6]. Simply put, given an image, we want to devise an automatic procedure that locates the regions that contain human bodies in arbitrary pose. This is hard because of the wide variability that images of humans exhibit. Given that it is impractical to explicitly model nuisance factors such as clothing, lighting conditions, viewpoint, body pose, and partial and/or self-occlusions, one can learn a descriptive model of human/non-human statistics. The problem then reduces to a binary classification task to which we can directly apply general statistical learning techniques. Consequently, the main focus of research on human detection so far has been on deriving a suitable representation [9, 18, 6], i.e. one that is most insensitive to typical appearance variations, so that it provides good features to a standard classifier. Recently, local descriptors based on histograms of gradient orientations such as [6] have proven particularly successful for human detection tasks. The main idea is to use distributions of gradient orientations in order to be insensitive to color, brightness and contrast changes and, to some extent, local deformations. However, to account for more macroscopic variations, due for example to changes in pose, a more complex statistical model is warranted. We show how a special class of hierarchical Bayesian processes can be used as generative models for these features and applied to the problem of detection and pose classification.

This work can be interpreted as an attempt to bridge the gap between the two related problems of human detection and pose estimation in the literature. In human detection, since a simple yes/no answer is required, there is no need to introduce a complex model with latent variables associated with physical quantities.
In pose estimation, on the other hand, the goal is to infer these quantities, and therefore a full generative model is a natural approach. Between these extremes lies our approach. We estimate a probabilistic model with a set of latent variables which do not necessarily admit a direct interpretation in terms of configurations of objects in the image. However, these quantities are instrumental to both the human detection and the pose classification problems. The main difficulty is in the representation of the pose information. Humans are highly articulated objects with many degrees of freedom, which makes defining pose classes a remarkably difficult problem. Even with manual labeling, how does one judge the distance between two poses, or cluster them? In such situations, we believe that the only avenue is an unsupervised method. We propose an approach which allows for unsupervised clustering of images of humans and provides a low-dimensional representation encoding essential information on their pose. The chief difference with standard clustering or dimensionality reduction techniques is that we derive a full probabilistic framework, which provides principled ways to combine and compare different models, as required for tasks such as human detection, pose classification and matching.

2 Context and Motivation

The literature on human detection and pose estimation is too broad for us to review here, so we focus on the case of a single image, neglecting scenarios where temporal information or a background model is available and effective algorithms based on silhouettes [20, 12, 1] or motion patterns [18] can be applied. Detecting humans and estimating poses from single images is a fundamental problem with a range of sensible applications, such as image retrieval and understanding. It makes sense to tackle this problem, as we know humans are capable of telling the locations and poses of people from the visual information contained in photographs. The question is how to represent such information, and the answer we give constitutes the main novelty of this work.

Numerous representation schemes have been exploited for human detection, e.g., Haar wavelets [18], edges [9], gradient orientations [6], gradients and second derivatives [19], and regions from image segmentation [15]. With these representations, algorithms such as template matching [9], support vector machines [19, 6], AdaBoost [18], and grouping [15] have been applied for the detection process, to name a few. Most approaches to pose estimation are based on body part detectors, using either edge, shape, color and texture cues [7, 21, 15], or cues learned from training data [19]. The optimal configuration of the part assembly is then computed using dynamic programming, as first introduced in [7], or by performing inference on a generative probabilistic model, using either Data-Driven Markov Chain Monte Carlo, Belief Propagation, or its non-Gaussian extensions [21]. These works focus on only one of the two problems, either detection or pose estimation. Our approach is different, in that our goal is to extract more information than a simple yes/no answer, while at the same time not reaching the full level of detail of determining the precise location of all body parts. Thus we want to simultaneously perform detection and pose classification, and we want to do it in an unsupervised manner. In this aspect, our work is related to the constellation models of Weber et al. [23], although we do not have an explicit decomposition of the object into parts.
We start from the representation [6] based on gradient histograms recently applied to human detection with excellent results, and derive a probabilistic model for it. We show that with this model one can successfully detect humans and classify their poses. The statistical tools used in this work, Latent Dirichlet Allocation (LDA) [3] and related algorithms [5, 4], were introduced in the text analysis context and have recently been applied to the problem of recognition of object and action classes [8, 22, 2, 16]. Contrary to most approaches (all but [8]), where the image is treated as a "bag of features" and all spatial information is lost, we encode the location and orientation of edges in the basic elements (words), so that this essential information is explicitly represented by the model.

3 A Probabilistic Model for Gradient Orientations

We first describe the features that we use as the basic representations of images, and then propose a probabilistic model with its application to the feature generation process.

3.1 Histogram of Oriented Gradients

Local descriptors based on gradient orientations are one of the most successful representations for image-based detection and matching, as was first demonstrated by Lowe in [14]. Among the various approaches within this class, the best performer for humans appears to be [6]. This descriptor is obtained by computing weighted histograms of gradient orientations over a grid of spatial neighborhoods (cells), which are then grouped in overlapping regions (blocks) and normalized for brightness and contrast changes. Assume that we are given a patch of 64 × 128 pixels; we divide the patch into cells of 8 × 8 pixels, and for each cell a gradient orientation histogram is computed.
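A simplified sketch of this computation follows (our own simplification, not the authors' code: gradients are hard-assigned to orientation bins, and the soft binning and Gaussian block weighting described next in the text are omitted):

import numpy as np

def hog_descriptor(patch, cell=8, n_bins=9, eps=1e-3):
    # Histogram of oriented gradients for a grayscale patch (e.g. 128 x 64).
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0        # unsigned orientation
    bins = np.minimum((ang / (180.0 / n_bins)).astype(int), n_bins - 1)

    ny, nx = patch.shape[0] // cell, patch.shape[1] // cell
    hist = np.zeros((ny, nx, n_bins))
    for i in range(ny):                                 # per-cell histograms,
        for j in range(nx):                             # weighted by magnitude
            sl = (slice(i * cell, (i + 1) * cell),
                  slice(j * cell, (j + 1) * cell))
            np.add.at(hist[i, j], bins[sl].ravel(), mag[sl].ravel())

    blocks = []                                         # overlapping 2 x 2 blocks
    for i in range(ny - 1):
        for j in range(nx - 1):
            v = hist[i:i + 2, j:j + 2].ravel()
            blocks.append(v / (np.linalg.norm(v) + eps))  # L2 normalization
    return np.concatenate(blocks)

d = hog_descriptor(np.random.default_rng(0).random((128, 64)))
print(d.shape)   # 15 x 7 blocks, 36 values each -> (3780,)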
The LDA model introduces a set of K latent variables, called topics. Each word in the document is assumed to be generated by one of the topics. Under this model, the generative process for each document w in the corpus is as follows:

1. Choose theta ~ Dirichlet(alpha).
2. For each word j = 1, ..., W in the dictionary, choose a word count w_j ~ p(w_j | theta, beta),

where the word counts w_j are drawn from a discrete distribution conditioned on the topic proportions theta: p(w_j | theta, beta) = beta_{j.} theta. Recently several variants of this model have been developed, notably Multinomial PCA [4], where the discrete distributions are replaced by multinomials, and the Gamma-Poisson process [5], where the numbers of words theta_i from each component are independent Gamma samples and p(w_j | theta, beta) is Poisson. The hyperparameter alpha in R_+^K represents the prior on the topic distribution, theta in R_+^K are the topic proportions, and beta in R_+^{W x K} are the parameters of the word distributions conditioned on topics. In the context of this work, words correspond to oriented gradients, and documents as well as the corpus correspond to images and a set of images, respectively. The topic derived by the LDA model is the pose of interest in this work. Here we can safely assume that the topic distributions beta are deterministic parameters; later, for the purpose of inference, we will treat them as random variables and assign them a Dirichlet prior: beta_{.k} ~ Dirichlet(eta), where beta_{.k} denotes the k-th column of beta. Then the likelihood of a document w is:

p(w | alpha, beta) = Integral p(theta | alpha) Product_{j=1}^{W} p(w_j | theta, beta) d theta    (1)

where documents are represented as a continuous mixture distribution. The advantage over a standard mixture of discrete distributions [17] is that we allow each document to be generated by more than one topic.

3.3 A Bayesian Model for Gradient Orientation Histograms

Now we can show how the described two-level Bayesian process finds a natural application in modeling the spatial distribution of gradient orientations. Here we consider the histogram of oriented gradients [6] as the basic feature from which we build our generative model, but let us point out that the framework we introduce is more general and can be applied to any descriptor based on histograms (see footnote 1). In this histogram descriptor, each bin represents the intensity of the gradient at a particular location, defined by a range of orientations and a local neighborhood (cell). Thus the bin height denotes the strength and number of the edges in the cell. The first thing to notice in deriving a generative model for this class of features is that, since they represent a weighted histogram, they have non-negative elements. Thus a proper generative model for these descriptors imposes non-negativity constraints. As we will see in the experiments, a linear approach such as Non-negative Matrix Factorization [13] leads to extremely poor performance, probably due to the high curvature of the space. On the opposite end, representing the nonlinearity of the space with a set of samples by Vector Quantization is feasible only using a large number of samples, which is against our goal of deriving an economical representation of the pose. We propose using the Latent Dirichlet Allocation model to represent the statistics of the gradient orientation features. In order to do so we need to quantize feature values. While not investigated in the original paper [6], quantization is common practice for similar histogram-based descriptors, such as [14].

Footnote 1: Notice that, due to the particular normalization procedure applied, the histogram features we consider here do not have unit norm (in fact, they are zero on uniform regions).
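To make the generative process of Section 3.2 concrete, here is a minimal sketch of sampling one synthetic document under the multinomial variant [4], where the total count N is split across bins with probabilities (beta theta)_j. The function name and the NumPy random interface are conveniences of this sketch.

```python
import numpy as np

def sample_document(alpha, beta, N, rng=np.random.default_rng(0)):
    """One draw from the LDA-style generative process.
    alpha: (K,) Dirichlet prior on topic proportions.
    beta:  (W, K) word distributions per topic (each column sums to 1).
    N:     total number of words in the document.
    Returns w: (W,) vector of word counts."""
    theta = rng.dirichlet(alpha)        # step 1: theta ~ Dirichlet(alpha)
    p_word = beta @ theta               # p(word = j | theta, beta) = (beta theta)_j
    return rng.multinomial(N, p_word)   # step 2: split N counts across the W bins
```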
We tested the effect of quantization on the performance of the human detector based on Histogram of Oriented Gradients descriptors and linear Support Vector Machines described in [6]. As evident in Figure 1, with 16 or more discrete levels we practically obtain the same performance as with the original continuous descriptors. Thus in what follows we can safely assume that the basic features are collections of small integers, the histogram bin counts w_j. Thus, if we quantize histogram bins and assign a unique word to each bin, we obtain a representation to which we can directly apply the LDA framework. Analogous to document analysis, an orientation histogram computed on an image patch is a document w represented as a bag of words (w_1, ..., w_W), where the word counts w_j are the bin heights. We assume that such a histogram is generated by a mixture of basic components (topics), where each topic z induces a discrete distribution p(r | beta_{.z}) on bins, representing a typical configuration of edges common to a class of elements in the dataset. By summing the contributions from each topic we obtain the total count w_j for each bin, distributed according to p(w_j | theta, beta). The main property of such a feature formation process, desirable for our applications, is the fact that topics combine additively. That is, the same bin may have contributions from multiple topics, and this models the fact that the bin height is the count of edges in a neighborhood which may include parts generated by different components. Finally, let us point out that by assigning a unique word to each bin we model spatial information, encoded in the word identity, whereas most previous approaches (e.g. [22]) using similar probabilistic models for object class recognition did not exploit this kind of information.

4 Probabilistic Detection and Pose Estimation

The first application of our approach is human detection. Notice that our main goal is to develop a model to represent the statistics of images for human pose classification. We use the human detection problem as a convenient testbed for validating the goodness of our representation, since for this application large labelled datasets and efficient algorithms are available. By no means do we intend to compete with state-of-the-art discriminative approaches for human detection alone, which are optimized to represent the decision boundary and thus are supposed to perform better than generative approaches in binary classification tasks. However, if the generative model is good at capturing the statistics of human images, we expect it to perform well also in discriminating humans from the background. In human detection, given a set of positive and negative examples and a previously unseen image I_new, we are asked to choose between two hypotheses: either it contains a human or it is a background image. The first step is to compute the gradient histogram representation w(I) for the test and training images. Then we learn a model for humans and background images and use a threshold on the likelihood ratio (see footnote 2) for detection:

L = P(w(I_new) | Human) / P(w(I_new) | Background)    (2)

For LDA (and the related models [5, 4]), the likelihoods p(w(I) | alpha, beta) are computed as in (1), where alpha, beta are model parameters and can be learned from data. In practice, we can assume alpha is known and compute an estimate of beta
from the training corpus. In doing so, we can choose from two main inference algorithms: mean field (variational) inference [3] and Gibbs sampling [10]. Mean field algorithms provide a lower bound on the likelihood, while Gibbs sampling gives statistics based on a sequential sampling scheme. As shown in Figure 1, in our experiments Gibbs sampling exhibited superior performance over mean field in terms of classification accuracy. We have experimented with two variations, a direct method and Rao-Blackwellised sampling (see [4] for details). Both methods gave similar performance; here we report the results obtained using the direct method, whose main iteration is as follows:

1. For each document w_i = (w_{i,1}, ..., w_{i,W}): first sample theta^(i) ~ p(theta | w_i, beta, alpha), and then sample v_{j.}^(i) ~ Multinomial(beta_{j.} theta^(i), w_{i,j}).
2. For each topic k: sample beta_{.k} ~ Dirichlet(sum_i v_{.k}^(i) + eta).

In pose classification, we start from a set of unlabeled training examples of human poses and learn the topic distribution beta. This defines a probabilistic mapping to the topic variables, which can be seen as an economical representation encoding essential information on the pose. That is, from an image I_new, we estimate the topic proportions theta-hat(I_new) as:

theta-hat(I_new) = Integral theta p(theta | w(I_new), alpha, beta) d theta    (3)

Pose information can be recovered by matching the new image I_new to an image I in the training set. For matching, ideally we would like to compute the matching score as S_opt(I, I_new) = P(w(I_new) | w(I), alpha, beta), i.e. the posterior probability of the test image I_new given the training image I and the model alpha, beta. However this would be computationally expensive, as for each pair I, I_new it requires computing an expectation of the form (3); thus we opted for a suboptimal solution. For each training document I, in the learning step we compute the posterior topic proportions theta-hat(I) as in (3). Then the matching score S between I_new and I is given by the dot product between the two vectors theta-hat(I) and theta-hat(I_new):

S(I, I_new) = < theta-hat(I), theta-hat(I_new) >    (4)

The computation of this score requires only a dot product between low-dimensional vectors theta-hat, so our approach represents an efficient method for matching and clustering poses in large datasets.

5 Experiments

We first tested the efficacy of our model on the human detection task. We used the dataset provided by [6], consisting of 2340 64 x 128 images of pedestrians in various configurations and 1671 images of outdoor scenes not containing humans. We collected negative examples by randomly sampling 10 patches from each of the first 1218 non-human images. These, together with 1208 positive examples and their left-right reflections, constituted our first training set. We used the learned model to classify the remaining 1132 positive examples and 5889 patches randomly extracted from the residual background images. We first computed the histograms of oriented gradients from the image patches following the procedure outlined in Section 3.1. These features are quantized so that they can be represented by our discrete stochastic model. We tested the effect of different quantization levels on the performance of the boosted SVM classifier [6]: an initial training on the provided dataset is followed by a boosting round where the trained classifier is applied to the background images to find false positives; these hard examples are then added to the training set for a second training of the classifier. As Figure 1 shows, the effect of quantization is significant only if we use less than 4 bits.
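As a concrete reference for the inference machinery just described, here is a minimal sketch of one sweep of the direct Gibbs method together with the matching score (4). One simplifying assumption: theta^(i) is resampled from its conditional given the current topic split v^(i) (a standard alternating scheme), rather than directly from p(theta | w_i, beta, alpha) as written above; array shapes and function names are also conveniences of this sketch.

```python
import numpy as np

def gibbs_sweep(Wc, Beta, V, alpha, eta, rng=np.random.default_rng(0)):
    """One Gibbs sweep for the direct method of Section 4.
    Wc:   (M, W) word counts, one row per document.
    Beta: (W, K) topic-word distributions (columns sum to 1).
    V:    (M, W, K) current split of each count w_ij across topics."""
    M, W = Wc.shape
    K = Beta.shape[1]
    for i in range(M):
        # theta ~ Dirichlet(alpha + per-topic counts of document i)
        theta = rng.dirichlet(alpha + V[i].sum(axis=0))
        p = Beta * theta + 1e-12              # (W, K), unnormalized topic split
        p /= p.sum(axis=1, keepdims=True)
        for j in range(W):                    # v_j. ~ Multinomial(., w_ij)
            V[i, j] = rng.multinomial(Wc[i, j], p[j])
    for k in range(K):                        # beta_.k ~ Dirichlet(sum_i v_.k + eta)
        Beta[:, k] = rng.dirichlet(eta + V[:, :, k].sum(axis=0))
    return Beta, V

def match_score(theta_train, theta_new):
    """Matching score (4): S(I, I_new) = <theta-hat(I), theta-hat(I_new)>."""
    return theta_train @ theta_new
```

In practice, theta-hat(I) in (3) can be approximated by averaging the theta samples collected across sweeps.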
Based on the results in Figure 1, we chose to discretize the features to 16 quantization levels.

Footnote 2: Ideally we would like to use the posterior ratio R = P(Human | I_new)/P(Background | I_new). However, notice that R is equal to (2) if we assume equal priors P(Human) = P(Background).

Given the number of topics K and the prior hyperparameters alpha, eta, we learned the topic distributions beta and topic proportions theta-hat(I) using either Gibbs sampling or mean field. We tested both Gamma [5] and Dirichlet [3, 4] distributions for the topic priors, obtaining best results with the multinomial model [4] with scalar priors alpha_i = a, eta_i = b; in these experiments a = 2/K and b = 0.5. The number of topics K is an important parameter that should be carefully chosen based on considerations of modeling power and complexity. With a higher number of topics we can more accurately fit the data, which can be measured by the increase in the likelihood of the training set. This does not come for free: we have a larger number of parameters and an increased computational cost for learning. Eventually, an excessive topic number causes overfitting, which can be observed as a decrease of the likelihood on the test dataset. For the INRIA data, experimental evaluations suggested that a good tradeoff is obtained with K = 24. We learned two models, one for positive and one for negative examples. For learning we ran the Gibbs sampling algorithm described in Section 4 for a total number of 300 samples per document, including 50 samples to compute the likelihoods (1). We also trained the model using the mean field approximation, but as we can see in Figures 1 and 4 the results using Gibbs sampling are better. For details on the implementation we refer to [4]. We then obtain a detector by computing the likelihood ratio (2) and comparing it with a threshold. In Figure 1 we show the performance of our detector on the INRIA dataset, where for the sake of comparison with other approaches boosting is not performed. We show the results for:

- Linear SVM classifier: trained as described, using the SVMLight software package.
- Vector Quantization: positive and negative models learned as collections of K clusters using the K-means algorithm. The decision rule is then nearest neighbor, that is, whether the closest cluster belongs to the positive or negative model.
- Non-negative Matrix Factorization: feature vectors are collected in a matrix V, and the factorization that minimizes ||V - WH||_2^2 with W, H non-negative is computed using the multiplicative update algorithm of [13]. Using an analogy with the LDA model, the columns of W contain the topic distributions, while the columns of H represent the component weights. A classifier is obtained as the difference of the residuals of the feature projections on the positive and negative models.

From the plot we see how the results of our approach are comparable with the performance of the Linear SVM, while being far superior to the other generative approaches. We would like to stress that a sole comparison on detection performance with state-of-the-art discriminative classifiers would be inappropriate, since our model targets pose classification, which is harder than binary detection. A fair comparison should divide the dataset into classes and compare our model with a multiclass classifier. But then we would face the difficult problem of how to label human poses. For the experiments on pose classification and matching, we used the CMU MoBo dataset [11].
It consists of sequences of subjects performing different motion patterns, each sequence taken from 6 different views. In the experiments we used 22 sequences of fast walking motion, picking the first 100 frames from each sequence. In the first experiment we trained the model with all the views and set the number of topics equal to the number of views, K = 6. As expected, each topic distribution represents a view, and by assigning every image I to the topic with highest proportion, k* = arg max_k theta-hat_k(I), we correctly associated all the images from the same view to the same topic. To obtain a more challenging setup, we restricted to a single view and tested the classification performance of our approach in matching poses. We learned a model with K = 8 topics from 16 training sequences, and used the remaining 6 for testing. In Figure 2 we show sample topic distributions from this model. In Figure 3, for each test sequence we display a sample frame and the associated top ten matches from the training data according to the score (4). We can see how the pose is matched against changes of appearance and motion style; specifically, a test subject pose is matched to similar poses of different subjects in the training set. This shows how the topic representation factors out most of the appearance variations and retains only essential information on the pose.

Figure 1: Human detection results. (Left) Effect on human detection performance of quantizing the histogram of oriented gradients descriptor [6] for a boosted linear SVM classifier based on these features; curves are shown for continuous features and for 32, 16, 8, and 4 quantization levels. We show miss rate vs. false positives per window (FPPW) curves on log scale. We can see that for 16 quantization levels or more the differences are negligible, thus validating our discrete approach. (Right) Performance of five detectors using HOG features trained without boosting and tested on the INRIA dataset: LDA detectors learned by Gibbs sampling and mean field, Vector Quantization, Non-negative Matrix Factorization (all with K = 24 components/codewords) and Linear SVM. We can see how the Gibbs LDA outperforms by far the other unsupervised clustering techniques and scores comparably with the Linear SVM, which is specifically optimized for the simpler binary classification problem.

Figure 2: Topic distributions and clusters. We show sample topics (2 out of 8) from the LDA model trained on the single-view MoBo sequences. For each topic k, we show 12 images in 2 rows. The first column shows the distribution of local orientations associated with topic k: (top) visualization of the orientations and (bottom) average gradient intensities for each cell. The right 5 columns show the top ten images in the dataset with highest topic proportion theta-hat_k, shown below each image. We can see that topics are tightly related to pose classes.

In order to give a quantitative evaluation of the pose matching performance and compare with other approaches, we labeled the dataset by mapping the set of walking poses to the interval [0, 1]. We manually assigned 0 to the frames at the beginning of the double support phase, when the swinging
foot touches the ground, and 1 to the frames where the legs are approximately parallel. We labeled the remaining frames automatically using linear interpolation between keyframes. The average interval between keyframes is 8.1 frames; this motivates our choice of the number of topics K = 8. For each test frame, we computed the pose error as the difference between the associated pose value and the average pose of the top 10 matches in the training dataset. We obtained an average error of 0.16, corresponding to 1.3 frames. In Figure 4 we show the average pose error per test sequence obtained with our approach, compared with Vector Quantization, where the pose is obtained as the average of the labels associated with the closest clusters, and Non-negative Matrix Factorization, where, as in LDA, similarity of poses is computed as the dot product of the component weights. In all the models we set an equal number of components/clusters, K = 8. We can see that our approach performs best in all testing sequences. In Figure 4 we also show the average pose error when matching test frames to a single training sequence. Although the different appearance affects the matching performance, overall the results show how our approach can be successfully applied to automatically match poses of different subjects.

6 Conclusions

We introduce a novel approach to human detection, pose classification and matching from a single image. Starting from a representation robust to a limited range of variations in the appearance of humans in images, we derive a generative probabilistic model which allows for automatic discovery of pose information. The model can successfully perform detection and provides a low-dimensional representation of the pose. It automatically clusters the images using representative distributions and allows for an efficient approach to pose matching. Our experiments show that our approach matches or exceeds the state of the art in human detection, pose classification and matching.

Figure 3: Pose matching examples. On the left, one sample frame from each test sequence; on the right, the top 10 matches in the training set based on the similarity score (4), reported below each image. We can see how our approach allows us to match poses even despite large changes in appearance, and the same pose is correctly matched across different subjects.

Figure 4: Pose matching error. (Left) Average pose error in matching test sequences to the training set, for our model (with both Gibbs and mean field learning), Non-negative Matrix Factorization and Vector Quantization. We see how our model trained with Gibbs sampling clearly outperforms the other approaches. (Right) Average pose error in matching test and training sequence pairs with our approach, where each row is a test sequence and each column a training sequence. The highest error corresponds to about 2 frames, while the mean error is 0.16 and amounts to approximately 1.3 frames.

Acknowledgments

This work was conducted while the first author was an intern at Honda Research Institute in 2005. Work at UCLA was supported by AFOSR F49620-03-1-0095 and ONR N00014-03-1-0850:P0001.

References
[1] A. Agarwal and B. Triggs. 3D human pose from silhouettes by relevance vector regression. CVPR, 2004.
[2] A. Agarwal and B. Triggs. Hyperfeatures: Multilevel local coding for visual recognition. ECCV, 2006.
[3] D. Blei, A. Ng, and M. Jordan. Latent Dirichlet allocation. Journal of Machine Learning Research, 2003.
[4] W. Buntine and A. Jakulin. Discrete principal component analysis. HIIT Technical Report, 2005.
[5] J. Canny. GaP: a factor model for discrete data. ACM SIGIR, pages 122-129, 2004.
[6] N. Dalal and B. Triggs. Histograms of oriented gradients for human detection. CVPR, 2005.
[7] P. F. Felzenszwalb and D. P. Huttenlocher. Efficient matching of pictorial structures. CVPR, 2000.
[8] R. Fergus, L. Fei-Fei, P. Perona, and A. Zisserman. Learning object categories from Google's image search. Proc. ICCV, pages 1816-1823, 2005.
[9] D. M. Gavrila and V. Philomin. Real-time object detection for smart vehicles. Proc. ICCV, 1999.
[10] T. L. Griffiths and M. Steyvers. Finding scientific topics. Proc. National Academy of Sciences, 2004.
[11] R. Gross and J. Shi. The CMU motion of body (MoBo) database. Technical report, CMU, 2001.
[12] G. Shakhnarovich, P. Viola, and T. Darrell. Fast pose estimation with parameter-sensitive hashing. ICCV, 2003.
[13] D. Lee and H. Seung. Learning the parts of objects by non-negative matrix factorization. Nature, 1999.
[14] D. G. Lowe. Object recognition from local scale-invariant features. Proc. ICCV, pages 1150-1157, 1999.
[15] G. Mori, X. Ren, A. A. Efros, and J. Malik. Recovering human body configurations: Combining segmentation and recognition. Proc. CVPR, 2:326-333, 2004.
[16] J. C. Niebles, H. Wang, and L. Fei-Fei. Unsupervised learning of human action categories using spatial-temporal words. Proc. BMVC, 2006.
[17] K. Nigam, A. K. McCallum, S. Thrun, and T. Mitchell. Text classification from labeled and unlabeled documents using EM. Machine Learning, pages 1-34, 2000.
[18] P. Viola, M. Jones, and D. Snow. Detecting pedestrians using patterns of motion and appearance. ICCV, 2003.
[19] R. Ronfard, C. Schmid, and B. Triggs. Learning to parse pictures of people. ECCV, 2002.
[20] R. Rosales and S. Sclaroff. Inferring body pose without tracking body parts. Proc. CVPR, 2:506-511, 2000.
[21] L. Sigal, M. Isard, B. H. Sigelman, and M. Black. Attractive people: Assembling loose-limbed models using non-parametric belief propagation. Proc. NIPS, pages 1539-1546, 2003.
[22] J. Sivic, B. C. Russell, A. A. Efros, A. Zisserman, and W. T. Freeman. Discovering object categories in image collections. Proc. ICCV, 2005.
[23] M. Weber, M. Welling, and P. Perona. Toward automatic discovery of object categories. CVPR, 2000.
Information Bottleneck for Non Co-Occurrence Data

Yevgeny Seldin*  Noam Slonim**  Naftali Tishby*
* School of Computer Science and Engineering, and Interdisciplinary Center for Neural Computation, The Hebrew University of Jerusalem
** The Lewis-Sigler Institute for Integrative Genomics, Princeton University
{seldin,tishby}@cs.huji.ac.il, [email protected]

Abstract

We present a general model-independent approach to the analysis of data in cases when these data do not appear in the form of co-occurrence of two variables X, Y, but rather as a sample of values of an unknown (stochastic) function Z(X, Y). For example, in gene expression data the expression level Z is a function of gene X and condition Y; or in movie rating data the rating Z is a function of viewer X and movie Y. The approach represents a consistent extension of the Information Bottleneck method that has previously relied on the availability of co-occurrence statistics. By altering the relevance variable we eliminate the need for a sample of the joint distribution of all input variables. This new formulation also enables simple MDL-like model complexity control and prediction of missing values of Z. The approach is analyzed and shown to be on a par with the best known clustering algorithms for a wide range of domains. For the prediction of missing values (collaborative filtering) it improves the currently best known results.

1 Introduction

In the situation of information explosion that characterizes today's world, the need for automatic tools for data analysis is more than obvious. Here, we focus on an unsupervised analysis of data that can be organized in matrix form. Clearly, this broad definition covers various types of data. For instance, in text analysis data, the rows of a matrix correspond to words, the columns to different documents, and entries indicate the number of occurrences of a particular word in a specific document. In a matrix of gene expression data, rows correspond to genes, columns to various experimental conditions, and entries indicate expression levels of given genes in given conditions. In movie rating data, rows correspond to viewers, columns to movies, and entries indicate ratings made by the viewers. Finally, for financial data, rows correspond to stocks, columns to different time points, and each entry indicates a price change of a particular stock at a given time point. While the text analysis case is a classical example of co-occurrence data, the remaining examples are not naturally interpreted that way. Typically, a normalized words-documents table is used as an estimator of a words-documents joint probability distribution, where each entry estimates the probability of finding a given word in a given document, whereas the words are assumed to be independent of each other [1, 2]. By contrast, values in a financial data matrix are a general function of stocks and days, and in particular might include negative numerical values. Here, the data cannot be regarded as a sample from a joint probability distribution of stocks and days, even if a normalization is applied. Though it can be argued that each entry of the matrix is a sample from a joint probability distribution of three variables (stock, day, and price change), the degenerate nature of this sample must be taken into account: the rate change of a given stock on a given day occurs only once and no statistics exist. Therefore the joint probability distribution of the three variables cannot be estimated by direct sampling.
A similar argument applies to survey data like movie ratings. In this case the sample might be even more degenerate, with many "missing values", since not all the viewers rate all the movies. Although gene expression data can be considered as a repeatable experiment, very often different experimental conditions correspond to only one single column in the matrix. Thus once again a single data point represents the joint statistics of three variables: gene, condition, and expression level. Nevertheless, in most such cases there is a statistical relationship within rows and/or within columns of a matrix. For instance, people with similar interests typically give similar ratings to movies, and movies with similar characteristics give rise to similar rating profiles. Such relationships can be exploited by clustering algorithms that group together similar rows and/or columns, and furthermore make it possible to complete the missing entries in the matrix [3]. The existing clustering techniques can be classified into three major categories. (i) Similarity- or distance-based methods require a pre-defined similarity measure that can be applied to all data points and possibly to new points as well. The nature of the distance measure is crucial to these techniques and inherently requires expert knowledge of the application domain, which is often unavailable. (ii) Generative modeling techniques, in which a specific class of statistical models is chosen to describe the data. As before, an a priori choice of an appropriate model is far from obvious for most real world applications. (iii) An alternative line of study, relevant to our work, is the Information Bottleneck (IB) approach [4] and its extensions. Instead of defining the clustering objective through a distortion measure or a data generation process, the approach suggests using relevant variables. A tradeoff between compression of the irrelevant and prediction of the relevant variables is then optimized using information theoretic principles. Importantly, the definition of the relevant variable is often natural and obvious for the task at hand, and in turn the method yields the optimal relevant distortion for the problem. Since the original work in [4], multiple studies have highlighted the theoretical and practical importance of the IB method, in particular in the context of cluster analysis [2, 5, 6, 7, 8]. However, the original formulation is based on the availability of co-occurrence data. In practice a given co-occurrence table is treated as a finite sample out of a joint distribution of the variables, namely row and column indices. Unfortunately, as mentioned above, this assumption does not fit many realistic datasets, thus preventing a direct application of the IB approach in various domains. To address this issue, in [9] a random walk over the data points is defined, serving to transform non co-occurrence data into a transition probability matrix that can be further analyzed via an IB algorithm. In a more recent work [10] the suggestion is made to use the mutual information between different data points as part of a general information-theoretic treatment of the clustering problem. The resulting algorithm, termed the Iclust algorithm, was demonstrated to be superior or comparable to 18 other commonly used clustering techniques over a wide range of applications where the input data cannot be interpreted as co-occurrences [10]. However, both of these approaches have limitations:
Iclust requires a sufficient number of columns in the matrix for reliable estimation of the information relations between rows, and the Markovian relaxation algorithm involves various non-trivial steps in the data pre-processing [9]. Here, we suggest an alternative approach, inspired by the multivariate IB framework [8]. The multivariate IB principle expands the original IB work to handle situations where multiple systems of clusters are constructed simultaneously with respect to different variables, and the input data may correspond to more than two variables. While the multivariate IB was originally proposed for co-occurrence data, we argue here that this framework is rich enough to be rigorously applicable in the new situation. The idea is simple and intuitive: we look for a compact grouping of rows and/or columns such that the product space defined by the resulting clusters is maximally informative about the matrix content, i.e. the matrix entries. We show that this problem can be posed and solved within the original multivariate IB framework. The new choice of relevance variable eliminates the need to know the joint distribution of all the input variables (which is inaccessible in all the applications presented here). Moreover, when missing values are present, the analysis suggests an information theoretic technique for their completion. We explore the application of this approach to various domains. For gene expression data and financial data we obtain clusters of comparable quality (measured as coherence with manual labeling) to those obtained by state-of-the-art methods [10]. For movie rating matrix completion, performance is superior to the best known alternatives in the collaborative filtering literature [11].

2 Theory

2.1 Problem Setting

We henceforth denote the rows of a matrix by X, the columns by Y, and the matrix entries by Z (and small x, y and z for specific instances). The number of rows is denoted by n, and the number of columns by m. We regard X and Y as a discrete coordinate space and Z(X, Y) as a function. Generalization to higher dimensions and continuous coordinates is readily possible, but not discussed here. For a given matrix, a row x, and a column y, the value of z(x, y) is assumed to be deterministic. This can be relaxed as well. The objective is to find "good" partitions of the X-Y space that will be informative with respect to the function values Z(X, Y). The partitions are defined by grouping rows into clusters of rows C and grouping columns into clusters of columns D. The complexity of such partitions is measured by the sum of the weighted mutual information values n I(X; C) + m I(Y; D). For the hard partitions considered in the paper this sum is the number of bits required to describe the partition (see [12]). The informativeness of the partition is measured by another mutual information, I(C, D; Z). In these terms, the goal is to find minimally complex partitions that preserve a given information level about the matrix values Z. This can be expressed via the minimization of the following functional:

min_{q(c|x), q(d|y)}  n I(X; C) + m I(Y; D) - beta I(C, D; Z),    (1)

where q(c|x) is the mapping of rows x to row clusters c, q(d|y) is the mapping of columns y to column clusters d, and beta is a Lagrange multiplier controlling the tradeoff between compression and accuracy. We first derive the relations between the quantities in the above optimization problem and then describe a sequential algorithm for its minimization.
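Throughout the derivation below, a convenient concrete representation of the problem is a sparse list of observed (x, y, z) triples together with hard assignment vectors for the two partitions. The following minimal Python sketch fixes this representation; all names and the NumPy layout are conveniences of this sketch, not prescribed by the method.

```python
import numpy as np

# Observed entries as parallel arrays: entry k says z(xs[k], ys[k]) = zs[k].
# Missing entries simply do not appear, which encodes the indicator 1_{x,y}.
xs = np.array([0, 0, 1, 2, 2])   # row indices (n rows in total)
ys = np.array([0, 2, 1, 0, 2])   # column indices (m columns in total)
zs = np.array([1, 3, 2, 1, 3])   # categorical values z(x, y)

# Hard partitions: c_of[x] in {0, ..., |C|-1}, d_of[y] in {0, ..., |D|-1}.
c_of = np.array([0, 1, 1])       # q(c|x) as a deterministic assignment
d_of = np.array([0, 0, 1])       # q(d|y) likewise
N = len(zs)                      # number of populated entries
```

The empirical distributions defined next are then simple counts over these arrays.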
We will stick to the following notation conventions: p is used for distributions that involve only input parameters and hence do not change during the analysis, p-hat is used for empirical distributions, q for the sought mapping distributions, and q-hat for empirical distributions dependent on the sought mappings. By the definition of the mutual information [12]:

I(X; C) = sum_{x,c} p(x) q(c|x) log [q(c|x) / q(c)] ~ sum_{x,c} p-hat(x) q(c|x) log [q(c|x) / q-hat(c)].

We define the indicator function:

1_{x,y} = 1 if the entry (x, y) is present in the matrix, and 1_{x,y} = 0 if the entry (x, y) is absent,

and denote the total number of populated entries (which is our sample size) by N = sum_{x,y} 1_{x,y}. Then:

p-hat(x) = (sum_y 1_{x,y}) / N = (number of populated entries in row x) / (total number of populated entries),
q-hat(c) = sum_x p-hat(x) q(c|x);

I(Y; D), p-hat(y), and q-hat(d) are defined similarly. Further,

I(C, D; Z) = sum_{c,d,z} q(c, d) q(z|c, d) log [q(z|c, d) / p(z)] ~ sum_{c,d,z} q-hat(c, d) q-hat(z|c, d) log [q-hat(z|c, d) / p-hat(z)].

We assume Z is a categorical variable, thus:

p-hat(z) = (sum_{x,y: z(x,y)=z} 1_{x,y}) / N = (number of entries equal to z) / (total number of populated entries),

q-hat(c, d) = (sum_{x,y} q(c|x) q(d|y) 1_{x,y}) / N = (number of populated entries in section c, d) / (total number of populated entries),

q-hat(z|c, d) = (sum_{x,y: z(x,y)=z} q(c|x) q(d|y)) / (sum_{x,y} q(c|x) q(d|y) 1_{x,y}) = (number of entries equal to z in section c, d) / (number of populated entries in section c, d).

In the special case of complete data matrices, 1_{x,y} is identically 1 and q-hat(c, d) decomposes as q-hat(c, d) = q-hat(c) q-hat(d). In addition, p-hat(x) and p-hat(y) take the form p-hat(x) = 1/n and p-hat(y) = 1/m. But in the general case considered in this paper X and Y (and thus C and D) are not independent.

2.2 Sequential Optimization

Given q(c|x) and q(d|y), one can calculate all the quantities defined above, and in particular the minimization functional L_min = n I(X; C) + m I(Y; D) - beta I(C, D; Z) defined in equation (1). To minimize L_min (using hard partitions) we can use the sequential (greedy) optimization algorithm suggested in [13]. This algorithm is quite simple:

1. Start with a random (hard) partition q(c|x), q(d|y).
2. Iterate until convergence (no changes at step (b) are made): traverse all rows x and columns y of the matrix in a random order. For each row/column:
   (a) Draw x (or y) from its cluster.
   (b) Reassign it to a new cluster c* (or d*), so that L_min is minimized. The new cluster may turn out to be the old cluster, and then no change is counted.

Due to the monotonic decrease of L_min, which is lower bounded by -beta H(Z), the algorithm is guaranteed to converge to a local minimum of (1). Multiple random initializations may be used to improve the result. This simple algorithm is by far not the only way to optimize (1), but in practice it was shown to achieve very good results on similar optimization problems [2]. The complexity of the algorithm is analyzed in the supplementary material, where it is shown to be O(M(n + m)|C||D|), where M is the number of iterations required for convergence (usually 10-40) and |C|, |D| are the cardinalities of the corresponding variables.

2.3 Minimum Description Length (MDL) Formulation

The minimization functional L_min has three free parameters that have to be externally determined: the tradeoff (or resolution) parameter beta, and the cardinalities |C| and |D|. Whereas in some applications they may be given (e.g. the desired number of clusters), there are cases when they also require optimization (as in the example of matrix completion in the next section).
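Returning to the sequential procedure of Section 2.2, here is a minimal sketch of evaluating the objective and running one greedy sweep over rows, using the sparse representation introduced earlier. Column reassignment is symmetric and omitted; recomputing the full objective for each candidate move, rather than using incremental updates, is a simplification of this sketch.

```python
import numpy as np

def L_min(xs, ys, zs, c_of, d_of, nC, nD, nZ, beta):
    """Hard-partition objective (1): n I(X;C) + m I(Y;D) - beta I(C,D;Z),
    with all distributions estimated from the populated entries only."""
    N = float(len(zs))
    n, m = len(c_of), len(d_of)

    def part_MI(idx, assign, K):
        counts = np.bincount(idx, minlength=len(assign)).astype(float)
        p = counts / N                                           # p_hat(x)
        q = np.bincount(assign, weights=counts, minlength=K) / N  # q_hat(c)
        keep = p > 0
        return -(p[keep] * np.log(q[assign[keep]])).sum()        # hard q(c|x)

    joint = np.zeros((nC, nD, nZ))
    np.add.at(joint, (c_of[xs], d_of[ys], zs), 1.0 / N)          # q_hat(c,d,z)
    q_cd = joint.sum(2, keepdims=True)
    p_z = joint.sum((0, 1), keepdims=True)
    nz = joint > 0
    I_cdz = (joint[nz] * np.log(joint[nz] / (q_cd * p_z)[nz])).sum()
    return n * part_MI(xs, c_of, nC) + m * part_MI(ys, d_of, nD) - beta * I_cdz

def sweep_rows(xs, ys, zs, c_of, d_of, nC, nD, nZ, beta):
    """One greedy pass: reassign each row to the cluster minimizing L_min."""
    for x in np.random.permutation(len(c_of)):
        c_of[x] = min(range(nC),
                      key=lambda c: L_min(xs, ys, zs,
                                          np.where(np.arange(len(c_of)) == x, c, c_of),
                                          d_of, nC, nD, nZ, beta))
    return c_of
```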
To perform such optimization, the Minimum Description Length (MDL) principle [14] is used. The idea behind MDL is that models achieving better compression of the training data (when the compression includes the model description) also achieve better generalization on the test data. The following compression scheme is defined: |C| row and |D| column clusters define |C||D| sections, each getting roughly N/(|C||D|) samples. The corresponding distributions q-hat(z|c, d) over the categorical variable Z may be described by (|Z||C||D|/2) log (N/(|C||D|)) bits (see [14]). As already mentioned, the matrix partition itself may be described by n I(X; C) + m I(Y; D) bits. And given the partition and the distributions q-hat(z|c, d), the number of bits required to code the matrix entries is N H(Z|C, D) [12]. Thus the total description length is

n I(X; C) + m I(Y; D) + N H(Z|C, D) + (|Z||C||D|/2) log (N/(|C||D|)).

Since H(Z|C, D) = H(Z) - I(C, D; Z) and H(Z) is constant, the latter can be omitted from the optimization, which results in the total minimization functional

F_mdl = n I(X; C) + m I(Y; D) - N I(C, D; Z) + (|Z||C||D|/2) log (N/(|C||D|)).    (2)

Observe that, constrained on |C| and |D|, the L_min corresponding to F_mdl takes the form

L_min = n I(X; C) + m I(Y; D) - N I(C, D; Z),    (3)

i.e., the optimal tradeoff beta = N is uniquely determined. Since in practice F_mdl is roughly convex in both |C| and |D|, the optimal values for these two parameters may be easily determined by scanning.

Figure 1: The graphs G_in (a) and G_out (b) in the multivariate IB formulation, defined over the input variables X, Y, Z and the compression variables C, D.

2.4 Relation with the Multivariate Information Bottleneck

Multivariate Information Bottleneck (IB) [8] is an unsupervised approach to structured data exploration. Its core lies in combining the Bayesian network formalism [15] with the Information Bottleneck method [4]. Multivariate IB searches for a meaningful structured partition of the data, defined by compression variables (in our case these are C and D). Two graphs, G_in and G_out, are defined. The former specifies the relations between the input (data) variables (in our case, X, Y, and Z) and the compression variables. The latter specifies the information terms that are to be preserved by the partition. A tradeoff between the multi-information preserved in the input structure, I^{G_in} (which we want to minimize), and the multi-information expressed by the target structure, I^{G_out} (which we want to maximize), is then optimized. For a set of variables V = V_1, ..., V_n, a directed acyclic graph (Bayesian network) G, and a joint probability distribution p(V) over V, the multi-information I^G(V) is defined as:

I^G[p(V)] = sum_{i=1}^{n} I(V_i; Pa_G(V_i)),

where Pa_G(V_i) are the parents of node V_i in G. The graphs G_in and G_out corresponding to our case are given in Figure 1. (The dashed link between X and Y in G_in appears when missing values are present in the matrix and may be chosen in either direction.) The corresponding optimization functional is:

min_{q(c|x), q(d|y)} I^{G_in}(X; Y; Z; C; D) - beta I^{G_out}(X; Y; Z; C; D)
= min_{q(c|x), q(d|y)} I(X; Y) + I(X, Y; Z) + I(X; C) + I(Y; D) - beta I(C, D; Z).

By observing that I(X, Y; Z) and I(X; Y) are independent of q(c|x) and q(d|y), we can eliminate them from the above optimization and obtain exactly the optimization functional defined in (1), only with equal weighting of I(X; C) and I(Y; D). Thus the approach may be seen as a special case of the multivariate IB for the graphs G_in and G_out defined in Figure 1.
An important distinction should be made, though: unlike the multivariate IB, we do not require the existence of the joint probability distribution p(x, y, z). This is achieved by excluding the term I(X, Y; Z) from the optimization functional.

2.5 Relation with Information-Based Clustering

A recent information-theoretic approach to cluster analysis is given in [10], and is known as information-based clustering, abbreviated as Iclust. In contrast to the original IB method, Iclust is equally applicable to co-occurrence as well as non co-occurrence data. In the following we highlight the relation of our work to this earlier contribution. By changing the notations used in [10] to those used here, we can write the similarity measure s(x_1, x_2) used in [10] as:

s(x_1, x_2) = sum_{z_1,z_2} sum_y p(y) p(z_1, z_2 | x_1, x_2, y) log [ sum_y p(y) p(z_1, z_2 | x_1, x_2, y) / (sum_{y_1} p(y_1) p(z_1 | x_1, y_1) sum_{y_2} p(y_2) p(z_2 | x_2, y_2)) ]
For both applications we use exactly the same setting as [10], including row-wise quantization of the input data into five equally populated bins and choosing the same values for the ? parameter. The first dataset consists of gene expression levels of yeast genes in 173 various forms of environmental stress [16]. Previous analysis identified a group of ? 300 stress-induced and ? 600 stress-repressed genes with ?nearly identical but opposite patterns of expression in response to the environmental shifts? [17]. These 900 genes were termed the yeast environmental stress response (ESR) module. Following [10] we cluster the genes into |C| = 5, 10, 15, and 20 clusters. To assess 1 Supplementary material is available at http://www.cs.huji.ac.il/?seldin the biological significance of the results we consider the coherence [18] of the obtained clusters with respect to three Gene Ontologies (GOs) [19]. Specifically, the coherence of a cluster is defined as the percentage of elements within this cluster that are given an annotation that was found to be significantly enriched in the cluster [18]. The results achieved by our algorithm on this dataset are comparable to the results achieved by I-Clust in all the verified settings - see Table 1. The second dataset is the day-to-day fractional changes in the price of the stocks in the Standard & Poor (S&P) 500 list2 , during 273 trading days of 2003. As with the gene expression data we take exactly the same setting used by [10] and cluster the stocks into |C| = 5, 10, 15 and 20 clusters. To evaluate the coherence of the ensuing clusters we use the Global Industry Classification Standard3 , which classifies companies into four different levels, organized in a hierarchical tree: sector, industry group, industry, and subindustry. As with the ESR dataset our results are comparable with the results of I-Clust for all the configurations - see Table 1. 3.2 Matrix Completion and Collaborative Filtering Here, we explore the full power of our algorithm in simultaneous grouping of rows and columns of a matrix. A highly relevant application is matrix completion - given a matrix with missing values we would like to be able to complete it by utilizing similarities between rows and columns. This problem is at the core of collaborative filtering applications, but may also appear in other fields. We test our algorithm on the publicly available MovieLens 100K dataset4 . The dataset consists of 100,000 ratings on a five-star scale for 1,682 movies by 943 users. We take the five non-overlapping splits of the dataset into 80% train on 20% test size provided at the MovieLens web site. We stress that with this division the training data are extremely sparse - only 5% of the training matrix entries are populated, whereas 95% of the values are missing. To find a ?good? bi-clustering of the ratings matrix, minimization of Fmdl defined in (2) is done by scanning cluster cardinalities |C| and |D| and optimizing Lmin as defined in (3) for each fixed pair of |C|, |D|. The minimum of Fmdl is obtained at |C| ? 13 and |D| ? 6 with beyond 1% sensitivity to small changes in |C| and in |D| both in Fmdl values and in prediction accuracy. See supplementary material1 for visualization of the solution at |C| = 4 and |D| = 3. To measure the accuracy of our algorithm we use mean absolute error (MAE) metrics, which is commonly used for evaluation on this dataset [11]. The mean absolute error is defined as: PN M AE = N1 i=1 |zi ? ri |, where zi -s are the predicted and ri -s are the actual ratings. 
To convert the distributions q-hat(z|c, d) obtained in our clustering procedure into concrete predictions, we take the median of the z values within each section c, d. Note that our algorithm is general and does not directly optimize the MAE error functional. Nevertheless we obtain an MAE of 0.72 (with a deviation of less than 0.01 over multiple experiments). This confidently beats the "magic barrier" of 0.73 reported in the collaborative filtering literature [11]. The root mean squared error (RMSE) measured for the same clustering, with the mean of the z values within each section c, d taken for prediction, yields 0.96 (with a deviation below 0.01). This is much better than the 1.165 RMSE reported for a dataset 20 times larger [20] and quite close to the 0.9525 RMSE reported by Netflix for a dataset 1000 times larger of a similar nature (see http://www.netflixprize.com/rules).

4 Discussion

A new model-independent approach to the analysis of data given in the form of samples of a function Z(X, Y), rather than samples of co-occurrence statistics of X and Y, is introduced. From a theoretical viewpoint the approach is a much-needed extension of the Information Bottleneck method that allows for its application to entirely new domains. The approach also provides a natural way to do bi-clustering and matrix completion. From a practical viewpoint, the major contribution of the paper is the achievement of the best known results for a wide range of applications with a single algorithm. In addition, we improve on the results of prediction of missing values (collaborative filtering). Possible directions for further research include generalization to continuous data values, such as those obtained in gene expression and stock price data, and relaxation of the algorithm to "soft" clustering solutions. Another interesting extension would be to dimensionality reduction, rather than clustering, as occurs in IB when applied to continuous variables [21]. The proposed framework also provides a natural platform for the derivation of generalization bounds for missing-value prediction, which will be discussed elsewhere.

References

[1] Noam Slonim and Naftali Tishby. Document clustering using word clusters via the information bottleneck method. In Proceedings of the 23rd Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, 2000.
[2] Noam Slonim. The Information Bottleneck: Theory and Applications. PhD thesis, The Hebrew University of Jerusalem, 2002.
[3] Sara C. Madeira and Arlindo L. Oliveira. Biclustering algorithms for biological data analysis: A survey. IEEE/ACM Transactions on Computational Biology and Bioinformatics, 1(1):24-45, January 2004.
[4] Naftali Tishby, Fernando Pereira, and William Bialek. The information bottleneck method. In Allerton Conference on Communication, Control and Computation, volume 37, pages 368-379, 1999.
[5] Janne Sinkkonen and Samuel Kaski. Clustering based on conditional distributions in an auxiliary space. Neural Computation, 14(1):217-239, 2002.
[6] David Gondek and Thomas Hofmann. Non-redundant data clustering. In 4th IEEE International Conference on Data Mining, 2004.
[7] Susanne Still, William Bialek, and Leon Bottou. Geometric clustering using the information bottleneck method. In Advances in Neural Information Processing Systems 16.
[8] Noam Slonim, Nir Friedman, and Naftali Tishby. Multivariate information bottleneck. Neural Computation, 18, 2006.
Neural Computation, 18, 2006. [9] Naftali Tishby and Noam Slonim. Data clustering by Markovian relaxation and the information bottleneck method. In NIPS, 2000. [10] Noam Slonim, Gurinder Singh Atwal, Gašper Tkačik, and William Bialek. Information-based clustering. In Proceedings of the National Academy of Sciences (PNAS), volume 102, pages 18297-18302, December 2005. [11] J. Herlocker, J. Konstan, L. Terveen, and J. Riedl. Evaluating collaborative filtering recommender systems. ACM Transactions on Information Systems, 22(1):5-53, January 2004. [12] Thomas M. Cover and Joy A. Thomas. Elements of Information Theory. Wiley Series in Telecommunications. John Wiley & Sons, New York, NY, 1991. [13] Noam Slonim, Nir Friedman, and Naftali Tishby. Unsupervised document classification using sequential information maximization. In Proceedings of the 25th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR), 2002. [14] A. Barron, J. Rissanen, and B. Yu. The minimum description length principle in coding and modeling. IEEE Transactions on Information Theory, 44:2743-2760, 1998. [15] J. Pearl. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. San Mateo, CA: Morgan Kaufmann Publishers, 1988. [16] Gasch A.P., Spellman P.T., Kao C.M., Carmel-Harel O., Eisen M.B., Storz G., Botstein D., and Brown P.O. Genomic expression programs in the response of yeast cells to environmental changes. Molecular Biology of the Cell, 11(12):4241-57, December 2000. [17] A.P. Gasch. The environmental stress response: a common yeast response to environmental stresses. Topics in Current Genetics (series editor S. Hohmann), 1:11-70, 2002. [18] Segal E., Shapira M., Regev A., Pe'er D., Botstein D., Koller D., and Friedman N. Module networks: identifying regulatory modules and their condition-specific regulators from gene expression data. Nature Genetics, 34(2):166-76, 2003. [19] M. Ashburner, C. A. Ball, J. A. Blake, D. Botstein, H. Butler, J. M. Cherry, A. P. Davis, K. Dolinski, S. S. Dwight, J. T. Eppig, M. A. Harris, D. P. Hill, L. Issel-Tarver, A. Kasarskis, S. Lewis, J. C. Matese, J. E. Richardson, M. Ringwald, G. M. Rubin, and G. Sherlock. Gene ontology: tool for the unification of biology. Nature Genetics, 25:25-29, May 2000. [20] Thomas Hofmann. Latent semantic models for collaborative filtering. ACM Transactions on Information Systems, 22(1), January 2004. [21] Gal Chechik, Amir Globerson, Naftali Tishby, and Yair Weiss. Information bottleneck for Gaussian variables. Journal of Machine Learning Research, 6:165-188, January 2005.
Data Integration for Classification Problems Employing Gaussian Process Priors

Mark Girolami, Department of Computing Science, University of Glasgow, Scotland, UK, girolami@dcs.gla.ac.uk
Mingjun Zhong, IRISA, Campus de Beaulieu, F-35042 Rennes Cedex, France, mzhong@irisa.fr

Abstract

By adopting Gaussian process priors, a fully Bayesian solution to the problem of integrating possibly heterogeneous data sets within a classification setting is presented. Approximate inference schemes employing Variational and Expectation Propagation based methods are developed and rigorously assessed. We demonstrate our approach to integrating multiple data sets on a large scale protein fold prediction problem, where we infer the optimal combinations of covariance functions and achieve state-of-the-art performance without resorting to any ad hoc parameter tuning or classifier combination.

1 Introduction

Various emerging quantitative measurement technologies in the life sciences are producing genome, transcriptome and proteome-wide data collections, which has motivated the development of data integration methods within an inferential framework. It has been demonstrated that for certain prediction tasks within computational biology, synergistic improvements in performance can be obtained via the integration of a number of (possibly heterogeneous) data sources. In [2] six different data representations of proteins were employed for protein fold recognition using Support Vector Machines (SVMs). It was observed that certain data combinations provided increased accuracy over the use of any single dataset. Likewise, in [9] a comprehensive experimental study observed improvements in SVM-based gene function prediction when data from both microarray expression and phylogenetic profiles were manually combined. More recently, protein network inference was shown to be improved when various genomic data sources were integrated [16], and in [1] it was shown that superior prediction accuracy of protein-protein interactions was obtainable when a number of diverse data types were combined in an SVM. Whilst all of these papers exploited the kernel method in providing a means of data fusion within SVM-based classifiers, it was initially only in [5] that a means of estimating an optimal linear combination of the kernel functions was presented, using semi-definite programming. However, the methods developed in [5] are based on binary SVMs, whilst arguably the majority of important classification problems within computational biology are inherently multiclass. It is unclear how this approach could be extended in a straightforward or practical manner to discrimination over multiple classes. In addition, the SVM is non-probabilistic, and whilst post hoc methods for obtaining predictive probabilities are available [10], these are not without problems such as overfitting. On the other hand, Gaussian process (GP) methods [11], [8] for classification provide a very natural way to both integrate and infer optimal combinations of multiple heterogeneous datasets via composite covariance functions within the Bayesian framework, an idea first proposed in [8]. In this paper it is shown that GPs can indeed be successfully employed on general classification problems, without recourse to ad hoc binary classification combination schemes, where there are multiple data sources which are also optimally combined employing full Bayesian inference.
A large scale example of protein fold prediction [2] is provided where state-of-the-art predictive performance is achieved in a straightforward manner without resorting to any extensive ad hoc engineering of the solution (see [2], [13]). As an additional important by-product of this work, inference employing Variational Bayesian (VB) and Expectation Propagation (EP) based approximations for GP classification over multiple classes is studied and assessed in detail. It has been unclear whether EP based approximations would provide similar improvements in performance in the multi-class setting over the Laplace approximation, and this work provides experimental evidence that both Variational and EP based approximations perform as well as a Gibbs sampler, consistently outperforming the Laplace approximation. In addition, we see that there is no statistically significant practical advantage of EP based approximations over VB approximations in this particular setting.

2 Integrating Data with Gaussian Process Priors

Let us denote each of $J$ independent (possibly heterogeneous) feature representations, $\mathcal{F}_j(X)$, of an object $X$ by $x_j$ for $j = 1, \ldots, J$. For each object there is a corresponding polychotomous response target variable, $t$, so to model this response we assume an additive generalized regression model. Each distinct data representation of $X$, $\mathcal{F}_j(X) = x_j$, is nonlinearly transformed such that $f_j(x_j) : \mathcal{F}_j \mapsto \mathbb{R}$, and a linear model is employed in this new space such that the overall nonlinear transformation is $f(X) = \sum_j \beta_j f_j(x_j)$.

2.1 Composite Covariance Functions

Rather than specifying an explicit functional form for each of the functions $f_j(x_j)$, we assume that each nonlinear function corresponds to a Gaussian process (GP) [11] such that $f_j(x_j) \sim GP(\theta_j)$, where $GP(\theta_j)$ corresponds to a GP with trend and covariance functions $m_j(x_j)$ and $C_j(x_j, x_j'; \theta_j)$, and where $\theta_j$ denotes a set of hyper-parameters associated with the covariance function. Due to the assumed independence of the feature representations, the overall nonlinear function will also be a realization of a Gaussian process, defined as $f(X) \sim GP(\theta_1 \cdots \theta_J, \beta_1 \cdots \beta_J)$, where now the overall trend and covariance functions follow as $\sum_j \beta_j m_j(x_j)$ and $\sum_j \beta_j^2 C_j(x_j, x_j'; \theta_j)$. For $N$ object samples, $X_1, \ldots, X_N$, each defined by the $J$ feature representations $x_j^1, \ldots, x_j^N$, denoted by $\mathbf{X}_j$, with associated class-specific response $\mathbf{f}_k = [f_k(X_1) \cdots f_k(X_N)]^{\mathrm{T}}$, the overall GP prior is a multivariate Normal such that

$$\mathbf{f}_k \,\big|\, \mathbf{X}_{j=1\cdots J}, \theta_{1k}, \ldots, \theta_{Jk}, \alpha_{1k}, \ldots, \alpha_{Jk} \;\sim\; \mathcal{N}_{\mathbf{f}_k}\Big(\mathbf{0},\; \sum_j \alpha_{jk} \mathbf{C}_{jk}(\theta_{jk})\Big). \qquad (1)$$

The positive random variables $\beta_{jk}^2$ are denoted by $\alpha_{jk}$, zero-trend GP functions have been assumed, and each $\mathbf{C}_{jk}(\theta_{jk})$ is an $N \times N$ matrix with elements $C_j(x_j^m, x_j^n; \theta_{jk})$. A GP functional prior, over all possible responses (classes), is now available where possibly heterogeneous data sources are integrated via the composite covariance function. It is then, in principle, a straightforward matter to perform Bayesian inference with this model, and no further recourse to ad hoc binary classifier combination methods or ancillary optimizations to obtain the data combination weights is required.

2.2 Bayesian Inference

As we are concerned with classification problems over possibly multiple classes, we employ a multinomial probit likelihood rather than a multinomial logit, as it provides a means of developing a Gibbs sampler, and subsequent computationally efficient approximations, for the GP random variables.
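As a concrete illustration of the composite covariance in Eq. (1), the following minimal sketch (ours, not the authors' implementation) builds a weighted sum of per-representation RBF Gram matrices; the vectors `alpha` and `theta` stand in for the $\alpha_{jk}$ and $\theta_{jk}$ of one class $k$:

    import numpy as np

    def rbf_gram(X, theta):
        # Gram matrix C with C[m, n] = exp(-theta * ||x_m - x_n||^2).
        sq = np.sum(X**2, axis=1)
        d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
        return np.exp(-theta * np.maximum(d2, 0.0))

    def composite_cov(X_list, alpha, theta):
        # Sum_j alpha_j * C_j(theta_j) over the J feature representations,
        # each X_list[j] holding the N objects in representation j.
        return sum(a * rbf_gram(Xj, t) for Xj, a, t in zip(X_list, alpha, theta))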
The Gibbs sampler is to be preferred over the Metropolis scheme as no tuning of a proposal distribution is required. As in [3], the auxiliary variables $y_{nk} = f_k(X_n) + \epsilon_{nk}$, $\epsilon_{nk} \sim \mathcal{N}(0, 1)$, are introduced, and the $N \times 1$ dimensional vector of target class values associated with each $X_n$ is given as $\mathbf{t}$, where each element $t_n \in \{1, \ldots, K\}$. The $N \times K$ matrix of GP random variables $f_k(X_n)$ is denoted by $\mathbf{F}$. We represent the $N \times 1$ dimensional columns of $\mathbf{F}$ by $\mathbf{F}_{\cdot,k}$, and the corresponding $K \times 1$ dimensional vectors $\mathbf{F}_{n,\cdot}$ are formed by the indexed rows of $\mathbf{F}$. The $N \times K$ matrix of auxiliary variables $y_{nk}$ is represented as $\mathbf{Y}$, where the $N \times 1$ dimensional columns are denoted by $\mathbf{Y}_{\cdot,k}$ and the corresponding $K \times 1$ dimensional vectors are obtained from the rows of $\mathbf{Y}$ as $\mathbf{Y}_{n,\cdot}$. The multinomial probit likelihood [3] is adopted, which follows as

$$t_n = j \;\;\text{if}\;\; y_{nj} = \operatorname*{argmax}_{1 \le k \le K} \{y_{nk}\}, \qquad (2)$$

and this has the effect of dividing $\mathbb{R}^K$ into $K$ non-overlapping $K$-dimensional cones $\mathcal{C}_k = \{\mathbf{y} : y_k > y_i,\ k \ne i\}$, where $\mathbb{R}^K = \bigcup_k \mathcal{C}_k$, and so each $P(t_n = i \,|\, \mathbf{Y}_{n,\cdot})$ can be represented as $\delta(y_{ni} > y_{nk}\ \forall\, k \ne i)$. Class-specific independent Gamma priors, with parameters $\varphi_k$, are placed on each $\alpha_{jk}$ and on the individual components of $\theta_{jk}$ (denote $\Psi_k = \{\alpha_{jk}, \theta_{jk}\}_{j=1\cdots J}$); a further Gamma prior is placed on each element of $\varphi_k$ with overall parameters $a$ and $b$, and this defines the full model likelihood and associated priors.

2.3 MCMC Procedure

Samples from the full posterior $P(\mathbf{Y}, \mathbf{F}, \Psi_{1\cdots K}, \varphi_{1\cdots K} \,|\, \mathbf{X}_{1\cdots N}, \mathbf{t}, a, b)$ can be obtained from the following Metropolis-within-blocked-Gibbs sampling scheme, indexing over all $n = 1, \ldots, N$ and $k = 1, \ldots, K$:

$$\mathbf{Y}_{n,\cdot}^{(i+1)} \,\big|\, \mathbf{F}_{n,\cdot}^{(i)}, t_n \;\sim\; \mathcal{TN}\big(\mathbf{F}_{n,\cdot}^{(i)}, \mathbf{I}, t_n\big) \qquad (3)$$
$$\mathbf{F}_{\cdot,k}^{(i+1)} \,\big|\, \mathbf{Y}_{\cdot,k}^{(i+1)}, \Psi_k^{(i)}, \mathbf{X}_{1\cdots N} \;\sim\; \mathcal{N}\big(\boldsymbol{\Sigma}_k^{(i)} \mathbf{Y}_{\cdot,k}^{(i+1)},\, \boldsymbol{\Sigma}_k^{(i)}\big) \qquad (4)$$
$$\Psi_k^{(i+1)} \,\big|\, \mathbf{F}_{\cdot,k}^{(i+1)}, \mathbf{Y}_{\cdot,k}^{(i+1)}, \varphi_k^{(i)}, \mathbf{X}_{1\cdots N} \;\sim\; P\big(\Psi_k^{(i+1)} \,|\, \cdot\,\big) \qquad (5)$$
$$\varphi_k^{(i+1)} \,\big|\, \Psi_k^{(i+1)}, a_k, b_k \;\sim\; P\big(\varphi_k^{(i+1)} \,|\, \cdot\,\big) \qquad (6)$$

where $\mathcal{TN}(\mathbf{F}_{n,\cdot}, \mathbf{I}, t_n)$ denotes a conic truncation of a multivariate Gaussian with location parameters $\mathbf{F}_{n,\cdot}$ and dispersion parameters $\mathbf{I}$, in which the dimension indicated by the class value of $t_n$ will be the largest. An accept-reject strategy can be employed in sampling from the conic truncated Gaussian; however, this will very quickly become inefficient for problems with moderately large numbers of classes, and as such a further Gibbs sampling scheme may be required. Each $\boldsymbol{\Sigma}_k^{(i)} = \mathbf{C}_k^{(i)}(\mathbf{I} + \mathbf{C}_k^{(i)})^{-1}$ and $\mathbf{C}_k^{(i)} = \sum_{j=1}^{J} \alpha_{jk}^{(i)} \mathbf{C}_{jk}(\theta_{jk}^{(i)})$, with the elements of $\mathbf{C}_{jk}(\theta_{jk}^{(i)})$ defined as $C_j(x_j^m, x_j^n; \theta_{jk}^{(i)})$. A Metropolis sub-sampler is required to obtain samples from the conditional distribution over the composite covariance function parameters, $P(\Psi_k^{(i+1)} \,|\, \cdot\,)$, and finally $P(\varphi_k^{(i+1)} \,|\, \cdot\,)$ is a simple product of Gamma distributions. The predictive likelihood of a test sample $X_\star$ is $P(t_\star = k \,|\, X_\star, \mathbf{X}_{1\cdots N}, \mathbf{t}, a, b)$, which can be obtained by integrating over the posterior and predictive prior such that

$$\int P(t_\star = k \,|\, \mathbf{f}_\star)\, p(\mathbf{f}_\star \,|\, \boldsymbol{\Theta}, X_\star, \mathbf{X}_{1\cdots N})\, p(\boldsymbol{\Theta} \,|\, \mathbf{X}_{1\cdots N}, \mathbf{t}, a, b)\, d\mathbf{f}_\star\, d\boldsymbol{\Theta}, \qquad (7)$$

where $\boldsymbol{\Theta} = \{\mathbf{Y}, \Psi_{1\cdots K}\}$. A Monte Carlo estimate is obtained by using $S$ samples drawn from the full posterior, $\frac{1}{S}\sum_{s=1}^{S} \int P(t_\star = k \,|\, \mathbf{f}_\star)\, p(\mathbf{f}_\star \,|\, \boldsymbol{\Theta}^{(s)}, X_\star, \mathbf{X}_{1\cdots N})\, d\mathbf{f}_\star$, and the integral over the predictive prior requires further conditional samples, $\mathbf{f}_\star^{(l|s)}$, to be drawn from each $p(\mathbf{f}_\star \,|\, \boldsymbol{\Theta}^{(s)}, X_\star, \mathbf{X}_{1\cdots N})$, finally yielding a Monte Carlo approximation of

$$P(t_\star = k \,|\, X_\star, \mathbf{X}_{1\cdots N}, \mathbf{t}, a, b) \;\approx\; \frac{1}{LS}\sum_{l=1}^{L}\sum_{s=1}^{S} P\big(t_\star = k \,|\, \mathbf{f}_\star^{(l|s)}\big) \;=\; \frac{1}{LS}\sum_{l=1}^{L}\sum_{s=1}^{S} \mathrm{E}_{p(u)}\Big\{\prod_{j \ne k} \Phi\big(u + f_{\star,k}^{(l|s)} - f_{\star,j}^{(l|s)}\big)\Big\}. \qquad (8)$$
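The inner expectation in (8) is the standard multinomial probit identity obtained by conditioning on the Gaussian noise of the winning class: given $u \sim \mathcal{N}(0,1)$, class $k$ wins precisely when $u + f_{\star,k} > \epsilon_j + f_{\star,j}$ for every $j \ne k$, each event having probability $\Phi(u + f_{\star,k} - f_{\star,j})$. A minimal Monte Carlo sketch of that expectation (the function name and arguments are ours, not the paper's):

    import numpy as np
    from scipy.stats import norm

    def probit_class_prob(f_star, k, n_mc=2000, rng=None):
        # Monte Carlo estimate of E_u prod_{j != k} Phi(u + f_k - f_j),
        # with u ~ N(0, 1) and f_star the K-vector of GP values at X*.
        rng = np.random.default_rng(rng)
        u = rng.standard_normal(n_mc)                  # shape (n_mc,)
        diffs = f_star[k] - np.delete(f_star, k)       # f_k - f_j for j != k
        probs = norm.cdf(u[:, None] + diffs[None, :])  # shape (n_mc, K-1)
        return probs.prod(axis=1).mean()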
MCMC procedures for GP classification have been previously presented in [8], and whilst this provides a practical means to perform Bayesian inference employing GPs, the computational cost incurred and the difficulties associated with monitoring convergence and running multiple chains on reasonably sized problems are well documented and have motivated the development of computationally less costly approximations [15]. A recent study has shown that EP is superior to the Laplace approximation for binary classification [4], and that for multi-class classification VB methods are superior to the Laplace approximation [3]. However, the comparison between Variational and EP based approximations for the multi-class setting has not been considered in the literature, and so we seek to address this issue in the following sections.

2.4 Variational Approximation

From the conditional probabilities which appear in the Gibbs sampler, it can be seen that a mean field approximation gives a simple iterative scheme which provides a computationally efficient alternative to the full sampler (including the Metropolis sub-sampler for the covariance function parameters); details are given in [3]. However, given the excellent performance of EP on a number of approximate Bayesian inference problems, it is incumbent on us to consider an EP solution here. We should point out that only the top-level inference on the GP variables is considered here, and the composite covariance function parameters will be obtained using another appropriate type-II maximum likelihood optimization scheme if possible.

2.5 Expectation Propagation with Full Posterior Covariance

The required posterior can also be approximated by EP [7]. In this case the multinomial probit likelihood is approximated by a multivariate Gaussian such that $p(\mathbf{F} \,|\, \mathbf{t}, \mathbf{X}_{1\cdots N}) \approx Q(\mathbf{F}) \propto \prod_k p(\mathbf{F}_{\cdot,k} \,|\, \mathbf{X}_{1\cdots N}) \prod_n g_n(\mathbf{F}_{n,\cdot})$, where $g_n(\mathbf{F}_{n,\cdot}) = \mathcal{N}_{\mathbf{F}_{n,\cdot}}(\boldsymbol{\mu}_n, \boldsymbol{\Sigma}_n)$, $\boldsymbol{\mu}_n$ is a $K \times 1$ vector and $\boldsymbol{\Sigma}_n$ is a full $K \times K$ dimensional covariance matrix. Denoting the cavity density as $Q^{\backslash n}(\mathbf{F}) = \prod_k p(\mathbf{F}_{\cdot,k} \,|\, \mathbf{X}_{1\cdots N}) \prod_{i,\, i \ne n} g_i(\mathbf{F}_{i,\cdot})$, EP proceeds by iteratively re-estimating the moments $\boldsymbol{\mu}_n, \boldsymbol{\Sigma}_n$ by moment matching [7], giving the following:

$$\boldsymbol{\mu}_n^{new} = \mathrm{E}_{\hat{p}_n}\{\mathbf{F}_{n,\cdot}\} \quad\text{and}\quad \boldsymbol{\Sigma}_n^{new} = \mathrm{E}_{\hat{p}_n}\{\mathbf{F}_{n,\cdot}\mathbf{F}_{n,\cdot}^{\mathrm{T}}\} - \mathrm{E}_{\hat{p}_n}\{\mathbf{F}_{n,\cdot}\}\,\mathrm{E}_{\hat{p}_n}\{\mathbf{F}_{n,\cdot}\}^{\mathrm{T}}, \qquad (9)$$

where $\hat{p}_n = Z_n^{-1} Q^{\backslash n}(\mathbf{F}_{n,\cdot})\, p(t_n \,|\, \mathbf{F}_{n,\cdot})$, and $Z_n$ is the normalizing (partition) function which is required to obtain the above mean and covariance estimates. To proceed, an analytic form for the partition function $Z_n$ is required. Indeed, for binary classification employing a binomial probit likelihood an elegant EP solution follows due to the analytic form of the partition function [4]. However, for the case of multiple classes with a multinomial probit likelihood the partition function no longer has a closed analytic form, and further approximations are required to make any progress. There are two strategies which we consider: the first retains the full posterior coupling in the covariance matrices $\boldsymbol{\Sigma}_n$ by employing Laplace propagation (LP) [14]; the second assumes no posterior coupling in $\boldsymbol{\Sigma}_n$ by setting this to be a diagonal covariance matrix. The second form of approximation has been adopted in [12] when developing a multi-class version of the informative vector machine (IVM) [6]. In the first case, where we employ LP, an additional significant $O(K^3 N^3)$ computational scaling will be incurred; however, it can be argued that the retention of the posterior coupling is important.
For the second case we clearly lose this explicit posterior coupling but, of course, do not incur the expensive computational overhead required of LP. We observed in unreported experiments that there is little of statistical significance lost, in terms of predictive performance, when assuming a factorable form for each $\hat{p}_n$. LP proceeds by propagating the approximate moments such that

$$\boldsymbol{\mu}_n^{new} \leftarrow \operatorname*{argmax}_{\mathbf{F}_{n,\cdot}} \log \hat{p}_n \quad\text{and}\quad \boldsymbol{\Sigma}_n^{new} \leftarrow -\bigg(\frac{\partial^2 \log \hat{p}_n}{\partial \mathbf{F}_{n,\cdot}\,\partial \mathbf{F}_{n,\cdot}^{\mathrm{T}}}\bigg)^{-1}. \qquad (10)$$

The required derivatives follow straightforwardly and details are included in the accompanying material. The approximate predictive distribution for a new data point $x_\star$ requires a Monte Carlo estimate employing samples drawn from a $K$-dimensional multivariate Gaussian, for which details are given in the supplementary material (http://www.dcs.gla.ac.uk/people/personal/girolami/pubs_2006/NIPS2006/index.htm). Throughout, conditioning on the covariance function parameters and associated hyper-parameters is implicit.

2.6 Expectation Propagation with Diagonal Posterior Covariance

By assuming a factorable approximate posterior, as in the variational approximation [3], a distinct simplification of the problem setting follows, where now we assume that $g_n(\mathbf{F}_{n,\cdot}) = \prod_k \mathcal{N}_{F_{n,k}}(\mu_{n,k}, \nu_{n,k})$, i.e. a factorable distribution. This assumption has already been made in [12] in developing an EP based multi-class IVM. Significant computational simplification now follows, where the required moment matching amounts to $\mu_{nk}^{new} = \mathrm{E}_{\hat{p}_{nk}}\{F_{n,k}\}$ and $\nu_{nk}^{new} = \mathrm{E}_{\hat{p}_{nk}}\{F_{n,k}^2\} - \mathrm{E}_{\hat{p}_{nk}}\{F_{n,k}\}^2$, where the density $\hat{p}_{nk}$ has a partition function which now has the analytic form

$$Z_n = \mathrm{E}_{p(u)p(v)}\Bigg\{\prod_{j=1,\, j \ne i}^{K} \Phi\Bigg(\frac{u + v\sqrt{\nu_{ni}^{\backslash n}} + \mu_{ni}^{\backslash n} - \mu_{nj}^{\backslash n}}{\sqrt{1 + \nu_{nj}^{\backslash n}}}\Bigg)\Bigg\}, \qquad (11)$$

where $u$ and $v$ are both standard Normal random variables (with $v\sqrt{\nu_{ni}^{\backslash n}} = F_{n,i} - \mu_{ni}^{\backslash n}$), and $\mu_{ni}^{\backslash n}$ and $\nu_{ni}^{\backslash n}$ have the usual cavity-mean and cavity-variance meanings (details in the accompanying material). Derivatives of this partition function follow in a straightforward way, now allowing the required EP updates to proceed (details in the supplementary material). The approximate predictive distribution for a new data point $X_\star$ in this case takes a similar form to that for the Variational approximation [3]. So we have

$$P(t_\star = k \,|\, X_\star, \mathbf{X}_{1\cdots N}, \mathbf{t}) = \mathrm{E}_{p(u)p(v)}\Bigg\{\prod_{j=1,\, j \ne k}^{K} \Phi\Bigg(\frac{u + v\sqrt{\tilde{\nu}_k} + \tilde{\mu}_k - \tilde{\mu}_j}{\sqrt{1 + \tilde{\nu}_j}}\Bigg)\Bigg\}, \qquad (12)$$

where the predictive mean and variance follow in standard form:

$$\tilde{\mu}_j = \mathbf{C}_{\star j}^{\mathrm{T}}\big(\mathbf{C}_j + \boldsymbol{\Sigma}_j\big)^{-1}\boldsymbol{\mu}_j \quad\text{and}\quad \tilde{\nu}_j = c_{\star j} - \mathbf{C}_{\star j}^{\mathrm{T}}\big(\mathbf{C}_j + \boldsymbol{\Sigma}_j\big)^{-1}\mathbf{C}_{\star j}. \qquad (13)$$

It should be noted here that the expectation over $p(u)$ and $p(v)$ could be computed by using either Gaussian quadrature or a simple Monte Carlo approximation, which is straightforward as only sampling from a univariate standardized Normal is required. The VB approximation [3], however, only requires a 1-D Monte Carlo integral rather than the 2-D one required here.

3 Experiments

Before considering the main example of data integration within a large scale protein fold prediction problem, we assess a number of approximate inference schemes for GP multi-class classification. We provide a short comparative study of the Laplace, VB, and both possible EP approximations, employing the Gibbs sampler as the comparative gold standard. For these experiments six multi-class data sets are employed (available at http://www.ics.uci.edu/~mlearn/MPRepository.html), i.e., Iris (N = 150, K = 3), Wine (N = 178, K = 3), Soybean (N = 47, K = 4), Teaching (N = 151, K = 3), Waveform (N = 300, K = 3) and ABE (N = 300, K = 3, which is a subset of the Isolet dataset using the letters 'A', 'B' and 'E').
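The quantities in (13) are the standard GP predictive moments for a single class, given the diagonal EP site parameters. A minimal sketch of that computation follows (ours, not the paper's; the names site_mu and site_var for the per-class site means and variances are our own):

    import numpy as np

    def predictive_moments(K, k_star, k_ss, site_mu, site_var):
        # Predictive mean and variance in the standard GP form of Eq. (13):
        #   mu* = k*^T (K + S)^{-1} m,   v* = k** - k*^T (K + S)^{-1} k*,
        # with S = diag(site_var) and m = site_mu the EP site parameters,
        # K the N x N Gram matrix, k_star the N-vector of test covariances,
        # and k_ss the prior variance at the test point.
        A = K + np.diag(site_var)
        w = np.linalg.solve(A, k_star)
        return w @ site_mu, k_ss - w @ k_star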
A single radial basis covariance function with one length-scale parameter is used in this comparative study. Ten-fold cross-validation (CV) was used to estimate the predictive log-likelihood and the percentage predictive error. Within each of the ten folds a further 10-fold CV routine was employed to select the length-scale of the covariance function. For the Gibbs sampler, after a burn-in of 2000 samples, the following 3000 samples were used for inference, and the predictive error and likelihood were computed from the 3000 post-burn-in samples. For each data set and each method the percentage predictive error and the predictive log-likelihood were estimated in this manner. The summary results, given as the mean and standard deviation over the ten folds, are shown in Table 1. The results which cannot be distinguished from each other, under a Wilcoxon rank-sum test at a 5% significance level, are highlighted in bold. From those results, we can see that across most data sets used, the predictive log-likelihood obtained from the Laplace approximation is lower than those of the three other methods. In our observations, the predictive performance of VB and the IEP approximation is consistently indistinguishable from the performance achieved by the Gibbs sampler. From the experiments conducted there is no evidence to suggest any difference in predictive performance between IEP and VB methods in the case of multi-way classification. As there is no benefit in choosing an EP based approximation over the Variational one, we now select the Variational approximation, in that inference over the covariance parameters follows simply by obtaining posterior mean estimates using an importance sampler. As a brief illustration of how the Variational approximation compares to the full Metropolis-within-blocked-Gibbs sampler, consider a toy dataset consisting of three classes formed by a Gaussian surrounded by two annular rings, having ten features only two of which are predictive of the class labels [3]. We can compare the compute time taken to obtain reasonable predictions from the full MCMC and the approximate Variational scheme [3]. Figure 1 (a) shows the samples of the covariance function parameters $\theta$ drawn from the Metropolis sub-sampler (multiple Metropolis sub-chains had to be run in order to obtain reasonable sampling of $\theta \in \mathbb{R}_{+}^{10}$) and, overlaid in black, the corresponding approximate posterior mean estimates obtained from the variational scheme [3].

Table 1: Percentage predictive error (PE) and predictive log-likelihood (PL) for six data sets from UCI computed using Laplace, Variational Bayes (VB), independent EP (IEP), as well as MCMC using the Gibbs sampler. Best results which are statistically indistinguishable from each other are highlighted in bold.
Dataset    Metric   Laplace          VB               Gibbs            IEP
ABE        PE        4.000±3.063      2.000±2.330      3.333±3.143      5.333±5.019
ABE        PL       -0.290±0.123     -0.164±0.026     -0.158±0.037     -0.139±0.050
Wine       PE        3.889±5.885      2.222±3.884      4.514±5.757      3.889±5.885
Wine       PL       -0.258±0.045     -0.182±0.057     -0.177±0.054     -0.133±0.047
Teaching   PE       39.24±15.74      41.12±9.92       42.41±6.22       42.54±11.32
Teaching   PL       -0.836±0.072     -0.711±0.125     -0.730±0.113     -0.800±0.072
Iris       PE        3.333±3.513      3.333±3.513      3.333±3.513      3.333±3.513
Iris       PL       -0.132±0.052     -0.087±0.056     -0.079±0.056     -0.063±0.059
Soybean    PE        0.000±0.000      0.000±0.000      0.000±0.000      0.000±0.000
Soybean    PL       -0.359±0.040     -0.158±0.034     -0.158±0.039     -0.172±0.037
Waveform   PE       17.50±9.17       18.33±9.46       15.83±8.29       17.50±10.72
Waveform   PL       -0.430±0.085     -0.410±0.100     -0.380±0.116     -0.383±0.107

[Figure 1: (a) Progression of MCMC and Variational methods in estimating the covariance function parameters; the vertical axis denotes each $\theta_d$, the horizontal axis is time (all log scale). (b) Percentage error under the MCMC (gray) and Variational (black) schemes. (c) Predictive likelihood under both schemes. The horizontal axes of (b) and (c) show time in seconds (log scale).]

It is clear that after 100 calls to the sub-sampler the samples obtained reflect the relevance of the features; however, the deterministic steps taken in the variational routine achieve this in just over ten computational steps of equal cost to one step of the Metropolis sub-sampler. Figure 1 (b) shows the predictive error incurred by the classifier: under the MCMC scheme 30,000 CPU seconds are required to achieve the same level of predictive accuracy obtained under the variational approximation in 200 seconds (a factor of 150 times faster). This is due, in part, to the additional level of sampling from the predictive prior which is required when using MCMC to obtain predictive posteriors. Because of these results we now adopt the variational approximation for the following large scale experiment.

4 Protein Fold Prediction with GP Based Data Fusion

To illustrate the proposed GP based method of data integration, a substantial protein fold classification problem originally studied in [2] and more recently in [13] is considered. The task is to devise a predictor of 27 distinct SCOP classes from a set (N = 314) of low-homology protein sequences. Six different data representations (each comprised of around 20 features) are available, characterizing (1) Amino Acid composition (AA); (2) Hydrophobicity profile (HP); (3) Polarity (PT); (4) Polarizability (PY); (5) Secondary Structure (SS); (6) Van der Waals volume profile of the protein (VP).

[Figure 2: (a) The prediction accuracy for each individual data set and the corresponding combinations, (MA) employing inferred weights and (MF) employing a fixed weighting scheme. (b) The predictive likelihood achieved for each individual data set and with the integrated data. (c) The posterior mean values of the covariance function weights $\alpha_1, \ldots, \alpha_6$.]

In [2] a number of classifier and data combination strategies were employed in devising a multi-way classifier from a series of binary SVMs. In the original work of [2] the best predictive accuracy obtained on an independent set (N = 385) of low sequence-similarity proteins was 53%.
It was noted, after extensive careful manual experimentation by the authors, that a combination of Gaussian kernels each composed of the (AA), (SS) and (HP) datasets significantly improved predictive accuracy. More recently, in [13], a heavily tuned ad hoc ensemble combination of classifiers raised this performance to 62%, the best reported on this problem. We employ the proposed GP based method (Variational approximation) in devising a classifier for this task, where now we employ a composite covariance function (shared across all 27 classes), a linear combination of RBF functions, one for each data set. Figure 2 shows the predictive performance of the GP classifier in terms of percentage prediction accuracy (a) and predictive likelihood on the independent test set (b). We note a significant synergistic increase in performance when all data sets are combined and weighted (MA), where the overall performance accuracy achieved is 62%. Although the 0-1 loss test error is the same for an equal weighting of the data sets (MF) and for the weighting obtained using the proposed inference procedure (MA), for (MA) there is an increase in predictive likelihood, i.e. more confident correct predictions being made. It is interesting to note that the weighting obtained (the posterior mean for $\alpha$), Figure 2 (c), weights the (AA) and (SS) representations with equal importance, whilst the other data sets play less of a role in the performance improvement.

5 Conclusions

In this paper we have considered the problem of integrating data sets within a classification setting, a common scenario within many bioinformatics problems. We have argued that the GP prior provides an elegant solution to this problem within the Bayesian inference framework. To obtain a computationally practical solution, three approximate approaches to multi-class classification with GP priors, i.e. Laplace, Variational and EP based approximations, have been considered. It is found that the EP and Variational approximations approach the performance of a Gibbs sampler, and indeed their predictive performances are indistinguishable at the 5% level of significance. The full EP (FEP) approximation employing LP has an excessive computational cost, and there is little to recommend it in terms of predictive performance over the independence assumption (IEP). Likewise, there is little to distinguish between the IEP and VB approximations in terms of predictive performance in the multi-class classification setting, though further experiments on a larger number of data sets are desirable. We employ VB to infer the optimal parameterized combinations of covariance functions for the protein fold prediction problem over 27 possible folds, and achieve state-of-the-art performance without recourse to any ad hoc tinkering and tuning; the inferred combination weights are intuitive in terms of the information content of the highest weighted data sets. This is a highly practical solution to the problem of heterogeneous data fusion in the classification setting which employs Bayesian inferential semantics throughout in a consistent manner. We note that on the fold prediction problem the best performance achieved is equaled without resorting to complex and ad hoc data and classifier weighting and combination schemes.

5.1 Acknowledgements

MG is supported by the Engineering and Physical Sciences Research Council (UK) grant number EP/C010620/1; MZ is supported by the National Natural Science Foundation of China grant number 60501021.

References

[1] A. Ben-Hur and W.S. Noble. Kernel methods for predicting protein-protein interactions.
Bioinformatics, 21, Suppl. 1:38-46, 2005. [2] Chris Ding and Inna Dubchak. Multi-class protein fold recognition using support vector machines and neural networks. Bioinformatics, 17:349-358, 2001. [3] Mark Girolami and Simon Rogers. Variational Bayesian multinomial probit regression with Gaussian process priors. Neural Computation, 18(8):1790-1817, 2006. [4] M. Kuss and C.E. Rasmussen. Assessing approximate inference for binary Gaussian process classification. Journal of Machine Learning Research, 6:1679-1704, 2005. [5] G. R. G. Lanckriet, T. De Bie, N. Cristianini, M. I. Jordan, and W. S. Noble. A statistical framework for genomic data fusion. Bioinformatics, 20:2626-2635, 2004. [6] Neil Lawrence, Matthias Seeger, and Ralf Herbrich. Fast sparse Gaussian process methods: The informative vector machine. In S. Becker, S. Thrun, and K. Obermayer, editors, Advances in Neural Information Processing Systems 15. MIT Press. [7] Thomas Minka. A family of algorithms for approximate Bayesian inference. PhD thesis, MIT, 2001. [8] R. Neal. Regression and classification using Gaussian process priors. In J.M. Bernardo, J.O. Berger, A.P. Dawid, and A.F.M. Smith, editors, Bayesian Statistics 6, pages 475-501. Oxford University Press, 1998. [9] Paul Pavlidis, Jason Weston, Jinsong Cai, and William Stafford Noble. Learning gene functional classifications from multiple data types. Journal of Computational Biology, 9(2):401-411, 2002. [10] J.C. Platt. Probabilities for support vector machines. In A. Smola, P. Bartlett, B. Schölkopf, and D. Schuurmans, editors, Advances in Large Margin Classifiers, pages 61-74. MIT Press, 1999. [11] Carl Edward Rasmussen and Christopher K. I. Williams. Gaussian Processes for Machine Learning. MIT Press, 2006. [12] M.W. Seeger, N.D. Lawrence, and R. Herbrich. Efficient nonparametric Bayesian modelling with sparse Gaussian process approximations. Technical report, http://www.kyb.tuebingen.mpg.de/bs/people/seeger/, 2006. [13] Hong-Bin Shen and Kuo-Chen Chou. Ensemble classifier for protein fold pattern recognition. Bioinformatics, Advance Access (doi:10.1093), 2006. [14] Alexander Smola, Vishy Vishwanathan, and Eleazar Eskin. Laplace propagation. In Sebastian Thrun, Lawrence Saul, and Bernhard Schölkopf, editors, Advances in Neural Information Processing Systems 16. MIT Press, Cambridge, MA, 2004. [15] C.K.I. Williams and D. Barber. Bayesian classification with Gaussian processes. IEEE Transactions on Pattern Analysis and Machine Intelligence, 20(12):1342-1352, 1998. [16] Y. Yamanishi, J. P. Vert, and M. Kanehisa. Protein network inference from multiple genomic data: a supervised approach. Bioinformatics, 20, Suppl. 1:363-370, 2004.
An Oracle Inequality for Clipped Regularized Risk Minimizers

Ingo Steinwart, Don Hush, and Clint Scovel
Modelling, Algorithms and Informatics Group, CCS-3
Los Alamos National Laboratory, Los Alamos, NM 87545
{ingo,dhush,jcs}@lanl.gov

Abstract

We establish a general oracle inequality for clipped approximate minimizers of regularized empirical risks and apply this inequality to support vector machine (SVM) type algorithms. We then show that for SVMs using Gaussian RBF kernels for classification, this oracle inequality leads to learning rates that are faster than the ones established in [9]. Finally, we use our oracle inequality to show that a simple parameter selection approach based on a validation set can yield the same fast learning rates without knowing the noise exponents, which were required to be known a-priori in [9].

1 Introduction

The theoretical understanding of support vector machines (SVMs) and related kernel-based methods has been substantially improved in recent years. For example, using Talagrand's concentration inequality and local Rademacher averages, it has recently been shown that SVMs for classification can learn with rates up to $n^{-1}$ under somewhat realistic assumptions on the data-generating distribution (see [9, 11] and the related work [2]). However, the so-called "shrinking technique" of [9, 11] for establishing such rates requires the free parameters to be chosen a-priori, and in addition, the optimal values of these parameters depend on features of the data-generating distribution which are typically unknown. Consequently, [9, 11] do not provide a practical method for learning with fast rates. On the other hand, the oracle inequality in [2] only holds for distributions having Tsybakov noise exponent $\infty$, and hence it describes a situation which is rarely met in practice. The goal of this work is to overcome these shortcomings by establishing a general oracle inequality (see Theorem 3.1) for regularized empirical risk minimizers. The key ingredient of this oracle inequality is the observation that for most commonly used loss functions it is possible to "clip" the decision function of the algorithm before beginning with the theoretical analysis. In addition, a careful choice of the weighted empirical process to which Talagrand's inequality is applied makes the "shrinking technique" superfluous. Finally, by explicitly dealing with $\epsilon$-approximate minimizers of the regularized risk, our results also apply to actual SVM algorithms. With the help of the general oracle inequality we then establish an oracle inequality for SVM type algorithms (see Theorem 2.1) as well as a simple oracle inequality for model selection (see Theorem 4.2). For the former, we show that it leads to improved rates for e.g. binary classification under the assumptions considered in [9] and a-priori known noise exponents. Using the model selection theorem we then show how our new oracle inequality for SVMs can be used to analyze a simple parameter selection procedure based on a validation set that achieves the same learning rates without prior knowledge of the noise exponents. The rest of this work is organized as follows: In Section 2 we present our oracle inequality for SVM type algorithms. We then discuss its implications and analyze the simple parameter selection procedure when using Gaussian RBF kernels. In Section 3 we then present and prove the general oracle inequality. The proof of Theorem 2.1 as well as the oracle inequality for model selection can be found in Section 4.
2 Main Results

Throughout this work we assume that $X$ is a compact metric space, $Y \subset [-1, 1]$ is compact, $P$ is a Borel probability measure on $X \times Y$, and $\mathcal{F}$ is a set of functions over $X$ such that $0 \in \mathcal{F}$. Often $\mathcal{F}$ is a reproducing kernel Hilbert space (RKHS) $H$ of continuous functions over $X$ with closed unit ball $B_H$. It is well known that $H$ can then be continuously embedded into the space of continuous functions $C(X)$ equipped with the usual maximum norm $\|\cdot\|_\infty$. In order to avoid constants we always assume that this embedding has norm 1, i.e. $\|\cdot\|_\infty \le \|\cdot\|_H$. Furthermore, $L : Y \times \mathbb{R} \to [0, \infty)$ always denotes a continuous function which is convex in its second variable and satisfies $L(y, 0) \le 1$. The functions $L$ will serve as loss functions, and consequently let us recall that the associated $L$-risk of a measurable function $f : X \to \mathbb{R}$ is defined by

$$\mathcal{R}_{L,P}(f) = \mathrm{E}_{(x,y)\sim P}\, L\big(y, f(x)\big).$$

Note that the assumption $L(y, 0) \le 1$ immediately gives $\mathcal{R}_{L,P}(0) \le 1$. Furthermore, the minimal $L$-risk is denoted by $\mathcal{R}^{*}_{L,P}$, i.e. $\mathcal{R}^{*}_{L,P} = \inf\{\mathcal{R}_{L,P}(f) \mid f : X \to \mathbb{R} \text{ measurable}\}$, and a function attaining this infimum is denoted by $f^{*}_{L,P}$. We always assume that such an $f^{*}_{L,P}$ exists. The learning schemes we are mainly interested in are based on an optimization problem of the form

$$f_{P,\lambda} := \operatorname*{arg\,min}_{f \in H}\big(\lambda\|f\|_H^2 + \mathcal{R}_{L,P}(f)\big), \qquad (1)$$

where $\lambda > 0$. Note that if we identify a training set $T = ((x_1, y_1), \ldots, (x_n, y_n)) \in (X \times Y)^n$ with its empirical measure, then $f_{T,\lambda}$ denotes the empirical estimator of the above learning scheme. Obviously, support vector machines (see e.g. [5]) and regularization networks (see e.g. [7]) are both learning algorithms which fall into the above category. One way to describe the approximation error of these learning schemes is the approximation error function

$$a(\lambda) := \lambda\|f_{P,\lambda}\|_H^2 + \mathcal{R}_{L,P}(f_{P,\lambda}) - \mathcal{R}^{*}_{L,P}, \qquad \lambda > 0,$$

which has been discussed in some detail in [10]. Furthermore, in order to deal with the complexity of the RKHSs used, let us recall that for a subset $A \subset E$ of a Banach space $E$ the covering numbers are defined by

$$\mathcal{N}(A, \varepsilon, E) := \min\Big\{n \ge 1 : \exists\, x_1, \ldots, x_n \in E \text{ with } A \subset \bigcup_{i=1}^{n}(x_i + \varepsilon B_E)\Big\}, \qquad \varepsilon > 0,$$

where $B_E$ denotes the closed unit ball of $E$. Given a finite sequence $T = ((x_1, y_1), \ldots, (x_n, y_n)) \in (X \times Y)^n$ we write $T_X := (x_1, \ldots, x_n)$. For our main results we are particularly interested in covering numbers in the Hilbert space $L_2(T_X)$, which consists of all equivalence classes of functions $f : X \to \mathbb{R}$ and which is equipped with the norm

$$\|f\|_{L_2(T_X)} := \Big(\frac{1}{n}\sum_{i=1}^{n}|f(x_i)|^2\Big)^{1/2}. \qquad (2)$$

In other words, $L_2(T_X)$ is an $L_2$-space with respect to the empirical measure of $(x_1, \ldots, x_n)$. Learning schemes of the form (1) typically produce functions $f_{P,\lambda}$ with $\lim_{\lambda\to 0}\|f_{P,\lambda}\|_\infty = \infty$ (see e.g. [10] for a precise statement). Unfortunately, this behaviour has a serious negative impact on the learning rates when directly employing standard tools such as Hoeffding's, Bernstein's or Talagrand's inequality. On the other hand, when dealing with e.g. the hinge loss it is obvious that clipping the function $f_{P,\lambda}$ at $-1$ and $1$ does not worsen the corresponding risks. Following this simple observation, we will consider loss functions $L$ that satisfy the clipping condition

$$L(y, t) \ge \begin{cases} L(y, 1) & \text{if } t \ge 1 \\ L(y, -1) & \text{if } t \le -1 \end{cases} \qquad (3)$$

for all $y \in Y$. Recall that this type of loss function was already considered in [4, 11], but the clipping idea actually goes back to [1]. Moreover, it is elementary to check that most commonly used loss functions, including the hinge loss and the least squares loss, satisfy (3).
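As a concrete instance of (1) with one loss satisfying (3), consider the least squares loss: by the representer theorem, the minimizer over $H$ has the closed form $f = \sum_i \alpha_i k(x_i, \cdot)$ with $\alpha = (\mathbf{K} + n\lambda\mathbf{I})^{-1}\mathbf{y}$, where $\mathbf{K}$ is the kernel Gram matrix. The following minimal sketch (ours, not the paper's) fits this estimator and clips its predictions to $[-1, 1]$, anticipating the clipping operator formalized just below:

    import numpy as np

    def rbf_kernel(A, B, gamma):
        # k(a, b) = exp(-gamma * ||a - b||^2), returned as a Gram matrix.
        d2 = ((A[:, None, :] - B[None, :, :])**2).sum(-1)
        return np.exp(-gamma * d2)

    def fit_krr(X, y, lam, gamma):
        # Regularized ERM (1) for the least squares loss:
        # minimize lam*||f||_H^2 + (1/n) * sum_i (y_i - f(x_i))^2,
        # whose solution is alpha = (K + n*lam*I)^{-1} y.
        K = rbf_kernel(X, X, gamma)
        n = len(y)
        return np.linalg.solve(K + n * lam * np.eye(n), y)

    def predict_clipped(X_train, alpha, X_new, gamma):
        # Evaluate f and clip to [-1, 1], as in the clipping condition (3).
        f = rbf_kernel(X_new, X_train, gamma) @ alpha
        return np.clip(f, -1.0, 1.0)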
Given a function $f : X \to \mathbb{R}$ we now define its clipped version $\hat{f} : X \to [-1, 1]$ by

$$\hat{f}(x) := \begin{cases} 1 & \text{if } f(x) > 1 \\ f(x) & \text{if } f(x) \in [-1, 1] \\ -1 & \text{if } f(x) < -1. \end{cases}$$

It is clear from (3) that we always have $L(y, \hat{f}(x)) \le L(y, f(x))$, and consequently we obtain $\mathcal{R}_{L,P}(\hat{f}) \le \mathcal{R}_{L,P}(f)$ for all distributions $P$. Finally, we also need the following Lipschitz condition:

$$|L|_1 := \sup_{y \in Y,\; -1 \le t_1, t_2 \le 1} \frac{|L(y, t_1) - L(y, t_2)|}{|t_1 - t_2|} \le 2. \qquad (4)$$

With the help of these definitions we can now state our main result, which establishes an oracle inequality for clipped versions of $f_{T,\lambda}$:

Theorem 2.1 Let $P$ be a distribution on $X \times Y$ and let $L$ be a loss function which satisfies (3) and (4). Let $H$ be a RKHS of continuous functions on $X$. For a fixed element $f_0 \in H$ we define

$$a(f_0) := \lambda\|f_0\|_H^2 + \mathcal{R}_{L,P}(f_0) - \mathcal{R}^{*}_{L,P} \quad\text{and}\quad B(f_0) := \sup_{x \in X,\, y \in Y} L\big(y, f_0(x)\big). \qquad (5)$$

In addition, we assume that we have a variance bound of the form

$$\mathrm{E}_P\big(L \circ \hat{f} - L \circ f^{*}_{L,P}\big)^2 \;\le\; v\,\Big(\mathrm{E}_P\big(L \circ \hat{f} - L \circ f^{*}_{L,P}\big)\Big)^{\vartheta} \qquad (6)$$

for constants $v \ge 1$, $\vartheta \in [0, 1]$ and all measurable $f : X \to \mathbb{R}$. Moreover, suppose that $H$ satisfies

$$\sup_{T \in (X \times Y)^n} \log \mathcal{N}\big(B_H, \varepsilon, L_2(T_X)\big) \;\le\; a\varepsilon^{-2p}, \qquad \varepsilon > 0, \qquad (7)$$

for some constants $p \in (0, 1)$ and $a \ge 1$. For fixed $\lambda > 0$, let $f_{T,\lambda} \in H$ be a function that minimizes $f \mapsto \lambda\|f\|_H^2 + \mathcal{R}_{L,T}(f)$ up to some $\epsilon > 0$. Then there exists a constant $K_{p,v}$ depending only on $p$ and $v$ such that for all $\tau \ge 1$ we have with probability not less than $1 - 3e^{-\tau}$ that

$$\mathcal{R}_{L,P}(\hat{f}_{T,\lambda}) - \mathcal{R}^{*}_{L,P} \;\le\; 5\bigg(\Big(\frac{K_{p,v}\,a}{\lambda^p n}\Big)^{\frac{1}{2-\vartheta+p(\vartheta-1)}} + \frac{K_{p,v}\,a}{\lambda^p n} + \Big(\frac{32v\tau}{n}\Big)^{\frac{1}{2-\vartheta}}\bigg) + \frac{140\tau}{n} + \frac{14B(f_0)\tau}{3n} + 8a(f_0) + 4\epsilon. \qquad (8)$$

The above oracle inequality has some interesting consequences, as the following examples illustrate. We begin with an example that deals with a fixed kernel:

Example 2.2 (Learning rates for a single kernel) Assume that in Theorem 2.1 we have a Lipschitz continuous loss function such as the hinge loss. In addition, assume that the approximation error function satisfies $a(\lambda) \le c\lambda^{\beta}$, $\lambda > 0$, for some constants $c > 0$ and $\beta \in (0, 1]$. Setting $f_0 := f_{P,\lambda}$ and optimizing (8) with respect to $\lambda$ then shows that the corresponding SVM learns with rate $n^{-\varrho}$, where

$$\varrho := \min\Big\{\frac{\beta}{\beta\,(2-\vartheta+p(\vartheta-1))+p},\; \frac{2\beta}{\beta+1}\Big\}.$$

Recall that this learning rate has already been obtained in [11]. The next example investigates SVMs that use a Gaussian RBF kernel whose width may vary with the sample size:

Example 2.3 (Classification with several Gaussian kernels) Let $X$ be the unit ball in $\mathbb{R}^d$ and $Y := \{-1, 1\}$. Furthermore, assume that we are interested in binary classification using the hinge loss and the Gaussian RKHSs $H_\sigma$ that belong to the RBF kernels $k_\sigma(x_1, x_2) := e^{-\sigma^2\|x_1 - x_2\|^2}$ with width $\sigma > 0$. If $P$ has geometric noise exponent $\alpha \in (0, \infty)$ in the sense of [9], then it was shown in [9] that there exists a function $f_0 \in H_\sigma$ with $\|f_0\|_\infty \le 1$ and

$$a_\lambda(f_0) \;\le\; c\,\big(\lambda\sigma^{d} + \sigma^{-\alpha d}\big), \qquad \lambda > 0,\; \sigma > 0,$$

where $c > 0$ is a constant independent of $\lambda$ and $\sigma$. Moreover, [9, Thm. 2.1] shows that $H_\sigma$ satisfies (7) for all $p \in (0, 1)$ with $a := c_{p,d,\delta}\,\sigma^{(1-p)(1+\delta)d}$, where $\delta > 0$ can be arbitrarily chosen and $c_{p,d,\delta}$ is a suitable constant. Now assume that $P$ has Tsybakov noise exponent $q \in [0, \infty]$ in the sense of [9]. It was then shown in [9] that (6) is satisfied for $\vartheta := \frac{q}{q+1}$. Minimizing (8) with respect to $\lambda$ and $\sigma$, and choosing $p$ and $\delta$ sufficiently small, then yields that the corresponding SVM can learn with rate $n^{-\varrho+\varepsilon}$, where

$$\varrho := \frac{\alpha(q+1)}{\alpha(q+2)+q+1},$$

and $\varepsilon > 0$ can be chosen arbitrarily small. Note that these rates are superior to those obtained in [9, Theorem 2.8]. In the above examples the optimal parameters $\lambda$ and $\sigma$
In the above examples the optimal parameters $\lambda$ and $\sigma$ depend on the sample size $n$ but not on the training samples $T$. However, these optimal parameters require us to know certain characteristics of the distribution, such as the approximation exponent $\beta$ or the noise exponents $\alpha$ and $q$. The following example shows that the oracle inequality of Theorem 2.1 can be used to find these optimal parameters in a data-dependent fashion which does not require any a-priori knowledge:

Example 2.4 In this example we assume that our training set $T$ consists of $2n$ samples. We write $T_0$ for the first $n$ samples and $T_1$ for the last $n$ samples. Let $f_{T_0,\lambda,\sigma}$ be the SVM solution using a Gaussian kernel with width $\sigma$. Moreover, let $\Sigma \subset [1, n^{1/d})$ and $\Lambda \subset (0,1]$ be finite sets with cardinality $m_\Sigma$ and $m_\Lambda$, respectively. Under the assumptions of Example 2.3 the oracle inequality (8) then shows that with probability not less than $1 - 3 m_\Sigma m_\Lambda e^{-\tau}$ we have
$$R_{L,P}(\hat f_{T_0,\lambda,\sigma}) - R^*_{L,P} \le K_{d,q,\alpha,\varepsilon}\Bigl( \Bigl(\frac{\sigma^{\varepsilon d}}{\lambda^{\varepsilon} n}\Bigr)^{\frac{q+1}{q+2-\varepsilon}} + \Bigl(\frac{\tau}{n}\Bigr)^{\frac{q+1}{q+2}} + \sigma^d \lambda + \sigma^{-\alpha d} \Bigr)$$
simultaneously for all $\sigma \in \Sigma$ and $\lambda \in \Lambda$, where $\varepsilon \in (0,1]$ is arbitrary but fixed and $K_{d,q,\alpha,\varepsilon}$ is a suitable constant. Now using a simple model selection approach (see e.g. Theorem 4.2) for the second half $T_1$ of our training set, we find that with probability not less than $1 - e^{-\tau}$ we have
$$R_{L,P}\bigl(\hat f_{T_0,\lambda^*_{T_1},\sigma^*_{T_1}}\bigr) - R^*_{L,P} \le C\Bigl(\frac{\tau + \log(m_\Sigma m_\Lambda)}{n}\Bigr)^{\frac{q+1}{q+2}} + C \min_{\sigma \in \Sigma,\, \lambda \in \Lambda}\Bigl( \Bigl(\frac{\sigma^{\varepsilon d}}{\lambda^{\varepsilon} n}\Bigr)^{\frac{q+1}{q+2-\varepsilon}} + \sigma^d \lambda + \sigma^{-\alpha d} \Bigr),$$
where $C$ is a constant only depending on $d$, $q$, $\alpha$, and $\varepsilon$, and $(\lambda^*_{T_1}, \sigma^*_{T_1}) \in \Lambda \times \Sigma$ is a pair that minimizes the empirical risk $R_{L,T_1}(\cdot)$ over $\Lambda \times \Sigma$. Now assume that $\Sigma_n$ and $\Lambda_n$ are $1/n$- and $1/n^2$-nets of $[1, n^{1/d})$ and $(0,1]$, respectively. Obviously, we can choose $\Sigma_n$ and $\Lambda_n$ such that $m_{\Sigma_n} \le n^2$ and $m_{\Lambda_n} \le n^2$, respectively. With such parameter sets it is then easy to check that we obtain exactly the rates we have found in Example 2.3, but without knowing the noise exponents $\alpha$ and $q$ a-priori.
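The two-stage procedure of Example 2.4 is simple to implement. The sketch below substitutes the least squares surrogate from the earlier snippet for a full SVM solver and scores candidates with the clipped hinge risk on the held-out half; the solver, grids, and names are illustrative assumptions rather than the exact experimental protocol.

```python
import numpy as np

def rbf(A, B, sigma):
    return np.exp(-sigma**2 * ((A[:, None, :] - B[None, :, :])**2).sum(-1))

def fit(X, y, lam, sigma):
    n = len(X)
    return np.linalg.solve(rbf(X, X, sigma) + lam * n * np.eye(n), y)

def select_parameters(X0, y0, X1, y1, Sigmas, Lambdas):
    """Example 2.4: train one predictor per (sigma, lambda) on the first
    half T0, then pick the pair minimizing the empirical risk on T1."""
    best = None
    for sigma in Sigmas:
        for lam in Lambdas:
            alpha = fit(X0, y0, lam, sigma)
            f1 = np.clip(rbf(X1, X0, sigma) @ alpha, -1.0, 1.0)  # clipped predictions
            risk = np.maximum(0.0, 1.0 - y1 * f1).mean()          # hinge risk on T1
            if best is None or risk < best[0]:
                best = (risk, sigma, lam, alpha)
    return best

# illustrative grids in the spirit of Sigma_n and Lambda_n:
# Sigmas = np.linspace(1.0, 4.0, 10); Lambdas = np.logspace(-4, 0, 10)
```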
3 An oracle inequality for clipped penalized ERM

Theorem 2.1 is a consequence of a far more general oracle inequality on clipped penalized empirical risk minimizers. Since this result is of its own interest we now present it together with its proof in detail. To this end recall that a subroot is a nondecreasing function $\varphi : [0,\infty) \to [0,\infty)$ such that $\varphi(r)/\sqrt r$ is nonincreasing in $r$. Moreover, for a Rademacher sequence $\varepsilon := (\varepsilon_1,\dots,\varepsilon_n)$ with respect to the measure $\nu$ and a function $h : Z \to \mathbb{R}$ we define $R_\varepsilon h := \frac1n\bigl(\varepsilon_1 h(z_1) + \dots + \varepsilon_n h(z_n)\bigr)$. Now the general oracle inequality is:

Theorem 3.1 Let $\mathcal{P} \ne \emptyset$ be a set of (hyper-)parameters, $F$ be a set of measurable functions $f : X \to \mathbb{R}$ with $0 \in F$, and $\Omega : \mathcal{P} \times F \to [0,\infty]$ be a function. Let $P$ be a distribution on $X \times Y$ and $L$ be a loss function which satisfies (3) and (4). For a fixed pair $(p_0, f_0) \in \mathcal{P} \times F$ we define
$$a_\Omega(p_0, f_0) := \Omega(p_0, f_0) + R_{L,P}(f_0) - R^*_{L,P}.$$
Moreover, let us assume that the quantity $B(f_0)$ defined in (5) is finite. In addition, we assume that we have a variance bound of the form (6) for constants $v \ge 1$, $\vartheta \in [0,1]$ and all measurable $f : X \to \mathbb{R}$. Furthermore, suppose that there exists a subroot $\varphi_n$ with
$$\mathbb{E}_{T \sim P^n}\, \mathbb{E}_{\varepsilon \sim \nu} \sup_{\substack{(p,f) \in \mathcal{P}\times F \\ \Omega(p,f) + \mathbb{E}_P(L\circ \hat f - L\circ f^*_{L,P}) \le r}} \bigl| R_\varepsilon\bigl(L \circ \hat f - L \circ f^*_{L,P}\bigr) \bigr| \le \varphi_n(r), \qquad r > 0. \qquad (9)$$
Finally, let $(p_T, f_T)$ be an $\epsilon$-approximate minimizer of $(p,f) \mapsto \Omega(p,f) + R_{L,T}(f)$. Then for all $\tau \ge 1$ and all $r$ satisfying
$$r \ge \max\Bigl\{ 120\varphi_n(r),\ \Bigl(\frac{32v\tau}{n}\Bigr)^{\frac{1}{2-\vartheta}},\ \frac{28\tau}{n} \Bigr\} \qquad (10)$$
we have with probability not less than $1 - 3e^{-\tau}$ that
$$\Omega(p_T, f_T) + R_{L,P}(\hat f_T) - R^*_{L,P} \le 5r + \frac{14B(f_0)\tau}{3n} + 8\, a_\Omega(p_0, f_0) + 4\epsilon.$$

Proof: We write $B$ for $B(f_0)$. For $T \in (X \times Y)^n$ we now observe
$$\Omega(p_T, f_T) + R_{L,T}(\hat f_T) - \Omega(p_0, f_0) - R_{L,T}(f_0) \le \epsilon$$
by the definition of $(p_T, f_T)$, and hence we find
$$\Omega(p_T, f_T) + R_{L,P}(\hat f_T) - R^*_{L,P} \le R_{L,P}(\hat f_T) - R_{L,T}(\hat f_T) + R_{L,T}(f_0) - R_{L,P}(f_0) + a_\Omega(p_0, f_0) + \epsilon$$
$$= \bigl[ R_{L,P}(\hat f_T) - R_{L,P}(f^*_{L,P}) - R_{L,T}(\hat f_T) + R_{L,T}(f^*_{L,P}) \bigr] \qquad (11)$$
$$+ \bigl[ R_{L,T}(f_0) - R_{L,T}(\hat f_0) - R_{L,P}(f_0) + R_{L,P}(\hat f_0) \bigr] \qquad (12)$$
$$+ \bigl[ R_{L,T}(\hat f_0) - R_{L,T}(f^*_{L,P}) - R_{L,P}(\hat f_0) + R_{L,P}(f^*_{L,P}) \bigr] \qquad (13)$$
$$+\ a_\Omega(p_0, f_0) + \epsilon.$$

Let us first estimate the term in line (12). To this end we write $h_1 := L \circ f_0 - L \circ \hat f_0$. Then our assumption on $L$ guarantees $h_1 \ge 0$, and since we also have $\|h_1\|_\infty \le B$, we find $\|h_1 - \mathbb{E}_P h_1\|_\infty \le B$. In addition, we obviously have $\mathbb{E}_P(h_1 - \mathbb{E}_P h_1)^2 \le \mathbb{E}_P h_1^2 \le B\, \mathbb{E}_P h_1$. Consequently, Bernstein's inequality [6, Thm. 8.2] shows that with probability not less than $1 - e^{-\tau}$ we have
$$\mathbb{E}_T h_1 - \mathbb{E}_P h_1 < \sqrt{\frac{2\tau B\, \mathbb{E}_P h_1}{n}} + \frac{2B\tau}{3n}.$$
Now using $\sqrt{ab} \le a + \frac b4$ we find $\sqrt{2\tau B \mathbb{E}_P h_1/n} \le \mathbb{E}_P h_1 + \frac{B\tau}{2n}$, and consequently we have
$$P^n\Bigl( T \in Z^n : R_{L,T}(f_0) - R_{L,T}(\hat f_0) - R_{L,P}(f_0) + R_{L,P}(\hat f_0) < \mathbb{E}_P h_1 + \frac{7B\tau}{6n} \Bigr) \ge 1 - e^{-\tau}. \qquad (14)$$

Let us now estimate the term in line (13). To this end we write $h_2 := L \circ \hat f_0 - L \circ f^*_{L,P}$. Then we have $\|h_2\|_\infty \le 3$ and $\|h_2 - \mathbb{E}_P h_2\|_\infty \le 6$. In addition, our variance bound gives $\mathbb{E}_P(h_2 - \mathbb{E}_P h_2)^2 \le \mathbb{E}_P h_2^2 \le v(\mathbb{E}_P h_2)^\vartheta$, and consequently, Bernstein's inequality shows that with probability not less than $1 - e^{-\tau}$ we have
$$\mathbb{E}_T h_2 - \mathbb{E}_P h_2 < \sqrt{\frac{2\tau v (\mathbb{E}_P h_2)^\vartheta}{n}} + \frac{4\tau}{n}.$$
Now, for $q^{-1} + (q')^{-1} = 1$ the elementary inequality $ab \le \frac{a^q}{q} + \frac{b^{q'}}{q'}$ holds, and hence for $q := \frac{2}{2-\vartheta}$, $q' := \frac{2}{\vartheta}$, $a := \bigl( 2^{1-\vartheta}\vartheta^\vartheta\, \frac{\tau v}{n} \bigr)^{1/2}$, and $b := \bigl(\frac{2\,\mathbb{E}_P h_2}{\vartheta}\bigr)^{\vartheta/2}$ we obtain
$$\sqrt{\frac{2\tau v(\mathbb{E}_P h_2)^\vartheta}{n}} \le \Bigl(1 - \frac\vartheta2\Bigr)\Bigl(\frac{2^{1-\vartheta}\vartheta^\vartheta\, v\tau}{n}\Bigr)^{\frac{1}{2-\vartheta}} + \mathbb{E}_P h_2.$$
Since elementary calculations show that $2^{1-\vartheta}\vartheta^\vartheta \le 2$, we obtain
$$\sqrt{\frac{2\tau v(\mathbb{E}_P h_2)^\vartheta}{n}} \le \Bigl(1 - \frac\vartheta2\Bigr)\Bigl(\frac{2v\tau}{n}\Bigr)^{\frac{1}{2-\vartheta}} + \mathbb{E}_P h_2.$$
Therefore we have with probability not less than $1 - e^{-\tau}$ that
$$R_{L,T}(\hat f_0) - R_{L,T}(f^*_{L,P}) - R_{L,P}(\hat f_0) + R_{L,P}(f^*_{L,P}) < \mathbb{E}_P h_2 + \Bigl(1 - \frac\vartheta2\Bigr)\Bigl(\frac{2v\tau}{n}\Bigr)^{\frac{1}{2-\vartheta}} + \frac{4\tau}{n}. \qquad (15)$$

Let us finally estimate the term in line (11). To this end we write $h_f := L \circ \hat f - L \circ f^*_{L,P}$, $f \in F$. Moreover, for $r > 0$ we define
$$G_r := \Bigl\{ \frac{\mathbb{E}_P h_f - h_f}{\Omega(p,f) + \mathbb{E}_P h_f + r} : (p,f) \in \mathcal{P} \times F \Bigr\}.$$
Then for $g_{p,f} := \frac{\mathbb{E}_P h_f - h_f}{\Omega(p,f) + \mathbb{E}_P h_f + r} \in G_r$ we have $\mathbb{E}_P g_{p,f} = 0$ and
$$\|g_{p,f}\|_\infty = \sup_{z \in Z} \frac{\mathbb{E}_P h_f - h_f(z)}{\Omega(p,f) + \mathbb{E}_P h_f + r} \le \frac{\|\mathbb{E}_P h_f - h_f\|_\infty}{\Omega(p,f) + \mathbb{E}_P h_f + r} \le \frac{6}{r}.$$
In addition, the inequality $a^\vartheta b^{2-\vartheta} \le (a+b)^2$ and the variance bound assumption (6) imply that
$$\mathbb{E}_P g_{p,f}^2 \le \frac{\mathbb{E}_P h_f^2}{(\mathbb{E}_P h_f + r)^2} \le \frac{v\,(\mathbb{E}_P h_f)^\vartheta}{(\mathbb{E}_P h_f + r)^2} \le \frac{v}{r^{2-\vartheta}}.$$
Now define
$$\psi(r) := \mathbb{E}_{T \sim P^n} \sup_{(p,f) \in \mathcal{P}\times F} \frac{\mathbb{E}_P h_f - \mathbb{E}_T h_f}{\Omega(p,f) + \mathbb{E}_P h_f + r}.$$
Standard symmetrization then yields
$$\mathbb{E}_{T \sim P^n} \sup_{\substack{(p,f)\\ \Omega(p,f)+\mathbb{E}_P h_f \le r}} \bigl|\mathbb{E}_P h_f - \mathbb{E}_T h_f\bigr| \le 2\, \mathbb{E}_{T \sim P^n}\, \mathbb{E}_{\varepsilon \sim \nu} \sup_{\substack{(p,f)\\ \Omega(p,f)+\mathbb{E}_P h_f \le r}} |R_\varepsilon h_f|,$$
and hence Lemma 3.2, proved below, together with (9) shows $\psi(r) \le 10\varphi_n(r)\, r^{-1}$, $r > 0$. Therefore, applying Talagrand's inequality in the version of [3] to the class $G_r$, we obtain
$$P^n\Bigl( T \in Z^n : \sup_{g \in G_r} \mathbb{E}_T g < \frac{30\varphi_n(r)}{r} + \sqrt{\frac{2\tau v}{n r^{2-\vartheta}}} + \frac{7\tau}{nr} \Bigr) \ge 1 - e^{-\tau}.$$
Let us define $\gamma_r := \frac{30\varphi_n(r)}{r} + \sqrt{\frac{2\tau v}{n r^{2-\vartheta}}} + \frac{7\tau}{nr}$. Then the above inequality gives with probability not less than $1 - e^{-\tau}$ that for all $(p,f) \in \mathcal{P} \times F$ we have
$$\mathbb{E}_P h_f - \mathbb{E}_T h_f \le \gamma_r\bigl(\Omega(p,f) + \mathbb{E}_P h_f\bigr) + 30\varphi_n(r) + \sqrt{\frac{2\tau v r^\vartheta}{n}} + \frac{7\tau}{n},$$
and consequently we have with probability not less than $1 - e^{-\tau}$ that
$$R_{L,P}(\hat f_T) - R_{L,P}(f^*_{L,P}) \le R_{L,T}(\hat f_T) - R_{L,T}(f^*_{L,P}) + \gamma_r\bigl(\Omega(p_T, f_T) + R_{L,P}(\hat f_T) - R_{L,P}(f^*_{L,P})\bigr) + 30\varphi_n(r) + \sqrt{\frac{2\tau v r^\vartheta}{n}} + \frac{7\tau}{n}. \qquad (16)$$
Now observe that for the functions $h_1$ and $h_2$, which we defined when estimating (12) and (13), we have
$$\mathbb{E}_P h_1 + \mathbb{E}_P h_2 = R_{L,P}(f_0) - R^*_{L,P}, \qquad (17)$$
and hence we can combine our estimates (16), (14), and (15) of the terms (11), (12), and (13) to obtain that with probability not less than $1 - 3e^{-\tau}$ we have
$$(1 - \gamma_r)\bigl(\Omega(p_T, f_T) + R_{L,P}(\hat f_T) - R^*_{L,P}\bigr) \le 30\varphi_n(r) + \sqrt{\frac{2\tau v r^\vartheta}{n}} + \Bigl(1 - \frac\vartheta2\Bigr)\Bigl(\frac{2v\tau}{n}\Bigr)^{\frac{1}{2-\vartheta}} + \frac{(66+7B)\tau}{6n} + a_\Omega(p_0, f_0) + R_{L,P}(f_0) - R^*_{L,P} + \epsilon.$$
In particular, for $r$ satisfying the assumption (10) we have $\frac{30\varphi_n(r)}{r} \le \frac14$, $\sqrt{\frac{2\tau v}{n r^{2-\vartheta}}} \le \frac14$, and $\frac{7\tau}{nr} \le \frac14$. This shows $1 - \gamma_r \ge \frac14$, and hence we obtain with probability not less than $1 - 3e^{-\tau}$ that
$$\Omega(p_T, f_T) + R_{L,P}(\hat f_T) - R^*_{L,P} \le 120\varphi_n(r) + \sqrt{\frac{32\tau v r^\vartheta}{n}} + 2(2-\vartheta)\Bigl(\frac{2v\tau}{n}\Bigr)^{\frac{1}{2-\vartheta}} + \frac{44\tau}{n} + \frac{14B\tau}{3n} + 4\, a_\Omega(p_0, f_0) + 4\bigl(R_{L,P}(f_0) - R^*_{L,P}\bigr) + 4\epsilon.$$
However, we also have $120\varphi_n(r) \le r$, $\sqrt{\frac{32\tau v r^\vartheta}{n}} \le r$, $2(2-\vartheta)\bigl(\frac{2v\tau}{n}\bigr)^{\frac{1}{2-\vartheta}} \le 2(2-\vartheta)\frac r4 \le r$, and $\frac{44\tau}{n} \le \frac{5r}{3}$, and hence we find the assertion. □

For the proof of Theorem 3.1 it remains to show the following lemma:

Lemma 3.2 Let $\mathcal{P}$ and $F$ be as in Theorem 3.1. Furthermore, let $W$ assign to every $f \in F$ a measurable function $W(f) : Z \to \mathbb{R}$, and let $a : \mathcal{P} \times F \to [0,\infty)$. Define
$$\psi(r) := \mathbb{E}_{T \sim P^n} \sup_{(p,f) \in \mathcal{P}\times F} \frac{\mathbb{E}_T W(f) - \mathbb{E}_P W(f)}{a(p,f) + r}$$
and suppose that there exists a subroot $\varphi$ such that
$$\mathbb{E}_{T \sim P^n} \sup_{\substack{(p,f) \in \mathcal{P}\times F\\ a(p,f) \le r}} \bigl(\mathbb{E}_T W(f) - \mathbb{E}_P W(f)\bigr) \le \varphi(r), \qquad r > 0.$$
Then we have $\psi(r) \le \frac{5}{r}\varphi(r)$ for all $r > 0$.

Proof: For $x > 1$, $r > 0$, and $T \in (X \times Y)^n$ we obtain by a standard peeling approach that
$$\sup_{(p,f) \in \mathcal{P}\times F} \frac{|\mathbb{E}_P W(f) - \mathbb{E}_T W(f)|}{a(p,f) + r} \le \sup_{\substack{(p,f)\\ a(p,f) \le r}} \frac{|\mathbb{E}_P W(f) - \mathbb{E}_T W(f)|}{a(p,f) + r} + \sum_{i=0}^{\infty} \sup_{\substack{(p,f)\\ r x^i < a(p,f) \le r x^{i+1}}} \frac{|\mathbb{E}_P W(f) - \mathbb{E}_T W(f)|}{a(p,f) + r}$$
$$\le \frac1r \sup_{\substack{(p,f)\\ a(p,f) \le r}} |\mathbb{E}_P W(f) - \mathbb{E}_T W(f)| + \sum_{i=0}^{\infty} \frac{1}{r x^i + r} \sup_{\substack{(p,f)\\ a(p,f) \le r x^{i+1}}} |\mathbb{E}_P W(f) - \mathbb{E}_T W(f)|.$$
Taking expectations, and using that $\varphi$ is a subroot, so that $\varphi(r x^{i+1}) \le x^{\frac{i+1}{2}} \varphi(r)$, we obtain
$$\psi(r) \le \frac1r\Bigl( \varphi(r) + \sum_{i=0}^{\infty} \frac{\varphi(r x^{i+1})}{x^i + 1} \Bigr) \le \frac{\varphi(r)}{r}\Bigl( 1 + \sum_{i=0}^{\infty} \frac{x^{\frac{i+1}{2}}}{x^i + 1} \Bigr),$$
so that we obtain the assertion by setting $x := 4$. □
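The final step hides a small numerical fact: with $x = 4$ the bracketed series above sums to roughly $3.8$, hence is at most $5$. A few lines confirm this; the truncation at 60 terms is harmless since the terms decay geometrically.

```python
# Verify 1 + sum_{i>=0} x^{(i+1)/2} / (x^i + 1) <= 5 for x = 4.
x = 4.0
total = 1.0 + sum(x ** ((i + 1) / 2) / (x ** i + 1) for i in range(60))
print(total)          # ~3.76
assert total <= 5.0   # justifies psi(r) <= 5*phi(r)/r
```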
4 Proof of Theorem 2.1

Before we begin the proof of Theorem 2.1, let us state the following proposition which follows directly from [8] (see also [9, Prop. 5.7]) together with simple considerations on covering numbers:

Proposition 4.1 Let $F := H$ be a RKHS, $\mathcal{P} := \{p_0\}$ be a singleton, and $\Omega(p_0, f) := \lambda\|f\|_H^2$. If (7) is satisfied, then there exists a constant $c_p$ depending only on $p$ such that (9) is satisfied for
$$\varphi_n(r) := c_p \max\Bigl\{ v^{\frac{1-p}{2}}\, r^{\frac{\vartheta(1-p)+p}{2}} \Bigl(\frac{a}{\lambda^p}\Bigr)^{\frac12} \frac{1}{\sqrt n},\ \ r^{\frac{p}{1+p}} \Bigl(\frac{a}{\lambda^p}\Bigr)^{\frac{1}{1+p}} \Bigl(\frac1n\Bigr)^{\frac{1}{1+p}} \Bigr\}.$$

Proof of Theorem 2.1: From the covering bound assumption we observe that Proposition 4.1 implies that we have the bound (9) with $\varphi_n(r)$ defined by the right-hand side of Proposition 4.1, and therefore Theorem 3.1 implies that Condition (10) becomes
$$r \ge \max\Bigl\{ 120 c_p\, v^{\frac{1-p}{2}} r^{\frac{\vartheta(1-p)+p}{2}} \Bigl(\frac{a}{\lambda^p}\Bigr)^{\frac12} \frac{1}{\sqrt n},\ 120 c_p\, r^{\frac{p}{1+p}} \Bigl(\frac{a}{\lambda^p}\Bigr)^{\frac{1}{1+p}} \Bigl(\frac1n\Bigr)^{\frac{1}{1+p}},\ \Bigl(\frac{32v\tau}{n}\Bigr)^{\frac{1}{2-\vartheta}},\ \frac{28\tau}{n} \Bigr\}, \qquad (18)$$
and solving with respect to $r$ yields the conclusion. □

Finally, for the parameter selection approach in Example 2.4 we need the following oracle inequality for model selection:

Theorem 4.2 Let $P$ be a distribution on $X \times Y$ and let $L$ be a loss function which satisfies (3), (4), and the variance bound (6). Furthermore, let $F := \{f_1,\dots,f_m\}$ be a finite set of functions mapping $X$ into $[-1,1]$. For $T \in (X \times Y)^n$ we define
$$f_T := \arg\min_{f \in F} R_{L,T}(f).$$
Then there exists a universal constant $K$ such that for all $\tau \ge 1$ we have with probability not less than $1 - 3e^{-\tau}$ that
$$R_{L,P}(f_T) - R^*_{L,P} \le 5\Bigl(\frac{K \log m}{n}\Bigr)^{\frac{1}{2-\vartheta}} + 5\Bigl(\frac{32v\tau}{n}\Bigr)^{\frac{1}{2-\vartheta}} + \frac{5K\log m + 154\tau}{n} + 8 \min_{f \in F}\bigl(R_{L,P}(f) - R^*_{L,P}\bigr).$$

Proof: Since all functions $f_i$ already map into $[-1,1]$, we do not have to consider the clipping operator. For $r > 0$ we now define $F_r := \{f \in F : R_{L,P}(f) - R^*_{L,P} \le r\}$. Then the cardinality of $F_r$ is smaller than or equal to $m$, and hence we have $N(L \circ F_r - L \circ f^*_{L,P}, \varepsilon, L_2(T)) \le m$ for all $\varepsilon > 0$. Using the technique of [8] (cf. also [9, Prop. 5.7]) we hence obtain that (9) is satisfied for
$$\varphi_n(r) := c \max\Bigl\{ \sqrt{\frac{v \log m}{n}}\; r^{\vartheta/2},\ \frac{\log m}{n} \Bigr\},$$
where $c$ is a universal constant. Applying Theorem 3.1 then yields the assertion. □

References
[1] P.L. Bartlett. The sample complexity of pattern classification with neural networks: the size of the weights is more important than the size of the network. IEEE Trans. Inform. Theory, 44:525-536, 1998.
[2] G. Blanchard, O. Bousquet, and P. Massart. Statistical performance of support vector machines. Technical report, 2004.
[3] O. Bousquet. A Bennett concentration inequality and its application to suprema of empirical processes. C. R. Math. Acad. Sci. Paris, 334:495-500, 2002.
[4] D.R. Chen, Q. Wu, Y.M. Ying, and D.X. Zhou. Support vector machine soft margin classifiers: Error analysis. Journal of Machine Learning Research, 5:1143-1175, 2004.
[5] N. Cristianini and J. Shawe-Taylor. An Introduction to Support Vector Machines. Cambridge University Press, 2000.
[6] L. Devroye, L. Györfi, and G. Lugosi. A Probabilistic Theory of Pattern Recognition. Springer, New York, 1996.
[7] F. Girosi, M. Jones, and T. Poggio. Regularization theory and neural networks architectures. Neural Computation, 7:219-269, 1995.
[8] S. Mendelson. Improving the sample complexity using global data. IEEE Trans. Inform. Theory, 48:1977-1991, 2002.
[9] I. Steinwart and C. Scovel. Fast rates for support vector machines using Gaussian kernels. Annals of Statistics, to appear.
[10] I. Steinwart and C. Scovel. Fast rates for support vector machines. In Proceedings of the 18th Annual Conference on Learning Theory, COLT 2005, pages 279-294. Springer, 2005.
[11] Q. Wu, Y. Ying, and D.-X. Zhou. Multi-kernel regularized classifiers. J. Complexity, to appear.
Denoising and Dimension Reduction in Feature Space

Mikio L. Braun, Fraunhofer Institute FIRST.IDA, Kekuléstr. 7, 12489 Berlin, [email protected]
Joachim Buhmann, Inst. of Computational Science, ETH Zurich, CH-8092 Zürich, [email protected]
Klaus-Robert Müller, Technical University of Berlin, Computer Science, Franklinstr. 28/29, 10587 Berlin, [email protected]

Abstract

We show that the relevant information about a classification problem in feature space is contained up to negligible error in a finite number of leading kernel PCA components if the kernel matches the underlying learning problem. Thus, kernels not only transform data sets such that good generalization can be achieved even by linear discriminant functions, but this transformation is also performed in a manner which makes economic use of feature space dimensions. In the best case, kernels provide efficient implicit representations of the data to perform classification. Practically, we propose an algorithm which enables us to recover the subspace and dimensionality relevant for good classification. Our algorithm can therefore be applied (1) to analyze the interplay of data set and kernel in a geometric fashion, (2) to help in model selection, and (3) to de-noise in feature space in order to yield better classification results.

1 Introduction

Kernel machines use a kernel function as a non-linear mapping of the original data into a high-dimensional feature space; this mapping is often referred to as the empirical kernel map [6, 11, 8, 9]. By virtue of the empirical kernel map, the data is ideally transformed such that a linear discriminative function can separate the classes with low generalization error, say via a canonical hyperplane with large margin. The latter is used to provide an appropriate mechanism of capacity control and thus to "protect" against the high dimensionality of the feature space.

The idea of this paper is to add another aspect, not covered by this picture. We will show theoretically that if the learning problem matches the kernel well, the relevant information of a supervised learning data set is always contained in a finite number of leading kernel PCA components (that is, the label information projected to the kernel PCA directions), up to negligible error. This result is based on recent approximation bounds dealing with the eigenvectors of the kernel matrix which show that if a function can be reconstructed using only a few kernel PCA components asymptotically, then the same already holds in a finite sample setting, even for small sample sizes.

Consequently, the use of a kernel function not only greatly increases the expressive power of linear methods by non-linearly transforming the data, but it does so while ensuring that the high dimensionality of the feature space will not become overwhelming: the relevant information for classification will stay confined within a comparably low-dimensional subspace. This finding underlines the efficient use of data that is made by kernel machines using a kernel suited to the problem. While the number of data points stays constant for a given problem, a smart choice of kernel permits us to make better use of the available data at a favorable "data point per effective dimension" ratio, even for infinite-dimensional feature spaces. Furthermore, we can use de-noising techniques in feature space, much in the spirit of Mika et al. [8, 5], and thus regularize the learning problem in an elegant manner. Let us consider an example.
Figure 1 shows the first six kernel PCA components for an example data set. Above each plot, the variance of the data along this direction and the contribution of this component to the class labels are plotted (normalized such that the maximal possible contribution is one¹). Of these six components, only the fourth contributes significantly to the class memberships. As we will see below, the contributions in the other directions are mostly noise. This is true especially for components with small variance. Therefore, after removing this noise, a finite number of components suffices to represent the optimal decision boundary.

[Figure 1: Although the data set is embedded into a high-dimensional manifold, not all directions contain interesting information. Above, the first six kernel PCA components are plotted. Of these, only the fourth is highly relevant for the learning problem. Note, however, that this example is atypical in having a single relevant component. In general, several components will have to be combined to construct the decision boundary.]

The dimensionality of the data set in feature space is characteristic for the relation between a data set and a kernel. Roughly speaking, the relevant dimensionality of the data set corresponds to the complexity of the learning problem when viewed through the lens of the kernel function. This notion of complexity relates the number of data points required by the learning problem and the noise, as a small relevant dimensionality enables the de-noising of the data set to obtain an estimate of the true class labels, making the learning process much more stable. This combination of dimension and noise estimate allows us to distinguish among data sets showing weak performance, which might either be complex or noisy.

To summarize the main contributions of this paper: (1) We provide theoretical bounds showing that the relevant information (defined in Section 2) is actually contained in the leading projected kernel principal components under appropriate conditions. (2) We propose an algorithm which estimates the relevant dimensionality of the data set and permits us to analyze the appropriateness of a kernel for the data set, and thus to perform model selection among different kernels. (3) We show how the dimension estimate can be used in conjunction with kernel PCA to perform effective de-noising. We analyze some well-known benchmark data sets and evaluate the performance as a de-noising tool in Section 5. Note that we do not claim to obtain better performance within our framework when compared to, for example, cross-validation techniques. Rather, we are on par. Our contribution is to foster an understanding about a data set and to gain better insight into whether a mediocre classification result is due to intrinsic high dimensionality of the data or an overwhelming noise level.

2 The Relevant Information and Kernel PCA Components

In this section, we will define the notion of the relevant information contained in the class labels, and show that the location of this vector with respect to the kernel PCA components is linked to the scalar products with the eigenvectors of the kernel matrix.

¹ Note, however, that these numbers do not simply add up; instead, the joint contribution of $a$ and $b$ is $\sqrt{a^2+b^2}$.

Let us start to formalize the ideas introduced so far. As usual, we will consider a data set $(X_1,Y_1),\dots,(X_n,Y_n)$ where the $X_i$ lie in some space $\mathcal{X}$ and the $Y_i$ are in $\mathcal{Y} = \{\pm 1\}$. We assume that the $(X_i, Y_i)$ are drawn i.i.d. from $P_{X \times Y}$.
In kernel methods, the data is non-linearly mapped into some feature space $\mathcal{F}$ via the feature map $\Phi$. Scalar products in $\mathcal{F}$ can be computed by the kernel $k$ in closed form: $\langle \Phi(x), \Phi(x') \rangle = k(x, x')$. Summarizing all the pairwise scalar products results in the (normalized) kernel matrix $K$ with entries $[K]_{ij} = k(X_i, X_j)/n$.

We wish to summarize the information contained in the class label vector $Y = (Y_1,\dots,Y_n)$ about the optimal decision boundary. We define the relevant information vector as the vector $G = (\mathbb{E}(Y_1|X_1),\dots,\mathbb{E}(Y_n|X_n))$ containing the expected class labels for the objects in the training set. The idea is that since $\mathbb{E}(Y|X) = P(Y{=}1|X) - P(Y{=}{-}1|X)$, the sign of $G$ contains the relevant information on the true class membership by telling us which class is more probable. The observed class label vector can be written as $Y = G + N$, with $N = Y - G$ denoting the noise in the class labels.

We want to study the relation of $G$ with respect to the kernel PCA components. The following lemma relates projections of $G$ to the eigenvectors of the kernel matrix $K$:

Lemma 1 The $k$th kernel PCA component $f_k$ evaluated on the $X_i$'s is equal to the $k$th eigenvector² of the kernel matrix $K$: $(f_k(X_1),\dots,f_k(X_n)) = u_k$. Consequently, the projection of a vector $Y \in \mathbb{R}^n$ onto the leading $d$ kernel PCA components is given by $\pi_d(Y) = \sum_{k=1}^d u_k u_k^\top Y$.

Proof The kernel PCA directions are given as (see [10]) $v_k = \sum_{i=1}^n \alpha_i \Phi(X_i)$, where $\alpha_i = [u_k]_i / l_k$, with $[u_k]_i$ denoting the $i$th component of $u_k$, and $l_k$, $u_k$ being the eigenvalues and eigenvectors of the kernel matrix $K$. Thus, the $k$th PCA component for a point $X_j$ in the training set is
$$f_k(X_j) = \langle \Phi(X_j), v_k \rangle = \frac{1}{l_k} \sum_{i=1}^n \langle \Phi(X_j), \Phi(X_i) \rangle [u_k]_i = \frac{1}{l_k} \sum_{i=1}^n k(X_j, X_i)[u_k]_i.$$
The sum computes the $j$th component of $K u_k = l_k u_k$, because $u_k$ is an eigenvector of $K$. Therefore
$$f_k(X_j) = \frac{1}{l_k}[l_k u_k]_j = [u_k]_j.$$
Since the $u_k$ are orthogonal ($K$ is a symmetric matrix), the projection of $Y$ to the space spanned by the first $d$ kernel PCA components is given by $\sum_{i=1}^d u_i u_i^\top Y$. □

² As usual, the eigenvectors are arranged in descending order by corresponding eigenvalue.
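Lemma 1 reduces the projection $\pi_d(Y)$ to an eigendecomposition of the kernel matrix. A minimal sketch, assuming an RBF kernel; the helper names and the bandwidth convention are illustrative:

```python
import numpy as np

def kernel_matrix(X, sigma=1.0):
    # normalized kernel matrix [K]_ij = k(X_i, X_j)/n with an RBF kernel
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2 * sigma**2)) / len(X)

def label_projection(X, Y, d, sigma=1.0):
    """Project the label vector Y onto the leading d kernel PCA components
    (Lemma 1): pi_d(Y) = sum_{k<=d} u_k u_k^T Y."""
    K = kernel_matrix(X, sigma)
    l, U = np.linalg.eigh(K)      # eigh returns ascending eigenvalues
    U = U[:, ::-1]                # descending order, as in footnote 2
    s = U.T @ Y                   # kernel PCA coefficients u_k^T Y
    G_hat = U[:, :d] @ s[:d]      # de-noised label estimate pi_d(Y)
    return s, G_hat
```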
3 A Bound on the Contribution of Single Kernel PCA Components

As we have just shown, the location of $G$ is characterized by its scalar products with the eigenvectors of the kernel matrix. In this section, we will apply results from [1, 2], which deal with the asymptotic convergence of spectral properties of the kernel matrix, to show that the decay rate of the scalar products is linked to the decay rate of the kernel PCA principal values.

It is clear that we cannot expect $G$ to generally locate favorably with respect to the kernel PCA components, but only when there is some kind of match between $G$ and the chosen kernel. This link will be established by asymptotic considerations. Kernel PCA is closely linked to the spectral properties of the kernel matrix, and it is known [3, 4] that the eigenvalues and the projections to eigenspaces converge. Their asymptotic limits are given as the eigenvalues $\lambda_i$ and eigenfunctions $\psi_i$ of the integral operator $T_k f = \int k(\cdot, x) f(x)\, P_X(dx)$ defined on $L_2(P_X)$, where $P_X$ is the marginal measure of $P_{X\times Y}$ which generates our samples. The eigenvalues and eigenfunctions occur in the well-known Mercer formula: by Mercer's theorem, $k(x, x') = \sum_{i=1}^\infty \lambda_i \psi_i(x)\psi_i(x')$.

The asymptotic counterpart of $G$ is given by the function $g(x) = \mathbb{E}(Y|X{=}x)$. We will encode fitness between $k$ and $g$ by requiring that $g$ lies in the image of $T_k$. This is equivalent to saying that there exists a sequence $(\alpha_i) \in \ell_2$ such that $g = \sum_{i=1}^\infty \alpha_i \lambda_i \psi_i$.³ Under this condition, the scalar products decay as quickly as the eigenvalues, because $\langle g, \psi_i \rangle = \alpha_i \lambda_i = O(\lambda_i)$. Because of the known convergence of spectral projections, we can expect the same behavior asymptotically from the finite sample case. However, the convergence speed is the crucial question. This question is not trivial, because eigenvector stability is known to be linked to the gap between the corresponding eigenvalues, which will be fairly small for small eigenvalues. In fact, for example, the results from [14] do not scale properly with the corresponding eigenvalue, such that the bounds are too loose. A number of recent results on the spectral properties of the kernel matrix [1, 2] specifically deal with error bounds for small eigenvalues and their associated spectral projections. Using these results, we obtain the following bound on $u_i^\top G$.⁴

³ A different condition is that $g$ lies in the RKHS generated by $k$. This amounts to saying that $g$ lies in the image of $T_k^{1/2}$. Therefore, the condition used here is slightly more restrictive.

Theorem 1 Let $g = \sum_{i=1}^\infty \alpha_i \lambda_i \psi_i$ as explained above, and let $G = (g(X_1),\dots,g(X_n))$. Then, with high probability,
$$\frac{1}{\sqrt n}\, |u_i^\top G| < 2 l_i a_r c_i \bigl(1 + O(r n^{-1/4})\bigr) + r a_r \Lambda_r\, O(1) + T_r + A\sqrt{T_r}\, O(n^{-1/4}) + r a_r \sqrt{\Lambda_r}\, O(n^{-1/2}),$$
where $r$ balances the different terms ($1 \le r \le n$), $c_i$ measures the size of the eigenvalue cluster around $l_i$, $a_r = \sum_{i=1}^r |\alpha_i|$ is a measure of the size of the first $r$ components, $\Lambda_r$ is the sum of all eigenvalues smaller than $\lambda_r$, $A$ is the supremum norm of $g$, and $T_r$ is the error of projecting $g$ to the space spanned by the first $r$ eigenfunctions.

The bound consists of a part which scales with $l_i$ (the first term) and a part which does not (the remaining terms). Typically, the bound initially scales with $l_i$ until the non-scaling part dominates the bound for larger $i$. These two parts are balanced by $r$. However, note that all terms which do not scale with $l_i$ will typically be small: for smooth kernels, the eigenvalues quickly decay to zero as $r \to \infty$. The related quantities $\Lambda_r$ and $T_r$ will also decay to zero at slightly slower rates. Therefore, by adjusting $r$ (as $n \to \infty$), the non-scaling part can be made arbitrarily small, leading to a small bound on $|u_i^\top G|$ for larger $i$.

Put differently, the bound shows that the relevant information vector $G$ (as introduced in Section 2) is contained in a number of leading PCA components up to a negligible error. The number of dimensions depends on the asymptotic coefficients $\alpha_i$ and the decay rate of the asymptotic eigenvalues of $k$. Since this rate is related to the smoothness of the kernel function, the dimension will be small for smooth kernels whose leading eigenfunctions $\psi_i$ permit a good approximation of $g$.

4 The Relevant Dimension Estimation Algorithm

In this section, we will propose the relevant dimension estimation (RDE) algorithm, which estimates the dimensionality of the relevant information from a finite sample, allowing us to analyze the fit between a kernel function and a data set in a practical way.

Dimension Estimation We propose an approach which is motivated by the geometric findings explained above. Since $G$ is not known, we can only observe the contributions of the kernel PCA components to $Y$, which can be written as $Y = G + N$ (see Section 2).
Now, by Theorem 1, we know that the coefficients $u_i^\top G$ will be very close to zero for the later components, while on the other hand, the noise $N$ will be equally distributed over all coefficients. Therefore, the kernel PCA coefficients $s_i = u_i^\top Y$ will have the shape of an evenly distributed noise floor $u_i^\top N$ from which the coefficients $u_i^\top G$ of the relevant information protrude (see Figure 2(b) for an example). We thus propose the following algorithm: Given a fixed kernel $k$, we estimate the true dimension by fitting a two-component model to the coordinates of the label vector. Let $s = (u_1^\top Y, \dots, u_n^\top Y)$. Then, assume that
$$s_i \sim \begin{cases} \mathcal{N}(0, \sigma_1^2) & 1 \le i \le d \\ \mathcal{N}(0, \sigma_2^2) & d < i \le n. \end{cases}$$

⁴ We have tried to reduce the bound to its most prominent features. For a more detailed explanation of the quantities and the proof, see the appendix. Also, the confidence $\delta$ of the "with high probability" part is hidden in the $O(\cdot)$ notation. We have used the $O(\cdot)$ notation rather deliberately to exhibit the dominant constants.

[Figure 2: Further plots on the toy example from the introduction. (a) contains the kernel PCA component contributions (dots), and the training and test error obtained by projecting the data set to the given number of leading kernel PCA components. (b) shows the negative log-likelihood of the two-component model used to estimate the dimensionality of the data. (c) The resulting fit when using only the first four components.]

We select the $d$ minimizing the negative log-likelihood, which is proportional to
$$\ell(d) = \frac{d}{n}\log\sigma_1^2 + \frac{n-d}{n}\log\sigma_2^2, \quad \text{with} \quad \sigma_1^2 = \frac1d\sum_{i=1}^d s_i^2, \qquad \sigma_2^2 = \frac{1}{n-d}\sum_{i=d+1}^n s_i^2. \qquad (1)$$

Model Selection for Kernel Choice For different kernels, we again use the likelihood and select the kernel which leads to the best fit in terms of the likelihood. If the kernel width does not match the scale of the structure of the data set, the fit of the two-component model will be inferior: for very small or very large kernels, the kernel PCA coefficients of $Y$ have no clear structure, such that the likelihood will be small. For example, for Gaussian kernels with very small kernel widths, noise is interpreted as relevant information, such that there appears to be no noise, only very high-dimensional data. On the other hand, for very large kernel widths, any structure will be indistinguishable from noise, such that the problem appears to be very noisy with almost no structure. In both cases, fitting the two-component model will not work very well, leading to large values of $\ell$.

Experimental Error Estimation The estimated dimension can be used to estimate the noise level present in the data set. The idea is to measure the error between the projected label vector $\hat G = \pi_d(Y)$, which approximates the true label information $G$, and the observed labels. The resulting number $\widehat{\mathrm{err}} = \frac1n\sum_{i=1}^n \mathbf{1}\{[\hat G]_i \ne Y_i\}$ is an estimate of the fraction of misclassified examples in the training set, and therefore an estimate for the noise level in the class labels.

A Note on Consistency Both the estimate $\hat G$ and the noise level estimate are consistent if the estimated dimension $d$ scales sub-linearly with $n$. The argument can be sketched as follows: since the kernel PCA components do not depend on $Y$, the noise $N$ contained in $Y$ is projected to a random subspace of dimension $d$. Therefore, $\frac1n\|\pi_d(N)\|^2 \approx \frac{d}{n}\bigl(\frac1n\|N\|^2\bigr) \to 0$ as $n \to \infty$, since $d/n \to 0$ and $\frac1n\|N\|^2 \to \mathbb{E}(N^2)$. Empirically, $d$ was found to be rather stable, but in principle, the condition on $d$ could even be enforced by adding a small sub-linear term (for example, $\sqrt n$ or $\log n$) to the estimated dimension $d$.
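A direct implementation of the dimension estimate (1) is a dozen lines given the kernel matrix; the small additive constant inside the logarithms is a numerical guard and an implementation assumption, not part of the model.

```python
import numpy as np

def rde_dimension(K, Y):
    """Relevant dimension estimation, eq. (1): fit the two-component model
    to the kernel PCA coefficients s_i = u_i^T Y and return the d
    minimizing the negative log-likelihood l(d)."""
    n = len(Y)
    _, U = np.linalg.eigh(K)
    s = (U[:, ::-1].T @ Y) ** 2            # squared coefficients, descending order
    best_d, best_ll = 1, np.inf
    for d in range(1, n):
        v1 = s[:d].mean()                  # sigma_1^2
        v2 = s[d:].mean()                  # sigma_2^2
        ll = d / n * np.log(v1 + 1e-12) + (n - d) / n * np.log(v2 + 1e-12)
        if ll < best_ll:
            best_d, best_ll = d, ll
    return best_d, best_ll
```

Scanning a grid of kernel widths and keeping the (width, $d$) pair with the smallest $\ell$ then implements the model selection rule above.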
5 Experiments

Toy Data Set Returning to the toy example from the introduction, let us now take a closer look at this data set. In Figure 2(a), the spectrum for the toy data set is plotted. We can see that every kernel PCA component contributes to the observed class label vector. However, most of these contributions are noise, since the classes are overlapping. The RDE method estimates that only the first four components are relevant. This behavior of the algorithm can also be seen from the training and independent test error, measured on a second data set of size 1000, which can also be found in this plot. In Figure 2(b), the log-likelihoods from (1) are shown, and one observes a well-pronounced minimum. Finally, in Figure 2(c), the resulting fit is shown.

Benchmark data sets We performed experiments on the classification learning sets from [7]. For each of the data sets, we de-noise the data set using a family of rbf kernels by projecting the class labels to the estimated number of leading kernel PCA components. The kernel width is also selected automatically using the achieved log-likelihood as described above. The width of the rbf kernel is selected from 20 logarithmically spaced points between $10^{-2}$ and $10^4$ for each data set.

For the dimension estimation task, we compare our RDE method to a dimensionality estimate based on cross-validation. More concretely, the matrix $S = \sum_{i=1}^d u_i u_i^\top$ computes the projection to the leading $d$ kernel PCA components. Interpreting the matrix $S$ as a linear fit matrix, the leave-one-out cross-validation error can be computed in closed form (see [12])⁵, since $S$ is diagonal with respect to the eigenvector basis $u_i$. Evaluating the cross-validation error for all dimensions and for a number of kernel parameters, one can select the best dimension and kernel parameter. Since the cross-validation can be computed efficiently, the computational demands of both methods are equal.

⁵ This applies only to the 2-norm. However, as the performance of 2-norm-based methods like kernel ridge regression on classification problems shows, the 2-norm is also informative about the classification performance.
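For a linear smoother $\hat Y = SY$ the leave-one-out residuals have the closed form $(Y_i - [SY]_i)/(1 - S_{ii})$ for the squared error (see [12]), which is exactly the 2-norm caveat of footnote 5. Since $S = \sum_{i \le d} u_i u_i^\top$ comes from a single eigendecomposition for every $d$, scanning all dimensions is cheap. A sketch with illustrative names:

```python
import numpy as np

def loo_cv_dimension(K, Y):
    """Leave-one-out CV over the projection dimension d, using the closed
    form for linear smoothers: residual_i = (Y_i - [SY]_i) / (1 - S_ii)."""
    n = len(Y)
    _, U = np.linalg.eigh(K)
    U = U[:, ::-1]                          # descending eigenvalue order
    errs = []
    for d in range(1, n):
        Ud = U[:, :d]
        fit = Ud @ (Ud.T @ Y)               # S Y
        diag = (Ud ** 2).sum(axis=1)        # S_ii = sum_{k<=d} U_ik^2
        res = (Y - fit) / np.maximum(1.0 - diag, 1e-12)
        errs.append(np.mean(res ** 2))
    return int(np.argmin(errs)) + 1, errs
```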
Figure 3 shows the resulting dimension estimates. We see that both methods perform on par, which shows that the strong structural prior assumption underlying RDE is justified. For the de-noising task, we have compared an (unregularized) least-squares fit in the reduced feature space (kPCR) against kernel ridge regression (KRR) and support vector machines (SVM) on the same data sets. The resulting test errors are also listed in Figure 3. We see that a relatively simple method on the reduced features leads to classification which is on par with the state-of-the-art competitors. Also note that the estimated error rates match the actually observed error rates quite well, although there is a tendency to under-estimate the true error. Finally, inspecting the estimated dimension and noise level reveals that the data sets breast-cancer, diabetis, flare-solar, german, and titanic all have only moderately large dimensionalities. This suggests that these data sets are inherently noisy and better results cannot be expected, at least within the family of rbf kernels. On the other hand, the data set image seems to be particularly noise-free, given that one can achieve a small error in spite of the large dimensionality. Finally, the splice data set seems to be a good candidate to benefit from more data.

data set       | dim | dim (cv) | est. error rate | kPCR       | KRR        | SVM
banana         |  24 |   26     |  8.8 ± 1.5      | 11.3 ± 0.7 | 10.6 ± 0.5 | 11.5 ± 0.7
breast-cancer  |   2 |    2     | 25.6 ± 2.1      | 27.0 ± 4.6 | 26.5 ± 4.7 | 26.0 ± 4.7
diabetis       |   9 |    9     | 21.5 ± 1.3      | 23.6 ± 1.8 | 23.2 ± 1.7 | 23.5 ± 1.7
flare-solar    |  10 |   10     | 32.9 ± 1.2      | 33.3 ± 1.8 | 34.1 ± 1.8 | 32.4 ± 1.8
german         |  12 |   12     | 22.9 ± 1.1      | 24.1 ± 2.1 | 23.5 ± 2.2 | 23.6 ± 2.1
heart          |   4 |    5     | 15.8 ± 2.5      | 16.7 ± 3.8 | 16.6 ± 3.5 | 16.0 ± 3.3
image          | 272 |  368     |  1.7 ± 1.0      |  4.2 ± 0.9 |  2.8 ± 0.5 |  3.0 ± 0.6
ringnorm       |  36 |   37     |  1.9 ± 0.7      |  4.4 ± 1.2 |  4.7 ± 0.8 |  1.7 ± 0.1
splice         |  92 |   89     |  9.2 ± 1.3      | 13.8 ± 0.9 | 11.0 ± 0.6 | 10.9 ± 0.6
thyroid        |  17 |   18     |  2.0 ± 1.0      |  5.1 ± 2.1 |  4.3 ± 2.3 |  4.8 ± 2.2
titanic        |   4 |    6     | 20.8 ± 3.8      | 22.9 ± 1.6 | 22.5 ± 1.0 | 22.4 ± 1.0
twonorm        |   2 |    2     |  2.3 ± 0.7      |  2.4 ± 0.1 |  2.8 ± 0.2 |  3.0 ± 0.2
waveform       |  14 |   23     |  8.4 ± 1.5      | 10.8 ± 0.9 |  9.7 ± 0.4 |  9.9 ± 0.4

Figure 3: Estimated dimensions and error rates for the benchmark data sets from [7]. "dim" shows the medians of the estimated dimensionalities over the resamples. "dim (cv)" shows the same quantity, but this time the dimensions have been estimated by leave-one-out cross-validation. "est. error rate" is the estimated error rate on the training set, obtained by comparing the de-noised class labels to the true class labels. The last three columns show the test error rates of three algorithms: "kPCR" predicts using a simple least-squares hyperplane on the estimated subspace in feature space, "KRR" is kernel ridge regression with parameters estimated using leave-one-out cross-validation, and "SVM" are the original error rates from [7].

6 Conclusion

Both in theory and on practical data sets, we have shown that the relevant information in a supervised learning scenario is contained in the leading projected kernel PCA components if the kernel matches the learning problem. The theory provides a consistent estimation for the expected class labels and the noise level. This behavior complements the common statistical learning theoretical view on kernel-based learning with insight into the interaction of data and kernel: a well-chosen kernel (a) makes the model estimate efficiently and generalize well, since only a comparatively low-dimensional representation needs to be learned for a fixed given data size, and (b) permits a de-noising step that discards some void projected kernel PCA directions and thus provides a regularized model.

Practically, our RDE algorithm automatically selects the appropriate kernel model for the data and extracts as additional side information an estimate of the effective dimension and the estimated expected error for the learning problem. Compared to common cross-validation techniques one could argue that all we have achieved is to find a similar model as usual at a comparable computing time. However, we would like to emphasize that the side information extracted by our procedure contributes to a better understanding of the learning problem at hand: is the classification result limited due to intrinsic high-dimensional structure, or are we facing noise and nuisance dimensions? Simulations show the usefulness of our RDE algorithm. An interesting future direction lies in combining these results with generalization bounds which are also based on the notion of an effective dimension, this time, however, with respect to some regularized hypothesis class (see, for example, [13]). Linking the effective dimension of the data set with the dimension of a learning algorithm, one could obtain data-dependent bounds in a natural way, with the potential to be tighter than bounds which are based on the abstract capacity of a hypothesis class.
References
[1] M.L. Braun. Spectral Properties of the Kernel Matrix and Their Application to Kernel Methods in Machine Learning. PhD thesis, University of Bonn, 2005. Available electronically at http://hss.ulb.uni-bonn.de/diss_online/math_nat_fak/2005/braun_mikio.
[2] M.L. Braun. Accurate error bounds for the eigenvalues of the kernel matrix. Journal of Machine Learning Research, 2006. To appear.
[3] V. Koltchinskii and E. Giné. Random matrix approximation of spectra of integral operators. Bernoulli, 6(1):113-167, 2000.
[4] V.I. Koltchinskii. Asymptotics of spectral projections of some random matrices approximating integral operators. Progress in Probability, 43:191-227, 1998.
[5] S. Mika, B. Schölkopf, A. Smola, K.-R. Müller, M. Scholz, and G. Rätsch. Kernel PCA and de-noising in feature space. In Advances in Neural Information Processing Systems 11. MIT Press, 1999.
[6] K.-R. Müller, S. Mika, G. Rätsch, K. Tsuda, and B. Schölkopf. An introduction to kernel-based learning algorithms. IEEE Transactions on Neural Networks, 12(2):181-201, May 2001.
[7] G. Rätsch, T. Onoda, and K.-R. Müller. Soft margins for AdaBoost. Machine Learning, 42(3):287-320, March 2001.
[8] B. Schölkopf, S. Mika, C.J.C. Burges, P. Knirsch, K.-R. Müller, G. Rätsch, and A.J. Smola. Input space vs. feature space in kernel-based methods. IEEE Transactions on Neural Networks, 10(5):1000-1017, September 1999.
[9] B. Schölkopf and A.J. Smola. Learning with Kernels. MIT Press, 2001.
[10] B. Schölkopf, A.J. Smola, and K.-R. Müller. Nonlinear component analysis as a kernel eigenvalue problem. Neural Computation, 10:1299-1319, 1998.
[11] V. Vapnik. Statistical Learning Theory. Wiley, 1998.
[12] G. Wahba. Spline Models for Observational Data. Society for Industrial and Applied Mathematics, 1990.
[13] T. Zhang. Learning bounds for kernel regression using effective data dimensionality. Neural Computation, 17:2077-2098, 2005.
[14] L. Zwald and G. Blanchard. On the convergence of eigenspaces in kernel principal components analysis. In Advances in Neural Information Processing Systems (NIPS 2005), volume 18, 2006.

A Proof of Theorem 1

First, let us collect the definitions concerning kernel functions. Let $k$ be a Mercer kernel with $k(x,x') = \sum_{i=1}^\infty \lambda_i \psi_i(x)\psi_i(x')$, and $k(x,x) \le K < \infty$. The kernel matrix of $k$ for an $n$-sample $X_1,\dots,X_n$ is $[K]_{ij} = k(X_i,X_j)/n$. The eigenvalues of $K$ are $l_i$, and its eigenvectors are $u_i$. The kernel $k$ is approximated by the truncated kernel $k^r(x,x') = \sum_{i=1}^r \lambda_i \psi_i(x)\psi_i(x')$, and its kernel matrix is denoted by $K^r$, whose eigenvalues are $m_i$. The approximation error is measured by $E_r = K^r - K$. We will measure the amount of clustering $c_i$ of the eigenvalues by the number of eigenvalues of $K^r$ between $l_i/2$ and $2l_i$. The matrix $\Psi_r$ containing the sample vectors of the first $r$ eigenfunctions $\psi_i$ of $k$ is given by $[\Psi_r]_{i\ell} = \psi_\ell(X_i)/\sqrt n$, $1 \le i \le n$, $1 \le \ell \le r$. Since the eigenfunctions are orthogonal asymptotically, we can expect the pairwise scalar products of the sample vectors to converge to either 0 or 1. The error is measured by the matrix $C_r = \Psi_r^\top \Psi_r - I$. Finally, let $\Lambda_r = \sum_{i=r+1}^\infty \lambda_i$.

Next, we collect the definitions concerning some function $f$. Let $f = \sum_{i=1}^\infty \alpha_i \lambda_i \psi_i$ with $(\alpha_i) \in \ell_2$, and $|f| \le A < \infty$. The size of the contribution of the first $r$ terms is measured by $a_r = \sum_{i=1}^r |\alpha_i|$. Define the error of truncating $f$ to the first $r$ elements of its series expansion by $T_r = \bigl(\sum_{i=r+1}^\infty \alpha_i^2 \lambda_i^2\bigr)^{1/2}$.

The proof of Theorem 1 is based on performing rough estimates of the bound from Theorem 4.92 in [1].
The bound is
$$\frac{1}{\sqrt n}\,|u_i^\top f(X)| < \min_{1 \le r \le n}\bigl( l_i\, D(r,n) + E(r,n) + T(r,n) \bigr),$$
where, with $\delta$ denoting the confidence level of the "with high probability" statement, the three terms are given by
$$D(r,n) = 2 a_r \|\Psi_r^+\|\, c_i, \qquad E(r,n) = 2 r a_r \|\Psi_r^+\|\, \|E_r\|, \qquad T(r,n) = T_r + 2A\sqrt{\frac{T_r}{n\delta}}.$$
It holds that $\|\Psi_r^+\| \le (1 - \|\Psi_r^\top\Psi_r - I\|)^{-1/2} = (1 - \|C_r\|)^{-1/2}$ ([1], Lemma 4.44). Furthermore, $\|C_r\| \to 0$ as $n \to \infty$ for fixed $r$. For kernels with bounded diagonal, it holds with probability larger than $1-\delta$ ([1], Lemma 3.135) that
$$\|C_r\| \le r\sqrt{\frac{r(r+1)K}{\lambda_r n\delta}} = r^2\, O(n^{-1/2}),$$
with a rather large constant, especially if $\lambda_r$ is small. Consequently, $\|\Psi_r^+\| \le (1-\|C_r\|)^{-1/2} = 1 + O(rn^{-1/4})$.

Now, Lemma 3.135 in [1] bounds $\|E_r\|$, from which we can derive the asymptotic
$$\|E_r\| \le \Lambda_r + \sqrt{\frac{2K\Lambda_r}{n\delta}} = \Lambda_r + \sqrt{\Lambda_r}\, O(n^{-1/2}),$$
assuming that $K$ will be reasonably small (for example, for rbf kernels, $K = 1$). Combining this with our rate for $\|\Psi_r^+\|$, we obtain
$$E(r,n) = 2 r a_r\bigl( \Lambda_r + \sqrt{\Lambda_r}\, O(n^{-1/2}) \bigr)\bigl(1 + O(rn^{-1/4})\bigr) = 2 r a_r \Lambda_r\bigl(1 + O(rn^{-1/4})\bigr) + r a_r \sqrt{\Lambda_r}\, O(n^{-1/2}).$$
Finally, we obtain
$$\frac{1}{\sqrt n}\,|u_i^\top f(X)| = 2 l_i a_r c_i\bigl(1 + O(rn^{-1/4})\bigr) + 2 r a_r \Lambda_r\bigl(1 + O(rn^{-1/4})\bigr) + r a_r\sqrt{\Lambda_r}\, O(n^{-1/2}) + T_r + A\sqrt{T_r}\, O(n^{-1/2}).$$
If we assume that $\Lambda_r$ will be rather small, we can replace $1 + O(rn^{-1/4})$ by $O(1)$ in the second term and obtain the claimed rate.
Learnability and the Doubling Dimension

Yi Li, Genome Institute of Singapore, [email protected]
Philip M. Long, Google, [email protected]

Abstract

Given a set $F$ of classifiers and a probability distribution over their domain, one can define a metric by taking the distance between a pair of classifiers to be the probability that they classify a random item differently. We prove bounds on the sample complexity of PAC learning in terms of the doubling dimension of this metric. These bounds imply known bounds on the sample complexity of learning halfspaces with respect to the uniform distribution that are optimal up to a constant factor. We prove a bound that holds for any algorithm that outputs a classifier with zero error whenever this is possible; this bound is in terms of the maximum of the doubling dimension and the VC-dimension of $F$, and strengthens the best known bound in terms of the VC-dimension alone. We show that there is no bound on the doubling dimension in terms of the VC-dimension of $F$ (in contrast with the metric dimension).

1 Introduction

A set $F$ of classifiers and a probability distribution $D$ over their domain induce a metric in which the distance between classifiers is the probability that they disagree on how to classify a random object. (Let us call this metric $\rho_D$.) Properties of metrics like this have long been used for analyzing the generalization ability of learning algorithms [11, 32]. This paper is about bounds on the number of examples required for PAC learning in terms of the doubling dimension [4] of this metric space. The doubling dimension of a metric space is the least $d$ such that any ball can be covered by $2^d$ balls of half its radius. The doubling dimension has been frequently used lately in the analysis of algorithms [13, 20, 21, 17, 29, 14, 7, 22, 28, 6].

In the PAC-learning model, an algorithm is given examples $(x_1, f(x_1)),\dots,(x_m, f(x_m))$ of the behavior of an arbitrary member $f$ of a known class $F$. The items $x_1,\dots,x_m$ are chosen independently at random according to $D$. The algorithm must, with probability at least $1-\delta$ (w.r.t. the random choice of $x_1,\dots,x_m$), output a classifier whose distance from $f$ is at most $\epsilon$.

We show that if $(F, \rho_D)$ has doubling dimension $d$, then $F$ can be PAC-learned with respect to $D$ using
$$O\Bigl(\frac{d + \log\frac1\delta}{\epsilon}\Bigr) \qquad (1)$$
examples. If in addition the VC-dimension of $F$ is $d$, we show that any algorithm that outputs a classifier with zero training error whenever this is possible PAC-learns $F$ w.r.t. $D$ using
$$O\Bigl(\frac{d\sqrt{\log\frac1\epsilon} + \log\frac1\delta}{\epsilon}\Bigr) \qquad (2)$$
examples.

We show that if $F$ consists of halfspaces through the origin, and $D$ is the uniform distribution over the unit ball in $\mathbb{R}^n$, then the doubling dimension of $(F, \rho_D)$ is $O(n)$. Thus (1) generalizes the known bound of $O\bigl(\frac{n + \log\frac1\delta}{\epsilon}\bigr)$ for learning halfspaces with respect to the uniform distribution [25], matching a known lower bound for this problem [23] up to a constant factor. Both upper bounds improve on the $O\bigl(\frac{n\log\frac1\epsilon + \log\frac1\delta}{\epsilon}\bigr)$ bound that follows from the traditional analysis; (2) is the first such improvement for a polynomial-time algorithm.

Some previous analyses of the sample complexity of learning have made use of the fact that the "metric dimension" [18] is at most the VC-dimension [11, 15]. Since using the doubling dimension can sometimes lead to a better bound, a natural question is whether there is also a bound on the doubling dimension in terms of the VC-dimension.
We show that this is not the case: it is possible to pack $(1/\gamma)^{(1/2 - o(1))d}$ classifiers in a set $F$ of VC-dimension $d$ so that the distance between every pair is in the interval $[\gamma, 2\gamma]$.

Our analysis was inspired by some previous work in computational geometry [19], but is simpler. Combining our upper bound analysis with established techniques (see [33, 3, 8, 31, 30]), one can perform similar analyses for the more general case in which no classifier in $F$ has zero error. We have begun with the PAC model because it is a clean setting in which to illustrate the power of the doubling dimension for analyzing learning algorithms. The doubling dimension appears most useful when the best achievable error rate (the Bayes error) is of the same order as the inverse of the number of training examples (or smaller).

Bounding the doubling dimension is useful for analyzing the sample complexity of learning because it limits the richness of a subclass of $F$ near the classifier to be learned. For other analyses that exploit bounds on such local richness, please see [31, 30, 5, 25, 26, 34]. It could be that stronger results could be obtained by marrying the techniques of this paper with those. In any case, it appears that the doubling dimension is an intuitive yet powerful way to bound the local complexity of a collection of classifiers.

2 Preliminaries

2.1 Learning

For some domain $X$, an example consists of a member of $X$ and its classification in $\{0,1\}$. A classifier is a mapping from $X$ to $\{0,1\}$. A training set is a finite collection of examples. A learning algorithm takes as input a training set, and outputs a classifier.

Suppose $D$ is a probability distribution over $X$. Then define
$$\rho_D(f,g) = \Pr_{x \sim D}\bigl(f(x) \ne g(x)\bigr).$$
A learning algorithm $A$ PAC-learns $F$ w.r.t. $D$ with accuracy $\epsilon$ and confidence $\delta$ from $m$ examples if, for any $f \in F$, if
- domain elements $x_1,\dots,x_m$ are drawn independently at random according to $D$, and
- $(x_1, f(x_1)),\dots,(x_m, f(x_m))$ is passed to $A$, which outputs $h$,
then
$$\Pr\bigl(\rho_D(f,h) > \epsilon\bigr) \le \delta.$$
If $F$ is a set of classifiers, a learning algorithm is a consistent hypothesis finder for $F$ if it outputs an element of $F$ that correctly classifies all of the training data whenever it is possible to do so.

2.2 Metrics

Suppose $(Z, \rho)$ is a metric space. A $\gamma$-cover for $(Z,\rho)$ is a set $T \subseteq Z$ such that every element of $Z$ has a counterpart in $T$ that is at a distance at most $\gamma$ (with respect to $\rho$). A $\gamma$-packing for $(Z,\rho)$ is a set $T \subseteq Z$ such that every pair of elements of $T$ is at a distance greater than $\gamma$ (again, with respect to $\rho$). The $\gamma$-ball centered at $z \in Z$ consists of all $t \in Z$ for which $\rho(z,t) \le \gamma$. Denote the size of the smallest $\gamma$-cover by $N(\gamma, \rho)$, and the size of the largest $\gamma$-packing by $M(\gamma, \rho)$.

Lemma 1 ([18]) For any metric space $(Z, \rho)$ and any $\gamma > 0$,
$$M(2\gamma, \rho) \le N(\gamma, \rho) \le M(\gamma, \rho).$$

The doubling dimension of $(Z,\rho)$ is the least $d$ such that, for all radii $\gamma > 0$, any $\gamma$-ball can be covered by at most $2^d$ $\gamma/2$-balls. That is, for any $\gamma > 0$ and any $z \in Z$, there is a $C \subseteq Z$ such that
- $|C| \le 2^d$, and
- $\{t \in Z : \rho(z,t) \le \gamma\} \subseteq \bigcup_{c \in C} \{t \in Z : \rho(c,t) \le \gamma/2\}$.

2.3 Probability

For a function $\psi$ and a probability distribution $D$, let $\mathbb{E}_{x \sim D}(\psi(x))$ be the expectation of $\psi$ w.r.t. $D$. We will shorten this to $\mathbb{E}_D(\psi)$, and if $u = (u_1,\dots,u_m) \in X^m$, then $\mathbb{E}_u(\psi)$ will be $\frac1m\sum_{i=1}^m \psi(u_i)$. We will use $\Pr_{x\sim D}$, $\Pr_D$, and $\Pr_u$ similarly.
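The greedy construction used repeatedly in the proofs below (keep a point if it is more than $\gamma$ away from everything kept so far) produces a set that is simultaneously a $\gamma$-packing and a $\gamma$-cover. A generic sketch for a finite point set and an arbitrary metric; the names are illustrative:

```python
def greedy_packing(points, dist, gamma):
    """Greedily build a gamma-packing T: every pair in T is > gamma apart.
    When the loop finishes, T is also a gamma-cover, since any point that
    was not added must be within gamma of some element of T."""
    T = []
    for p in points:
        if all(dist(p, q) > gamma for q in T):
            T.append(p)
    return T
```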
The key lemma limits the extent to which points that are separated from one another can crowd around some point in a metric space with limited doubling dimension.

Lemma 3 (see [13]) Suppose $\Phi = (Z, \rho)$ is a metric space with doubling dimension $d$ and $z \in Z$. For $\alpha, \beta > 0$, let $B(z, \alpha)$ consist of the elements $u \in Z$ such that $\rho(u, z) \leq \alpha$ (that is, the $\alpha$-ball centered at $z$). Then
$$\mathcal{M}(\beta, B(z, \alpha)) \leq \left(\frac{8\alpha}{\beta}\right)^d.$$
(In other words, any $\beta$-packing must have at most $(8\alpha/\beta)^d$ elements within distance $\alpha$ of $z$.)

Proof: Since $\Phi$ has doubling dimension $d$, the set $B(z, \alpha)$ can be covered by $2^d$ balls of radius $\alpha/2$. Each of these can be covered by $2^d$ balls of radius $\alpha/4$, and so on. Thus, $B(z, \alpha)$ can be covered by $2^{d\lceil \log_2(2\alpha/\beta)\rceil} \leq (4\alpha/\beta)^d$ balls of radius $\beta/2$. Applying Lemma 1 completes the proof.

Now we are ready to prove Theorem 2. The proof is an application of the peeling technique [1] (see [30]).

Proof of Theorem 2: Construct an $\epsilon/4$-packing $G$ greedily, by repeatedly adding an element of $F$ to $G$ for as long as this is possible. This packing is also an $\epsilon/4$-cover, since otherwise we could add another member to $G$. Consider the algorithm that outputs the element of $G$ with minimum error on the training set. Whatever the target, some element of $G$ has error at most $\epsilon/4$. Applying Chernoff bounds, $O\left(\frac{\log(1/\delta)}{\epsilon}\right)$ examples are sufficient that, with probability at least $1 - \delta/2$, this classifier is incorrect on at most a fraction $\epsilon/2$ of the training data. Thus, the training error of the hypothesis output by $A$ is at most $\epsilon/2$ with probability at least $1 - \delta/2$.

Choose an arbitrary function $f$, and let $S$ be the random training set resulting from drawing $m$ examples according to $D$ and classifying them using $f$. Define $\rho_S(g, h)$ to be the fraction of examples in $S$ on which $g$ and $h$ disagree. We have
$$\Pr(\exists g \in G,\ \rho_D(g, f) > \epsilon \text{ and } \rho_S(g, f) \leq \epsilon/2) \leq \sum_{k=0}^{\log(1/\epsilon)} \Pr(\exists g \in G,\ 2^k\epsilon < \rho_D(g, f) \leq 2^{k+1}\epsilon \text{ and } \rho_S(g, f) \leq \epsilon/2) \leq \sum_{k=0}^{\log(1/\epsilon)} 2^{(k+5)d}\, e^{-2^k \epsilon m/8}$$
by Lemma 3 and the standard Chernoff bound. Each of the following steps is a straightforward manipulation:
$$\sum_{k=0}^{\log(1/\epsilon)} 2^{(k+5)d} e^{-2^k \epsilon m/8} = 32^d \sum_{k=0}^{\log(1/\epsilon)} 2^{kd} e^{-2^k \epsilon m/8} \leq 32^d \sum_{k=0}^{\infty} 2^{kd} e^{-2^k \epsilon m/8} \leq \frac{64^d\, e^{-\epsilon m/8}}{1 - 2^d e^{-\epsilon m/8}}.$$
Since $m = O((d + \log(1/\delta))/\epsilon)$ is sufficient for $64^d e^{-\epsilon m/8} \leq \delta/2$ and $2^d e^{-\epsilon m/8} \leq 1/2$, this completes the proof.

4 A bound for consistent hypothesis finders

In this section we analyze algorithms that work by finding hypotheses with zero training error. This is one way to achieve computational efficiency, as is the case when $F$ consists of halfspaces. This analysis will use the notion of VC-dimension.

Definition 4 The VC-dimension of a set $F$ of $\{0,1\}$-valued functions with a common domain is the size of the largest set $x_1, \ldots, x_d$ of domain elements such that $\{(f(x_1), \ldots, f(x_d)) : f \in F\} = \{0, 1\}^d$.

The following lemma generalizes the Chernoff bound to hold uniformly over a class of random variables; it concentrates on a simplified consequence of the Chernoff bound that is useful when bounding the probability that an empirical estimate is much larger than the true expectation.

Lemma 5 (see [12, 24]) Suppose $F$ is a set of $\{0,1\}$-valued functions with a common domain $X$. Let $d$ be the VC-dimension of $F$. Let $D$ be a probability distribution over $X$. Choose $\gamma > 0$ and $K \geq 1$. Then if
$$m \geq \frac{c\left(d \log\frac{1}{\gamma} + \log\frac{1}{\delta}\right)}{K\gamma\log(1 + K)},$$
where $c$ is an absolute constant, then
$$\Pr_{u \sim D^m}\big(\exists f, g \in F,\ \Pr_D(f \neq g) \leq \gamma \text{ but } \Pr_u(f \neq g) > (1 + K)\gamma\big) \leq \delta.$$
Now we are ready for the main analysis of this section.
Theorem 6 Suppose the doubling dimension of $(F, \rho_D)$ and the VC-dimension of $F$ are both at most $d$. Any consistent hypothesis finder for $F$ PAC-learns $F$ from $O\left(\frac{1}{\epsilon}\left(d\sqrt{\log\frac{1}{\epsilon}} + \log\frac{1}{\delta}\right)\right)$ examples.

Proof: Assume without loss of generality that $\epsilon \leq 1/100$. Let $\gamma = \epsilon\exp\left(-\sqrt{\ln\frac{1}{\epsilon}}\right)$; since $\epsilon \leq 1/100$, we have $\gamma \leq \epsilon/8$.

Choose a target function $f$. For each $h \in F$, define $\ell_h : X \to \{0,1\}$ by $\ell_h(x) = 1 \Leftrightarrow h(x) \neq f(x)$. Let $\ell_F = \{\ell_h : h \in F\}$. Since $\ell_g(x) \neq \ell_h(x)$ exactly when $g(x) \neq h(x)$, the doubling dimension of $\ell_F$ is the same as the doubling dimension of $F$; the VC-dimension of $\ell_F$ is also known to be the same as the VC-dimension of $F$ (see [32]).

Construct a $\gamma$-packing $G$ greedily, by repeatedly adding an element of $\ell_F$ to $G$ for as long as this is possible. This packing is also a $\gamma$-cover. For each $g \in \ell_F$, let $\phi(g)$ be its nearest neighbor in $G$. Since $\gamma \leq \epsilon/8$, by the triangle inequality,
$$E_D(g) > \epsilon \text{ and } E_u(g) = 0 \ \Rightarrow\ E_D(\phi(g)) > 7\epsilon/8 \text{ and } E_u(g) = 0. \qquad (3)$$
The triangle inequality also yields
$$E_u(g) = 0 \ \Rightarrow\ \big(E_u(\phi(g)) \leq \epsilon/4 \ \text{or}\ \Pr_u(\phi(g) \neq g) > \epsilon/4\big).$$
Combining this with (3), we have
$$\Pr(\exists g \in \ell_F,\ E_D(g) > \epsilon \text{ but } E_u(g) = 0) \leq \Pr(\exists g \in \ell_F,\ E_D(\phi(g)) > 7\epsilon/8 \text{ but } E_u(\phi(g)) \leq \epsilon/4) + \Pr(\exists g \in \ell_F,\ \Pr_u(\phi(g) \neq g) > \epsilon/4). \qquad (4)$$
We have
$$\Pr(\exists g \in \ell_F,\ E_D(\phi(g)) > 7\epsilon/8 \text{ but } E_u(\phi(g)) \leq \epsilon/4) \leq \Pr(\exists g \in G,\ E_D(g) > 7\epsilon/8 \text{ but } E_u(g) \leq \epsilon/4) = \Pr(\exists g \in G,\ \rho_D(f, g) > 7\epsilon/8 \text{ but } \Pr_u(f \neq g) \leq \epsilon/4)$$
$$\leq \sum_{k=0}^{\log(8/(7\epsilon))} \Pr\big(\exists g \in G,\ 2^k(7\epsilon/8) < \rho_D(g, f) \leq 2^{k+1}(7\epsilon/8) \text{ and } \Pr_u(f \neq g) \leq \epsilon/4\big) \leq \sum_{k=0}^{\log(8/(7\epsilon))} \left(\frac{8 \cdot 2^{k+1}\epsilon}{\gamma}\right)^d e^{-c\,2^k \epsilon m},$$
where $c > 0$ is an absolute constant, by Lemma 3 and the standard Chernoff bound. Computing a geometric sum exactly as in the proof of Theorem 2, we have that $m = O(d/\epsilon)$ suffices for
$$\Pr(\exists g \in \ell_F,\ E_D(\phi(g)) > 7\epsilon/8 \text{ but } E_u(\phi(g)) \leq \epsilon/4) \leq c_1\left(\frac{\epsilon}{\gamma}\right)^d e^{-c_2 \epsilon m}$$
for absolute constants $c_1, c_2 > 0$. By plugging in the value of $\gamma$ and solving, we can see that
$$m = O\left(\frac{1}{\epsilon}\left(d\sqrt{\log\frac{1}{\epsilon}} + \log\frac{1}{\delta}\right)\right)$$
suffices for
$$\Pr(\exists g \in \ell_F,\ E_D(\phi(g)) > 7\epsilon/8 \text{ but } E_u(\phi(g)) \leq \epsilon/4) \leq \delta/2. \qquad (5)$$
Since $\Pr_D(\phi(g) \neq g) \leq \gamma \leq \epsilon/8$ for all $g \in \ell_F$, applying Lemma 5 with $K = \frac{\epsilon}{4\gamma} - 1$, we get that there is an absolute constant $c > 0$ such that
$$m \geq \frac{c\left(d\log\frac{1}{\gamma} + \log\frac{1}{\delta}\right)}{(\epsilon/4 - \gamma)\log\frac{\epsilon}{4\gamma}} \qquad (6)$$
also suffices for $\Pr(\exists g \in \ell_F,\ \Pr_u(\phi(g) \neq g) > \epsilon/4) \leq \delta/2$. Substituting the value of $\gamma$ into (6), it is sufficient that
$$m \geq \frac{c\left(d\left(\log\frac{1}{\epsilon} + \sqrt{\log\frac{1}{\epsilon}}\right) + \log\frac{1}{\delta}\right)}{(\epsilon/8)\left(\sqrt{\log\frac{1}{\epsilon}} - \log 4\right)}.$$
Putting this together with (5) and (4) completes the proof.
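Theorem 6 is algorithm-agnostic: it applies to any consistent hypothesis finder. For halfspaces through the origin on realizable data, the classical perceptron run until it makes no mistakes is one concrete such finder — our choice for illustration, not something the paper prescribes. A minimal sketch:

```python
import numpy as np

def consistent_halfspace(x, y, max_epochs=100_000):
    """Perceptron, run until every training example is classified correctly.
    In the realizable setting of Theorem 6 (labels given by some halfspace
    through the origin) this terminates with zero training error, so it is a
    consistent hypothesis finder for halfspaces."""
    w = np.zeros(x.shape[1])
    for _ in range(max_epochs):
        mistakes = 0
        for xi, yi in zip(x, y):
            if yi * (w @ xi) <= 0:
                w += yi * xi
                mistakes += 1
        if mistakes == 0:
            return w
    raise RuntimeError("no consistent homogeneous halfspace found")

rng = np.random.default_rng(1)
w_star = rng.standard_normal(5)
x = rng.standard_normal((500, 5))
y = np.sign(x @ w_star)
w = consistent_halfspace(x, y)
assert np.all(np.sign(x @ w) == y)   # zero training error, as the theorem requires
```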
5 Halfspaces and the uniform distribution

Proposition 7 If $U_n$ is the uniform distribution over the unit ball in $\mathbb{R}^n$, and $H_n$ is the set of halfspaces that go through the origin, then the doubling dimension of $(H_n, \rho_{U_n})$ is $O(n)$.

Proof: Choose $h \in H_n$ and $\alpha > 0$. We will show that the ball of radius $\alpha$ centered at $h$ can be covered by $2^{O(n)}$ balls of radius $\alpha/2$. Suppose $U_{H_n}$ is the probability distribution over $H_n$ obtained by choosing a normal vector $w$ uniformly from the unit ball, and outputting $\{x : w \cdot x \geq 0\}$. The argument will be a "volume argument" using $U_{H_n}$. It is known (see Lemma 4 of [25]) that $\Pr_{g \sim U_{H_n}}(\rho_{U_n}(g, h) \leq \alpha/4) \geq (c_1\alpha)^{n-1}$, where $c_1 > 0$ is an absolute constant independent of $\alpha$ and $n$. Furthermore, $\Pr_{g \sim U_{H_n}}(\rho_{U_n}(g, h) \leq 5\alpha/4) \leq (c_2\alpha)^{n-1}$, where $c_2 > 0$ is another absolute constant.

Suppose we arbitrarily choose $g_1, g_2, \ldots \in H_n$ that are at a distance at most $\alpha$ from $h$, but $\alpha/2$ far from one another. By the triangle inequality, the $\alpha/4$-balls centered at $g_1, g_2, \ldots$ are disjoint. Thus, the probability that a random element of $H_n$ is in a ball of radius $\alpha/4$ centered at one of $g_1, \ldots, g_N$ is at least $N(c_1\alpha)^{n-1}$. On the other hand, since each of $g_1, \ldots, g_N$ has distance at most $\alpha$ from $h$, any element of an $\alpha/4$-ball centered at one of them is at most $\alpha + \alpha/4$ far from $h$. Thus, the union of the $\alpha/4$-balls centered at $g_1, \ldots, g_N$ is contained in the $5\alpha/4$-ball centered at $h$. Thus $N(c_1\alpha)^{n-1} \leq (c_2\alpha)^{n-1}$, which implies $N \leq (c_2/c_1)^{n-1} = 2^{O(n)}$, completing the proof.

6 Separation

Theorem 8 For all $\gamma \in [0, 1/2]$ and positive integers $d$ there is a set $F$ of classifiers and a probability distribution $D$ over their common domain with the following properties:

- the VC-dimension of $F$ is at most $d$,
- $|F| \geq \frac{1}{2}\left\lfloor\left(\frac{1}{2e\gamma}\right)^{d/2}\right\rfloor$, and
- for each pair of distinct $f, g \in F$, $\gamma \leq \rho_D(f, g) \leq 2\gamma$.

This proof uses the probabilistic method. We begin with the following lemma.

Lemma 9 Choose positive integers $s$ and $d$. Suppose $A$ is chosen uniformly at random from among the subsets of $\{1, \ldots, s\}$ of size $d$. Then, for any $B > 1$,
$$\Pr\big(|A \cap \{1, \ldots, d\}| \geq (1 + B)\,E(|A \cap \{1, \ldots, d\}|)\big) \leq \left(\frac{e}{1 + B}\right)^{(1+B)\,E(|A \cap \{1, \ldots, d\}|)}.$$
Proof: in Appendix A.

Now we're ready for the proof of Theorem 8, which uses the deletion technique (see [2]).

Proof (of Theorem 8): Set the domain $X$ to be $\{1, \ldots, s\}$, where $s = \lceil d/\gamma \rceil$. Let $N = \left\lfloor\left(\frac{s}{2ed}\right)^{d/2}\right\rfloor$. Suppose $f_1, \ldots, f_N$ are chosen independently, uniformly at random from among the classifiers that evaluate to $1$ on exactly $d$ elements of $X$. For any distinct $i, j$, suppose $f_i^{-1}(1)$ is fixed, and we think of the members of $f_j^{-1}(1)$ as being chosen one at a time. The probability that any given element of $f_j^{-1}(1)$ is also in $f_i^{-1}(1)$ is $d/s$. Applying the linearity of expectation, and averaging over the different possibilities for $f_i^{-1}(1)$, we get
$$E\big(|f_i^{-1}(1) \cap f_j^{-1}(1)|\big) = \frac{d^2}{s}.$$
Applying Lemma 9,
$$\Pr\big(|f_i^{-1}(1) \cap f_j^{-1}(1)| \geq d/2\big) \leq \left(\frac{2ed}{s}\right)^{d/2}.$$
Thus, the expected number of pairs $i, j$ such that $|f_i^{-1}(1) \cap f_j^{-1}(1)| \geq d/2$ is at most $(N^2/2)\left(\frac{2ed}{s}\right)^{d/2}$. This implies that there exist $f_1, \ldots, f_N$ such that
$$\left|\left\{\{i, j\} : |f_i^{-1}(1) \cap f_j^{-1}(1)| \geq d/2\right\}\right| \leq (N^2/2)\left(\frac{2ed}{s}\right)^{d/2}.$$
If we delete one element from each such pair, and form $G$ from what remains, then each pair $g, h$ of elements in $G$ satisfies
$$|g^{-1}(1) \cap h^{-1}(1)| < d/2. \qquad (7)$$
If $D$ is the uniform distribution over $\{1, \ldots, s\}$, then (7) implies $\rho_D(g, h) > \gamma$. The number of elements of $G$ is at least $N - (N^2/2)\left(\frac{2ed}{s}\right)^{d/2} \geq N/2$. Since each $g \in G$ has $|g^{-1}(1)| = d$, no function in $G$ evaluates to $1$ on every element of any set of $d + 1$ elements of $X$. Thus, the VC-dimension of $G$ is at most $d$.

Theorem 8 implies that there is no bound on the doubling dimension of $(G, \rho_D)$ in terms of the VC-dimension of $G$: for any constraint on the VC-dimension, a set $G$ satisfying the constraint can have arbitrarily large doubling dimension, by setting the value of $\gamma$ in Theorem 8 arbitrarily small.
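The deletion argument above is easy to simulate. In the sketch below (our illustration; the parameters $\gamma = 0.02$, $d = 6$, $N = 60$ are arbitrary choices, not from the paper), random $d$-element supports play the role of the $f_i$; one member of every heavily-overlapping pair is dropped, and the surviving classifiers end up pairwise at distance roughly in $(\gamma, 2\gamma]$ under the uniform distribution, as the theorem promises.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(2)
gamma, d = 0.02, 6
s = int(np.ceil(d / gamma))                 # domain {0, ..., s-1}
N = 60                                      # number of random classifiers drawn

# each classifier is the indicator of a uniformly random d-element support
supports = [frozenset(rng.choice(s, size=d, replace=False)) for _ in range(N)]

# deletion step: drop one member of every pair with too-large support overlap
dropped = set()
for i, j in combinations(range(N), 2):
    if i not in dropped and j not in dropped and len(supports[i] & supports[j]) >= d / 2:
        dropped.add(j)
G = [supports[i] for i in range(N) if i not in dropped]

# under uniform D, rho_D(f, g) = |symmetric difference of the supports| / s
dists = [len(a ^ b) / s for a, b in combinations(G, 2)]
print(len(G), min(dists), max(dists), 2 * gamma)   # distances fall in (gamma, 2*gamma]
```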
Acknowledgement We thank Gábor Lugosi and Tong Zhang for their help.

References

[1] K. Alexander. Rates of growth for weighted empirical processes. In Proc. of Berkeley Conference in Honor of Jerzy Neyman and Jack Kiefer, volume 2, pages 475-493, 1985.
[2] N. Alon, J. H. Spencer, and P. Erdős. The Probabilistic Method. Wiley, 1992.
[3] M. Anthony and P. L. Bartlett. Neural Network Learning: Theoretical Foundations. Cambridge University Press, 1999.
[4] P. Assouad. Plongements lipschitziens dans R^n. Bull. Soc. Math. France, 111(4):429-448, 1983.
[5] P. L. Bartlett, O. Bousquet, and S. Mendelson. Local Rademacher complexities. Annals of Statistics, 33(4):1497-1537, 2005.
[6] A. Beygelzimer, S. Kakade, and J. Langford. Cover trees for nearest neighbor. ICML, 2006.
[7] H. T. H. Chan, A. Gupta, B. M. Maggs, and S. Zhou. On hierarchical routing in doubling metrics. SODA, 2005.
[8] L. Devroye, L. Györfi, and G. Lugosi. A Probabilistic Theory of Pattern Recognition. Springer, 1996.
[9] D. Dubhashi and D. Ranjan. Balls and bins: A study in negative dependence. Random Structures & Algorithms, 13(2):99-124, Sept 1998.
[10] D. Dubhashi, V. Priebe, and D. Ranjan. Negative dependence through the FKG inequality. Technical Report RS-96-27, BRICS, 1996.
[11] R. M. Dudley. Central limit theorems for empirical measures. Annals of Probability, 6(6):899-929, 1978.
[12] R. M. Dudley. A course on empirical processes. Lecture Notes in Mathematics, 1097:2-142, 1984.
[13] A. Gupta, R. Krauthgamer, and J. R. Lee. Bounded geometries, fractals, and low-distortion embeddings. FOCS, 2003.
[14] S. Har-Peled and M. Mendel. Fast construction of nets in low dimensional metrics, and their applications. SICOMP, 35(5):1148-1184, 2006.
[15] D. Haussler. Sphere packing numbers for subsets of the Boolean n-cube with bounded Vapnik-Chervonenkis dimension. Journal of Combinatorial Theory, Series A, 69(2):217-232, 1995.
[16] K. Joag-Dev and F. Proschan. Negative association of random variables, with applications. The Annals of Statistics, 11(1):286-295, 1983.
[17] J. Kleinberg, A. Slivkins, and T. Wexler. Triangulation and embedding using small sets of beacons. FOCS, 2004.
[18] A. N. Kolmogorov and V. M. Tihomirov. ε-entropy and ε-capacity of sets in functional spaces. American Mathematical Society Translations (Ser. 2), 17:277-364, 1961.
[19] J. Komlós, J. Pach, and G. Woeginger. Almost tight bounds on epsilon-nets. Discrete and Computational Geometry, 7:163-173, 1992.
[20] R. Krauthgamer and J. R. Lee. The black-box complexity of nearest neighbor search. ICALP, 2004.
[21] R. Krauthgamer and J. R. Lee. Navigating nets: simple algorithms for proximity search. SODA, 2004.
[22] F. Kuhn, T. Moscibroda, and R. Wattenhofer. On the locality of bounded growth. PODC, 2005.
[23] P. M. Long. On the sample complexity of PAC learning halfspaces against the uniform distribution. IEEE Transactions on Neural Networks, 6(6):1556-1559, 1995.
[24] P. M. Long. Using the pseudo-dimension to analyze approximation algorithms for integer programming. Proceedings of the Seventh International Workshop on Algorithms and Data Structures, 2001.
[25] P. M. Long. An upper bound on the sample complexity of PAC learning halfspaces with respect to the uniform distribution. Information Processing Letters, 87(5):229-234, 2003.
[26] S. Mendelson. Estimating the performance of kernel classes. Journal of Machine Learning Research, 4:759-771, 2003.
[27] R. Motwani and P. Raghavan. Randomized Algorithms. Cambridge University Press, 1995.
[28] A. Slivkins. Distance estimation and object location via rings of neighbors. PODC, 2005.
[29] K. Talwar. Bypassing the embedding: Approximation schemes and compact representations for low dimensional metrics. STOC, 2004.
[30] S. van de Geer. Empirical Processes in M-estimation. Cambridge Series in Statistical and Probabilistic Methods, 2000.
[31] A. van der Vaart and J. A. Wellner. Weak Convergence and Empirical Processes With Applications to Statistics. Springer, 1996.
[32] V. N. Vapnik. Estimation of Dependences Based on Empirical Data. Springer Verlag, 1982.
[33] V. N. Vapnik. Statistical Learning Theory. Wiley, New York, 1998.
[34] T. Zhang. Information theoretical upper and lower bounds for statistical estimation. IEEE Transactions on Information Theory, 2006. To appear.
A Proof of Lemma 9

Definition 10 ([16]) A collection $X_1, \ldots, X_n$ of random variables is negatively associated if for every disjoint pair $I, J \subseteq \{1, \ldots, n\}$ of index sets, and for every pair $f : \mathbb{R}^{|I|} \to \mathbb{R}$ and $g : \mathbb{R}^{|J|} \to \mathbb{R}$ of non-decreasing functions, we have
$$E\big(f(X_i, i \in I)\,g(X_j, j \in J)\big) \leq E\big(f(X_i, i \in I)\big)\,E\big(g(X_j, j \in J)\big).$$

Lemma 11 ([10]) If $A$ is chosen uniformly at random from among the subsets of $\{1, \ldots, s\}$ with exactly $d$ elements, and $X_i = 1$ if $i \in A$ and $0$ otherwise, then $X_1, \ldots, X_s$ are negatively associated.

Lemma 12 ([9]) Collections $X_1, \ldots, X_n$ of negatively associated random variables satisfy Chernoff bounds: for any $\lambda > 0$,
$$E\left(\exp\left(\lambda\sum_{i=1}^n X_i\right)\right) \leq \prod_{i=1}^n E\big(\exp(\lambda X_i)\big).$$

Proof of Lemma 9: Let $X_i \in \{0, 1\}$ indicate whether $i \in A$. By Lemma 11, $X_1, \ldots, X_d$ are negatively associated. We have $|A \cap \{1, \ldots, d\}| = \sum_{i=1}^d X_i$. Combining Lemma 12 with a standard Chernoff-Hoeffding bound (see Theorem 4.1 of [27]) completes the proof.
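Lemma 9's tail bound is also simple to check numerically. The sketch below is ours, with arbitrary parameters: it samples many uniform $d$-subsets of $\{0, \ldots, s-1\}$ and compares the empirical tail of $|A \cap \{0, \ldots, d-1\}|$ against the bound $(e/(1+B))^{(1+B)d^2/s}$ for a few values of $B$.

```python
import numpy as np

rng = np.random.default_rng(3)
s, d = 100, 20
mu = d * d / s                                   # E|A intersect {1,...,d}| = d^2 / s
trials = 100_000

# |A intersect {0,...,d-1}| for `trials` uniform d-subsets A of {0,...,s-1}
counts = np.array([np.sum(rng.permutation(s)[:d] < d) for _ in range(trials)])

for B in (2.0, 3.0, 4.0):                        # Lemma 9 requires B > 1
    t = (1 + B) * mu
    emp = np.mean(counts >= t)
    bound = (np.e / (1 + B)) ** t
    print(B, emp, bound)                          # the empirical tail stays below the bound
```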
Fundamental Limitations of Spectral Clustering

Boaz Nadler (*), Meirav Galun
Department of Applied Mathematics and Computer Science
Weizmann Institute of Science, Rehovot, Israel 76100
boaz.nadler,[email protected]
(*) Corresponding author. www.wisdom.weizmann.ac.il/~nadler

Abstract

Spectral clustering methods are common graph-based approaches to clustering of data. Spectral clustering algorithms typically start from local information encoded in a weighted graph on the data and cluster according to the global eigenvectors of the corresponding (normalized) similarity matrix. One contribution of this paper is to present fundamental limitations of this general local to global approach. We show that based only on local information, the normalized cut functional is not a suitable measure for the quality of clustering. Further, even with a suitable similarity measure, we show that the first few eigenvectors of such adjacency matrices cannot successfully cluster datasets that contain structures at different scales of size and density. Based on these findings, a second contribution of this paper is a novel diffusion based measure to evaluate the coherence of individual clusters. Our measure can be used in conjunction with any bottom-up graph-based clustering method, it is scale-free and can determine coherent clusters at all scales. We present both synthetic examples and real image segmentation problems where various spectral clustering algorithms fail. In contrast, using this coherence measure finds the expected clusters at all scales.

Keywords: Clustering, kernels, learning theory.

1 Introduction

Spectral clustering methods are common graph-based approaches to (unsupervised) clustering of data. Given a dataset of $n$ points $\{x_i\}_{i=1}^n \subset \mathbb{R}^p$, these methods first construct a weighted graph $G = (V, W)$, where the $n$ points are the set of nodes $V$ and the weighted edges $W_{i,j}$ are computed by some local symmetric and non-negative similarity measure. A common choice is a Gaussian kernel with width $\sigma$, where $\|\cdot\|$ denotes the standard Euclidean metric in $\mathbb{R}^p$:
$$W_{i,j} = \exp\left(-\frac{\|x_i - x_j\|^2}{2\sigma^2}\right) \qquad (1)$$
In this framework, clustering is translated into a graph partitioning problem. Two main spectral approaches for graph partitioning have been suggested. The first is to construct a normalized cut (conductance) functional to measure the quality of a partition of the graph nodes $V$ into $k$ clusters [1, 2]. Specifically, for a 2-cluster partition $V = S \cup (V \setminus S)$, minimizing the following functional is suggested in [1]:
$$\Phi(S) = \left(\sum_{i \in S,\ j \in V \setminus S} W_{i,j}\right)\left(\frac{1}{a(S)} + \frac{1}{a(V \setminus S)}\right) \qquad (2)$$
where $a(S) = \sum_{i \in S,\ j \in V} W_{i,j}$. While extensions of this functional to more than two clusters are possible, both works suggest a recursive top-down approach where additional clusters are found by minimizing the same clustering functional on each of the two subgraphs. In [3] the authors also propose to augment this top-down approach by a bottom-up aggregation of the sub-clusters. As shown in [1], minimization of (2) is equivalent to maximizing $(y^T W y)/(y^T D y)$, where $D$ is a diagonal $n \times n$ matrix with $D_{i,i} = \sum_j W_{i,j}$, and $y$ is a vector of length $n$ that satisfies the constraints $y^T D \mathbf{1} = 0$ and $y_i \in \{1, -b\}$ with $b$ some constant in $(0, 1)$. Since this maximization problem is NP-hard, both works relax it by allowing the vector $y$ to take on real values. This approximation leads to clustering according to the eigenvector with second largest eigenvalue of the normalized graph Laplacian, $Wy = \lambda Dy$.
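The relaxed normalized cut just described reduces to a single eigenvector computation. A minimal sketch (ours, numpy only): form $W$ from (1), solve $Wy = \lambda Dy$ through the symmetric matrix $D^{-1/2} W D^{-1/2}$, and split by the sign of the second generalized eigenvector — the zero threshold is one common heuristic; sweeping the threshold to optimize the cut value is another.

```python
import numpy as np

def spectral_bipartition(x, sigma):
    """Relaxed normalized cut: threshold the second eigenvector of W y = lambda D y."""
    sq = np.sum((x[:, None, :] - x[None, :, :]) ** 2, axis=-1)
    w = np.exp(-sq / (2 * sigma**2))                    # eq. (1)
    d = w.sum(axis=1)
    d_isqrt = 1.0 / np.sqrt(d)
    m_sym = d_isqrt[:, None] * w * d_isqrt[None, :]     # D^{-1/2} W D^{-1/2}
    vals, vecs = np.linalg.eigh(m_sym)                  # eigenvalues in ascending order
    y = d_isqrt * vecs[:, -2]                           # second generalized eigenvector
    return y > 0

# toy usage: two well-separated blobs are split cleanly
rng = np.random.default_rng(0)
x = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(4, 0.3, (50, 2))])
labels = spectral_bipartition(x, sigma=1.0)
print(labels[:50].mean(), labels[50:].mean())           # ~0 and ~1 (or swapped)
```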
We note that there are also graph partitioning algorithms based on a non-normalized functional, leading to clustering according to the second eigenvector of the standard graph Laplacian matrix $D - W$, also known as the Fiedler vector [4]. A second class of spectral clustering algorithms does not recursively employ a single eigenvector, but rather proposes to map the original data into the first $k$ eigenvectors of the normalized adjacency matrix (or a matrix similar to it) and then apply a standard clustering algorithm such as k-means on these new coordinates; see for example [5]-[11] and references therein. In recent years, much theoretical work was done to justify this approach. Belkin and Niyogi [8] showed that for data uniformly sampled from a manifold, these eigenvectors approximate the eigenfunctions of the Laplace-Beltrami operator, which give an optimal low dimensional embedding under a certain criterion. Optimality of these eigenvectors, including rotations, was derived in [9] for multiclass spectral clustering. Probabilistic interpretations, based on the fact that these eigenvectors correspond to a random walk on the graph, were also given by several authors [11]-[15]. Limitations of spectral clustering in the presence of background noise and multiscale data were noted in [10, 16], with suggestions to replace the uniform $\sigma^2$ in eq. (1) with a location dependent scale $\sigma(x_i)\sigma(x_j)$.

The aim of this paper is to present fundamental limitations of spectral clustering methods, and to propose a novel diffusion based coherence measure to evaluate the internal consistency of individual clusters. First, in Section 2 we show that based on the isotropic local similarity measure (1), the NP-hard normalized cut criterion may not be a suitable global functional for data clustering. We construct a simple example with only two clusters, where we prove that the minimum of this functional does not correspond to the natural expected partitioning of the data into its two clusters. Further, in Section 3 we show that spectral clustering suffers from additional limitations, even with a suitable similarity measure. Our theoretical analysis is based on the probabilistic interpretation of spectral clustering as a random walk on the graph, and on the intimate connection between the corresponding eigenvalues and eigenvectors and the characteristic relaxation times and processes of this random walk. We show that, similar to Fourier analysis, spectral clustering methods are global in nature. Therefore, even with a location dependent $\sigma(x)$ as in [10], these methods typically fail to simultaneously identify clusters at different scales. Based on this analysis, we present in Section 4 simple examples where spectral clustering fails. We conclude with Section 5, where we propose a novel diffusion based coherence measure. This quantity measures the coherence of a set of points as all belonging to a single cluster, by comparing the relaxation times on the set and on its suggested partition. Its main use is as a decision tool whether to divide a set of points into two subsets or leave it intact as a single coherent cluster. As such, it can be used in conjunction with either top-down or bottom-up clustering approaches and may overcome some of their limitations. We show how use of this measure correctly clusters the examples of Section 4, where spectral clustering fails.

2 Unsuitability of the normalized cut functional with local information

As reported in the literature, clustering by approximate minimization of the functional (2) performs well in many cases. However, a theoretical question still remains: under what circumstances is this functional indeed a good measure for the quality of clustering? Recall that the basic goal of clustering is to group together highly similar points while setting apart dissimilar ones. Yet this similarity measure is typically based only on local information as in (1). Therefore, the question can be rephrased: is local information sufficient for global clustering? While this local to global concept is indeed appealing, we show that it does not work in general. We construct a simple example where local information is insufficient for correct clustering according to the functional (2). Consider data sampled from a mixture of two densities in two dimensions:
$$p(x) = p(x_1, x_2) = \frac{1}{2}\big[p_{L,\varepsilon}(x_1, x_2) + p_G(x_1, x_2)\big] \qquad (3)$$
2 Unsuitability of normalized cut functional with local information As reported in the literature, clustering by approximate minimization of the functional (2) performs well in many cases. However, a theoretical question still remains: Under what circumstances is this functional indeed a good measure for the quality of clustering ? Recall that the basic goal of clustering is to group together highly similar points while setting apart dissimilar ones. Yet this similarity measure is typically based only on local information as in (1). Therefore, the question can be rephrased - is local information sufficient for global clustering ? While this local to global concept is indeed appealing, we show that it does not work in general. We construct a simple example where local information is insufficient for correct clustering according to the functional (2). Consider data sampled from a mixture of two densities in two dimensions 1 p(x) = p(x1 , x2 ) = [pL,? (x1 , x2 ) + pG (x1 , x2 )] (3) 2 Normalized cut ? = 0.05 Original Data 3 3 2 2 1 1 0 0 ?1 ?1 ?2 ?2 2 4 (a) 6 2 4 (b) 6 Figure 1: A dataset with two clusters and result of normalized cut algorithm [2]. Other spectral clustering algorithms give similar results. where pL,? denotes uniform density in a rectangular region ? = {(x1 , x2 ) | 0 < x1 < L, ?? < x2 < 0} of length L and width ?, and pG denotes a Gaussian density centered at (?1 , ?2 ) with diagonal covariance matrix ?2 I. In fig. 1(a) a plot of n = 1400 points from this density is shown with L = 8, ? = 0.05  L, (?1 , ?2 ) = (2, 0.2) and ? = 0.1. Clearly, the two clusters are the Gaussian ball and the rectangular strip ?. However, as shown in fig. 1(b), clustering based on the second eigenvector of the normalized graph Laplacian with weights Wi,j given by (1) partitions the points somewhere along the long strip instead of between the strip and the Gaussian ball. We now show that this result is not due to the approximation of the NP-hard problem but rather a feature of the original functional (2). Intuitively, the failure of the normalized cut criterion is clear. Since the overlap between the Gaussian ball and the rectangular strip is larger than the width of the strip, a cut that separates them has a higher penalty than a cut somewhere along the thin strip. To show this mathematically, we consider the penalty of the cut due to the numerator in (2) in the limit of a large number of points n ? ?. In this population setting, as n ? ? each point has an infinite number of neighbors, so we can consider the limit ? ? 0. Upon normalizing the similarity measure (1) by 1/2?? 2 , the numerator is given by Z Z 2 2 1 X X 1 Cut(?1 ) = lim p(x)p(y)e?kx?y k /2? dxdy (4) Wi,j = 2 n?? |V | 2?? ?1 ?2 x??1 y ??2 where ?1 , ?2 ? R2 are the regions of the two clusters. For ?  L, a vertical cut of the strip at location x = x1 far away from the ball (|x1 ? x0 |  ?) gives Z ?Z 0 1 1 ?(x?x0 )2 /2?2 1 e dxdx0 = (5) Cut(x > x1 ) ' lim 2 2?? 2 2 ??0 0 L 2?L ?? A similar calculation shows that for a horizontal cut at y = 0, 2 2 1 e??2 /2? ? Cut(y > 0) ' L 8?? (6) Finally, note that for a vertical cut far from the rectangle boundary ??, the denominators of the two cuts in eq. (2) have the same order of magnitude. Therefore, if L  ? and ?2 /? = O(1) the horizontal cut between the ball and the strip has larger normalized penalty than a vertical cut of the strip. This analysis explains the numerical results in fig. 1(b). 
Other spectral clustering algorithms that use two eigenvectors, including those that take a local scale into account, also fail to separate the ball from the strip and yield results similar to fig. 1(b). A possible solution to this problem is to introduce multiscale anisotropic features that capture the geometry and dimensionality of the data in the similarity metric. In the context of image and texture segmentation, the need for multiscale features is well known [17, 18, 19]. Our example highlights its importance in general data clustering.

3 Additional Limitations of Spectral Clustering Methods

An additional problem with recursive bi-partitioning is the need for a saliency criterion when required to return $k > 2$ clusters. Consider, for example, a dataset which contains $k = 3$ clusters. After the first cut, the recursive algorithm should decide which subgraph to further partition and which to leave intact. A common approach that avoids this decision problem is to directly find three clusters by using the first three eigenvectors of $Wv = \lambda Dv$. Specifically, denote by $\{\lambda_j, v_j\}$ the set of eigenvectors of $Wv = \lambda Dv$ with eigenvalues sorted in decreasing order, and denote by $v_j(x_i)$ the $i$-th entry (corresponding to the point $x_i$) of the $j$-th eigenvector $v_j$. Many algorithms propose to map each point $x_i \in \mathbb{R}^p$ into $\Phi(x_i) = (v_1(x_i), \ldots, v_k(x_i)) \in \mathbb{R}^k$, and apply simple clustering algorithms to the points $\Phi(x_i)$ [8, 9, 12]. Some works [6, 10] use the eigenvectors $\tilde{v}_j$ of $D^{-1/2} W D^{-1/2}$ instead, related to the ones above via $\tilde{v}_j = D^{1/2} v_j$.

We now show that spectral clustering that uses the first $k$ eigenvectors for finding $k$ clusters also suffers from fundamental limitations. Our starting point is the observation that the $v_j$ are also eigenvectors of the Markov matrix $M = D^{-1} W$ [13, 12]. Assuming the graph is connected, the largest eigenvalue is $\lambda_1 = 1$, with $|\lambda_j| < 1$ for $j > 1$. Therefore, regardless of the initial condition, the random walk converges to the unique equilibrium distribution $\pi_s$, given by $\pi_s(i) = D_{i,i} / \sum_j D_{j,j}$. Moreover, as shown in [13], the Euclidean distance between points mapped to these eigenvectors is equal to a so-called "diffusion distance" between points on the graph,
$$\sum_j \lambda_j^{2t}\big(v_j(x) - v_j(y)\big)^2 = \big\|p(z, t \mid x) - p(z, t \mid y)\big\|^2_{L^2(1/\pi_s)} \qquad (7)$$
where $p(z, t \mid x)$ is the probability distribution of a random walk at time $t$ given that it started at $x$, $\pi_s$ is the equilibrium distribution, and $\|\cdot\|_{L^2(w)}$ is the weighted $L^2$ norm with weight $w(z)$. Therefore, the eigenvalues and eigenvectors $\{\lambda_j, v_j\}$ for $j > 1$ capture the characteristic relaxation times and processes of the random walk on the graph towards equilibrium.

Since most methods use the first few eigenvector coordinates for clustering, it is instructive to study the properties of these relaxation times and of the corresponding eigenvectors. We perform this analysis under the following statistical model: we assume that the points $\{x_i\}$ are random samples from a smooth density $p(x)$ in a smooth domain $\Omega \subseteq \mathbb{R}^p$. We write the density in Boltzmann form, $p(x) = e^{-U(x)/2}$, and call $U(x)$ the potential. As described in [13], in the limit $n \to \infty$, $\sigma \to 0$, the random walk with transition matrix $M$ on the graph of points sampled from this density converges to a stochastic differential equation (SDE)
$$\dot{x}(t) = -\nabla U(x) + \sqrt{2}\,\dot{w}(t) \qquad (8)$$
where $w(t)$ is standard white noise (Brownian motion), and the right eigenvectors of the matrix $M$ converge to the eigenfunctions of the following Fokker-Planck operator,
$$\mathcal{L}\psi(x) \equiv \Delta\psi - \nabla\psi \cdot \nabla U = -\mu\,\psi(x) \qquad (9)$$
defined for $x \in \Omega$ with reflecting boundary conditions on $\partial\Omega$. This operator is non-positive, and its eigenvalues satisfy $\mu_1 = 0 < \mu_2 \leq \mu_3 \leq \ldots$. The eigenvalues $\mu_j$ of $\mathcal{L}$ and the eigenvalues $\lambda_j$ of $M$ are related by $\mu_j = \lim_{n\to\infty,\,\sigma\to 0}(1 - \lambda_j)/\sigma^2$. Therefore, the top eigenvalues of $M$ correspond to the smallest eigenvalues of $\mathcal{L}$. Eq. (7) shows that these eigenfunctions and eigenvalues capture the leading characteristic relaxation processes and time scales of the SDE (8). These have been studied extensively in the literature [20], and can give insight into the success and limitations of spectral clustering [13]. For example, if $\Omega = \mathbb{R}^p$ and the density $p(x)$ consists of $k$ highly separated Gaussian clusters of roughly equal size ($k$ clusters), then there are exactly $k$ eigenvalues very close or equal to zero, and their corresponding eigenfunctions are approximately piecewise constant on each of these clusters. Therefore, in this setting, spectral clustering with $k$ eigenvectors works very well.

To understand the limitations of spectral clustering, we now explicitly analyze situations with clusters at different scales of size and density. For example, consider a density with three isotropic Gaussian clusters: one large cloud (cluster #1) and two smaller clouds (clusters 2 and 3). These correspond to one wide well and two narrow wells in the potential $U(x)$. A representative 2-D dataset drawn from such a density is shown in fig. 2 (top left). The SDE (8) with this potential has a few characteristic time scales which determine the structure of its leading eigenfunctions. The slowest one is the mean passage time between cluster 1 and clusters 2 or 3, approximately given by [20]
$$\bar{\tau}_{1,2} = \frac{2\pi}{\sqrt{|U''_{\min}\,U''_{\max}|}}\;e^{\,U(x_{\max}) - U(x_{\min})} \qquad (10)$$
where $x_{\min}$ is the bottom of the deepest well, $x_{\max}$ is the saddle point of $U(x)$, and $U''_{\min}, U''_{\max}$ are the second derivatives at these points. Eq. (10), also known as the Arrhenius or Kramers formula of chemical reaction theory, shows that the mean first passage time is exponential in the barrier height [20]. The corresponding eigenfunction $\psi_2$ is approximately piecewise constant inside the large well and inside the two smaller wells, with a sharp transition near the saddle point $x_{\max}$. This eigenfunction easily separates cluster 1 from clusters 2 and 3 (see top center panel in fig. 2). A second characteristic time is $\bar{\tau}_{2,3}$, the mean first passage time between clusters 2 and 3, also given by a formula similar to (10). If the potential barrier between these two wells is much smaller than between wells 1 and 2, then $\bar{\tau}_{2,3} \ll \bar{\tau}_{1,2}$. A third characteristic time is the equilibration time inside cluster 1. To compute it, we consider a diffusion process only inside cluster 1, e.g. with an isotropic parabolic potential of the form $U(x) = U(x_1) + U''_1\,\|x - x_1\|^2/2$, where $x_1$ is the bottom of the well. In 1-D the eigenvalues and eigenfunctions are given by $\mu_k = (k - 1)U''_1$, with $\psi_k(x)$ a polynomial of degree $k - 1$. The corresponding intra-well relaxation times are given by $\tau_k^R = 1/\mu_{k+1}$ ($k \geq 1$). The key point in our analysis is that if the equilibration time inside the wide well is slower than the mean first passage time between the two smaller wells, $\tau_1^R > \bar{\tau}_{2,3}$, then the third eigenfunction of $\mathcal{L}$ captures the relaxation process inside the large well and is approximately constant inside the two smaller wells. This eigenfunction cannot separate between clusters 2 and 3.
?U = ???(x) (9) defined for x ? ? with reflecting boundary conditions on ??. This operator is non-positive and its eigenvalues are ?1 = 0 < ?2 ? ?3 ? . . .. The eigenvalues ??j of L and the eigenvalues ?j of M are related by ?j = limn??,??0 (1 ? ?j )/?. Therefore the top eigenvalues of M correspond to the smallest of L. Eq. (7) shows that these eigenfunctions and eigenvalues capture the leading characteristic relaxation processes and time scales of the SDE (8). These have been studied extensively in the literature [20], and can give insight into the success and limitations of spectral clustering [13]. For example, if ? = Rp and the density p(x) consists of k highly separated Gaussian clusters of roughly equal size (k clusters), then there are exactly k eigenvalues very close or equal to zero, and their corresponding eigenfunctions are approximately piecewise constant in each of these clusters. Therefore, in this setting spectral clustering with k eigenvectors works very well. To understand the limitations of spectral clustering, we now explicitly analyze situations with clusters at different scales of size and density. For example, consider a density with three isotropic Gaussian clusters: one large cloud (cluster #1) and two smaller clouds (clusters 2 and 3). These correspond to one wide well and two narrow wells in the potential U (x). A representative 2-D dataset drawn from such a density is shown in fig. 2 (top left). The SDE (8) with this potential has a few characteristic time scales which determine the structure of its leading eigenfunctions. The slowest one is the mean passage time between cluster 1 and clusters 2 or 3, approximately given by [20] 2? ?1,2 = p 00 e(U (xmax )?U (xmin )) 00 |Umin Umax | (10) 00 00 where xmin is the bottom of the deepest well, xmax is the saddle point of U (x), and Umin , Umax are the second derivatives at these points. Eq. (10), also known as Arrhenius or Kramers formula of chemical reaction theory, shows that the mean first passage time is exponential in the barrier height [20]. The corresponding eigenfunction ?2 is approximately piecewise constant inside the large well and inside the two smaller wells with a sharp transition near the saddle point xmax . This eigenfunction easily separates cluster 1 from clusters 2 and 3 (see top center panel in fig. 2). A second characteristic time is ?2,3 , the mean first passage time between clusters 2 and 3, also given by a formula similar to (10). If the potential barrier between these two wells is much smaller than between wells 1 and 2, then ?2,3  ?1,2 . A third characteristic time is the equilibration time inside cluster 1. To compute it we consider a diffusion process only inside cluster 1, e.g. with an isotropic parabolic potential of the form U (x) = U (x1 )+U100 kx?x1 k2 /2, where x1 is the bottom of the well. In 1-D the eigenvalues and eigenfunctions are given by ?k = (k ? 1)U100 , with ?k (x) a polynomial of degree k ? 1. The corresponding intra-well relaxation times are given by ?kR = 1/?k+1 (k ? 1). The key point in our analysis is that if the equilibration time inside the wide well is slower than the mean first passage time between the two smaller wells, ?1R > ?2,3 , then the third eigenfunction of L captures the relaxation process inside the large well and is approximately constant inside the two smaller wells. This eigenfunction cannot separate between clusters 2 and 3. 
Moreover, if ?2R = ?1R /2 is still larger than ?2,3 then even the next leading eigenfunction captures the equilibration process inside the wide well, see a plot of ?3 , ?4 in fig. 2 (rows 1,2). Therefore, even this next eigenfunction is not useful for separating the two small clusters. In the example of fig. 2, only ?5 separates these two clusters. This analysis shows that when confronted with clusters of different scales, corresponding to a multiscale landscape potential, standard spectral clustering which uses the first k eigenvectors to find k clusters will fail. We present explicit examples in Section 4 below. The fact that spectral clustering with a single scale ? may fail to correctly cluster multiscale data was already noted in [10, 16]. To overcome this failure, [10] proposed replacing the uniform ? 2 in eq. (1) with ?(xi )?(xj ) where ?(x) is proportional to the local density at x. Our analysis can also provide a probabilistic interpretation to their method. In a nutshell, the effect of this scaling is to speed up the diffusion process at regions of low density, thus changing some of its characteristic times. If the larger cluster has low density, as in the examples in their paper, this approach is successful as it decreases ?1R . However, if the large cluster has a high density (comparable to the density of the small clusters), this approach is not able to overcome the limitations of spectral clustering, see fig. 3. Moreover, this approach may also fail in the case of uniform density clusters defined solely by geometry (see fig. 4). 4 Examples We illustrate the theoretical analysis of Section 3 with three examples, all in 2-D. In the first two examples, the n points {xi } ? R2 are random samples from the following mixture of three Gaussians ?1 N (x1 , ?12 I) + ?2 N (x2 , ?22 I) + ?3 N (x3 , ?32 I) (11) P with centers xi isotropic standard deviations ?i and weights ?i ( i ?i = 1). Specifically, we consider one large cluster with ?1 = 2 centered at x1 = (?6, 0), and two smaller clusters with ?2 = ?3 = 0.5 centered at x2 = (0, 0) and x3 = (2, 0). We present the results of both the NJW algorithm [6] and the ZP algorithm [10] for two different weight vectors. Example I: Weights (?1 , ?2 , ?3 ) = (1/3, 1/3, 1/3). In the top left panel of fig. 2, n = 1000 random points from this density clearly show the difference in scales between the large cluster and the smaller ones. The first few eigenvectors of M with a uniform ? = 1 are shown in the first two rows of the figure. The second eigenvector ?2 is indeed approximately piecewise constant and easily separates the larger cluster from the smaller ones. However, ?3 and ?4 are constant on the smaller clusters, capturing the relaxation process in the larger cluster (?3 captures relaxation along the y-direction, hence it is not a function of the x-coordinate). In this example, only ?5 can separate the two small clusters. Therefore, as predicted theoretically, the NJW algorithm [6] fails to produce reasonable clusterings for all values of ?. In this example, the density of the large cluster is low, and therefore as expected and shown in the last row of fig. 2, the ZP algorithm clusters correctly. Example II: Weights (?1 , ?2 , ?3 ) = (0.8, 0.1, 0.1). In this case the density of the large cluster is high, and comparable to that of the small clusters. Indeed, as seen in fig. 3 and predicted theoretically ? Original Data 3 10 8 2 6 0 y ? 
[Figure 2: A three-cluster dataset corresponding to example I (top left), clustering results of the NJW and ZP algorithms [6, 10] (center and bottom left, respectively), and various eigenvectors of M vs. the x coordinate (blue dots in 2nd and 3rd columns). The red dotted line is the potential U(x, 0). Panel titles: Original Data; NJW clustering, σ = 1; ZP clustering, kNN = 7; and the ZP eigenvectors ψ2, ψ3.]

[Figure 3: Dataset corresponding to example II and the result of the ZP algorithm (panels: Original Data; ZP results, kNN = 7; ZP eigenvectors ψ2, ψ3).]

Indeed, as seen in fig. 3 and predicted theoretically, the ZP algorithm fails to correctly cluster this data for all values of the parameter $k_{NN}$ in their algorithm. Needless to say, the NJW algorithm also fails to correctly cluster this example.

Example III: Consider data $\{x_i\}$ uniformly sampled from a domain $\Omega \subset \mathbb{R}^2$, which consists of three clusters: one large rectangular container and two smaller disks, all connected by long and narrow tubes (see fig. 4 (left)). In this example the container is so large that the relaxation time inside it is slower than the characteristic time to diffuse between the small disks; hence the NJW algorithm fails to cluster correctly. Since the density is uniform, the ZP algorithm fails as well, fig. 4 (right). Note that spectral clustering with the eigenvectors of the standard graph Laplacian has similar limitations, since the Euclidean distance between these eigenvectors is equal to the mean commute time on the graph [11]. Therefore, these methods may also fail when confronted with multiscale data.

5 Clustering with a Relaxation Time Coherence Measure

The analysis and examples of Sections 3 and 4 may suggest the use of more than $k$ eigenvectors in spectral clustering. However, clustering with k-means using 5 eigenvectors on the examples of Section 4 produced unsatisfactory results (not shown). Moreover, since the eigenvectors of the matrix $M$ are orthonormal under a specific weight function, they become increasingly oscillatory. Therefore, it is quite difficult to use them to detect a small cluster, much in analogy to Fourier analysis, where it is difficult to detect a localized bump in a function from its Fourier coefficients.
Similarly, ?1 and ?2 denote the characteristic relaxation times of the two subgraphs corresponding to the partitions S and V \ S. If V is a single coherent cluster, then we expect ?V = O(?1 + ?2 ). If, however, V consists of two weakly connected clusters defined by S and V \ S, then ?1 and ?2 measure the characteristic relaxation times inside these two clusters while ?V measures the overall relaxation time. If the two sub-clusters are of comparable size, then ?V  (?1 + ?2 ). If however, one of them is much smaller than the other, then we expect max(?1 , ?2 )/ min(?1 , ?2 )  1. Thus, we define a set V as coherent if either ?V < c1 (?1 + ?2 ) or if max(?1 , ?2 )/ min(?1 , ?2 ) < c2 . In this case, V is not partitioned further. Otherwise, the subgraphs S and V \ S need to be further partitioned and similarly checked for their coherence. While a theoretical analysis is beyond the scope of this paper, reasonable numbers that worked in practice are c1 = 1.8 and c2 = 10. We note that other works have also considered relaxation times for clustering with different approaches [21, 22]. We now present use of this coherence measure with normalized cut clustering on the third example of Section 4. The first partition of normalized cut on this data with ? = 1 separates between the large container and the two smaller disks. The relaxation times of the full graph and the two subgraphs are (?V , ?1 , ?2 ) = (1350, 294, 360). These numbers indicate that the full dataset is not coherent, and indeed should be partitioned. Next, we try to partition the large container. Normalized cuts partitions the container roughly into two parts with (?V , ?1 , ?2 ) = (294, 130, 135), which according to our coherence measure means that the big container is a single structure that should not be split. Finally, normalized cut on the two small disks correctly separates them giving (?V , ?1 , ?2 ) = (360, 18, 28), which indicates that indeed the two disks should be split. Further analysis of each of the single disks by our measure shows that each is a coherent cluster. Thus, combination of our coherence measure with normalized cut not only clusters correctly, but also automatically finds the correct number of clusters, regardless of cluster scale. Similar results are obtained for the other examples in this paper. Finally, our analysis also applies to image segmentation. In fig. 5(a) a synthetic image is shown. The segmentation results of normalized cuts [24] and of the coherence measure combined with [23] appear in panels (b) and (c). Results on a real image are shown in fig. 6. Each segments is Original Image Coherence Measure Ncut 6 clusters Ncut 20 clusters Figure 6: Normalized cut and coherence measure segmentation on a real image. represented by a different color. With a small number of clusters normalized cut cannot find the small coherent segments in the image, whereas with a large number of clusters, large objects are segmented. Implementing our coherence measure with [23] finds salient clusters at different scales. Acknowlegments: The research of BN was supported by the Israel Science Foundation (grant 432/06), by the Hana and Julius Rosen fund and by the William Z. and Eda Bess Novick Young Scientist fund. References [1] J. Shi and J. Malik. Normalized cuts and image segmentation, PAMI, Vol. 22, 2000. [2] R. Kannan, S. Vempala, A. Vetta, On clusterings: good, bad and spectral, J. ACM, 51(3):497-515, 2004. [3] D. Cheng, R. Kannan, S. Vempala, G. 
Finally, our analysis also applies to image segmentation. In fig. 5(a) a synthetic image is shown. The segmentation results of normalized cuts [24] and of the coherence measure combined with [23] appear in panels (b) and (c). Results on a real image are shown in fig. 6. Each segment is represented by a different color. With a small number of clusters, normalized cut cannot find the small coherent segments in the image, whereas with a large number of clusters, large objects are over-segmented. Implementing our coherence measure with [23] finds salient clusters at different scales.

[Figure 6: Normalized cut and coherence measure segmentation on a real image (panels: original image, coherence measure, Ncut with 6 clusters, Ncut with 20 clusters).]

Acknowledgments: The research of BN was supported by the Israel Science Foundation (grant 432/06), by the Hana and Julius Rosen fund and by the William Z. and Eda Bess Novick Young Scientist fund.

References

[1] J. Shi and J. Malik. Normalized cuts and image segmentation. PAMI, Vol. 22, 2000.
[2] R. Kannan, S. Vempala, A. Vetta. On clusterings: good, bad and spectral. J. ACM, 51(3):497-515, 2004.
[3] D. Cheng, R. Kannan, S. Vempala, G. Wang. A divide and merge methodology for clustering. ACM SIGMOD/PODS, 2005.
[4] F.R.K. Chung. Spectral Graph Theory. Regional Conference Series in Mathematics Vol. 92, 1997.
[5] Y. Weiss. Segmentation using eigenvectors: a unifying view. ICCV 1999.
[6] A.Y. Ng, M.I. Jordan, Y. Weiss. On spectral clustering: analysis and an algorithm. NIPS Vol. 14, 2002.
[7] N. Cristianini, J. Shawe-Taylor, J. Kandola. Spectral kernel methods for clustering. NIPS Vol. 14, 2002.
[8] M. Belkin and P. Niyogi. Laplacian eigenmaps and spectral techniques for embedding and clustering. NIPS Vol. 14, 2002.
[9] S. Yu and J. Shi. Multiclass spectral clustering. ICCV 2003.
[10] L. Zelnik-Manor, P. Perona. Self-tuning spectral clustering. NIPS, 2004.
[11] M. Saerens, F. Fouss, L. Yen and P. Dupont. The principal component analysis of a graph and its relationships to spectral clustering. ECML 2004.
[12] M. Meila, J. Shi. A random walks view of spectral segmentation. AI and Statistics, 2001.
[13] B. Nadler, S. Lafon, R.R. Coifman, I.G. Kevrekidis. Diffusion maps, spectral clustering and eigenfunctions of Fokker-Planck operators. NIPS, 2005.
[14] S. Lafon, A.B. Lee. Diffusion maps and coarse graining: a unified framework for dimensionality reduction, graph partitioning, and data set parameterization. PAMI, 28(9):1393-1403, 2006.
[15] D. Harel and Y. Koren. On clustering using random walks. FST TCS, 2001.
[16] I. Fischer, J. Poland. Amplifying the block matrix structure for spectral clustering. Proceedings of the 14th Annual Machine Learning Conference of Belgium and the Netherlands, pp. 21-28, 2005.
[17] J. Malik, S. Belongie, T. Leung, J. Shi. Contour and texture analysis for image segmentation. Int. J. Comp. Vis. 43(1):7-27, 2001.
[18] E. Sharon, A. Brandt, R. Basri. Segmentation and boundary detection using multiscale intensity measurements. CVPR, 2001.
[19] M. Galun, E. Sharon, R. Basri and A. Brandt. Texture segmentation by multiscale aggregation of filter responses and shape elements. ICCV, 2003.
[20] C.W. Gardiner. Handbook of Stochastic Methods, third edition. Springer NY, 2004.
[21] N. Tishby, N. Slonim. Data clustering by Markovian relaxation and the information bottleneck method. NIPS, 2000.
[22] C. Chennubhotla, A.J. Jepson. Half-lives of eigenflows for spectral clustering. NIPS, 2002.
[23] E. Sharon, A. Brandt, R. Basri. Fast multiscale image segmentation. ICCV, 2000.
[24] T. Cour, F. Benezit, J. Shi. Spectral segmentation with multiscale graph decomposition. CVPR, 2005.
An Attractor Neural Network Model of Recall and Recognition

Eytan Ruppin, Yechezkel Yeshurun
Department of Computer Science, School of Mathematical Sciences
Sackler Faculty of Exact Sciences, Tel Aviv University
69978, Tel Aviv, Israel

Abstract

This work presents an Attractor Neural Network (ANN) model of Recall and Recognition. It is shown that an ANN model can qualitatively account for a wide range of experimental psychological data pertaining to these two main aspects of memory access. Certain psychological phenomena are accounted for, including the effects of list-length, word-frequency, presentation time, context shift, and aging. Thereafter, the probabilities of successful Recall and Recognition are estimated, in order to possibly enable further quantitative examination of the model.

1 Motivation

The goal of this paper is to demonstrate that a Hopfield-based [Hop82] ANN model can qualitatively account for a wide range of experimental psychological data pertaining to the two main aspects of memory access, Recall and Recognition. Recall is defined as the ability to retrieve an item from a list of items (words) originally presented during a previous learning phase, given an appropriate cue (cued Recall) or spontaneously (free Recall). Recognition is defined as the ability to successfully acknowledge that a certain item has or has not appeared in the tutorial list learned before. The main prospect of ANN modeling is that some parameter values, which in former, 'classical' models of memory retrieval (see e.g. [GS84]) had to be explicitly assigned, can now be shown to be emergent properties of the model.
The maximal number $m$ of (randomly generated) memory patterns which can be stored in the basic Hopfield network of $n$ neurons is $m = \alpha_c \cdot n$, with $\alpha_c \approx 0.14$ [AGS85].

2.2 Recall and Recognition in the model's framework

2.2.1 Recall

Recall is considered successful when, upon starting from an initial cue, the network converges to a stable state which corresponds to the learned memory nearest to the input pattern. Inter-pattern distance is measured by the Hamming distance between the input and the learned item encodings. If the network converges to a non-memory stable state, its output will stand for a 'failure of recall' response.(1)

(1) The question of "How do such non-memory states bear the meaning of 'recall failure'?" is out of the scope of this work. However, a possible explanation is that during the learning phase 'meaning' is assigned to the stored patterns via connections formed with external patterns, and since non-memory states lack such associations with external patterns, they are 'meaningless', yielding the 'recall failure' response. Another possible mechanism is that every output pattern generated in the recall process also passes a recognition phase so that non-memory states are rejected (see the following paragraph describing recognition in our model).

2.2.2 Recognition

Recognition is considered successful when the network arrives at a stable state during a time interval $\Delta$, beginning from input presentation. In general, the shorter the distance between an input and its nearest memory, the faster is its convergence [AM88, KP88, RY90]. Since non-memory (non-learned) stable states have higher energy levels and much shallower basins of attraction than memorized stable states [AGS85, LN89], convergence to such states takes significantly longer time. Therefore, there exists a range of possible values of $\Delta$ that enables successful recognition only of inputs similar to one of the stored memories.

2.3 Other features of the model

- The context of the psychological experiments is represented as a substring of the input's encoding. In order to minimize inter-pattern correlation, the size of the context encoding relative to the total size of the memory encoding is kept small.
- The total associational linkage of a learned item is modeled as an external field vector $E$. When a learned memory pattern $\xi^\mu$ is presented to the network, the value of the external field vector generated is $E_i = h \cdot \xi_i^\mu$, where $h$ is an 'orientation' coefficient expressing the association strength.

Additional features, including a modified storage equation accounting for learning taking place at the test phase, and a storage decay parameter, are described in [RY90].
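A rough illustration of the recognition criterion (convergence within an interval $\Delta$) is sketched below. It reuses rng, n, J, T, and patterns from the previous sketch and detects convergence by a long run of accepted non-flips, which is a simplifying assumption rather than the paper's exact criterion:

```python
# An input counts as "recognized" iff the network settles within Delta
# update steps. Convergence is detected here by a long run without state
# changes; this proxy is an assumption for illustration only.
def iterations_to_settle(s0, max_iters=50000, quiet_run=2000):
    s, unchanged = s0.copy(), 0
    for k in range(max_iters):
        i = rng.integers(n)
        h = J[i] @ s
        new = 1 if rng.random() < 0.5 * (1.0 + np.tanh(h / T)) else -1
        unchanged = unchanged + 1 if new == s[i] else 0
        s[i] = new
        if unchanged >= quiet_run:
            return k                          # settled: recognized if k < Delta
    return max_iters                          # did not settle: rejected

for d in (10, 50, 150):                       # initial Hamming distance
    probe = patterns[0].copy()
    probe[rng.choice(n, size=d, replace=False)] *= -1
    print(d, iterations_to_settle(probe))
```

Inputs closer to a stored memory should settle in fewer iterations, which is exactly what makes a fixed cutoff $\Delta$ act as a recognition threshold.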
3 The Modeling of experimental data

Regarding every phenomenon discussed, a brief description of the psychological findings is followed by an account of its modeling. We rely on the known results pertaining to Hopfield models to show that, qualitatively, the psychological phenomena reviewed are emergent properties of the model. When such analytical evidence is lacking, simulations were performed in order to account for the experimental data. For a review of the psychological literature supporting the findings modeled see [GS84].

The List-Length Effect: It is known that the probability of successful Recall or Recognition of a particular item decreases as the length of the list of learned items increases. List length is expressed in memory load. It has been shown that the width of the memories' basins of attraction monotonically decreases following an approximately inverse parabolic curve [Wei85], so Recall performance should decrease as memory load is increased. We have examined the convergence time of the same set of input patterns at different values of memory load. As demonstrated in Fig. 1, it was found that, as the memory load is increased, successful convergence has occurred (on the average) only after an increasingly growing number of asynchronous iterations. Hence, convergence takes more time and can result in Recognition failure, although memories' stability is maintained till the critical capacity $\alpha_c$ is reached.

Figure 1: Recognition speed (number of asynchronous iterations) as a function of memory load (number of stored memories). The network's parameters are $n = 500$, $T = 0.28$.

The word-frequency effect: The more frequent a word is in language, the probability of recalling it increases, while the probability of recognizing it decreases. A word's frequency in the language is assumed to affect its retrieval through the stored word's semantic relations and associations [Kat85, NCBK87]. It is assumed that, relative to low-frequency words, high-frequency words have more semantic relations and therefore more connections between the patterns representing them and other patterns stored in the memory (i.e., in other networks). This one-to-many relationship is assumed to be reciprocal, i.e., each of the externally stored patterns also has connections projected to several of the stored patterns in the allocated network. The process leading to the formation of the external field $E$ (acting upon the allocated network), generated by an input pattern nearest to some stored memory pattern $\xi^\mu$, is assumed to be characterized as follows:

1. There is a threshold degree of overlap $O_{min}$, such that $E > 0$ only when the allocated network's state overlap $H^\mu$ is higher than $O_{min}$.
2. At overlap values $H^\mu$ which are only moderately larger than $O_{min}$, $h^\mu$ is monotonically increasing, but as $H^\mu$ continues to rise, a certain 'optimal' point is reached, beyond which $h^\mu$ is monotonically decreasing.
3. High-frequency words have lower $O_{min}$ values than low-frequency words.

Recognition tests are characterized by a high initial value of overlap $H^\mu$ to some memory $\xi^\mu$. The value of $h^\mu$ and $E^\mu$ generated is post-optimal and therefore smaller than in the case of low-frequency words, which have higher $O_{min}$ values. In Recall tests the initial situation is characterized by low values of overlap $H^\mu$ to some nearest memory $\xi^\mu$; only the overlap value of high-frequency words is sufficient for activating associated items, i.e., $H^\mu > O_{min}$.

Presentation Time: Increasing the presentation time of learned words is known to improve both their Recall and Recognition. This is explained by the phenomenon of maintenance rehearsal: the memories' basins of attraction get deeper, since the 'energy' $E$ of a given state equals $-\frac{n}{2}\sum_{\mu=1}^{m}(H^\mu)^2$. Deeper basins of attraction are also wider [HFP83, KPKP90]. Therefore, the probability of successful Recall and Recognition of rehearsed items is increased. The effect of a uniform rehearsal is equivalent to a temperature decrease.
Hence, increasing presentation time will attenuate and delay the list-length phenomenon, up to a certain limit. In a similar way, the Test Delay phenomenon is accounted for [RY90].

Context Shift: The term Context Shift refers to the change in context from the tutorial period to the test period. Studies examining the effect of context shift have shown a decrement in Recall performance with context shift, but little change in Recognition performance. As demonstrated in [RY90], when a context shift is simulated by flipping some of the context string's bits, Recall performance severely deteriorates while memories' stability remains intact. No significant increase in the time (i.e., number of asynchronous iterations) required for convergence was found, thus maintaining the pre-shift probability of successful Recognition.

Age differences in Recall and Recognition: It was found that older people perform more poorly on Recall tasks than they do on Recognition tasks [CM87]. These findings can be accounted for by the assumption that synapses are being weakened and deleted with aging, which, although being controversial, has gained some experimental support (see [RY90]). We have investigated the retrieval performance as a function of the input's initial overlap, various levels of synaptic dilution, and memory load: As demonstrated in Fig. 2, when the synaptic dilution is increased, a 'critical' phase is reached where memory retrieval of far-away input patterns is decreased but the retrieval of input patterns with a high level of initial overlap remains intact. As the memory load is increased, this 'critical' phase begins at lower levels of synaptic dilution. On the other hand, only a mild increase (of 15%) in recognition speed was found.

Figure 2: The probability of successful retrieval as a function of memory load and the input pattern's initial overlap, at two different degrees of synaptic dilution (right-sided and left-sided figures). The network's parameters are $n = 500$, $T = 0.05$.

The interested reader can find a description of the modeling of additional phenomena, including test position, word fragment completion, and distractor similarity, in [RY90].

4 On a quantitative test of the model

4.1 Estimating Recall performance

In a given network with $n$ neurons and $m$ memories, the radius $r$ of the basins of attraction of the memories decreases as the memory load parameter ($\alpha = m/n$) is increased. According to [MPRV87], $n$, $m$, and $r$ are related according to the expression $m = \frac{(1-2r)^2}{4}\cdot\frac{n}{\log n}$. The concept of the basins of attraction implies a non-linear probability function with low probability when input vectors are further than the radius of attraction and high probability otherwise. The slope of this non-linearity increases as the noise level $T$ is decreased. The probability $P_c$ that a random input vector will converge to one of the stored memories can be estimated by $P_c \approx \frac{m}{2^n}\sum_{i=0}^{r\cdot n}\binom{n}{i}$. It is interesting to note that the rates of change of $r$ and of $P_c$ have distinct forms; Recall tests beginning from randomly generated cues would yield a very low rate of successful Recall ($P_c$).
Yet, if one examines Recall by picking a stored memory, flipping some of its encoding bits, and presenting it as an input to the network (determining $r$), 'reasonable' levels of successful Recall can still be obtained even when a 'considerable' number of encoding bits are flipped. $P_c$ can also be estimated by considering the context representation [RY90].

4.2 Estimating Recognition performance

The probability of correct Recognition depends mainly on the length of the interval $\Delta$; assume that after an input pattern is presented to a network of $n$ neurons, during the time interval $\Delta$, $k$ iteration steps of a Monte Carlo simulation are performed: In each such step, a neuron is randomly selected, and it then examines whether or not it should flip its state, according to its input. We show that the probability $P_g\{d\}$ that an input pattern at Hamming distance $d$ from a stored memory will be successfully recognized is bounded by $P_g\{d\} \geq 1 - d\cdot e^{-k/n}$. It can be seen that Recognition's success depends strongly on the initial input's proximity to a stored memory, and even more strongly on the number of allowed asynchronous iterations $k$, determined by the length of $\Delta$. For a selection of $k = n(\ln(d) + c)$, one obtains $P_g \geq 1 - e^{-c}$. The expected number of iterations (denoted $E(X)$) till successful convergence is achieved is $E(X) = \sum_{i=1}^{d} E(X_i) = n\sum_{i=1}^{d}\frac{1}{i} \approx n\ln(d)$.

In the more general case, let $\delta$ denote the Hamming distance (between the network's state $S$ and a stored memory) below which retrieval is considered successful. Then, the corrected estimations of retrieval performance are $P_g \geq 1 - \binom{d}{\delta}\cdot e^{-k\delta/n}$, and $E(X) \approx n\ln(d/\delta)$. In simulations we have performed ($n = 500$, $d = 20$, $\delta = 10$), the average number of iterations until successful convergence was in the range of 300-400, in excellent correspondence with the predicted expectation, $E(X) = 500\cdot\ln(2)$.
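The estimates above are easy to evaluate numerically. The short sketch below uses the paper's example values ($n = 500$, $d = 20$, $\delta = 10$); the formulas follow the reconstructed expressions in the text and should be read as approximations:

```python
import math

# Numeric sketch of the Recognition-performance estimates above.
n, d, delta = 500, 20, 10

# Expected iterations until convergence: E(X) ~ n * ln(d / delta).
print("E(X) ~", round(n * math.log(d / delta)))      # ~347, i.e. in 300-400

# Recognition probability bound P_g >= 1 - C(d, delta) * exp(-k*delta/n),
# as a function of the number of allowed update steps k.
for k in (1000, 1500, 2000):
    bound = 1.0 - math.comb(d, delta) * math.exp(-k * delta / n)
    print(k, max(0.0, bound))
```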
References

[AGS85] D. J. Amit, H. Gutfreund, and H. Sompolinsky. Storing infinite numbers of patterns in a spin-glass model of neural networks. Phys. Rev. Lett., 55:1530, 1985.
[AM88] S. I. Amari and K. Maginu. Statistical neurodynamics of associative memory. Neural Networks, 1:63, 1988.
[CM87] F. I. M. Craik and J. M. McDowd. Age differences in recall and recognition. Journal of Experimental Psychology: Learning, Memory, and Cognition, 13(3):474, 1987.
[GS84] G. Gillund and M. Shiffrin. A retrieval model for both recognition and recall. Psychological Review, 91:1, 1984.
[HFP83] J. J. Hopfield, D. I. Feinstein, and R. G. Palmer. 'Unlearning' has a stabilizing effect in collective memories. Nature, 304:158, 1983.
[Hop82] J. J. Hopfield. Neural networks and physical systems with emergent collective computational abilities. Proc. Nat. Acad. Sci. USA, 79:2554, 1982.
[Kat85] T. Kato. Semantic-memory sources of episodic retrieval failure. Memory & Cognition, 13(5):442, 1985.
[KP88] J. Komlos and R. Paturi. Convergence results in an associative memory model. Neural Networks, 1:239, 1988.
[KPKP90] B. Kamgar-Parsi and B. Kamgar-Parsi. On problem solving with Hopfield neural networks. Biol. Cybern., 62:415, 1990.
[LN89] M. Lewenstein and A. Nowak. Fully connected neural networks with self-control of noise levels. Phys. Rev. Lett., 62(2):225, 1989.
[MPRV87] R. J. McEliece, E. C. Posner, E. R. Rodemich, and S. S. Venkatesh. The capacity of the Hopfield associative memory. IEEE Transactions on Information Theory, IT-33(4):461, 1987.
[NCBK87] D. L. Nelson, J. J. Canas, M. T. Bajo, and P. D. Keelan. Comparing word fragment completion and cued recall with letter cues. Journal of Experimental Psychology: Learning, Memory and Cognition, 13(4):542, 1987.
[RY90] E. Ruppin and Y. Yeshurun. Recall and recognition in an attractor neural network model of memory retrieval. Technical report, Dept. of Computer Science, Tel-Aviv University, 1990.
[Wei85] G. Weisbuch. Scaling laws for the attractors of Hopfield networks. J. Physique Lett., 46:L-623, 1985.
Information Bottleneck Optimization and Independent Component Extraction with Spiking Neurons

Stefan Klampfl, Robert Legenstein, Wolfgang Maass
Institute for Theoretical Computer Science, Graz University of Technology, A-8010 Graz, Austria
{klampfl,legi,maass}@igi.tugraz.at

Abstract

The extraction of statistically independent components from high-dimensional multi-sensory input streams is assumed to be an essential component of sensory processing in the brain. Such independent component analysis (or blind source separation) could provide a less redundant representation of information about the external world. Another powerful processing strategy is to extract preferentially those components from high-dimensional input streams that are related to other information sources, such as internal predictions or proprioceptive feedback. This strategy allows the optimization of internal representation according to the information bottleneck method. However, concrete learning rules that implement these general unsupervised learning principles for spiking neurons are still missing. We show how both information bottleneck optimization and the extraction of independent components can in principle be implemented with stochastically spiking neurons with refractoriness. The new learning rule that achieves this is derived from abstract information optimization principles.

1 Introduction

The Information Bottleneck (IB) approach and independent component analysis (ICA) have both attracted substantial interest as general principles for unsupervised learning [1, 2]. A hope has been that they might also help us to understand strategies for unsupervised learning in biological systems. However, it has turned out to be quite difficult to establish links between known learning algorithms that have been derived from these general principles, and learning rules that could possibly be implemented by synaptic plasticity of a spiking neuron. Fortunately, in a simpler context a direct link between an abstract information theoretic optimization goal and a rule for synaptic plasticity has recently been established [3]. The resulting rule for the change of synaptic weights in [3] maximizes the mutual information between pre- and postsynaptic spike trains, under the constraint that the postsynaptic firing rate stays close to some target firing rate. We show in this article that this approach can be extended to situations where simultaneously the mutual information between the postsynaptic spike train of the neuron and other signals (such as, for example, the spike trains of other neurons) has to be minimized (Figure 1). This opens the door to the exploration of learning rules for information bottleneck analysis and independent component extraction with spiking neurons that would be optimal from a theoretical perspective. We review in section 2 the neuron model and learning rule from [3]. We show in section 3 how this learning rule can be extended so that it not only maximizes mutual information with some given spike trains and keeps the output firing rate within a desired range, but simultaneously minimizes mutual information with other spike trains, or other time-varying signals. Applications to information bottleneck tasks are discussed in section 4.

Figure 1: Different learning situations analyzed in this article. A In an information bottleneck task the learning neuron (neuron 1) wants to maximize the mutual information between its output $Y_1^K$ and the activity of one or several target neurons $Y_2^K, Y_3^K, \ldots$
(which can be functions of the inputs $X^K$ and/or other external signals), while at the same time keeping the mutual information between the inputs $X^K$ and the output $Y_1^K$ as low as possible (and its firing rate within a desired range). Thus the neuron should learn to extract from its high-dimensional input those aspects that are related to these target signals. This setup is discussed in sections 3 and 4. B Two neurons receiving the same inputs $X^K$ from a common set of presynaptic neurons both learn to maximize information transmission, and simultaneously to keep their outputs $Y_1^K$ and $Y_2^K$ statistically independent. Such extraction of independent components from the input is described in section 5.

In section 5 we show that a modification of this learning rule allows a spiking neuron to extract information from its input spike trains that is independent from the component extracted by another neuron.

2 Neuron model and a basic learning rule

We use the model from [3], which is a stochastically spiking neuron model with refractoriness, where the probability of firing in each time step depends on the current membrane potential and the time since the last output spike. It is convenient to formulate the model in discrete time with step size $\Delta t$. The total membrane potential of a neuron $i$ in time step $t_k = k\Delta t$ is given by

$$u_i(t_k) = u_r + \sum_{j=1}^{N}\sum_{n=1}^{k} w_{ij}\,\epsilon(t_k - t_n)\,x_j^n, \qquad (1)$$

where $u_r = -70$ mV is the resting potential and $w_{ij}$ is the weight of synapse $j$ ($j = 1, \ldots, N$). An input spike train at synapse $j$ up to the $k$-th time step is described by a sequence $X_j^k = (x_j^1, x_j^2, \ldots, x_j^k)$ of zeros (no spike) and ones (spike); each presynaptic spike at time $t_n$ ($x_j^n = 1$) evokes a postsynaptic potential (PSP) with exponentially decaying time course $\epsilon(t - t_n)$ with time constant $\tau_m = 10$ ms. The probability $\rho_i^k$ of firing of neuron $i$ in each time step $t_k$ is given by

$$\rho_i^k = 1 - \exp[-g(u_i(t_k))R_i(t_k)\Delta t] \approx g(u_i(t_k))R_i(t_k)\Delta t, \qquad (2)$$

where $g(u) = r_0 \log\{1 + \exp[(u - u_0)/\Delta u]\}$ is a smooth increasing function of the membrane potential $u$ ($u_0 = -65$ mV, $\Delta u = 2$ mV, $r_0 = 11$ Hz). The approximation is valid for sufficiently small $\Delta t$ ($\rho_i^k \ll 1$). The refractory variable

$$R_i(t) = \frac{(t - \hat{t}_i - \tau_{abs})^2}{\tau_{refr}^2 + (t - \hat{t}_i - \tau_{abs})^2}\,\Theta(t - \hat{t}_i - \tau_{abs})$$

assumes values in $[0, 1]$ and depends on the last firing time $\hat{t}_i$ of neuron $i$ (absolute refractory period $\tau_{abs} = 3$ ms, relative refractory time $\tau_{refr} = 10$ ms). The Heaviside step function $\Theta$ takes a value of 1 for non-negative arguments and 0 otherwise. This model from [3] is a special case of the spike-response model, and with a refractory variable $R(t)$ that depends only on the time since the last postsynaptic event it has renewal properties [4]. The output of neuron $i$ at the $k$-th time step is denoted by a variable $y_i^k$ that assumes the value 1 if a postsynaptic spike occurred and 0 otherwise. A specific spike train up to the $k$-th time step is written as $Y_i^k = (y_i^1, y_i^2, \ldots, y_i^k)$.
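A minimal simulation sketch of this neuron model follows; the Poisson input generation and the discretized EPSP trace are illustrative assumptions, not the authors' code:

```python
import numpy as np

# Stochastic spiking neuron with refractoriness, per Eqs. (1)-(2) above.
dt = 1e-3                                     # Delta_t = 1 ms
u_r, u_0, du, r0 = -70e-3, -65e-3, 2e-3, 11.0
tau_m, tau_abs, tau_refr = 10e-3, 3e-3, 10e-3

def g(u):                                     # smooth gain function
    return r0 * np.log1p(np.exp((u - u_0) / du))

rng = np.random.default_rng(0)
N, K = 100, 5000
w = rng.uniform(0.1e-3, 0.2e-3, size=N)       # weights (PSP amplitudes, V)
x = rng.random((K, N)) < 20.0 * dt            # 20 Hz Poisson inputs

psp, t_hat, spikes = np.zeros(N), -1.0, []    # last spike far in the past
for k in range(K):
    psp = psp * np.exp(-dt / tau_m) + x[k]    # eps(t - t_n) traces
    u = u_r + w @ psp                         # membrane potential, Eq. (1)
    s = k * dt - t_hat - tau_abs
    R = s * s / (tau_refr**2 + s * s) if s > 0 else 0.0
    if rng.random() < 1.0 - np.exp(-g(u) * R * dt):   # firing prob., Eq. (2)
        spikes.append(k * dt)
        t_hat = k * dt
print(f"{len(spikes)} spikes in {K * dt:.0f} s")
```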
The information transmission between an ensemble of input spike trains $\mathbf{X}^K$ and the output spike train $\mathbf{Y}_i^K$ can be quantified by the mutual information(1) [5]

$$I(\mathbf{X}^K; \mathbf{Y}_i^K) = \sum_{X^K, Y_i^K} P(X^K, Y_i^K) \log \frac{P(Y_i^K \mid X^K)}{P(Y_i^K)}. \qquad (3)$$

The idea in [3] was to maximize the quantity $I(\mathbf{X}^K; \mathbf{Y}_i^K) - \gamma D_{KL}(P(\mathbf{Y}_i^K) \| \tilde{P}(\mathbf{Y}_i^K))$, where $D_{KL}(P(\mathbf{Y}_i^K) \| \tilde{P}(\mathbf{Y}_i^K)) = \sum_{Y_i^K} P(Y_i^K) \log(P(Y_i^K) / \tilde{P}(Y_i^K))$ denotes the Kullback-Leibler divergence [5], imposing the additional constraint that the firing statistics $P(\mathbf{Y}_i)$ of the neuron should stay as close as possible to a target distribution $\tilde{P}(\mathbf{Y}_i)$. This distribution was chosen to be that of a constant target firing rate $\tilde{g}$, accounting for homeostatic processes. An online learning rule performing gradient ascent on this quantity was derived for the weights $w_{ij}$ of neuron $i$, with $\Delta w_{ij}^k$ denoting the weight change during the $k$-th time step:

$$\frac{\Delta w_{ij}^k}{\Delta t} = \alpha\, C_{ij}^k\, B_i^k(\gamma), \qquad (4)$$

which consists of the "correlation term" $C_{ij}^k$ and the "postsynaptic term" $B_i^k$ [3]. The term $C_{ij}^k$ measures coincidences between postsynaptic spikes at neuron $i$ and PSPs generated by presynaptic action potentials arriving at synapse $j$,

$$C_{1j}^k = C_{1j}^{k-1}\left(1 - \frac{\Delta t}{\tau_C}\right) + \frac{g'(u_1(t_k))}{g(u_1(t_k))} \sum_{n=1}^{k} \epsilon(t_k - t_n)\, x_j^n \left(y_1^k - \rho_1^k\right), \qquad (5)$$

in an exponential time window with time constant $\tau_C = 1$ s and $g'(u_i(t_k))$ denoting the derivative of $g$ with respect to $u$. The term

$$B_1^k(\gamma) = \frac{y_1^k}{\Delta t} \log\left[\frac{g(u_1(t_k))}{\bar{g}_1(t_k)} \left(\frac{\tilde{g}}{\bar{g}_1(t_k)}\right)^{\gamma}\right] - (1 - y_1^k)\, R_1(t_k) \left[g(u_1(t_k)) - (1 + \gamma)\,\bar{g}_1(t_k) + \gamma\,\tilde{g}\right] \qquad (6)$$

compares the current firing rate $g(u_i(t_k))$ with its average firing rate(2) $\bar{g}_i(t_k)$, and simultaneously the running average $\bar{g}_i(t_k)$ with the constant target rate $\tilde{g}$. The argument indicates that this term also depends on the optimization parameter $\gamma$.

(1) We use boldface letters ($\mathbf{X}^k$) to distinguish random variables from specific realizations ($X^k$).
(2) The rate $\bar{g}_i(t_k) = \langle g(u_i(t_k)) \rangle_{\mathbf{X}^k \mid Y_i^{k-1}}$ denotes an expectation of the firing rate over the input distribution given the postsynaptic history and is implemented as a running average with an exponential time window (with a time constant of 10 ms).
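The following fragment sketches how the two factors of the basic rule (4) might be maintained online; the formulas mirror the reconstructed Eqs. (5) and (6) above, and all names are illustrative:

```python
import numpy as np

# Online update of the correlation term C (Eq. 5) and the postsynaptic
# term B (Eq. 6); a single weight then moves by alpha * C * B * dt.
# Formulas follow the reconstruction in the text (an assumption).
def update_C(C, eps_trace, y, rho, g_u, gprime_u, dt, tau_C=1.0):
    """Exponentially windowed coincidence of output spikes and PSPs."""
    return C * (1.0 - dt / tau_C) + (gprime_u / g_u) * eps_trace * (y - rho)

def B(y, g_u, g_bar, g_target, R, gamma, dt):
    """Compares instantaneous, running-average and target firing rates."""
    if y:  # postsynaptic spike in this bin
        return (1.0 / dt) * np.log((g_u / g_bar) * (g_target / g_bar) ** gamma)
    return -R * (g_u - (1.0 + gamma) * g_bar + gamma * g_target)

# Usage per synapse j and time bin k, with bounded weights:
#   w[j] = np.clip(w[j] + alpha * C[j] * B(...) * dt, 0.0, w_max)
```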
3 Learning rule for multi-neuron interactions

We extend the learning rule presented in the previous section to a more complex scenario, where the mutual information between the output spike train $Y_1^K$ of the learning neuron (neuron 1) and some target spike trains $Y_l^K$ ($l > 1$) has to be maximized, while simultaneously minimizing the mutual information between the inputs $X^K$ and the output $Y_1^K$. Obviously, this is the generic IB scenario applied to spiking neurons (see Figure 1A). A learning rule for extracting independent components with spiking neurons (see section 5) can be derived in a similar manner. For simplicity, we consider the case of an IB optimization for only one target spike train $Y_2^K$, and derive an update rule for the synaptic weights $w_{1j}$ of neuron 1. The quantity to maximize is therefore

$$L = -I(\mathbf{X}^K; \mathbf{Y}_1^K) + \beta I(\mathbf{Y}_1^K; \mathbf{Y}_2^K) - \gamma D_{KL}(P(\mathbf{Y}_1^K) \| \tilde{P}(\mathbf{Y}_1^K)), \qquad (7)$$

where $\beta$ and $\gamma$ are optimization constants. To maximize this objective function, we derive the weight change $\Delta w_{1j}^k$ during the $k$-th time step by gradient ascent on (7), assuming that the weights $w_{1j}$ can change between some bounds $0 \leq w_{1j} \leq w_{max}$ (we assume $w_{max} = 1$ throughout this paper). Note that all three terms of (7) implicitly depend on $w_{1j}$ because the output distribution $P(\mathbf{Y}_1^K)$ changes if we modify the weights $w_{1j}$. Since the first and the last term of (7) have already been considered (up to the sign) in [3], we will concentrate here on the middle term $L_{12} := \beta I(\mathbf{Y}_1^K; \mathbf{Y}_2^K)$ and denote the contribution of the gradient of $L_{12}$ to the total weight change $\Delta w_{1j}^k$ in the $k$-th time step by $\Delta \tilde{w}_{1j}^k$.

In order to get an expression for the weight change in a specific time step $t_k$ we write the probabilities $P(Y_i^K)$ and $P(Y_1^K, Y_2^K)$ occurring in (7) as products over individual time bins, i.e., $P(Y_i^K) = \prod_{k=1}^{K} P(y_i^k \mid Y_i^{k-1})$ and $P(Y_1^K, Y_2^K) = \prod_{k=1}^{K} P(y_1^k, y_2^k \mid Y_1^{k-1}, Y_2^{k-1})$, according to the chain rule of information theory [5]. Consequently, we rewrite $L_{12}$ as a sum over the contributions of the individual time bins, $L_{12} = \sum_{k=1}^{K} \Delta L_{12}^k$, with

$$\Delta L_{12}^k = \beta \left\langle \log \frac{P(y_1^k, y_2^k \mid Y_1^{k-1}, Y_2^{k-1})}{P(y_1^k \mid Y_1^{k-1})\, P(y_2^k \mid Y_2^{k-1})} \right\rangle_{\mathbf{X}^k, \mathbf{Y}_1^k, \mathbf{Y}_2^k}. \qquad (8)$$

The weight change $\Delta \tilde{w}_{1j}^k$ is then proportional to the gradient of this expression with respect to the weights $w_{1j}$, i.e., $\Delta \tilde{w}_{1j}^k = \alpha\, (\partial \Delta L_{12}^k / \partial w_{1j})$, with some learning rate $\alpha > 0$. The evaluation of the gradient yields $\Delta \tilde{w}_{1j}^k = \alpha\beta \left\langle C_{1j}^k F_{12}^k \right\rangle_{\mathbf{X}^k, \mathbf{Y}_1^k, \mathbf{Y}_2^k}$ with a correlation term $C_{1j}^k$ as in (5) and a term

$$F_{12}^k = y_1^k y_2^k \log \frac{\bar{g}_{12}(t_k)}{\bar{g}_1(t_k)\,\bar{g}_2(t_k)} - y_2^k (1 - y_1^k)\, R_1(t_k)\, \Delta t \left[\frac{\bar{g}_{12}(t_k)}{\bar{g}_2(t_k)} - \bar{g}_1(t_k)\right] - y_1^k (1 - y_2^k)\, R_2(t_k)\, \Delta t \left[\frac{\bar{g}_{12}(t_k)}{\bar{g}_1(t_k)} - \bar{g}_2(t_k)\right] + (1 - y_1^k)(1 - y_2^k)\, R_1(t_k) R_2(t_k)\, (\Delta t)^2 \left[\bar{g}_{12}(t_k) - \bar{g}_1(t_k)\,\bar{g}_2(t_k)\right]. \qquad (9)$$

Here, $\bar{g}_i(t_k) = \langle g(u_i(t_k)) \rangle_{\mathbf{X}^k \mid Y_i^{k-1}}$ denotes the average firing rate of neuron $i$ and $\bar{g}_{12}(t_k) = \langle g(u_1(t_k))\, g(u_2(t_k)) \rangle_{\mathbf{X}^k \mid Y_1^{k-1}, Y_2^{k-1}}$ denotes the average product of firing rates of both neurons. Both quantities are implemented online as running exponential averages with a time constant of 10 s. Under the assumption of a small learning rate $\alpha$ we can approximate the expectation $\langle \cdot \rangle_{\mathbf{X}^k, \mathbf{Y}_1^k, \mathbf{Y}_2^k}$ by averaging over a single long trial. Considering now all three terms in (7) we finally arrive at an online rule for maximizing (7),

$$\frac{\Delta w_{1j}^k}{\Delta t} = -\alpha\, C_{1j}^k \left[B_1^k(-\gamma) - \beta \Delta t\, B_{12}^k\right], \qquad (10)$$

which consists of a term $C_{1j}^k$ sensitive to correlations between the output of the neuron and its presynaptic input at synapse $j$ ("correlation term") and terms $B_1^k$ and $B_{12}^k$ that characterize the postsynaptic state of the neuron ("postsynaptic terms"). Note that the argument of $B_1^k$ is different from (4) because some of the terms of the objective function (7) have a different sign. In order to compensate for the effect of a small $\Delta t$, the constant $\beta$ has to be large enough for the term $B_{12}^k$ to have an influence on the weight change.

The factors $C_{1j}^k$ and $B_1^k$ were described in the previous section. In addition, our learning rule contains an extra term $B_{12}^k = F_{12}^k / (\Delta t)^2$ that is sensitive to the statistical dependence between the output spike train of the neuron and the target. It is given by

$$B_{12}^k = \frac{y_1^k y_2^k}{(\Delta t)^2} \log \frac{\bar{g}_{12}(t_k)}{\bar{g}_1(t_k)\,\bar{g}_2(t_k)} - \frac{y_2^k}{\Delta t}(1 - y_1^k)\, R_1(t_k) \left[\frac{\bar{g}_{12}(t_k)}{\bar{g}_2(t_k)} - \bar{g}_1(t_k)\right] - \frac{y_1^k}{\Delta t}(1 - y_2^k)\, R_2(t_k) \left[\frac{\bar{g}_{12}(t_k)}{\bar{g}_1(t_k)} - \bar{g}_2(t_k)\right] + (1 - y_1^k)(1 - y_2^k)\, R_1(t_k) R_2(t_k) \left[\bar{g}_{12}(t_k) - \bar{g}_1(t_k)\,\bar{g}_2(t_k)\right]. \qquad (11)$$

This term basically compares the average product of firing rates $\bar{g}_{12}$ (which corresponds to the joint probability of spiking) with the product of average firing rates $\bar{g}_1 \bar{g}_2$ (representing the probability of independent spiking). In this way, it measures the momentary mutual information between the output of the neuron and the target spike train. For a simplified neuron model without refractoriness ($R(t) = 1$), the update rule (4) resembles the BCM rule [6], as shown in [3]. With the objective function (7) to maximize, we expect an "anti-Hebbian BCM" rule with another term accounting for statistical dependencies between $Y_1^K$ and $Y_2^K$.
Since there is no refractoriness, the postsynaptic rate $\nu_1(t_k)$ is given directly by the current value of $g(u(t_k))$, and the update rule (10) reduces to the rate model(3)

$$\frac{\Delta w_{1j}^k}{\Delta t} = -\alpha\, \nu_j^{pre,k} f(\nu_1^k) \left\{ \log\left[\frac{\nu_1^k}{\tilde{g}} \left(\frac{\bar{\nu}_1^k}{\tilde{g}}\right)^{\gamma}\right] - \beta \Delta t\, \nu_2^k \left[\log \frac{\bar{\nu}_{12}^k}{\bar{\nu}_1^k \bar{\nu}_2^k} - \frac{\bar{\nu}_{12}^k}{\bar{\nu}_1^k \bar{\nu}_2^k} + 1\right] \right\}, \qquad (12)$$

where the presynaptic rate at synapse $j$ at time $t_k$ is denoted by $\nu_j^{pre,k} = a \sum_{n=1}^{k} \epsilon(t_k - t_n)\, x_j^n$ with $a$ in units (Vs)$^{-1}$. The values $\bar{\nu}_1^k$, $\bar{\nu}_2^k$, and $\bar{\nu}_{12}^k$ are running averages of the output rate $\nu_1^k$, the rate of the target signal $\nu_2^k$, and of the product of these values, $\nu_1^k \nu_2^k$, respectively. The function $f(\nu_1^k) = g'(g^{-1}(\nu_1^k))/a$ is proportional to the derivative of $g$ with respect to $u$, evaluated at the current membrane potential. The first term in the curly brackets accounts for the homeostatic process (similar to the BCM rule, see [3]), whereas the second term reinforces dependencies between $Y_1^K$ and $Y_2^K$. Note that this term is zero if the rates of the two neurons are independent.

It is interesting to note that if we rewrite the simplified rate-based learning rule (12) in the following way,

$$\frac{\Delta w_{1j}^k}{\Delta t} = -\alpha\, \nu_j^{pre,k}\, \phi(\nu_1^k, \nu_2^k), \qquad (13)$$

we can view it as an extension of the classical Bienenstock-Cooper-Munro (BCM) rule [6] with a two-dimensional synaptic modification function $\phi(\nu_1^k, \nu_2^k)$. Here, values of $\phi > 0$ produce LTD whereas values of $\phi < 0$ produce LTP. These regimes are separated by a sliding threshold; however, in contrast to the original BCM rule this threshold does not only depend on the running average of the postsynaptic rate $\bar{\nu}_1^k$, but also on the current values of $\nu_2^k$ and $\bar{\nu}_2^k$.

(3) In the absence of refractoriness we use an alternative gain function $g_{alt}(u) = [1/g_{max} + 1/g(u)]^{-1}$ in order to pose an upper limit of $g_{max} = 100$ Hz on the postsynaptic firing rate.
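A sketch of this rate-based simplification is given below; the exact form of the modification function follows the reconstruction of Eq. (12) above and should be treated as an assumption:

```python
import numpy as np

# Rate-based rule (12)-(13): dw/dt = -alpha * nu_pre * phi(nu1, nu2),
# a two-dimensional BCM-like modification function. The form of phi
# mirrors the reconstructed Eq. (12), an assumption for illustration.
def phi(nu1, nu2, nu1_bar, nu2_bar, nu12_bar, f, g_target, beta, gamma, dt):
    homeostatic = np.log((nu1 / g_target) * (nu1_bar / g_target) ** gamma)
    x = nu12_bar / (nu1_bar * nu2_bar)        # x = 1 for independent rates
    dependency = np.log(x) - x + 1.0          # vanishes when x == 1
    return f(nu1) * (homeostatic - beta * dt * nu2 * dependency)

def ema(old, new, tau, dt):
    """Exponential running average, used for nu1_bar, nu2_bar, nu12_bar."""
    return old + (dt / tau) * (new - old)

# Usage: dw[j]/dt = -alpha * nu_pre[j] * phi(nu1, nu2, ...)
```

Note that the dependency factor vanishes when the two rates are statistically independent, matching the property stated in the text.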
4 Application to Information Bottleneck Optimization

We use a setup as in Figure 1A where we want to maximize the information which the output $Y_1^K$ of a learning neuron conveys about two target signals $Y_2^K$ and $Y_3^K$. If the target signals are statistically independent from each other we can optimize the mutual information to each target signal separately. This leads to an update rule

$$\frac{\Delta w_{1j}^k}{\Delta t} = -\alpha\, C_{1j}^k \left[B_1^k(-\gamma) - \beta \Delta t \left(B_{12}^k + B_{13}^k\right)\right], \qquad (14)$$

where $B_{12}^k$ and $B_{13}^k$ are the postsynaptic terms (11) sensitive to the statistical dependence between the output and target signals 1 and 2, respectively. We choose $\tilde{g} = 30$ Hz for the target firing rate, and we use discrete time with $\Delta t = 1$ ms. In this experiment we demonstrate that it is possible to consider two very different kinds of target signals: one target spike train has a similar rate modulation as one part of the input, while the other target spike train has a high spike-spike correlation with another part of the input. The learning neuron receives input at 100 synapses, which are divided into 4 groups of 25 inputs each. The first two input groups consist of rate-modulated Poisson spike trains(4) (Figure 2A). Spike trains from the remaining groups 3 and 4 are correlated with a coefficient of 0.5 within each group; however, spike trains from different groups are uncorrelated. Correlated spike trains are generated by the procedure described in [7]. The first target signal is chosen to have the same rate modulation as the inputs from group 1, except that Gaussian random noise is superimposed with a standard deviation of 2 Hz. The second target spike train is correlated with inputs from group 3 (with a coefficient of 0.5), but uncorrelated to inputs from group 4. Furthermore, both target signals are silent during random intervals: at each time step, each target signal is independently set to 0 with a certain probability ($10^{-5}$) and remains silent for a duration chosen from a Gaussian distribution with mean 5 s and SD 1 s (minimum duration is 1 s). Hence this experiment tests whether learning works even if the target signals are not available all of the time.

Figure 2: Performance of the spike-based learning rule (10) for the IB task. A Modulation of input rates to input groups 1 and 2. B Evolution of weights during 60 minutes of learning (bright: strong synapses, $w_{ij} \approx 1$; dark: depressed synapses, $w_{ij} \approx 0$). Weights are initialized randomly between 0.10 and 0.12; $\alpha = 10^{-4}$, $\beta = 2 \cdot 10^3$, $\gamma = 50$. C Output rate and rate of target signal 1 during 5 seconds after learning. D Evolution of the average mutual information per time bin (solid line, left scale) between input and output and the Kullback-Leibler divergence per time bin (dashed line, right scale) as a function of time. Averages are calculated over segments of 1 minute. E Evolution of the average mutual information per time bin between output and both target spike trains as a function of time. F Trace of the correlation between output rate and rate of target signal 1 (solid line) and the spike-spike correlation (dashed line) between the output and target spike train 2 during learning. Correlation coefficients are calculated every 10 seconds.

Figure 2 shows that strong weights evolve for the first and third group of synapses, whereas the efficacies for the remaining inputs are depressed. Both groups with growing weights are correlated with one of the target signals; therefore the mutual information between output and target spike trains increases. Since spike-spike correlations convey more information than rate modulations, synaptic efficacies develop more strongly toward group 3 (the group with spike-spike correlations). This results in an initial decrease in correlation with the rate-modulated target to the benefit of higher correlation with the second target. However, after about 30 minutes, when the weights become stable, the correlations as well as the mutual information quantities stay roughly constant.

An application of the simplified rule (12) to the same task is shown in Figure 3, where it can be seen that strong weights close to $w_{max}$ are developed for the rate-modulated input. To some extent weights grow also for the inputs with spike-spike correlations in order to reach the constant target firing rate $\tilde{g}$. In contrast to the spike-based rule, the simplified rule is not able to detect spike-spike correlations between output and target spike trains.

(4) The rate of the first 25 inputs is modulated by a Gaussian white-noise signal with mean 20 Hz that has been low-pass filtered with a cut-off frequency of 5 Hz. Synapses 26 to 50 receive a rate that has a constant value of 2 Hz, except that a burst is initiated at each time step with a probability of 0.0005. Thus there is a burst on average every 2 s. The duration of a burst is chosen from a Gaussian distribution with mean 0.5 s and SD 0.2 s; the minimum duration is chosen to be 0.1 s. During a burst the rate is set to 50 Hz. In the simulations we use discrete time with $\Delta t = 1$ ms.
Figure 3: Performance of the simplified update rule (12) for the IB task. A Evolution of weights during 30 minutes of learning (bright: strong synapses, $w_{ij} \approx 1$; dark: depressed synapses, $w_{ij} \approx 0$). Weights are initialized randomly between 0.10 and 0.12; $\alpha = 10^{-3}$, $\beta = 10^4$, $\gamma = 10$. B Evolution of the average mutual information per time bin (solid line, left scale) between input and output and the Kullback-Leibler divergence per time bin (dashed line, right scale) as a function of time. Averages are calculated over segments of 1 minute. C Trace of the correlation between output rate and target rate during learning. Correlation coefficients are calculated every 10 seconds.

5 Extracting Independent Components

With a slight modification in the objective function (7) the learning rule allows us to extract statistically independent components from an ensemble of input spike trains. We consider two neurons receiving the same input at their synapses (see Figure 1B). For both neurons $i = 1, 2$ we maximize information transmission under the constraint that their outputs stay as statistically independent from each other as possible. That is, we maximize

$$\tilde{L}_i = I(\mathbf{X}^K; \mathbf{Y}_i^K) - \beta I(\mathbf{Y}_1^K; \mathbf{Y}_2^K) - \gamma D_{KL}(P(\mathbf{Y}_i^K) \| \tilde{P}(\mathbf{Y}_i^K)). \qquad (15)$$

Since the same terms (up to the sign) are optimized in (7) and (15) we can derive a gradient ascent rule for the weights $w_{ij}$ of neuron $i$ analogously to section 3:

$$\frac{\Delta w_{ij}^k}{\Delta t} = \alpha\, C_{ij}^k \left[B_i^k(\gamma) - \beta \Delta t\, B_{12}^k\right]. \qquad (16)$$

Figure 4 shows the results of an experiment where two neurons receive the same Poisson input with a rate of 20 Hz at their 100 synapses. The input is divided into two groups of 40 spike trains each, such that synapses 1 to 40 and 41 to 80 receive correlated input with a correlation coefficient of 0.5 within each group; however, any spike trains belonging to different input groups are uncorrelated. The remaining 20 synapses receive uncorrelated Poisson input. Weights close to the maximal efficacy $w_{max} = 1$ are developed for one of the groups of synapses that receives correlated input (group 2 in this case) whereas those for the other correlated group (group 1) as well as those for the uncorrelated group (group 3) stay low. Neuron 2 develops strong weights to the other correlated group of synapses (group 1) whereas the efficacies of the second correlated group (group 2) remain depressed, thereby trying to produce a statistically independent output. For both neurons the mutual information is maximized and the target output distribution of a constant firing rate of 30 Hz is approached well. After an initial increase in the mutual information and in the correlation between the outputs, when the weights of both neurons start to grow simultaneously, the amounts of information and correlation drop as both neurons develop strong efficacies to different parts of the input.
6 Discussion

Information Bottleneck (IB) and Independent Component Analysis (ICA) have been proposed as general principles for unsupervised learning in lower cortical areas; however, learning rules that can implement these principles with spiking neurons have been missing. In this article we have derived, from information theoretic principles, learning rules which enable a stochastically spiking neuron to solve these tasks. These learning rules are optimal from the perspective of information theory, but they are not local in the sense that they use only information that is available at a single synapse without an auxiliary network of interneurons or other biological processes. Rather, they tell us what type of information would have to be ideally provided by such an auxiliary network, and how the synapse should change its efficacy in order to approximate a theoretically optimal learning rule.

Figure 4: Extracting independent components. A, B Evolution of weights during 30 minutes of learning for both postsynaptic neurons (red: strong synapses, $w_{ij} \approx 1$; blue: depressed synapses, $w_{ij} \approx 0$). Weights are initialized randomly between 0.10 and 0.12; $\alpha = 10^{-3}$, $\beta = 100$, $\gamma = 10$. C Evolution of the average mutual information per time bin between both output spike trains as a function of time. D, E Evolution of the average mutual information per time bin (solid line, left scale) between input and output and the Kullback-Leibler divergence per time bin for both neurons (dashed line, right scale) as a function of time. Averages are calculated over segments of 1 minute. F Trace of the correlation between both output spike trains during learning. Correlation coefficients are calculated every 10 seconds.

Acknowledgments

We would like to thank Wulfram Gerstner and Jean-Pascal Pfister for helpful discussions. This paper was written under partial support by the Austrian Science Fund FWF, # S9102-N13 and # P17229-N04, and was also supported by PASCAL, project # IST2002-506778, and FACETS, project # 15879, of the European Union.

References

[1] N. Tishby, F. C. Pereira, and W. Bialek. The information bottleneck method. In Proceedings of the 37th Annual Allerton Conference on Communication, Control and Computing, pages 368-377, 1999.
[2] A. Hyvärinen, J. Karhunen, and E. Oja. Independent Component Analysis. Wiley, New York, 2001.
[3] T. Toyoizumi, J.-P. Pfister, K. Aihara, and W. Gerstner. Generalized Bienenstock-Cooper-Munro rule for spiking neurons that maximizes information transmission. Proc. Natl. Acad. Sci. USA, 102:5239-5244, 2005.
[4] W. Gerstner and W. M. Kistler. Spiking Neuron Models. Cambridge University Press, Cambridge, 2002.
[5] T. M. Cover and J. A. Thomas. Elements of Information Theory. Wiley, New York, 1991.
[6] E. L. Bienenstock, L. N. Cooper, and P. W. Munro. Theory for the development of neuron selectivity: orientation specificity and binocular interaction in visual cortex. J. Neurosci., 2(1):32-48, 1982.
[7] R. Gütig, R. Aharonov, S. Rotter, and H. Sompolinsky. Learning input correlations through non-linear temporally asymmetric Hebbian plasticity.
Journal of Neuroscience, 23:3697-3714, 2003.
A Small World Threshold for Economic Network Formation

Eyal Even-Dar
Computer and Information Science, University of Pennsylvania, Philadelphia, PA 19104
evendar@seas.upenn.edu

Michael Kearns
Computer and Information Science, University of Pennsylvania, Philadelphia, PA 19104
mkearns@cis.upenn.edu

Abstract

We introduce a game-theoretic model for network formation inspired by earlier stochastic models that mix localized and long-distance connectivity. In this model, players may purchase edges at distance $d$ at a cost of $d^\alpha$, and wish to minimize the sum of their edge purchases and their average distance to other players. In this model, we show there is a striking "small world" threshold phenomenon: in two dimensions, if $\alpha < 2$ then every Nash equilibrium results in a network of constant diameter (independent of network size), and if $\alpha > 2$ then every Nash equilibrium results in a network whose diameter grows as a root of the network size, and thus is unbounded. We contrast our results with those of Kleinberg [8] in a stochastic model, and empirically investigate the "navigability" of equilibrium networks. Our theoretical results all generalize to higher dimensions.

1 Introduction

Research over the last decade from fields as diverse as biology, sociology, economics and computer science has established the frequent empirical appearance of certain structural properties in naturally occurring networks. These properties include small diameter, local clustering of edges, and heavy-tailed degree distributions [11]. Not content to simply catalog such apparently "universal" properties, many researchers have proposed stochastic models of decentralized network formation that can explain their emergence. A typical such model is known as preferential attachment [3], in which arriving vertices are probabilistically more likely to form links to existing vertices with high degree; this generative process is known to form networks with power law degree distributions.

In parallel with these advances, economists and computer scientists have examined models in which networks are formed due to "rational" or game-theoretic forces rather than probabilistic ones. In such models networks are formed via the self-interested behavior of individuals who benefit from participation in the network [7]. Common examples include models in which a vertex or player can purchase edges, and would like to minimize their average shortest-path distance to all other vertices in the jointly formed network. A player's overall utility thus balances the desire to purchase few edges yet still be "well-connected" in the network. While stochastic models for network formation define a (possibly complex) distribution over possible networks, the game-theoretic models are typically equated with their (possibly complex) set of (Nash) equilibrium networks. It is also common to analyze the so-called Price of Anarchy [9] in such models, which measures how much worse an equilibrium network can be than some measure of social or centralized optimality [6, 2, 5, 1].

In this paper we introduce and give a rather sharp analysis of a network formation model of the game-theoretic variety, but which was inspired by a striking result of Kleinberg [8] in a stochastic model, and thus forms a bridge between these two lines of thought. In Kleinberg's stochastic model, the network formation process begins on an underlying substrate network that is highly regular (for instance, a grid in two dimensions). This regular substrate is viewed as a coarse model of "local"
connectivity, such as one's geographically close neighbors. The stochastic process then adds "long-distance" edges to the grid, in an attempt to model connections formed by travel, chance meetings, and so on. Kleinberg's model assumes that the probability that an edge connects two vertices whose grid distance is $d$ is proportional to $1/d^\alpha$ for some $\alpha > 0$; thus, longer-distance edges are less likely, but will still appear in significant numbers due to the long tail of the generating distribution. An interesting recent empirical study [4] of the migration patterns of dollar bills provides evidence for the validity of such a model. In a theoretical examination of the "six degrees of separation" or "small world" folklore first popularized by the pioneering empirical work of Travers and Milgram [10], Kleinberg proved that only for $\alpha = 2$ will the resulting network be likely to support the routing of messages on short paths using a natural distributed algorithm. For larger values of $\alpha$ the network simply does not have short paths (small diameter), and for smaller values the diameter is quite small, but the long-distance edges cannot be exploited effectively from only local topological information.

Our model and result can be viewed as an "economic" contrast to Kleinberg's. We again begin with a regular substrate like the grid in two dimensions; these edges are viewed as being provided free of charge to the players or vertices. A vertex $u$ is then free to purchase an edge to a vertex $v$ at grid distance $d = \delta(u, v)$ at a cost of $d^\alpha$ for $\alpha > 0$. Thus, longer-distance edges now have higher cost rather than lower probability, but again in a power law form. We analyze the networks that are Nash equilibria of a game in which each player's payoff is the negative of the sum of their edge purchases and average distances to the other players. Our main result is a precise analysis of the diameter (longest shortest path between any pair of vertices) of equilibrium networks in this model. In particular, we show a sharp threshold result: for any $\alpha < 2$, every pure Nash equilibrium network has only constant diameter (that is, diameter independent of the network size $n$); and for any $\alpha > 2$, every pure Nash equilibrium has diameter that grows as a root of the network size (that is, unbounded and growing rapidly with $n$). In the full version, we show in addition that the threshold phenomenon occurs in mixed Nash equilibria as well.

Despite the outward similarity, there are some important differences between our results and Kleinberg's. In addition to the proofs being essentially unrelated (since one requires a stochastic and the other an equilibrium analysis), Kleinberg's result establishes a "knife's edge" (fast routing only at $\alpha$ exactly 2), while ours is a threshold or phase transition: there is a broad range of $\alpha$ values yielding constant diameter, which sharply crosses over to polynomial growth at $\alpha = 2$. On the other hand, for $\alpha = 2$ Kleinberg establishes in his model not only that there is small (though order $\log^2(n)$ rather than constant) diameter, but that short paths can be navigated by a naive greedy routing algorithm. However, simulation results discussed in Section 5 suggest that the equilibrium networks of our model support fast routing as well. Like Kleinberg's results, all of ours generalize to higher dimensions, with the threshold occurring at $\alpha = r$ in $r$-dimensional space.

The outline of the paper is as follows. In Section 2 we define our game-theoretic model and introduce the required equilibrium concepts. In Section 3 we provide the constant diameter upper bound for $r = 2$ when $\alpha < 2$, and also even better constants for $\alpha \leq 1$. Section 4 provides the diameter lower bound for $\alpha > 2$, while in Section 5 we explore greedy routing in equilibrium networks via simulation.

2 Preliminaries

We devote this section to a formal definition of the model. We assume that the players are located on a grid, so each player $v$ is uniquely identified with a grid point $(a, b)$, where $1 \leq a, b \leq \sqrt{n}$; thus the total number of players is $n$. The action of player $v_i$ is a vector $s_i \in \{0, 1\}^n$ indicating which edges to other players $v_i$ has purchased. We let $s = s_1 \times \cdots \times s_n$ be the joint action of all the players $v_1, \ldots, v_n$. We also use $s_{-i}$ to denote the joint action of all players except player $v_i$.

The Graph. The joint action $s$ defines an undirected graph $G(s)$ as follows. The nodes of $G(s)$ are the players $V = \{v_1, \ldots, v_n\}$. An edge $(v_i, v_j)$ is bought by player $v_i$ if and only if $s_i(j) = 1$. Let $E_i(s_i) = \{(v_i, v_j) \mid s_i(j) = 1\}$ be the set of edges bought by player $v_i$ and let $E(s) = \cup_{i \in V} E_i(s_i)$. The graph induced by $s$ is $G(s) = (V, E(s))$.

Distances and Costs. The grid defines a natural distance $\delta$. Let $v_i$ be the player identified with the grid point $(a, b)$ and $v_{i'}$ with $(a', b')$; then their grid distance is $\delta(v_i, v_{i'}) = |a - a'| + |b - b'|$. Next we define a natural family of edge cost functions in which the cost of an edge is a function of the grid distance:

$$c(v_i, v_j) = \begin{cases} 0 & \delta(v_i, v_j) = 1 \\ a\,\delta(v_i, v_j)^\alpha & \text{otherwise} \end{cases}$$

where $a, \alpha > 0$ are parameters of the model. Thus, grid edges are free to the players, and longer edges have a cost polynomial in their grid distance.
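The model's primitives are straightforward to code. The sketch below implements the grid distance, the edge-cost function $c$, and a player's overall cost (edge purchases plus summed shortest-path distances, here via BFS); the helper names and parameter values are illustrative, not from the paper:

```python
from collections import deque

def grid_distance(u, v):
    """Manhattan distance between grid points u = (a, b) and v = (a', b')."""
    return abs(u[0] - v[0]) + abs(u[1] - v[1])

def edge_cost(u, v, a=1.0, alpha=1.5):
    """c(u, v): grid edges are free, longer edges cost a * d^alpha."""
    d = grid_distance(u, v)
    return 0.0 if d == 1 else a * d ** alpha

def player_cost(i, bought, adj, players, a=1.0, alpha=1.5):
    """bought: indices player i purchased edges to; adj: G(s) adjacency."""
    purchases = sum(edge_cost(players[i], players[j], a, alpha) for j in bought)
    dist, queue = {i: 0}, deque([i])           # BFS from player i in G(s)
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    return purchases + sum(dist.get(j, float("inf")) for j in range(len(players)))
```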
In Section 2 we define our game-theoretic model and introduce the required equilibrium concepts. In Section 3 we provide the constant diameter upper bound for r = 2 when α < 2, and also even better constants for α ≤ 1. Section 4 provides the diameter lower bound for α > 2, while in Section 5 we explore greedy routing in equilibrium networks via simulation.

2 Preliminaries

We devote this section to a formal definition of the model. We assume that the players are located on a grid, so each player v is uniquely identified with a grid point (a, b), where 1 ≤ a, b ≤ √n; thus the total number of players is n. The action of player v_i is a vector s_i ∈ {0, 1}^n indicating which edges to other players v_i has purchased. We let s = s_1 × ⋯ × s_n denote the joint action of all the players v_1, …, v_n. We also use s_{−i} to denote the joint action of all players except player v_i.

The Graph. The joint action s defines an undirected graph G(s) as follows. The nodes of G(s) are the players V = {v_1, …, v_n}. An edge (v_i, v_j) is bought by player v_i if and only if s_i(j) = 1. Let E_i(s_i) = {(v_i, v_j) | s_i(j) = 1} be the set of edges bought by player v_i, and let E(s) = ∪_{i∈V} E_i(s_i). The graph induced by s is G(s) = (V, E(s)).

Distances and Costs. The grid defines a natural distance δ. Let v_i be the player identified with the grid point (a, b) and v_{i′} with (a′, b′); then their grid distance is δ(v_i, v_{i′}) = |a − a′| + |b − b′|. Next we define a natural family of edge cost functions in which the cost of an edge is a function of the grid distance:

c(v_i, v_j) = 0 if δ(v_i, v_j) = 1, and c(v_i, v_j) = a · δ(v_i, v_j)^α otherwise,

where a, α > 0 are parameters of the model. Thus, grid edges are free to the players, and longer edges have a cost polynomial in their grid distance.

The Game. We are now ready to define the formal network formation game we shall analyze. The overall cost function c_i of player v_i is defined as

c_i(s) = c_i(s_i, s_{−i}) = Σ_{e∈E_i(s_i)} c(e) + Σ_{j=1}^n δ_{G(s)}(v_i, v_j),

where δ_{G(s)}(u, v) is the shortest-path distance between u and v in G(s). Thus, in this game player i wishes to minimize c_i(s), which requires balancing edge costs and shortest paths. We emphasize that players benefit from edge purchases by other players, since shortest paths are measured with respect to the overall graph formed by all purchased edges. The graph diameter is defined as max_{i,j} δ_{G(s)}(v_i, v_j).

Equilibrium Concepts. A joint action s = s_1 × ⋯ × s_n is said to be a Nash equilibrium if for every player i and any alternative action ŝ_i ∈ {0, 1}^n, we have c_i(s_i, s_{−i}) ≤ c_i(ŝ_i, s_{−i}). If s is a Nash equilibrium we say that its corresponding graph G(s) is an equilibrium graph. A joint action s = s_1 × ⋯ × s_n is said to be link stable if for every player i and any alternative action ŝ_i ∈ {0, 1}^n that differs from s_i in exactly one coordinate (i.e., one edge), we have c_i(s_i, s_{−i}) ≤ c_i(ŝ_i, s_{−i}). If s is link stable we say that its corresponding graph G(s) is a stable graph. Note that an equilibrium graph is also a link stable graph. Link stability means that the graph is stable under single-edge unilateral deviations (as opposed to Nash, which permits arbitrary unilateral deviations), and is a special case of the pairwise stability notion given in [7]. The popularity of the link stability notion is due to its simplicity and to the fact that it is easily computable, as opposed to computing best responses, which in similar problems is known to be NP-hard [6]. Note that since the grid edges are free, the diameter of an equilibrium or link stable graph is bounded by 2√n.
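To make the cost structure concrete, the following is a minimal sketch (ours, not the authors' code) that evaluates a player's cost c_i(s) on a small grid with a = 1 assumed; the graph representation, the BFS routine, and all helper names are illustrative choices.

```python
from collections import deque
from itertools import product

def grid_players(side):
    return list(product(range(side), range(side)))

def edge_cost(u, v, alpha, a=1.0):
    d = abs(u[0] - v[0]) + abs(u[1] - v[1])    # grid distance delta(u, v)
    return 0.0 if d == 1 else a * d ** alpha   # grid edges are free

def build_graph(players, purchased):
    adj = {p: set() for p in players}
    for (x, y) in players:                     # free grid edges
        for q in ((x + 1, y), (x, y + 1)):
            if q in adj:
                adj[(x, y)].add(q); adj[q].add((x, y))
    for (u, v) in purchased:                   # bought long-distance edges
        adj[u].add(v); adj[v].add(u)
    return adj

def bfs_distances(adj, src):
    dist = {src: 0}; queue = deque([src])
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1; queue.append(w)
    return dist

def player_cost(player, purchased, adj, alpha):
    # sum of this player's edge purchases plus distances to all other players
    buy = sum(edge_cost(u, v, alpha) for (u, v) in purchased if u == player)
    return buy + sum(bfs_distances(adj, player).values())

players = grid_players(10)
purchased = [((0, 0), (9, 9))]                 # one long edge, bought by (0, 0)
adj = build_graph(players, purchased)
print(player_cost((0, 0), purchased, adj, alpha=1.5))
```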
3 Constant Diameter at Equilibrium for α ∈ [0, 2]

In this section we analyze the diameter of equilibrium networks when α ∈ [0, 2]. Our results actually hold under the more general notion of link stability as well. The following is the first of our two main theorems.

Theorem 3.1 For any constant ε > 0, if α = 2 − ε, then there exists a constant c(ε) such that for any n, all Nash equilibrium or link stable graphs over n players have diameter at most c(ε).

The proof of this theorem has a number of technical subtleties, so we first provide its intuition, which is illustrated in Figure 1(B). We analyze an equilibrium (or link stable) graph in stages, and focus on the distance of vertices to some focal player u. In each stage we argue that ever more grid-distant players have an incentive to purchase an edge to u due to the centrality of u in the graph. We start with the following simple fact: for all nodes v and w, if δ(v, w) ≤ d then δ_{G(s)}(v, w) ≤ d, since all grid edges are free. We would like to show that an even stronger property holds, namely that if δ(v, w) ≤ d^β then δ_{G(s)}(w, v) ≤ d for some β > 1. Since this property is no longer simply implied by the grid edges, it requires arguing that grid-distant vertices have an incentive to purchase edges to each other.

Suppose there are nodes u and v such that δ_{G(s)}(u, v) ≥ d. We first define a "close" graph neighborhood of u, S_u = {w | δ_{G(s)}(u, w) ≤ d/3}. Note that for every w ∈ S_u we have δ_{G(s)}(v, w) ≥ 2d/3. Next we would like to claim that the cardinality of S_u is large, i.e., that u's neighborhood is densely populated. For this we define S*_u = {w | δ(u, w) ≤ d/3} ⊆ S_u. Using the grid topology (see Figure 1(A)), we see that |S*_u| is of order d².

Figure 1: (A) The number of nodes at exact grid distance k from the grid center is exactly 4k, while the number within distance k is of order k². (B) Illustration of the main argument of Theorem 3.1. Here u and v are vertices at grid distance d, while u and v′ are at grid distance d′, where d ≤ d^{3/α} ≤ d′. In the proof we use the size of S*_u to show that v benefits by purchasing an edge to u and thus must be at distance 1 from u in the equilibrium graph; this in turn allows us to argue that v′ wishes to purchase an edge to u as well.

Now consider the benefit to v of buying the edge (v, u) (which is not in the graph, since δ_{G(s)}(u, v) ≥ d). Since the distance from v to every node in S_u is reduced by at least d/3 and the set size is at least of order d², the benefit is of order d³. The fact that this edge was not bought implies that δ(v, u)^α = Ω(d³). Therefore, δ_{G(s)}(u, v) ≥ d implies δ(u, v) = Ω(d^{3/α}), which is the contrapositive of: δ(u, v) = O(d^{3/α}) implies δ_{G(s)}(u, v) ≤ d. In other words, for "small enough" values of α (quantified in the full proof), vertices quite distant from u in the grid have an incentive to buy an edge to u, by virtue of the dense population in S*_u. But this in turn argues that the size of S_u is even larger than that of S*_u; we then "bootstrap" this argument to show that yet further vertices have an incentive to connect to u, and so on. We now proceed with the formal proof based on this argument.

Lemma 3.2 Let G(s) be an equilibrium or link stable graph and let u be the grid center. Suppose that for every node v such that δ(u, v) ≤ d^β (where β ≥ 1 and d^β < √n/2), we have δ_{G(s)}(u, v) ≤ d. Then for every d, and for every node v such that δ(u, v) ≤ 2^{1/α}(d/3)^{β′}, where β′ = (2β + 1)/α, we have δ_{G(s)}(u, v) ≤ d.
Proof: Let v be a node such that δ_{G(s)}(u, v) = d, and let S_u = {w | δ_{G(s)}(u, w) ≤ d/3}; observe that d′ = min_{w∈S_u} δ_{G(s)}(w, v) is at least 2d/3, and thus v's benefit from buying the edge (v, u) is at least (d/3)|S_u|. Next we would like to bound the size of S_u from below. Using the topology of the grid, the grid center node has 4k nodes at exact grid distance k (see Figure 1(A)) (if k ≤ √n/2), which implies that the center node has 2k² nodes within grid distance k. The set S_u contains, by definition, all nodes w such that δ_{G(s)}(u, w) ≤ d/3, which by our assumption implies that it includes all nodes w such that δ(u, w) ≤ (d/3)^β. Therefore, the size of S_u is at least 2(d/3)^{2β}. Now since G(s) is an equilibrium or link stable graph, v would not like to buy the edge (u, v), and thus

δ(u, v)^α > 2(d/3)^{2β} · d/3 = 2 d^{2β+1} / 3^{2β+1}.

Taking the α-th root, we have that δ_{G(s)}(u, v) > d implies δ(u, v) > 2^{1/α} d^{(2β+1)/α} / 3^{(2β+1)/α}, which is the contrapositive of: δ(u, v) ≤ 2^{1/α}(d/3)^{(2β+1)/α} implies δ_{G(s)}(u, v) ≤ d, as required.

Equipped with this lemma we can prove rather strong results regarding the case where α = 2 − ε, for ε > 0. In the previous lemma there are two parts in the change of the radius: one is the exponent, which grows, and the second is that instead of having d in the base we have only d/3. The next lemma shows that as long as d is large enough we can ignore the fact that the base decreases from d to d/3, and thus "amplify" the exponent β in the preceding analysis to a larger exponent (1 + ε₁)β.

Figure 2: A graph with diameter of 6.

Lemma 3.3 (Amplification Lemma) Let G(s) be an equilibrium or link stable graph. Let α = 2 − ε for some ε > 0. Let c(ε) be a constant determined by the subsequent analysis. Suppose that for every d > c(ε), for every node v such that δ(u, v) ≤ d^β (where β ≥ 1, d^β < √n/2, and u is the grid center), we have δ_{G(s)}(u, v) ≤ d. Then for every d > c(ε), for every node v such that δ(u, v) ≤ d^{β′}, where β′ = β(1 + ε₁), we have δ_{G(s)}(u, v) ≤ d, where ε₁ = ε / (2(2 − ε)).

Proof: Set c(ε) = 3^{(1+2ε₁)/ε₁}. By Lemma 3.2 we have that for every d > 3^{(1+2ε₁)/ε₁} and for all nodes u and v such that δ(u, v) ≤ 2^{1/α}(d/3)^{β̂}, where β̂ = (2β + 1)/α, we have δ_{G(s)}(u, v) ≤ d. Since α = 2 − ε, we can write β̂ = (1 + 2ε₁)β + 1/α, and hence

2^{1/α}(d/3)^{β̂} ≥ (d/3)^{(1+2ε₁)β + 1/α} = d^{(1+2ε₁)β + 1/α} / 3^{(1+2ε₁)β + 1/α} ≥ d^{(1+ε₁)β} · (d^{ε₁}/3^{1+2ε₁})^β ≥ d^{(1+ε₁)β},

where both of the last two inequalities hold for d ≥ c(ε). Thus every node v with δ(u, v) ≤ d^{(1+ε₁)β} satisfies δ_{G(s)}(u, v) ≤ d.

Now we are ready to prove the main theorem of this section.

Proof: (Theorem 3.1) Let c₀(ε) = 3^{(1+2ε₁)/ε₁}, where ε₁ = ε/(2(2 − ε)), and let u be the grid center. For every node v such that δ(u, v) ≤ c₀(ε), we must have δ_{G(s)}(u, v) ≤ c₀(ε), since all grid edges are part of G(s). Next we prove that all nodes within grid distance √n/2 of u are within graph distance c₀(ε) of u. Since δ_{G(s)}(u, v) ≤ δ(u, v), we can apply Lemma 3.3 to obtain that within radius c₀(ε) of u in G(s) are all nodes v such that δ(u, v) ≤ c₀(ε)^{1+ε₁}. We repeat this argument recursively and obtain, after the k-th application, that all nodes v such that δ(u, v) ≤ c₀(ε)^{(1+ε₁)^k} satisfy δ_{G(s)}(u, v) ≤ c₀(ε). Taking k = log_{1+ε₁}(√n/2), this implies that there are at least n/2 nodes within graph distance c₀(ε) of u. Now suppose there exists a node v such that δ_{G(s)}(u, v) ≥ 3c₀(ε).
Then by buying the edge (v, u), v's benefit is at least 2c₀(ε) · n/2 (we know that there are at least n/2 nodes within graph distance c₀(ε) of u), while its cost is bounded by (√n)^α < n (since the grid distance of any node from the grid center u is at most √n). Setting c(ε) = 6c₀(ε), we obtain the theorem.

3.1 Even Smaller Constant Diameter at Equilibrium for α ≤ 1

The constant diameter bound c(ε) in Theorem 3.1 blows up as ε approaches 0. In this section we show that for α ≤ 1, rather small constant bounds hold. Note that when α ≤ 1, the most expensive edge cost is bounded by 2a√n, where a is the edge cost constant. We will use this fact to show that every equilibrium graph G(s) has a small constant diameter. For u, v ∈ V, we let T_{G(s)}(u, v) be the set of all nodes that u can reach on a shortest path that includes v. Formally, T_{G(s)}(u, v) = {w | δ_{G(s)}(u, w) = δ_{G(s)}(u, v) + δ_{G(s)}(v, w)}. We start by providing a technical lemma.

Lemma 3.4 Let G(s) be an equilibrium or link stable graph. Let u, v ∈ V be an arbitrary pair of players. If (u, v) ∉ E(s) then |T_{G(s)}(u, v)| ≤ δ(u, v)^α / (δ_{G(s)}(u, v) − 1).

Proof: Buying the edge (u, v) (at a cost of δ(u, v)^α) makes the distance from u to every w ∈ T_{G(s)}(u, v) shorter by δ_{G(s)}(u, v) − 1. However, s is a Nash equilibrium, so we know that the edge (u, v) was not bought. This implies that the benefit (δ_{G(s)}(u, v) − 1) · |T_{G(s)}(u, v)| from buying the edge is bounded by δ(u, v)^α.

Lemma 3.5 Let G(s) = (V, E(s)) be an equilibrium graph and let u, v ∈ V.
• If α < 1 then δ_{G(s)}(u, v) ≤ 5.
• If α = 1 then δ_{G(s)}(u, v) ≤ 2⌈a² + 4⌉.

Proof: We prove the case where the cost function is a · δ(u, v) and omit the proof for the case α < 1, which is similar. Assume for contradiction that there exists a node v such that δ_{G(s)}(u, v) ≥ 2⌈a² + 4⌉ + 1, where u is the grid center node (note that the grid distance from u is bounded by √n). Let S²_u = {w | δ_{G(s)}(u, w) ≤ 2} be the set of nodes at distance at most 2 from u (see Figure 2), including u. We first bound the size of S²_u. For every node w ∈ S²_u we have δ_{G(s)}(v, w) ≥ ⌈a² + 4⌉ − 1. Buying the edge (v, u) makes the distance between v and every w ∈ S²_u at most 3. Thus, the benefit from buying the edge (v, u) is at least (⌈a² + 4⌉ − 1 − 3)|S²_u| = ⌈a²⌉|S²_u|. However, the edge (v, u) ∉ E(s) and is not part of the equilibrium graph. Therefore, the benefit from buying it is at most its cost δ(v, u), which implies that ⌈a²⌉|S²_u| ≤ δ(v, u) ≤ a√n. Now we look at a shortest-path tree rooted at u. There are at most a√n/⌈a²⌉ nodes at distance 2 from u. Each one of them has at most a√n descendants by Lemma 3.4. Since the graph is connected, we would need (a√n/⌈a²⌉)(a√n − 2) + a√n/⌈a²⌉ ≥ n, which is a contradiction.

3.2 The Case α = 2

In this case we obtain neither a constant upper bound nor a polynomial lower bound. We show that for α = 2 the diameter is bounded by O(n^{2/√log n}), which is bounded by n^c for every constant c > 0 (i.e., this bound is very small as well); however, it bounds from above any polylogarithmic function.

Theorem 3.6 Let the edge cost be c(u, v) = δ(u, v)², and let G(s) = (V, E(s)) be an equilibrium or link stable graph. Then the graph diameter is bounded by O(n^{2/√log n}).

Proof: We again apply Lemma 3.2 repeatedly, but now with α = 2. After applying it for the first time we have that all nodes at grid distance (d/3)^{3/2} from u, the grid center, are within graph distance d. Recall that S_u = {w | δ_{G(s)}(u, w) ≤ d/3}.
Using the same arguments we construct a series of distances x_k such that if δ(u, v) ≤ x_k then δ_{G(s)}(u, v) ≤ d. We begin with x₁ = (d/3)^{3/2} and now compute x₂:

x₂² > (d/3)|S_u| = (d/3) · ((d/3)^{3/2} / 3^{3/2})².

Solving, we obtain x₂ = d^{4/2} / 3^{7/2}. Suppose that after repeating the argument for the k-th time we have that x_k is at least d^{a_k}/3^{b_k}. Using this bound we derive a lower bound on the size of S_u and obtain the following bound for the (k+1)-th iteration:

x_{k+1}² > (d/3)|S_u| = (d/3) · ((d/3)^{a_k} / 3^{b_k})².

Thus we obtain x_{k+1} = d^{a_k + 1/2} / 3^{b_k + a_k + 1/2}, so that a_{k+1} = a_k + 1/2 and b_{k+1} = b_k + a_k + 1/2. Our next goal is to estimate a_k and b_k. The estimation of a_k is straightforward: a_k = k/2 + 1. For b_k it is enough for our needs to consider an upper bound; since b_{k+1} = b_k + k/2 + 3/2, one can verify that b_k = (k² + 5k)/4, so that k²/2 is an upper bound for k ≥ 5. Therefore, in order to provide an upper bound on the distance from the grid center u, we would like to find an initial d and a number of iterations k such that

d^{k/2} / 3^{k²/2} ≥ √n,

with d minimal. This clearly holds for d = n^{2/√log n} and k = √log n, and this can be shown to be the minimal value for which it holds. Now, using arguments similar to those in the previous proofs, we show that every other node cannot be further away from u.

4 Polynomial Diameter at Equilibrium for α > 2

We now give our second main result, which states that for α > 2 the diameter grows as a root of n and is thus unbounded.

Theorem 4.1 For any α, the diameter of any Nash equilibrium or link stable graph is Ω(√n^{(α−2)/(α+1)}).

Before giving the proof we note that this bound implies a trivial lower bound of a constant for α ≤ 2, and a polynomial lower bound for α > 2. For instance, setting α = 3 we obtain a lower bound of Ω(√n^{1/4}) = Ω(n^{1/8}). We first provide a simple lemma (stated without proof) regarding the influence of one edge on a connected graph's diameter.

Lemma 4.2 Let G = (V, E) be a connected graph with diameter C, and let G′ = (V, E ∪ {e}) for any edge e. Then the diameter of G′ is at least C/2.

Proof: (Theorem 4.1) Let D be the diameter of an equilibrium graph, and let d be the grid distance of the most expensive edge (w, v) bought in G; note that the most expensive edge corresponds to the longest edge in grid-distance terms. First we observe that D ≥ 2√n/d, as the grid diameter is 2√n and the fastest way to traverse it is through edges of maximal length, which is d. By Lemma 4.2, the benefit of buying an edge (u, v) is at most 2D(n − 3), since the diameter before the purchase was at most 2D and the distances to your two neighbors and to yourself are unchanged. Therefore, we have δ(u, v)^α = d^α ≤ 2D(n − 3). Next we use the two simple bounds

d^α ≤ 2Dn   (1)
2√n/d ≤ D   (2)

Substituting the bound on d from Equation (2) into Equation (1), we obtain

(2√n/D)^α ≤ 2Dn  ⟹  (2√n)^α ≤ D^{1+α} · 2n  ⟹  c · √n^{(α−2)/(1+α)} ≤ D,

as required.

5 Simulations

The analyses we have considered so far examine static properties of equilibrium and link stable graphs, and as such do not shed light on natural dynamics that might lead to them. In this section we briefly describe dynamical simulations on a 100 × 100 grid (which has on the order of 10^8 possible edges). At each iteration a random vertex u is selected. With probability 1/2, an existing edge of u (grid or long-distance) is selected at random, and we compute whether (given the current global configuration of the graph) u would prefer not to purchase this edge, in which case it is deleted. With probability 1/2, we instead select a second random vertex v, and compute whether (again given the global graph) u would like to purchase the edge (u, v), in which case it is added. A sketch of this dynamic appears below.
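The following is a minimal, illustrative sketch (ours, not the authors' code) of the single-edge dynamic just described; the grid size, iteration count, random seed, and a = 1 are assumptions, and for simplicity only purchased edges are considered for deletion (grid edges are free, so deleting them never helps).

```python
import random
from collections import deque
from itertools import product

SIDE, ALPHA, ITERS = 10, 1.0, 20000     # assumed small instance, not the 100 x 100 grid

players = list(product(range(SIDE), range(SIDE)))
grid = {(u, v) for u in players for v in players
        if abs(u[0] - v[0]) + abs(u[1] - v[1]) == 1 and u < v}
bought = set()                           # purchased edges, stored as (buyer, endpoint)

def adjacency():
    a = {p: set() for p in players}
    for (u, v) in grid | bought:
        a[u].add(v); a[v].add(u)
    return a

def total_distance(a, src):
    d = {src: 0}; q = deque([src])
    while q:
        u = q.popleft()
        for w in a[u]:
            if w not in d:
                d[w] = d[u] + 1; q.append(w)
    return sum(d.values())

def cost(u):
    # edge purchases plus the sum of distances to all other players
    buy = sum((abs(b[0] - e[0]) + abs(b[1] - e[1])) ** ALPHA
              for (b, e) in bought if b == u)
    return buy + total_distance(adjacency(), u)

random.seed(0)
for _ in range(ITERS):
    u = random.choice(players)
    mine = [x for x in bought if x[0] == u]
    if random.random() < 0.5 and mine:       # consider dropping an owned edge
        e = random.choice(mine)
        before = cost(u); bought.discard(e)
        if cost(u) > before:                 # deletion hurt u, so undo it
            bought.add(e)
    else:                                    # consider purchasing a new edge
        v = random.choice(players)
        if v != u and (u, v) not in bought and (v, u) not in bought:
            before = cost(u); bought.add((u, v))
            if cost(u) >= before:            # purchase did not help u, undo it
                bought.discard((u, v))
```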
Note that if this dynamic converges, it is to a link stable graph and not necessarily a Nash equilibrium, since only single-edge deviations are considered. The left panel of Figure 3 shows the worst-case diameter as a function of the number of iterations, and demonstrates the qualitative validity of our theory for this dynamic. For α = 1, 2 the diameter quickly falls to a rather small value (less than 10). The asymptotes for α = 3, 4 are considerably higher.

The right panel revisits the question that was the primary interest of Kleinberg's work [8], namely the efficiency of "naive" or greedy navigation or routing. If we wish to route a message from the grid center to a randomly chosen destination, and the message is always forwarded from its current vertex to the graph neighbor whose grid address is closest to the destination, how long will it take? Kleinberg was the first to observe and explain the fact that the mere existence of short paths (small diameter) may not be sufficient for such greedy local routing algorithms to find the short paths. In the right panel of Figure 3 we show that the routing efficiency does in fact seem to echo our theoretical results: for the aforementioned dynamic, very short paths (only slightly longer than the diameter) are found for small α, and much longer paths for larger α.

Figure 3: Left panel: graph diameter vs. iterations for a simple dynamic (α = 1, 2, 3, 4). Right panel: greedy routing efficiency vs. iterations for the same dynamic.

6 Extensions

We conclude by briefly mentioning generalizations of our theoretical results whose details we omit. All of the results carry over to higher dimensions, where the threshold phenomenon takes place at α equal to the grid dimension. We can also easily handle the case where the grid wraps around rather than having boundaries. We can also generalize to the pairwise link stability notion of [7], in which the cost of each link is shared between the endpoints of the edge. Finally, we can construct networks that are stable.

References
[1] S. Albers, S. Eilts, E. Even-Dar, Y. Mansour, and L. Roditty. On Nash equilibria for a network creation game. In Proc. of SODA, pages 89–98, 2006.
[2] E. Anshelevich, A. Dasgupta, J. Kleinberg, E. Tardos, T. Wexler, and T. Roughgarden. The price of stability for network design with fair cost allocation. In Proc. of FOCS, pages 295–304, 2004.
[3] Albert-László Barabási and R. Albert. Emergence of scaling in random networks. Science, 286:509–512, 1999.
[4] D. Brockmann, L. Hufnagel, and T. Geisel. The scaling laws of human travel. Nature, 439:462–465, 2005.
[5] J. Corbo and D. C. Parkes. The price of selfish behavior in bilateral network formation. In Proc. of PODC, pages 99–107, 2005.
[6] A. Fabrikant, A. Luthra, E. Maneva, C. H. Papadimitriou, and S. Shenker. On a network creation game. In Proc. of PODC, pages 347–351, 2003.
[7] M. Jackson. A survey of models of network formation: stability and efficiency. In G. Demange and M. Wooders, editors, Group Formation in Economics: Networks, Clubs and Coalitions. 2003.
[8] Jon Kleinberg. Navigation in a small world. Nature, 406:845, 2000.
[9] E. Koutsoupias and C.
H. Papadimitriou. Worst-case equilibria. In Proceedings of the 16th STACS, pages 404–413, 1999.
[10] J. Travers and S. Milgram. An experimental study of the small world problem. Sociometry, 32:425, 1969.
[11] Duncan J. Watts. Six Degrees: The Science of a Connected Age. W. W. Norton, Cambridge, Mass., 2003.
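As a companion to the simulation study in Section 5, here is a minimal sketch (ours, not the authors' code) of the greedy routing rule used for the right panel of Figure 3: the message is always forwarded to the graph neighbor whose grid address is closest to the destination. The adjacency representation and the hop cap are illustrative assumptions.

```python
from itertools import product

def greedy_route(adj, src, dst, max_hops=10000):
    """Forward greedily by grid (Manhattan) distance to dst; returns hop count.

    adj maps each grid point (x, y) to the set of its graph neighbors,
    including both grid edges and purchased long-distance edges.
    """
    def grid_dist(p, q):
        return abs(p[0] - q[0]) + abs(p[1] - q[1])

    cur, hops = src, 0
    while cur != dst and hops < max_hops:
        # pick the neighbor geographically closest to the destination
        cur = min(adj[cur], key=lambda w: grid_dist(w, dst))
        hops += 1
    return hops if cur == dst else None   # None: routing failed to converge

# Example usage on a bare 5 x 5 grid (no purchased edges):
pts = list(product(range(5), range(5)))
adj = {p: {q for q in pts if abs(p[0]-q[0]) + abs(p[1]-q[1]) == 1} for p in pts}
print(greedy_route(adj, (0, 0), (4, 4)))   # 8 hops on the empty grid
```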
Generalized Maximum Margin Clustering and Unsupervised Kernel Learning

Hamed Valizadegan
Computer Science and Engineering
Michigan State University
East Lansing, MI 48824
[email protected]

Rong Jin
Computer Science and Engineering
Michigan State University
East Lansing, MI 48824
[email protected]

Abstract

Maximum margin clustering was proposed recently and has shown promising performance in recent studies [1, 2]. It extends the theory of the support vector machine to unsupervised learning. Despite its good performance, there are three major problems with maximum margin clustering that limit its usefulness for real-world applications. First, it is computationally expensive and difficult to scale to large datasets because the number of parameters in maximum margin clustering is quadratic in the number of examples. Second, it requires data preprocessing to ensure that any clustering boundary will pass through the origin, which makes it unsuitable for clustering unbalanced datasets. Third, it is sensitive to the choice of kernel functions, and requires an external procedure to determine the appropriate values for the parameters of the kernel functions. In this paper, we propose the "generalized maximum margin clustering" framework, which addresses the above three problems simultaneously. The new framework generalizes the maximum margin clustering algorithm by allowing any clustering boundaries, including those not passing through the origin. It significantly improves the computational efficiency by reducing the number of parameters. Furthermore, the new framework is able to automatically determine the appropriate kernel matrix without any labeled data. Finally, we show a formal connection between maximum margin clustering and spectral clustering. We demonstrate the efficiency of the generalized maximum margin clustering algorithm using both synthetic datasets and real datasets from the UCI repository.

1 Introduction

Data clustering, the unsupervised classification of samples into groups, has been an important research area in machine learning for several decades. A large number of algorithms have been developed for data clustering, including the k-means algorithm [3], mixture models [4], and spectral clustering [5, 6, 7, 8, 9]. More recently, maximum margin clustering [1, 2] was proposed for data clustering and has shown promising performance. The key idea of maximum margin clustering is to extend the theory of the support vector machine to unsupervised learning. However, despite its success, the following three major problems with maximum margin clustering have prevented it from being applied to real-world applications:

• High computational cost. The number of parameters in maximum margin clustering is quadratic in the number of examples. Thus, it is difficult to scale to large datasets. Figure 1 shows the computational time (in seconds) of the maximum margin clustering algorithm with respect to different numbers of examples.
We clearly see that the computational time increases dramatically when we apply the maximum margin clustering algorithm to even modest numbers of examples.

Figure 1: The scalability of the original maximum margin clustering algorithm versus the generalized maximum margin clustering algorithm (running time in seconds vs. number of samples).

Figure 2: Clustering error of spectral clustering using the RBF kernel with different kernel widths: (a) data distribution; (b) clustering error versus kernel width. The horizontal axis of Figure 2(b) represents the percentage of the distance range (i.e., the difference between the maximum and the minimum distance) that is used as the kernel width.

• Requiring clustering boundaries to pass through the origin. One important assumption made by the maximum margin clustering of [1] is that the clustering boundaries pass through the origin. To this end, maximum margin clustering requires centering the data points at the origin before clustering. It is important to note that centering the data at the origin does not guarantee that clustering boundaries go through the origin, particularly when cluster sizes are unbalanced, with one cluster significantly more popular than the other.

• Sensitivity to the choice of kernel functions. Figure 2(b) shows the clustering error of maximum margin clustering for the synthesized data of two overlapping Gaussian clusters (Figure 2(a)) using the RBF kernel with different kernel widths. We see that the performance of maximum margin clustering depends critically on the choice of kernel width. The same problem is also observed in spectral clustering [10]. Although a number of studies [8, 9, 10, 6] are devoted to automatically identifying appropriate kernel matrices in clustering, they are either heuristic approaches or require additional labeled data.

In this paper, we propose the "generalized maximum margin clustering" framework, which resolves the above three problems simultaneously. In particular, the proposed framework reformulates the problem of maximum margin clustering to include the bias term in the classification boundary, and therefore removes the assumption that clustering boundaries have to pass through the origin. Furthermore, the new formulation reduces the number of parameters to be linear in the number of examples, and therefore significantly reduces the computational cost. Finally, it is equipped with the capability of unsupervised kernel learning, and is therefore able to determine the appropriate kernel matrix and clustering memberships simultaneously. More interestingly, we will show that spectral clustering, such as the normalized cut algorithm, can be viewed as a special case of generalized maximum margin clustering.

The remainder of the paper is organized as follows: Section 2 reviews the work on maximum margin clustering and kernel learning. Section 3 presents the framework of generalized maximum margin clustering. Our empirical studies are presented in Section 4. Section 5 concludes this work.

2 Related Work

The key idea of maximum margin clustering is to extend the theory of support vector machines to unsupervised learning.
Given the training examples D = (x₁, x₂, …, x_n) and their class labels y = (y₁, y₂, …, y_n) ∈ {−1, +1}ⁿ, the dual problem of the support vector machine can be written as:

max_{α∈Rⁿ}  α⊤e − (1/2) α⊤ diag(y) K diag(y) α
s.t.  0 ≤ α ≤ C,  α⊤y = 0   (1)

where K ∈ R^{n×n} is the kernel matrix and diag(y) stands for the diagonal matrix that uses the vector y as its diagonal elements. To apply the above formulation to unsupervised learning, the maximum margin clustering approach relaxes the class labels y to continuous variables, and searches for both y and α to maximize the classification margin. This leads to the following optimization problem:

min_{y,δ,μ,ν,t}  t
s.t.  [ (yy⊤)∘K            e + δ − μ + νy ]
      [ (e + δ − μ + νy)⊤   t − 2Cδ⊤e    ]  ⪰ 0,
      δ ≥ 0,  μ ≥ 0

where ∘ stands for the element-wise product between two matrices. To convert the above problem into a convex programming problem, the authors of [1] make two important relaxations. The first relaxes yy⊤ into a positive semi-definite (PSD) matrix M ⪰ 0 whose diagonal elements are set to 1. The second relaxation sets ν = 0, which is equivalent to assuming that there is no bias term b in the expression for the classification boundary; in other words, classification boundaries have to pass through the origin of the data. These two assumptions simplify the above optimization problem as follows:

min_{M,δ,μ,t}  t
s.t.  [ M∘K           e + δ − μ ]
      [ (e + δ − μ)⊤   t − 2Cδ⊤e ]  ⪰ 0,
      δ ≥ 0,  μ ≥ 0,  M ⪰ 0   (2)

Finally, a few additional constraints on M are added to the above optimization problem to prevent skewed cluster sizes [1]. As a consequence of these two relaxations, the number of parameters is increased from n to n², which significantly increases the computational cost. Furthermore, by setting ν = 0, the maximum margin clustering algorithm requires clustering boundaries to pass through the origin of the data, which is unsuitable for clustering data with unbalanced clusters.

Another important problem with the above maximum margin clustering is the difficulty of determining an appropriate kernel similarity matrix K. Although many kernel-based clustering algorithms set the kernel parameters manually, there are several studies devoted to automatic selection of kernel functions, in particular the kernel width σ of the RBF kernel, i.e., exp(−‖x_i − x_j‖² / (2σ²)). Shi et al. [8] recommended choosing the kernel width as 10% to 20% of the range of the distances between samples. However, in our experiments we found that this is not always a good choice, and in many situations it produces poor results. Ng et al. [9] chose the kernel width that produces the least distorted clusters, by running the same clustering algorithm several times for each kernel width. Although this approach seems to generate good results, it requires running separate experiments for each kernel width, and therefore can be computationally intensive. Manor et al. [10] proposed a self-tuning spectral clustering algorithm that computes a different local kernel width for each data point x_i. In particular, the local kernel width for each x_i is computed as the distance of x_i to its k-th nearest neighbor. Although empirical studies seem to show the effectiveness of this approach, it is unclear how to find the optimal k in computing the local kernel widths. As we will see in the experiment section, the clustering accuracy depends heavily on the choice of k.
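For concreteness, here is a small sketch (ours, with numpy assumed) of the self-tuning construction just described: each point x_i receives a local width σ_i equal to its distance to its k-th nearest neighbor, and, following [10], the affinity becomes exp(−‖x_i − x_j‖² / (σ_i σ_j)).

```python
import numpy as np

def self_tuning_affinity(X, k=7):
    """Self-tuning RBF affinity in the spirit of [10].

    X: (n, d) data matrix; k: neighbor index used for the local width.
    sigma_i is the distance from x_i to its k-th nearest neighbor, and
    A[i, j] = exp(-||x_i - x_j||^2 / (sigma_i * sigma_j)).
    """
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # squared distances
    d = np.sqrt(d2)
    sigma = np.sort(d, axis=1)[:, k]   # column 0 is the self-distance 0
    A = np.exp(-d2 / np.outer(sigma, sigma))
    np.fill_diagonal(A, 0.0)
    return A

X = np.random.RandomState(0).randn(100, 2)
A = self_tuning_affinity(X, k=7)
```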
Finally, we briefly review the existing work on kernel learning. Most previous work focuses on supervised kernel learning. The representative approaches in this category include kernel alignment [11, 12], semi-definite programming [13], and spectral graph partitioning [6]. Unlike these approaches, the proposed framework is designed for unsupervised kernel learning.

3 Generalized Maximum Margin Clustering and Unsupervised Kernel Learning

We first present the proposed clustering algorithm for the hard margin case, followed by the extension to the soft margin case and unsupervised kernel learning.

3.1 Hard Margin

In the case of hard margin, the dual problem of the SVM is almost identical to the problem in Eqn. (1), except that the parameter α does not have the upper bound C. Following [13], we further convert the problem in (1) into its dual form:

min_{δ,y,ν}  (1/2)(e + δ + νy)⊤ diag(y) K⁻¹ diag(y) (e + δ + νy)
s.t.  δ ≥ 0,  y ∈ {+1, −1}ⁿ   (3)

where e is the vector with all elements equal to one. Unlike the treatment in [13], which rewrites the above problem as a semi-definite programming problem, we introduce the variables z defined as

z = diag(y)(e + δ).

Given that δ ≥ 0, the above expression for z is essentially equivalent to the constraint |z_i| ≥ 1, or z_i² ≥ 1, for i = 1, 2, …, n. Then, the optimization problem in (3) can be rewritten as follows:

min_{z,ν}  (1/2)(z + νe)⊤ K⁻¹ (z + νe)
s.t.  z_i² ≥ 1,  i = 1, 2, …, n   (4)

Note that the above problem may not have a unique solution for z and ν due to the translation invariance of the objective function. More specifically, given an optimal solution z and ν, we may be able to construct another solution z′ and ν′ such that z′ = z + ηe and ν′ = ν − η. Evidently, both solutions result in the same value of the objective function in (4). Furthermore, with an appropriately chosen η, the new solution z′ and ν′ still satisfies the constraints z_i′² ≥ 1. Thus, z′ and ν′ is another optimal solution of (4). This is in fact related to the problem in SVMs that the bias term b may not be unique [14]. To remove the translation invariance from the objective function, we introduce an additional term C_e(z⊤e)² into the objective, i.e.,

min_{z,ν}  (1/2)(z + νe)⊤ K⁻¹ (z + νe) + C_e (z⊤e)²
s.t.  z_i² ≥ 1,  i = 1, 2, …, n   (5)

where the constant C_e weighs the importance of the penalty term against the original objective. It is set to 10,000 in our experiments. For simplicity of expression, we further define w = (z; ν) and P = (I_n, e). Then, the problem in (5) becomes

min_{w∈R^{n+1}}  w⊤ P⊤ K⁻¹ P w + C_e (e₀⊤ w)²
s.t.  w_i² ≥ 1,  i = 1, 2, …, n   (6)

where e₀ is the vector whose elements are all 1 except the last, which is zero. We then construct the Lagrangian as follows:

L(w, λ) = w⊤ P⊤ K⁻¹ P w + C_e (e₀⊤ w)² − Σ_{i=1}^n λ_i (w⊤ Iⁱ_{n+1} w − 1)
        = w⊤ ( P⊤ K⁻¹ P + C_e e₀e₀⊤ − Σ_{i=1}^n λ_i Iⁱ_{n+1} ) w + Σ_{i=1}^n λ_i,

where Iⁱ_{n+1} is the (n+1) × (n+1) matrix with all elements equal to zero except the i-th diagonal element, which is 1. Hence, the dual problem of (6) is

max_{λ∈Rⁿ}  Σ_{i=1}^n λ_i
s.t.  P⊤ K⁻¹ P + C_e e₀e₀⊤ − Σ_{i=1}^n λ_i Iⁱ_{n+1} ⪰ 0,
      λ_i ≥ 0,  i = 1, 2, …, n

Finally, the solution w can be computed using the KKT condition, i.e.,

( P⊤ K⁻¹ P + C_e e₀e₀⊤ − Σ_{i=1}^n λ_i Iⁱ_{n+1} ) w = 0_{n+1}.   (7)

In other words, the solution w is proportional to the eigenvector of the matrix P⊤K⁻¹P + C_e e₀e₀⊤ − Σᵢ λ_i Iⁱ_{n+1} for the zero eigenvalue. Since w_i = (1 + δ_i)y_i, i = 1, 2, …, n, and δ_i ≥ 0, the class labels {y_i}ⁿᵢ₌₁ can be inferred directly from the signs of {w_i}ⁿᵢ₌₁.
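The dual above is a small semidefinite program. As an illustration (not the authors' implementation), the following sketch solves it with cvxpy and recovers cluster labels from the null-space eigenvector of Eqn. (7); the toy data, the value C_e = 10,000 taken from the text, and the small ridge added to K for invertibility are assumptions of this sketch.

```python
import numpy as np
import cvxpy as cp

rng = np.random.RandomState(0)
X = np.vstack([rng.randn(15, 2) - 2, rng.randn(15, 2) + 2])   # toy two-cluster data
n = X.shape[0]

# RBF kernel; the ridge keeps K safely invertible (an assumption of this sketch)
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-d2 / (2 * np.median(d2))) + 1e-3 * np.eye(n)

P = np.hstack([np.eye(n), np.ones((n, 1))])        # P = (I_n, e)
e0 = np.append(np.ones(n), 0.0)                    # all ones except the last entry
M0 = P.T @ np.linalg.inv(K) @ P + 1e4 * np.outer(e0, e0)

lam = cp.Variable(n + 1)
constraints = [lam[:n] >= 0, lam[n] == 0,
               M0 - cp.diag(lam) >> 0]             # the LMI of the dual
cp.Problem(cp.Maximize(cp.sum(lam[:n])), constraints).solve()

# KKT: w spans the (near-)null space of M0 - diag(lam*); take the bottom eigenvector
vals, vecs = np.linalg.eigh(M0 - np.diag(lam.value))
w = vecs[:, 0]
labels = np.sign(w[:n])
print(labels)
```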
Remark I. It is important to realize that the problem in (5) is non-convex due to the non-convex constraints w_i² ≥ 1. Thus, the optimal solution found by the dual problem in (7) is not necessarily the optimal solution of the primal problem in (5). Our hope is that although the solution found by the dual problem is not optimal for the primal problem, it is still a good solution for the primal problem in (5). This is similar to the SDP relaxation made by the maximum margin clustering algorithm in (2), which relaxes a non-convex programming problem into a convex one. However, unlike the relaxation made in (2), which increases the number of variables from n to n², the new formulation of maximum margin clustering does not increase the number of parameters (i.e., λ), and is therefore computationally more efficient. This is shown in Figure 1, in which the computational time of generalized maximum margin clustering increases much more slowly than that of the maximum margin clustering algorithm.

Remark II. To avoid the high computational cost of estimating K⁻¹, we replace K⁻¹ with the normalized graph Laplacian L(K) [15], which is defined as L(K) = I − D^{−1/2} K D^{−1/2}, where D is the diagonal matrix whose diagonal elements are computed as D_{i,i} = Σ_{j=1}^n K_{i,j}, i = 1, 2, …, n. This is equivalent to defining a kernel matrix K̃ = L(K)†, where † stands for the pseudo-inverse operator. More interestingly, we have the following theorem showing the relationship between generalized maximum margin clustering and the normalized cut.

Theorem 1. The normalized cut algorithm is a special case of the generalized maximum margin clustering in (7) if the following conditions hold: (1) K⁻¹ is set to be the normalized Laplacian L(K); (2) all the λ's are enforced to be the same, i.e., λ_i = λ₀, i = 1, 2, …, n; and (3) C_e ≫ 1.

Proof sketch: Given conditions 1 to 3 of the theorem, the problem in (7) reduces to max_{λ₀≥0} λ₀ s.t. L(K) ⪰ λ₀ I_n, and the solution to this problem is given by the corresponding eigenvector of L(K).

3.2 Soft Margin

We extend the formulation in (7) to the soft margin case by considering the following problem:

min_{δ,y,ν,ε}  (1/2)(e + δ − ε + νy)⊤ diag(y) K⁻¹ diag(y)(e + δ − ε + νy) + C_ε Σ_{i=1}^n ε_i²
s.t.  δ ≥ 0,  ε ≥ 0,  y ∈ {+1, −1}ⁿ   (8)

where C_ε weighs the importance of the clustering errors against the clustering margin. Similar to the previous derivation, we introduce the slack variable z and simplify the above problem as follows:

min_{z,ν,ε}  (1/2)(z + νe)⊤ K⁻¹ (z + νe) + C_e (z⊤e)² + C_ε Σ_{i=1}^n ε_i²
s.t.  (z_i + ε_i)² ≥ 1,  ε_i ≥ 0,  i = 1, 2, …, n   (9)

By approximating (z_i + ε_i)² as z_i² + ε_i², we obtain the dual form of the above problem:

max_{λ∈Rⁿ}  Σ_{i=1}^n λ_i
s.t.  P⊤ K⁻¹ P + C_e e₀e₀⊤ − Σ_{i=1}^n λ_i Iⁱ_{n+1} ⪰ 0,
      0 ≤ λ_i ≤ C_ε,  i = 1, 2, …, n   (10)

The main difference between the above formulation and the formulation in (7) is the introduction of the upper bound C_ε on λ in the soft margin case. In our experiments, we set the parameter C_ε to 100,000, a very large value.
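Both Remark II and the kernel learning extension below rely on the normalized graph Laplacian L(K) = I − D^{−1/2}KD^{−1/2}. A minimal numpy sketch (ours), assuming K is a dense affinity matrix:

```python
import numpy as np

def normalized_laplacian(K):
    """L(K) = I - D^{-1/2} K D^{-1/2}, with D_ii = sum_j K_ij (see Remark II)."""
    d = K.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(d, 1e-12))  # guard against isolated points
    return np.eye(K.shape[0]) - (d_inv_sqrt[:, None] * K) * d_inv_sqrt[None, :]

# K_tilde = pseudo-inverse of L(K), the implicit kernel of Remark II
K = np.exp(-np.random.RandomState(0).rand(5, 5))
K = (K + K.T) / 2
L = normalized_laplacian(K)
K_tilde = np.linalg.pinv(L)
```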
3.3 Unsupervised Kernel Learning

As already pointed out, the performance of many clustering algorithms depends on the right choice of the kernel similarity matrix. To address this problem, we extend the formulation in (10) with a kernel learning mechanism. In particular, we assume that a set of m kernel similarity matrices K₁, K₂, …, K_m is available. Our goal is to identify the linear combination of kernel matrices, i.e., K = Σ_{i=1}^m β_i K_i, that leads to the optimal clustering accuracy. More specifically, we need to solve the following optimization problem:

max_{λ,β}  Σ_{i=1}^n λ_i
s.t.  P⊤ ( Σ_{i=1}^m β_i K_i )⁻¹ P + C_e e₀e₀⊤ − Σ_{i=1}^n λ_i Iⁱ_{n+1} ⪰ 0,
      0 ≤ λ_i ≤ C_ε,  i = 1, 2, …, n,
      Σ_{i=1}^m β_i = 1,  β_i ≥ 0,  i = 1, 2, …, m   (11)

Unfortunately, it is difficult to solve the above problem due to the complexity introduced by (Σ_{i=1}^m β_i K_i)⁻¹. Hence, we consider an alternative problem. We first introduce a set of normalized graph Laplacians L̃₁, L̃₂, …, L̃_m, where each Laplacian L̃_i is constructed from the kernel similarity matrix K_i. We then define the inverse of the combined kernel matrix as K⁻¹ = Σ_{i=1}^m β_i L̃_i. This leads to the following optimization problem:

max_{λ,β}  Σ_{i=1}^n λ_i
s.t.  Σ_{i=1}^m β_i P⊤ L̃_i P + C_e e₀e₀⊤ − Σ_{i=1}^n λ_i Iⁱ_{n+1} ⪰ 0,
      0 ≤ λ_i ≤ C_ε,  i = 1, 2, …, n,
      Σ_{i=1}^m β_i = 1,  β_i ≥ 0,  i = 1, 2, …, m   (12)

By solving the above problem, we are able to resolve both λ (corresponding to the clustering memberships) and β (corresponding to kernel learning) simultaneously.
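Problem (12) is again a semidefinite program, now jointly linear in λ and β. The sketch below (ours, not the paper's code) sets up its constraint matrices from a list of candidate kernels; the candidate widths, the toy data, and the helper names are assumptions mirroring the setup of Section 4, with C_e and C_ε taken from the text.

```python
import numpy as np
import cvxpy as cp

def normalized_laplacian(K):
    d = np.maximum(K.sum(axis=1), 1e-12)
    s = 1.0 / np.sqrt(d)
    return np.eye(len(K)) - (s[:, None] * K) * s[None, :]

X = np.random.RandomState(1).randn(30, 2)
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
widths = [np.sqrt(d2.max()) * p for p in (0.2, 0.5, 1.0)]   # assumed candidates
Ls = [normalized_laplacian(np.exp(-d2 / (2 * s ** 2))) for s in widths]

n, m = X.shape[0], len(Ls)
P = np.hstack([np.eye(n), np.ones((n, 1))])
e0 = np.append(np.ones(n), 0.0)
PLP = [P.T @ L @ P for L in Ls]                  # the P^T L_i P blocks of (12)

lam = cp.Variable(n + 1)
beta = cp.Variable(m)
C_e, C_eps = 1e4, 1e5                            # constants used in the paper
M = sum(beta[i] * PLP[i] for i in range(m)) + C_e * np.outer(e0, e0)
constraints = [lam[:n] >= 0, lam[:n] <= C_eps, lam[n] == 0,
               cp.sum(beta) == 1, beta >= 0,
               M - cp.diag(lam) >> 0]
cp.Problem(cp.Maximize(cp.sum(lam[:n])), constraints).solve()
print(beta.value)                                # learned kernel weights
```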
4 Experiment

We tested the generalized maximum margin clustering algorithm on both synthetic datasets and real datasets from the UCI repository. Figure 3 gives the distributions of the synthetic datasets. The four UCI datasets used in our study are 'Vote', 'Digits', 'Ionosphere', and 'Breast'. These four datasets comprise 218, 180, 351, and 285 examples, respectively, and each example in these four datasets is represented by 17, 64, 35, and 32 features. Since the 'Digits' dataset consists of multiple classes, we further decompose it into four datasets of binary classes that include pairs of digits that are difficult to distinguish. Both the normalized cut algorithm [8] and the maximum margin clustering algorithm [1] are used as baselines. The RBF kernel is used throughout this study to construct the kernel similarity matrices.

Figure 3: Data distributions of the three synthesized datasets: (a) Overlapped Gaussians, (b) Two Circles, (c) Two Connected Circles.

In our first experiment, we examine the optimal performance of each clustering algorithm by using the optimal kernel width, acquired through an exhaustive search. The optimal clustering errors of these three algorithms are summarized in the first three columns of Table 1. It is clear that the generalized maximum margin clustering algorithm achieves similar or better performance than both maximum margin clustering and normalized cut on most datasets when they are given the optimal kernel matrices. Note that the results of maximum margin clustering are reported for a subset of samples (including 80 instances) of the UCI datasets due to the out-of-memory problem.

Table 1: Clustering error (%) of normalized cut (NC), maximum margin clustering (MMC), generalized maximum margin clustering (GMMC), and self-tuning spectral clustering (ST).

                      |   Optimal Kernel Width  |     Unsupervised Kernel Learning
Dataset               |  NC     MMC     GMMC    |  GMMC    ST (best k)   ST (worst k)
Two Circles           |  2      0       0       |  0       0             50
Two Jointed Circles   |  7      6.25    0       |  0       1             45
Two Gaussians         |  1.25   2.5     1.25    |  3.75    5             7.5
Vote                  |  25     15      9.6     |  11.90   11            40
Digits 3-8            |  35     10      5.6     |  5.6     5             50
Digits 1-7            |  45     31.25   2.2     |  3       0             47
Digits 2-7            |  34     1.25    0.5     |  5.6     1.5           50
Digits 8-9            |  48     3.75    16      |  12      9             48
Ionosphere            |  25     21.25   23.5    |  27.3    26.5          48
Breast                |  36.5   38.75   36.1    |  37      37.5          41.5

In the second experiment, we evaluate the effectiveness of unsupervised kernel learning. Ten kernel matrices are created by using the RBF kernel with the kernel width varied from 10% to 100% of the range of distances between any two examples. We compare the proposed unsupervised kernel learning to the self-tuning spectral clustering algorithm of [10]. One problem with the self-tuning spectral clustering algorithm is that its clustering error usually depends on the parameter k, i.e., the number of nearest neighbors used for computing the kernel width. To provide a full picture of self-tuning spectral clustering, we vary k from 1 to 15, and report both the best and the worst performance over the different values of k. The last three columns of Table 1 summarize the clustering errors of generalized maximum margin clustering and self-tuning spectral clustering with both the best and the worst k. First, observe the big gap between the best and the worst performance of self-tuning spectral clustering over the different choices of k, which implies that this algorithm is sensitive to the parameter k. Second, for most datasets, generalized maximum margin clustering achieves performance similar to that of self-tuning spectral clustering with the best k. Furthermore, for a number of datasets, the unsupervised kernel learning method achieves performance close to that obtained with the optimal kernel width. Both results indicate that the proposed algorithm for unsupervised kernel learning is effective in identifying appropriate kernels.

5 Conclusion

In this paper, we proposed a framework for generalized maximum margin clustering. Compared to the existing algorithm for maximum margin clustering, the new framework has three advantages: 1) it reduces the number of parameters from n² to n, and therefore has a significantly lower computational cost; 2) it allows for clustering boundaries that do not pass through the origin; and 3) it can automatically identify an appropriate kernel similarity matrix through unsupervised kernel learning. Our empirical study with three synthetic datasets and four UCI datasets shows the promising performance of the proposed algorithm.

References
[1] L. Xu, J. Neufeld, B. Larson, and D. Schuurmans. Maximum margin clustering. In Advances in Neural Information Processing Systems (NIPS) 17, 2004.
[2] L. Xu and D. Schuurmans. Unsupervised and semi-supervised multi-class support vector machines. In Proceedings of the 20th National Conference on Artificial Intelligence (AAAI-05), 2005.
[3] J. Hartigan and M. Wong. A k-means clustering algorithm. Appl. Statist., 28:100–108, 1979.
[4] R. A. Redner and H. F. Walker. Mixture densities, maximum likelihood and the EM algorithm. SIAM Review, 26:195–239, 1984.
[5] C. Ding, X. He, H. Zha, M. Gu, and H. Simon. A min-max cut algorithm for graph partitioning and data clustering. In Proc. IEEE Int'l Conf. Data Mining, 2001.
[6] F. R. Bach and M. I. Jordan. Learning spectral clustering. In Advances in Neural Information Processing Systems 16, 2004.
[7] R. Jin, C. Ding, and F. Kang. A probabilistic approach for optimizing spectral clustering. In Advances in Neural Information Processing Systems 18, 2006.
[8] J. Shi and J. Malik. Normalized cuts and image segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(8):888–905, 2000.
[9] A. Ng, M. Jordan, and Y. Weiss. On spectral clustering: Analysis and an algorithm. In Advances in Neural Information Processing Systems 14, 2001.
[10] L. Zelnik-Manor and P. Perona. Self-tuning spectral clustering. In Advances in Neural Information Processing Systems 17, pages 1601–1608, 2005.
[11] N. Cristianini, J.
Shawe-Taylor, A. Elisseeff, and J. S. Kandola. On kernel-target alignment. In NIPS, pages 367–373, 2001.
[12] X. Zhu, J. Kandola, Z. Ghahramani, and J. Lafferty. Nonparametric transforms of graph kernels for semi-supervised learning. In Advances in Neural Information Processing Systems 17, pages 1641–1648, 2005.
[13] G. R. G. Lanckriet, N. Cristianini, P. L. Bartlett, L. El Ghaoui, and M. I. Jordan. Learning the kernel matrix with semidefinite programming. Journal of Machine Learning Research, 5:27–72, 2004.
[14] C. J. C. Burges and D. J. Crisp. Uniqueness theorems for kernel methods. Neurocomputing, 55(1-2):187–220, 2003.
[15] F. R. K. Chung. Spectral Graph Theory. Amer. Math. Society, 1997.
Simplifying Mixture Models through Function Approximation

Kai Zhang  James T. Kwok
Department of Computer Science and Engineering
The Hong Kong University of Science and Technology
Clear Water Bay, Kowloon, Hong Kong
{twinsen, jamesk}@cse.ust.hk

Abstract

The finite mixture model is a powerful tool in many statistical learning problems. In this paper, we propose a general, structure-preserving approach to reduce its model complexity, which can bring significant computational benefits in many applications. The basic idea is to group the original mixture components into compact clusters, and then minimize an upper bound on the approximation error between the original and simplified models. By adopting the L2 norm as the distance measure between mixture models, we can derive closed-form solutions that are more robust and reliable than those based on KL-based distance measures. Moreover, the complexity of our algorithm is only linear in the sample size and dimensionality. Experiments on density estimation and clustering-based image segmentation demonstrate its outstanding performance in terms of both speed and accuracy.

1 Introduction

In many statistical learning problems, it is useful to obtain an estimate of the underlying probability density given a set of observations. Such a density model can facilitate discovery of the underlying data structure in unsupervised learning, and can also yield, asymptotically, optimal discriminant procedures [7]. In this paper, we focus on the finite mixture model, which describes the distribution by a mixture of simple parametric functions φ(·) as f(x) = Σ_{j=1}^n α_j φ(x, θ_j). Here, θ_j is the parameter of the j-th component, and the mixing weights α_j satisfy Σ_{j=1}^n α_j = 1. The most common parametric form of φ is the Gaussian, leading to the well-known Gaussian mixture. The mixture model has been widely used in clustering and density estimation, where the model parameters can be estimated by the standard Expectation-Maximization (EM) algorithm. However, EM can be prohibitively expensive on large problems [12]. On the other hand, note that in many learning processes that use mixture models (such as particle filtering [6] and nonparametric belief propagation [13]), the computational requirement is also very demanding, due to the large number of components involved in the model. In this situation, our interest is more in reducing the number of components for prospective computational savings. Previous works typically employ spatial data structures, such as the kd-tree [8, 9], for acceleration. Recently, [5] proposed to reduce a large Gaussian mixture to a smaller one by minimizing a KL-based distance between the two mixtures. This has been applied with success to hierarchical clustering of scenery images and handwritten digits.

In this paper, we propose a new algorithm for simplifying a given finite mixture model while preserving its component structure, with application to nonparametric density estimation and clustering. The idea is to minimize an upper bound on the approximation error between the original and simplified mixture models. By adopting the L2 norm as the error criterion, we can derive closed-form solutions that are more robust and reliable than those based on KL-based distance measures. At the same time, our algorithm can be applied to general Gaussian kernels, and its complexity is only linear in the sample size and dimensionality.
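To fix the notation in code, the following is a small sketch (ours) that evaluates a Gaussian mixture density f(x) = Σ_j α_j φ(x, θ_j) at a batch of points; scipy's multivariate_normal is an assumed convenience, not part of the paper.

```python
import numpy as np
from scipy.stats import multivariate_normal

def mixture_density(X, alphas, means, covs):
    """Evaluate f(x) = sum_j alpha_j N(x; mu_j, H_j) at the rows of X."""
    return sum(a * multivariate_normal.pdf(X, mean=m, cov=H)
               for a, m, H in zip(alphas, means, covs))

# A toy three-component mixture in 2-D
alphas = np.array([0.5, 0.3, 0.2])
means = [np.zeros(2), np.array([3.0, 0.0]), np.array([0.0, 3.0])]
covs = [np.eye(2), 0.5 * np.eye(2), np.diag([1.0, 0.2])]
X = np.random.RandomState(0).randn(4, 2)
print(mixture_density(X, alphas, means, covs))
```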
The rest of the paper is organized as follows. In Section 2 we describe the proposed approach in detail, and illustrate its advantages compared with the existing ones. In Section 3, we report experimental results on simplifying the Parzen window estimator, and on color image segmentation through the mean shift clustering procedure. Section 4 gives some concluding remarks.

2 Approximation Algorithm

Given a mixture model
$$f(x) = \sum_{j=1}^{n} \alpha_j \phi_j(x), \qquad (1)$$
we assume that the $j$th component $\phi_j(x)$ is of the form
$$\phi_j(x) = |H_j|^{-1/2} K_{H_j}(x - x_j), \qquad (2)$$
with weight $\alpha_j$, center $x_j$ and covariance matrix $H_j$. Here, $K_H(x) = K(H^{-1/2}x)$, where $K(x)$ is a kernel that is bounded and has compact support. Note that for radially symmetric kernels, it suffices to define $K$ by the profile $k$ such that $K(x) = k(\|x\|^2)$. With this notion, the gradient of the kernel function $K_H(x)$ can be conveniently written as $\nabla_x K_H(x) = k'(r)\nabla_x r = 2k'(r)H^{-1}x$, where $r = x^\top H^{-1} x$. Our task is to approximate $f$ with a simpler mixture model
$$g(x) = \sum_{i=1}^{m} w_i g_i(x), \qquad (3)$$
with $m \ll n$, where each component $g_i$ also takes the form
$$g_i(x) = |\tilde H_i|^{-1/2} K_{\tilde H_i}(x - t_i), \qquad (4)$$
with weight $w_i$, center $t_i$, and covariance matrix $\tilde H_i$.

Note that direct approximation of $f$ by $g$ is not feasible, because they involve a large number of components. Given a distance measure $D(\cdot,\cdot)$ between functions, the approximation error
$$E = D(f, g) = D\Big(\sum_{j=1}^{n} \alpha_j \phi_j,\; \sum_{i=1}^{m} w_i g_i\Big) \qquad (5)$$
is usually difficult to optimize. However, the problem can be very much simplified by minimizing an upper bound of $E$. Consider the $L_2$ distance $D(\phi, \phi') = \int (\phi(x) - \phi'(x))^2\, dx$, and suppose that the mixture components $\{\phi_j\}_{j=1}^{n}$ are divided into disjoint clusters $S_1, \ldots, S_m$. Then, it is easy to see that the approximation error $E$ is bounded by
$$E = \int \Big( \sum_{j=1}^{n} \alpha_j \phi_j(x) - \sum_{i=1}^{m} w_i g_i(x) \Big)^2 dx \;\le\; m \sum_{i=1}^{m} \int \Big( w_i g_i(x) - \sum_{j \in S_i} \alpha_j \phi_j(x) \Big)^2 dx.$$
Denote this upper bound by $\bar E = m \sum_{i=1}^{m} \bar E_i$, where
$$\bar E_i = \int \Big( w_i g_i(x) - \sum_{j \in S_i} \alpha_j \phi_j(x) \Big)^2 dx. \qquad (6)$$
Note that $\bar E$ is the sum of the "local" approximation errors $\bar E_i$. Hence, if we can find a good representative $w_i g_i$ for each cluster by minimizing the local approximation error $\bar E_i$, the overall approximation performance can also be guaranteed. This suggests partitioning the original mixture components into compact clusters, wherein approximation can then be done much more easily. Our basic algorithm proceeds as follows:

1. (Section 2.1.1) Partition the set of mixture components ($\phi_j$'s) into $m$ clusters, where $m \ll n$. Let $S_i$ be the set that indexes all components belonging to the $i$th cluster.
2. (Section 2.1.2) For each cluster, approximate the local mixture model $\sum_{j \in S_i} \alpha_j \phi_j$ by a single component $w_i g_i$, where $g_i$ is defined in (4).
3. The simplified model $g$ is obtained by $g(x) = \sum_{i=1}^{m} w_i g_i(x)$.

These steps will be discussed in more detail in the following sections.
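Read as code, the two-stage procedure has the following shape; this is our own sketch of the control flow, not the authors' implementation, with `sequential_sampling` and `local_approx` filled in after Sections 2.1.1 and 2.1.2 below.

```python
def simplify_mixture(alphas, xs, Hs, r):
    """Two-stage simplification sketch: partition the n components into
    clusters, then fit one component per cluster (Section 2)."""
    clusters = sequential_sampling(alphas, xs, Hs, r)      # stage 1 (sketch below)
    return [local_approx(alphas[S], xs[S], Hs[S])          # stage 2 (sketch below)
            for S in clusters]                             # list of (w_i, t_i, Htil_i)
```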
2.1 Procedure

2.1.1 Partitioning of Components

In this section, we consider how to group similar components into the same cluster, so that the subsequent local approximation can be more accurate. A useful algorithm for this task is the classic vector quantization (VQ) [4], where one iterates between partitioning a set of vectors and finding the best prototype for each partition until the distortion error converges. By defining a distance $D(\cdot,\cdot)$ between mixture components $\phi_j$, we can partition the mixture components in a similar way. However, vector quantization is sensitive to the initial partitioning. So we first introduce a simple but highly efficient partitioning method called sequential sampling (SS):

1. Randomly select a $\phi_j$ and add it to the set of representatives $R$.
2. For all the components ($j = 1, 2, \ldots, n$), do the following:
   - Compute the distance $D(\phi_j, R_i)$, where $R_i \in R$.
   - If $D(\phi_j, R_i) \le r$, where $r$ is a predefined threshold, assign $\phi_j$ to the representative $R_i$, and then process the next component.
   - If $D(\phi_j, R_i) > r$ for all $R_i \in R$, add $\phi_j$ as a new representative in $R$.
3. Terminate when all the components have been processed.

This procedure partitions the components by choosing those $\phi_j$'s that are far enough away from one another as representatives, with a user-defined resolution $r$. So it is less sensitive to initialization. In practice, we will first initialize by sequential sampling, and then perform the iterative VQ procedure to further refine the partition, i.e., find the best representative $R_i$ for each cluster, reassign each component $\phi_j$ to the closest representative $R_{\rho(j)}$, and iterate until the error $\sum_j \alpha_j D(\phi_j, R_{\rho(j)})$ converges.

2.1.2 Local Approximation

In this part, we consider how to obtain a good representative, $w_i g_i$ in (4), for each local cluster $S_i$. The task is to determine the unknown variables $w_i$, $t_i$ and $\tilde H_i$ associated with $g_i$. Using the $L_2$ norm, the upper bound (6) of the local approximation error can be written as
$$\bar E_i = \int \Big( w_i g_i(x) - \sum_{j \in S_i} \alpha_j \phi_j(x) \Big)^2 dx = \frac{C_K}{|2\tilde H_i|^{1/2}}\, w_i^2 - 2 C_K w_i \sum_{j \in S_i} \frac{\alpha_j k(r_{ij})}{|H_j + \tilde H_i|^{1/2}} + c_i.$$
Here, $C_K = \int k(x^\top x)\, dx$ is a kernel-dependent constant, $c_i = \int \big( \sum_{j \in S_i} \alpha_j \phi_j(x) \big)^2 dx$ is a data-dependent constant (irrelevant to the unknown variables), and $r_{ij} = (t_i - x_j)^\top (H_j + \tilde H_i)^{-1} (t_i - x_j)$. Here we have assumed that $k(a) \cdot k(b) = k(a + b)$, which is valid for the Gaussian and negative exponential kernels. Without this assumption, solutions can still be obtained but are less compact.

To minimize $\bar E_i$ w.r.t. $w_i$, $t_i$ and $\tilde H_i$, one can set the corresponding partial derivatives of $\bar E_i$ to zero. However, this leads to a nonlinear system that is quite difficult to solve. Here, we decouple the relations among these three parameters. First, observe that $\bar E_i$ is a quadratic function of $w_i$. Therefore, given $\tilde H_i$ and $t_i$, the minimum value of $\bar E_i$ can be easily obtained as
$$\bar E_i^{\min} = c_i - C_K\, |2\tilde H_i|^{1/2} \Big( \sum_{j \in S_i} \alpha_j k(r_{ij})\, |H_j + \tilde H_i|^{-1/2} \Big)^2. \qquad (7)$$
The remaining task is to minimize $\bar E_i^{\min}$ w.r.t. $t_i$ and $\tilde H_i$. By setting $\nabla_{t_i} \bar E_i^{\min} = 0$, we have
$$t_i = M_i^{-1} \sum_{j \in S_i} \frac{\alpha_j k'(r_{ij})}{|H_j + \tilde H_i|^{1/2}} (H_j + \tilde H_i)^{-1} x_j, \quad \text{where } M_i = \sum_{j \in S_i} \frac{\alpha_j k'(r_{ij})}{|H_j + \tilde H_i|^{1/2}} (H_j + \tilde H_i)^{-1}. \qquad (8)$$
This is an iterative contraction mapping. If $\tilde H_i$ is fixed, we can obtain $t_i$ by starting with an initial $t_i^{(0)}$ and iterating (8) until convergence. Now, to solve for $\tilde H_i$, we set $\nabla_{\tilde H_i} \bar E_i^{\min} = 0$ and obtain
$$\tilde H_i = P_i^{-1} \sum_{j \in S_i} \frac{\alpha_j}{|H_j + \tilde H_i|^{1/2}} (\tilde H_i + H_j)^{-1} \Big[ k(r_{ij}) H_j + 4(-k'(r_{ij}))(x_j - t_i)(x_j - t_i)^\top \Big] (\tilde H_i + H_j)^{-1} \tilde H_i, \qquad (9)$$
where
$$P_i = \sum_{j \in S_i} \frac{k(r_{ij})}{|H_j + \tilde H_i|^{1/2}} (\tilde H_i + H_j)^{-1}.$$
In summary, we first initialize
$$t_i^{(0)} = \frac{\sum_{j \in S_i} \alpha_j x_j}{\sum_{j \in S_i} \alpha_j}, \qquad \tilde H_i^{(0)} = \frac{\sum_{j \in S_i} \alpha_j \big( H_j + (t_i^{(0)} - x_j)(t_i^{(0)} - x_j)^\top \big)}{\sum_{j \in S_i} \alpha_j},$$
and then iterate (8) and (9) until convergence. The converged values of $t_i$ and $\tilde H_i$ are substituted into $\partial \bar E_i / \partial w_i = 0$ to obtain $w_i$ as
$$w_i = |2\tilde H_i|^{1/2} \sum_{j \in S_i} \frac{\alpha_j k(r_{ij})}{|H_j + \tilde H_i|^{1/2}}. \qquad (10)$$
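The following NumPy sketch fills in the two stage functions referenced earlier, specialized to the Gaussian profile $k(r) = e^{-r/2}$ (so $k'(r) = -k(r)/2$ and $k(a)k(b) = k(a+b)$ holds). For brevity, the SS distance $D$ between components is taken to be the Euclidean distance between their centers, and the first representative is component 0 rather than a random one; both are our simplifications, not the paper's choices.

```python
import numpy as np

def sequential_sampling(alphas, xs, Hs, r):
    """Section 2.1.1 (sketch): greedily pick far-apart components as
    representatives and assign each component to the first one within r."""
    reps, assign = [], np.empty(len(xs), dtype=int)
    for j in range(len(xs)):
        for ri, rep in enumerate(reps):
            if np.linalg.norm(xs[j] - xs[rep]) <= r:   # simplified D(phi_j, R_i)
                assign[j] = ri
                break
        else:
            reps.append(j)                             # new representative
            assign[j] = len(reps) - 1
    return [np.flatnonzero(assign == c) for c in range(len(reps))]

def local_approx(alphas, xs, Hs, n_iter=100, tol=1e-8):
    """Section 2.1.2 (sketch): fixed-point updates (8)-(10) for one cluster."""
    a = alphas / alphas.sum()
    t = np.einsum('j,ja->a', a, xs)                                  # t_i^(0)
    d0 = xs - t
    Htil = np.einsum('j,jab->ab', a, Hs) + np.einsum('j,ja,jb->ab', a, d0, d0)
    for _ in range(n_iter):
        inv = np.linalg.inv(Hs + Htil)                               # (H_j + Htil)^{-1}
        dets = np.sqrt(np.linalg.det(Hs + Htil))
        diff = xs - t
        rij = np.einsum('ja,jab,jb->j', diff, inv, diff)
        k = np.exp(-0.5 * rij)                                       # Gaussian profile
        kp = -0.5 * k                                                # k'(r)
        c = alphas * kp / dets
        M = np.einsum('j,jab->ab', c, inv)                           # eq. (8)
        t_new = np.linalg.solve(M, np.einsum('j,jab,jb->a', c, inv, xs))
        inner = (k[:, None, None] * Hs
                 + 4.0 * (-kp)[:, None, None] * np.einsum('ja,jb->jab', diff, diff))
        S = np.einsum('j,jab,jbc,jcd->ad', alphas / dets, inv, inner, inv)
        P = np.einsum('j,jab->ab', k / dets, inv)
        Htil_new = np.linalg.solve(P, S) @ Htil                      # eq. (9)
        done = (np.linalg.norm(t_new - t) < tol
                and np.linalg.norm(Htil_new - Htil) < tol)
        t, Htil = t_new, Htil_new
        if done:
            break
    inv = np.linalg.inv(Hs + Htil)
    rij = np.einsum('ja,jab,jb->j', xs - t, inv, xs - t)
    w = np.sqrt(np.linalg.det(2 * Htil)) * np.sum(
        alphas * np.exp(-0.5 * rij) / np.sqrt(np.linalg.det(Hs + Htil)))  # eq. (10)
    return w, t, Htil
```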
2.2 Complexity

In the partitioning step, sequential sampling has a complexity of $O(dmn)$, where $n$ is the original model size, $m$ is the number of clusters, and $d$ the dimension. By using a hierarchical scheme [2], this can be reduced to $O(dn \log(m))$. The VQ takes $O(dnm)$ time. In the local approximation step, the complexity is $l \sum_{i=1}^{m} n_i d^3 = l n d^3$, where $l$ is the maximum number of iterations needed. In practice, we can enforce a diagonal structure on the covariance matrices $\tilde H_i$ while still obtaining a closed-form solution. Hence, the complexity becomes linear in the dimension $d$ instead of cubic. Summing up these three terms, the overall complexity is $O(dn \log(m) + dnm + lnd) = O(dn(m + l))$, which is linear in both the data size and dimension (in practice $m$ and $l$ are quite small).

2.3 Remarks

In this section, we discuss some interesting properties of the approximation scheme proposed in Section 2.1.2. To have better intuitions, we examine the special case of a Parzen window density estimator [11], where all $\phi_j$'s have the same weights and bandwidths ($H_j = H$ for $j = 1, 2, \ldots, n$). Equation (9) then reduces to
$$\tilde H_i = H + 4 \tilde H_i (\tilde H_i + H)^{-1} V_i, \quad \text{where } V_i = \frac{\sum_{j \in S_i} \alpha_j (-k'(r_{ij})) (x_j - t_i)(x_j - t_i)^\top}{\sum_{j \in S_i} \alpha_j k(r_{ij})}. \qquad (11)$$
It shows that the bandwidth $\tilde H_i$ of $g_i$ can be decomposed into two parts: the bandwidth $H$ of the original kernel density estimator, and the covariance $V_i$ of the local cluster $S_i$ with an adjusting matrix $\Lambda_i = 4\tilde H_i(\tilde H_i + H)^{-1}$. As an illustration, consider the 1-D case where $H = h^2$ and $\tilde H_i = h_i^2$. Then $\lambda_i = \frac{4h_i^2}{h_i^2 + h^2}$, and $h_i^2 = h^2 + \lambda_i V_i$. Since $V_i \ge 0$ (and hence $h_i^2 \ge h^2$, so $\lambda_i \ge 2$), we can see that $h_i^2 \ge h^2 + 2V_i$. Moreover, $h_i$ is closely related to the spread of the local cluster. If all the points in $S_i$ are located at the same position (i.e., $V_i = 0$), then $h_i^2 = h^2$. Otherwise, the larger the spread of the local cluster, the larger is $h_i$. In other words, the bandwidths $\tilde H_i$ are adaptive to the local data distribution. Related works on simplifying mixture models (such as [5]) simply choose $\tilde H_i = H + \mathrm{Cov}[S_i]$. In comparison, our covariance term $V_i$ is more reliable in that it incorporates distance-based weighting. Interestingly, this is somewhat similar to the bandwidth matrix used in the manifold Parzen windows [14], which is designed for handling sparse, high-dimensional data more robustly. Note that our choice of $\tilde H_i$ is derived rigorously by minimizing the $L_2$ approximation error. Therefore, this coincidence naturally indicates the robustness of the $L_2$-norm based distance measures. Moreover, note that the adjusting matrix $\Lambda_i$ changes not only the scale of the bandwidth, but also its eigen-structure in an iterative manner. This will be very beneficial in multivariate cases.
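As a quick numeric check of this 1-D relation (under our reading of (11); the bandwidth and spread values below are made up), the fragment iterates the fixed point $h_i^2 = h^2 + \lambda_i V_i$ with $\lambda_i = 4h_i^2/(h_i^2 + h^2)$:

```python
h2, V = 0.3 ** 2, 0.05          # h^2 and cluster spread V_i (illustrative values)
hi2 = h2 + V                    # initial guess
for _ in range(100):
    lam = 4.0 * hi2 / (hi2 + h2)
    hi2 = h2 + lam * V
# converges to hi2 = 2*V + (4*V**2 + h2**2)**0.5 ~= 0.2345, so indeed hi2 >= h2 + 2*V
```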
Second, in determining the center of $g_i$, (8) can be reduced to
$$t_i = \frac{\sum_{j \in S_i} \alpha_j\, k'_{H+\tilde H_i}(x_j - t_i)\, x_j}{\sum_{j \in S_i} \alpha_j\, k'_{H+\tilde H_i}(x_j - t_i)}, \qquad (12)$$
where $k'_{H+\tilde H_i}(x)$ denotes $k'\big(x^\top (H + \tilde H_i)^{-1} x\big)$. This can be regarded as a mean-shift procedure [1] in the $d$-dimensional space with kernel $K$. It is easy to verify that this iterative procedure is indeed locating the peak of the density function $p_i(x) = |H + \tilde H_i|^{-1/2} \sum_{j \in S_i} K_{H+\tilde H_i}(x - x_j)$. Note, on the other hand, that what we want to approximate originally is the local density $f_i(x) = |H|^{-1/2} \sum_{j \in S_i} K_H(x - x_j)$. In the 1-D case (with $H = h^2$ and $\tilde H_i = h_i^2$), the bandwidth of $p_i$ (i.e., $h^2 + h_i^2$) is larger than that of $f_i$ (i.e., $h^2$). It appears intriguing that, to fit a kernel density $f_i(x)$ estimated on the sample set $\{x_j\}_{j \in S_i}$, one needs to locate the maximum of another density function $p_i(x)$, instead of the maximum of $f_i(x)$ itself or, simply, the mean of the sample set $\{x_j\}_{j \in S_i}$ as chosen in [5]. Indeed, these three choices coincide when the distribution of $S_i$ is symmetric and uni-modal, but will differ otherwise. Intuitively, when the data is asymmetric, the center $t_i$ should be biased towards the heavier side of the data distribution. The maximum of $f_i(x)$ thus fails to meet this requirement. On the other hand, the mean of $S_i$, though biased towards the heavier side, still lacks an accurate control on the degree of bias. In comparison, our method provides a principled way of selecting the center. Note that $p_i(x)$ has a larger bandwidth than the original $f_i(x)$. Therefore, its maximum will move towards the heavier side of the distribution compared with that of $f_i(x)$, with the degree of bias automatically controlled by the mean shift iterations in (12). Here, we give an illustration of the performance of the three center selection schemes. Figure 1(a) shows the histogram of a local cluster $S_i$ whose Parzen window estimator $f_i$ is asymmetric. Figure 1(b) plots the corresponding approximation error $\bar E_i$ (6) at different bandwidths $h_i$ (the remaining parameter, $w_i$, is set to the optimal value by (10)). As can be seen, the approximation error of our method is consistently lower than those of the other two. Moreover, the resultant optimum is also much lower.

[Figure 1: Approximation of an asymmetric density using different center selection schemes. (a) The histogram of a local cluster $S_i$ and its density $f_i$. (b) Approximation error versus $h_i^2$ for the local maximum, the local mean, and our method.]

3 Experiments

In this section, we perform experiments to evaluate the performance of our mixture simplification scheme. We focus on the Parzen window estimator which, given a set of samples $S = \{x_i\}_{i=1}^{n}$ in $\mathbb{R}^d$, can be written as $\hat f(x) = \frac{1}{n} |H|^{-1/2} \sum_{j=1}^{n} K_H(x - x_j)$. Note that the Parzen window estimator is a limiting form of the mixture model, where the number of components equals the data size and can be quite huge. In Section 3.1, we use the proposed approach to reduce the number of components in the kernel density estimator, and compare its performance with the algorithm in [5]. Then, in Section 3.2, we perform color image segmentation by running the mean shift clustering algorithm on the simplified density model.

3.1 Simplifying Nonparametric Density Models

In this section, we reduce the number of kernels in the Parzen window estimator by using the proposed approach and the method in [5]. Experiments are performed on a 1-D set with 1800 samples drawn from the Gaussian mixture $\frac{6}{18} N(-2.6, 0.09) + \frac{4}{18} N(-0.8, 0.36) + \frac{8}{18} N(1.7, 0.64)$, where $N(\mu, \sigma^2)$ denotes the normal distribution with mean $\mu$ and variance $\sigma^2$. The Gaussian kernel with fixed bandwidth $h = 0.3$ is used for density estimation. To make the problem more challenging, we choose $m = 5$, i.e., only 5 kernels are used to approximate the density. The k-means algorithm is used for initialization. As can be seen from Figure 2(b), the third Gaussian component has been broken into two by the method in [5]. In comparison, our result in Figure 2(c) is more reliable.

[Figure 2: Approximating the Parzen window estimator by simplifying mixture models. (a) Histogram. (b) Result by [5]. (c) Our result. Green: Parzen window estimator; black: simplified mixture model; blue dashed: components of the mixture model.]
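For concreteness, this sketch regenerates the test data and its fixed-bandwidth Parzen window estimate as we read the setup; drawing exactly 600/400/800 points per component (rather than multinomially) is our simplification.

```python
import numpy as np

rng = np.random.default_rng(0)
means, variances, counts = [-2.6, -0.8, 1.7], [0.09, 0.36, 0.64], [600, 400, 800]
sample = np.concatenate([rng.normal(mu, np.sqrt(v), c)
                         for mu, v, c in zip(means, variances, counts)])

def parzen(x, sample, h=0.3):
    # Gaussian Parzen window estimate with fixed bandwidth h
    z = (np.asarray(x)[:, None] - sample[None, :]) / h
    return np.exp(-0.5 * z ** 2).sum(axis=1) / (len(sample) * h * np.sqrt(2 * np.pi))

density = parzen(np.linspace(-4.5, 4.5, 200), sample)
```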
To have a quantitative evaluation, we randomly generate the 3-Gaussian data 100 times, and compare the two algorithms (ours and [5]) using the following error criteria: 1) the $L_2$ error (5); 2) the standard KL-distance; 3) the local KL-distance used in [5]. The local KL-distance between two mixtures, $f = \sum_{j=1}^{n} \alpha_j \phi_j$ and $g = \sum_{i=1}^{m} w_i g_i$, is defined as
$$d(f, g) = \sum_{j=1}^{n} \alpha_j\, \mathrm{KL}(\phi_j \,\|\, g_{\pi(j)}),$$
where $\pi(j)$ is the function that maps each component $\phi_j$ to the closest representative component $g_{\pi(j)}$, i.e., $\pi(j) = \arg\min_{i=1,2,\ldots,m} \mathrm{KL}(\phi_j \,\|\, g_i)$. Results are plotted in Figure 3, where for clarity we order the results in increasing error obtained by [5]. We can see that under the $L_2$ norm, the error of our algorithm is significantly lower than that of [5]. Quantitatively, our error is only about 36.61% of that by [5]. On using the standard KL-distance, our error is about 87.34% of that by [5], where the improvement is less significant. This is because the KL-distance is sensitive to the tail of the distribution, i.e., a small difference in the low-density regions may induce a huge KL-distance. As for the local KL-distance, our error is about 99.35% of that by [5].

[Figure 3: Quantitative comparison of the approximation errors over the 100 tests, for the method in [5] and our method. (a) The $L_2$ distance error. (b) Standard KL-distance. (c) Local KL-distance defined by [5].]

3.2 Image Segmentation

The Parzen window estimator can be used to reveal important clustering information, namely that its modes (or local maxima) correspond to dominant clusters in the data. This property is utilized in the mean shift clustering algorithm [1, 3], where every data point is moved along the density gradient until it reaches the nearest local density maximum. The mean shift algorithm is robust, and can identify arbitrarily-shaped clusters in the feature space. Recently, mean shift has been applied to color image segmentation and has proven to be quite successful [1]. The idea is to identify homogeneous image regions through clustering in a properly selected feature space (such as color, texture, or shape). However, mean shift can be quite expensive due to the large number of kernels involved in the density estimator. To reduce the computational requirement, we first reduce the density estimator $\hat f(x)$ to a simpler model $g(x)$ using our simplification scheme, and then apply the iterative mean shift procedure on the simplified model $g(x)$. Experiments are performed on a number of benchmark images (available at http://www.caip.rutgers.edu/~comanici/MSPAMI/msPamiResults.html) used in [1]. We use the Gaussian kernel with bandwidth $h = 20$. The partition parameter is $r = 25$. For comparison, we also implemented the standard mean shift and its fast version using kd-trees (using the ANN library [10]). The codes are written in C++ and run on a 2.26GHz Pentium-III machine. As the "true" segmentation of an image is subjective, only a visual comparison is intended here.

Table 1: Total wall time (in seconds) on various segmentation tasks, and the number of components in $g(x)$.
image    | data size           | mean shift (standard) | mean shift (kd-tree) | our method (# components) | our method (time)
squirrel | 60,192 (209 × 288)  | 1215.8                | 11.94                | 81                        | 0.18
hand     | 73,386 (243 × 302)  | 1679.7                | 12.92                | 120                       | 0.35
house    | 48,960 (192 × 255)  | 1284.5                | 5.16                 | 159                       | 0.22
lake     | 262,144 (512 × 512) | 3343.0                | 85.65                | 440                       | 3.67

Segmentation results are shown in Figure 4. The rows, from top to bottom, are: the original image, and the segmentation results by standard mean shift and by our approach. We can see that our results are close to those by the standard mean shift (applied on the original density estimator), with the number of components (Table 1) dramatically smaller than the data size $n$. This demonstrates the success of our approximation scheme in maintaining the structure of the data distribution using highly compact models. Our algorithm is also much faster than the standard mean shift and its fast version using kd-trees. The reason is that kd-trees only facilitate range searching, but do not reduce the expensive computations associated with the large number of kernels.

4 Conclusion

The finite mixture is a powerful model in many statistical learning problems. However, the large model size can be a major hindrance in many applications. In this paper, we reduce the model complexity by first grouping the components into compact clusters, and then performing a local function approximation that minimizes an upper bound of the approximation error. Our algorithm has low complexity, and demonstrates more reliable performance compared with methods using KL-based distances.

[Figure 4: Image segmentation by standard mean shift (2nd row), and ours (bottom).]

References

[1] D. Comaniciu and P. Meer. Mean shift: A robust approach toward feature space analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence, 24(5):603-619, 2002.
[2] T. Feder and D. Greene. Optimal algorithms for approximate clustering. In Proceedings of ACM Symposium on Theory of Computing, pages 434-444, 1988.
[3] K. Fukunaga and L. Hostetler. The estimation of the gradient of a density function, with applications in pattern recognition. IEEE Transactions on Information Theory, 21:32-40, 1975.
[4] A. Gersho and R.M. Gray. Vector Quantization and Signal Compression. Kluwer Academic Press, Boston, 1992.
[5] J. Goldberger and S. Roweis. Hierarchical clustering of a mixture model. In Advances in Neural Information Processing Systems 17, pages 505-512, 2005.
[6] B. Han, D. Comaniciu, Y. Zhu, and L. Davis. Incremental density approximation and kernel-based Bayesian filtering for object tracking. In Proceedings of the International Conference on Computer Vision and Pattern Recognition, pages 638-644, 2004.
[7] A.J. Izenman. Recent developments in nonparametric density estimation. Journal of the American Statistical Association, 86(413):205-224, 1991.
[8] T. Kanungo, D.M. Mount, N.S. Netanyahu, C.D. Piatko, R. Silverman, and A.Y. Wu. An efficient k-means clustering algorithm: Analysis and implementation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 24(7):881-892, 2002.
[9] A.W. Moore. Very fast EM-based mixture model clustering using multiresolution kd-trees. In Advances in Neural Information Processing Systems 11, pages 543-549, 1998.
[10] D.M. Mount and S. Arya. ANN: A library for approximate nearest neighbor searching. In Proceedings of the Center for Geometric Computing Second Annual Fall Workshop on Computational Geometry (available from http://www.cs.umd.edu/~mount/ANN), 1997.
[11] E. Parzen. On estimation of a probability density function and mode. Annals of Mathematical Statistics, 33:1065-1075, 1962.
[12] K. Popat and R.W. Picard. Cluster-based probability model and its application to image and texture processing. IEEE Transactions on Image Processing, 6(2):268-284, 1997.
[13] E.B. Sudderth, A. Torralba, W.T. Freeman, and A.S. Willsky. Describing visual scenes using transformed Dirichlet processes. In Advances in Neural Information Processing Systems 19, 2006.
[14] P. Vincent and Y. Bengio. Manifold Parzen windows. In Advances in Neural Information Processing Systems 15, 2003.
On Transductive Regression Corinna Cortes Google Research 76 Ninth Avenue New York, NY 10011 corinna@google.com Mehryar Mohri Courant Institute of Mathematical Sciences and Google Research 251 Mercer Street New York, NY 10012 mohri@cims.nyu.edu

Abstract

In many modern large-scale learning applications, the amount of unlabeled data far exceeds that of labeled data. A common instance of this problem is the transductive setting where the unlabeled test points are known to the learning algorithm. This paper presents a study of regression problems in that setting. It presents explicit VC-dimension error bounds for transductive regression that hold for all bounded loss functions and coincide with the tight classification bounds of Vapnik when applied to classification. It also presents a new transductive regression algorithm inspired by our bound that admits a primal and kernelized closed-form solution and deals efficiently with large amounts of unlabeled data. The algorithm exploits the position of unlabeled points to locally estimate their labels and then uses a global optimization to ensure robust predictions. Our study also includes the results of experiments with several publicly available regression data sets with up to 20,000 unlabeled examples. The comparison with other transductive regression algorithms shows that it performs well and that it can scale to large data sets.

1 Introduction

In many modern large-scale learning applications, the amount of unlabeled data far exceeds that of labeled data. Large amounts of digitized data are widely available, but the cost of labeling is often prohibitive since it typically requires human assistance. Semi-supervised learning and transductive inference leverage unlabeled data to achieve better predictions and are thus particularly relevant to modern applications. Semi-supervised learning consists of using both labeled and unlabeled data to find a hypothesis that accurately labels unseen examples. Transductive inference uses the same information but only aims at predicting the labels of the known unlabeled examples. This paper deals with regression problems in the transductive setting, which arise in a variety of contexts. This may be to predict the real-valued labels of the nodes of a known graph in computational biology, or the scores associated to known documents in information extraction problems. The problem of transductive inference was originally formulated and analyzed by Vapnik [1982], who described it as a simpler task than the traditional induction treated in machine learning. A number of recent publications have dealt with the topic of transductive inference [Vapnik, 1998, Joachims, 1999, Bennett and Demiriz, 1998, Chapelle et al., 1999, Graepel et al., 1999, Schuurmans and Southey, 2002, Corduneanu and Jaakkola, 2003, Zhu et al., 2004, Lanckriet et al., 2004, Derbeko et al., 2004, Belkin et al., 2004, Zhou et al., 2005]. But, with the exception of [Chapelle et al., 1999], [Schuurmans and Southey, 2002], and [Belkin et al., 2004], this work has primarily dealt with classification problems. We present a specific study of transductive regression. We give new error bounds for transductive regression that hold for all bounded loss functions and coincide with the tight classification bounds of Vapnik [1998] when applied to classification. Our results also include explicit VC-dimension bounds for transductive regression.
This contrasts with the original regression bound given by Vapnik [1998], which assumes a specific condition of global regularity on the class of functions and is based on a complicated and implicit function of the sample sizes and the confidence parameter. As stated by Vapnik [1998], this function must be "tabulated by a computer". We also present a new algorithm for transductive regression inspired by our bound, which first exploits the position of unlabeled points to locally estimate their labels, and then uses a global optimization to ensure robust predictions. We show that our algorithm admits both a primal and a kernelized closed-form solution. Existing algorithms for the transductive setting require the inversion of a matrix whose dimension is either the total number of unlabeled and labeled examples [Belkin et al., 2004], or the total number of unlabeled examples [Chapelle et al., 1999]. This may be prohibitive for many real-world applications with very large amounts of unlabeled examples. One of the original motivations for our work was to design algorithms dealing precisely with such situations. When the dimension of the feature space N is not too large, our algorithm provides a very efficient solution whose cost is dominated by the construction and inversion of an N × N matrix. Similarly, when the number of training points m is small compared to the number of unlabeled points, using an empirical kernel map, our algorithm requires only constructing and inverting an m × m matrix. Our study also includes the results of our experiments with several publicly available regression data sets with up to 20,000 unlabeled examples, limited only by the size of the data sets. We compared our algorithm with those of Belkin et al. [2004] and Chapelle et al. [1999], which are among the very few algorithms described in the literature dealing specifically with the problem of transductive regression. The results show that our algorithm performs well in several data sets compared to these algorithms and that it can scale to large data sets.

The paper is organized as follows. Section 2 describes in more detail the transductive regression setting we are studying. New generalization error bounds for transductive regression are presented in Section 3. Section 4 describes and analyzes both the primal and dual versions of our algorithm, and the experimental results of our study are reported in Section 5.

2 Definition of the Problem

Assume that a full sample $X$ of $m + u$ examples is given. The learning algorithm further receives the labels of a random subset of $X$ of size $m$ which serves as a training sample:
$$(x_1, y_1), \ldots, (x_m, y_m) \in X \times \mathbb{R}. \qquad (1)$$
The remaining $u$ unlabeled examples, $x_{m+1}, \ldots, x_{m+u} \in X$, serve as test data. The learning problem that we consider consists of predicting accurately the labels $y_{m+1}, \ldots, y_{m+u}$ of the test examples. No other test examples will ever be considered. This is a transduction regression problem [Vapnik, 1998]. (It is in fact one of the two transduction settings discussed by [Vapnik, 1998], but, under some general conditions, the results proved with this setting carry over to the other.) It differs from the standard (induction) regression estimation problem by the fact that the learning algorithm is given the unlabeled test examples beforehand. Thus, it may exploit that information and achieve a better result than via the standard induction. In what follows, we consider a hypothesis space $H$ of real-valued functions for regression estimation. For a hypothesis $h \in H$, we denote by $R_0(h)$ its mean squared error on the full sample, by $\hat R(h)$ its error on the training data, and by $R(h)$ the error of $h$ on the test examples:
$$R_0(h) = \frac{1}{m+u} \sum_{i=1}^{m+u} (h(x_i) - y_i)^2, \quad \hat R(h) = \frac{1}{m} \sum_{i=1}^{m} (h(x_i) - y_i)^2, \quad R(h) = \frac{1}{u} \sum_{i=m+1}^{m+u} (h(x_i) - y_i)^2. \qquad (2)$$
For convenience, we will sometimes denote by $y_x = y_i$ the label of a point $x = x_i \in X$.
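In code, the three errors just defined are block averages over the full sample; a minimal sketch (function and variable names are ours):

```python
import numpy as np

def errors(h, X, y, m):
    """Full-sample R0, training R_hat, and test R of eq. (2); the first m
    points of X are the labeled training examples, h is any callable."""
    sq = (np.array([h(x) for x in X]) - np.asarray(y)) ** 2
    return sq.mean(), sq[:m].mean(), sq[m:].mean()
```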
3 Transductive Regression Generalization Error

This section presents explicit generalization error bounds for transductive regression. Vapnik [1998] introduced and analyzed the problem of transduction and presented transductive inference bounds for both classification and regression. His regression bound assumes, however, a specific regularity condition on the hypothesis functions, leading in particular to a surprising bound where no error on the training data implies zero generalization error. The bound has the multiplicative form $R(h) \le \Lambda(m, u, d, \delta)\, \hat R(h)$, where $d$ is the VC-dimension of the class of hypotheses used and $\delta$ is the confidence parameter. Furthermore, for certain values of the parameters, for example larger $d$ or smaller $\delta$, $\Lambda$ becomes infinite and the bound is ineffective [Vapnik, 1998, page 349]. $\Lambda$ is also based on a complicated and implicit function of $m$, $u$, and $\delta$, which makes its interpretation difficult. For example, it is hard to analyze the asymptotic behavior of the bound for large $u$.

Instead, our bounds simply hold for general bounded loss functions and, when applied to classification, coincide with the tight classification bounds of Vapnik [1998]. Our results also include explicit VC-dimension bounds for transductive regression. To the best of our knowledge, these are the first general explicit bounds for transductive regression.

Our first bound uses the function $\tilde\Phi$ defined as follows. Let $\Phi(\Gamma, k)$ be defined by:
$$\Phi(\Gamma, k) = \sum_{r \in I(m,u,k,\Gamma)} \frac{\binom{k}{r}\binom{m+u-k}{m-r}}{\binom{m+u}{m}}, \qquad \forall \Gamma \ge 0,\ \forall k \in \mathbb{N},\ u\Gamma \le k \le m(1-\Gamma)+u, \qquad (3)$$
where $I(m, u, k, \Gamma)$ is the set of integers $r$ such that $\frac{k-r}{u} - \frac{r}{m} > \Gamma$ and $\max(0, k-u) \le r \le \min(m, k)$. $\Phi(\Gamma, k)$ represents the probability of observing a difference in error rate of more than $\Gamma$ between the training and test set when the total number of errors is $k$ (see [Cortes and Mohri, 2006]). Then $\tilde\Phi$ is defined as $\tilde\Phi(\Gamma) = \max_k \Phi\big(\sqrt{\tfrac{k}{m+u}}\, \Gamma,\, k\big)$. $\tilde\Phi$ is used in the transductive classification bound of Vapnik [1998] (see [Cortes and Mohri, 2006][Theorem 2]). [Cortes and Mohri, 2006][Corollary 2] gives an upper bound on $\tilde\Phi$.

For any subset $X' \subseteq X$, any non-negative real number $t \ge 0$, and hypothesis $h \in H$, let $\theta(h, t, X')$ denote the fraction of the points $x_i \in X'$, $i = 1, \ldots, k$, such that $(h(x_i) - y_i)^2 - t > 0$. Thus, $\theta(h, t, X')$ represents the error rate over the sample $X'$ of the classifier that associates to a point $x$ the value zero if $(h(x) - y_x)^2 \le t$, one otherwise. Two classifiers associated in this way to $\theta(h, t, X)$ and $\theta(h', t', X)$ can be viewed as equivalent if they label $X$ in an identical way. Since $X$ is finite, there is a finite number of equivalence classes of such classifiers; we will denote that number by $N(m + u)$.

Theorem 1. Let $\delta > 0$, let $\Gamma_0 > 0$ be the minimum value of $\Gamma$ such that $N(m+u)\,\tilde\Phi(\Gamma) \le \delta$, and assume that the loss function is bounded: for all $h \in H$ and $x \in X$, $(h(x) - y_x)^2 \le B^2$, where $B \in \mathbb{R}_+$. Then, with probability at least $1 - \delta$, for all $h \in H$,
$$R(h) \le \hat R(h) + \frac{u\,\Gamma_0^2 B^2}{2(m+u)} + \sqrt{\Big(\frac{u\,\Gamma_0^2 B^2}{2(m+u)}\Big)^2 + \Gamma_0^2 B^2\, \hat R(h)}. \qquad (4)$$
Proof. For any $h \in H$, let $R_1(h)$ be defined by:
$$R_1(h) = \int_0^{B^2} \sqrt{\theta(h, t, X)}\, dt. \qquad (5)$$
By the Cauchy-Schwarz inequality,
$$R_1(h) \le \Big( \int_0^{B^2} \theta(h, t, X)\, dt \Big)^{1/2} \Big( \int_0^{B^2} 1\, dt \Big)^{1/2} = B \Big( \int_0^{B^2} \theta(h, t, X)\, dt \Big)^{1/2}. \qquad (6)$$
Let $D$ denote the uniform probability distribution associated to the sample $X$; thus $D(x) = \frac{1}{m+u}$ for all $x \in X$. Let $\Pr_{x \sim D}[E_x]$ denote the probability of event $E_x$ when $x$ is randomly drawn according to $D$. By definition of $R_0$ and the Lebesgue integral, for all $h \in H$,
$$R_0(h) = \int_X (h(x) - y_x)^2 D(x)\, dx = \int_0^{\infty} \Pr_{x \sim D}\big[(h(x) - y_x)^2 > t\big]\, dt = \int_0^{B^2} \theta(h, t, X)\, dt. \qquad (7)$$
Similarly, setting $X_m = \{x_i \in X : i \in [1, m]\}$ and $X_u = \{x_i \in X : i \in [m+1, m+u]\}$, we have
$$\hat R(h) = \int_0^{B^2} \theta(h, t, X_m)\, dt \quad \text{and} \quad R(h) = \int_0^{B^2} \theta(h, t, X_u)\, dt. \qquad (8)$$
In view of Equation 7, Inequality 6 can be rewritten as $R_1(h) \le B \sqrt{R_0(h)}$. By [Cortes and Mohri, 2006][Theorem 2], for all $\Gamma > 0$ and for any $t \ge 0$,
$$\Pr\Big[ \sup_{h \in H} \frac{\theta(h, t, X_u) - \theta(h, t, X_m)}{\sqrt{\theta(h, t, X)}} > \Gamma \Big] \le N(m+u)\, \tilde\Phi(\Gamma). \qquad (9)$$
Fix $\Gamma > 0$. Then, with probability at least $1 - N(m+u)\tilde\Phi(\Gamma)$, for all integers $n > 1$ and $i \ge 0$,
$$\frac{\theta\big(h, \frac{iB^2}{n}, X_u\big) - \theta\big(h, \frac{iB^2}{n}, X_m\big)}{\sqrt{\theta\big(h, \frac{iB^2}{n}, X\big)}} \le \Gamma. \qquad (10)$$
Then, the convergence of the Riemann sums to the integral ensures that
$$R(h) - \hat R(h) = \lim_{n \to \infty} \frac{B^2}{n} \sum_{i=0}^{n} \Big[ \theta\big(h, \tfrac{iB^2}{n}, X_u\big) - \theta\big(h, \tfrac{iB^2}{n}, X_m\big) \Big] \qquad (11)$$
$$\le\; \Gamma \lim_{n \to \infty} \frac{B^2}{n} \sum_{i=0}^{n} \sqrt{\theta\big(h, \tfrac{iB^2}{n}, X\big)} \;=\; \Gamma\, R_1(h) \;\le\; \Gamma B \sqrt{R_0(h)}. \qquad (12)$$
Let $\delta > 0$ and select $\Gamma = \Gamma_0$ as the minimum value of $\Gamma$ such that $N(m+u)\tilde\Phi(\Gamma) \le \delta$; then with probability at least $1 - \delta$,
$$R(h) - \hat R(h) \le \Gamma_0 B \sqrt{R_0(h)}. \qquad (13)$$
Plugging in the following expression of $R_0(h)$ with respect to $\hat R(h)$ and $R(h)$,
$$R_0(h) = \frac{m}{m+u} \hat R(h) + \frac{u}{m+u} R(h), \qquad (14)$$
and solving the second-degree equation in $R(h)$ yields directly the statement of the theorem.

Theorem 1 provides a general bound on the regression error within the transduction setting. The theorem can also be used to derive a bound in the classification case by simply setting $B = 1$. The resulting bound coincides with the tight classification bound given by Vapnik [1998]. The bound given by Theorem 1 depends on the function $\tilde\Phi$ and is implicit. The following provides a general and explicit error bound for transductive regression, directly expressed in terms of the empirical error, the number of equivalence classes $N(m+u)$ or the VC-dimension $d$, and the sample sizes $m$ and $u$.

Corollary 1. Let $H$ be a set of hypotheses with VC-dimension $d$. Assume that the loss function is bounded: for all $h \in H$ and $x \in X$, $(h(x) - y_x)^2 \le B^2$, where $B \in \mathbb{R}_+$. Then, with probability at least $1 - \delta$, for all $h \in H$,
$$R(h) \le \hat R(h) + \frac{u\,\Gamma^2 B^2}{2(m+u)} + \sqrt{\Big(\frac{u\,\Gamma^2 B^2}{2(m+u)}\Big)^2 + \Gamma^2 B^2\, \hat R(h)}, \qquad (15)$$
with
$$\Gamma = \sqrt{\frac{2(m+u)}{mu}\Big(\log N(m+u) + \log\frac{1}{\delta}\Big)} \;\le\; \sqrt{\frac{2(m+u)}{mu}\Big(d \log\frac{(m+u)e}{d} + \log\frac{1}{\delta}\Big)}.$$
Proof. By Theorem 1, Inequality 15 holds for all $\Gamma > 0$ such that $N(m+u)\tilde\Phi(\Gamma) \le \delta$. By [Cortes and Mohri, 2006][Corollary 2], $\log\big(N(m+u)\tilde\Phi(\Gamma)\big) \le \log N(m+u) - \frac{1}{2}\frac{mu}{m+u}\Gamma^2$. Setting $\log\delta$ to match this upper bound yields the expression of $\Gamma$ given above. Since $N(m+u)$ is bounded by the shattering coefficient of $H$ of order $m+u$, by Sauer's lemma, $\log N(m+u) \le d \log\frac{(m+u)e}{d}$. This gives the upper bound on $\Gamma$ in terms of the VC-dimension.

The bound is explicit and can be readily used within the Structural Risk Minimization (SRM) framework, either by using the expression of $\Gamma$ in terms of the VC-dimension, or the tighter expression with respect to the number of equivalence classes $N$. In the latter case, a structure of increasing numbers of equivalence classes can be constructed as in [Vapnik, 1998, page 360]. A more practical algorithm inspired by these concepts is described in the next section.
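To see how the explicit bound behaves numerically, here is a small sketch of eq. (15) under our reading of the reconstructed constant $\Gamma$; since parts of that formula were recovered from a garbled source, treat the exact constants as tentative and check them against the technical report.

```python
import numpy as np

def corollary1_bound(emp_err, m, u, d, B, delta):
    # Gamma^2 with the VC-dimension upper bound on log N(m+u) (reconstructed constants)
    gamma2 = 2.0 * (m + u) / (m * u) * (d * np.log((m + u) * np.e / d)
                                        + np.log(1.0 / delta))
    s = u * gamma2 * B ** 2 / (2.0 * (m + u))
    return emp_err + s + np.sqrt(s ** 2 + gamma2 * B ** 2 * emp_err)

bound = corollary1_bound(emp_err=0.05, m=100, u=1000, d=5, B=1.0, delta=0.05)
```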
4 Transductive Regression Algorithm

This section presents an algorithm for the transductive regression problem. Before presenting this algorithm, let us first emphasize that the algorithms introduced for transductive classification problems, e.g., transductive SVMs [Vapnik, 1998, Joachims, 1999], cannot be readily used for regression. These algorithms typically select the hypothesis $h$, out of a hypothesis space $H$, that minimizes the following objective
$$\min_{y^*_{m+i},\, i=1,\ldots,u}\; \Omega(h) + C\, \frac{1}{m} \sum_{i=1}^{m} L\big(h(x_i), y_i\big) + C'\, \frac{1}{u} \sum_{i=1}^{u} L\big(h(x_{m+i}), y^*_{m+i}\big), \qquad (16)$$
where $\Omega(h)$ is a capacity measure term, $L$ is the loss function used, $C \ge 0$ and $C' \ge 0$ are regularization parameters, and where the minimum is taken over all possible labels $y^*_{m+1}, \ldots, y^*_{m+u}$ for the test points. In regression, this scheme would lead to a trivial solution not exploiting the transduction setting. Indeed, let $h_0$ be the hypothesis minimizing the first two terms, that is, the solution of the induction problem. For the particular choice $y^*_{m+i} = h_0(x_{m+i})$, $i = 1, \ldots, u$, the third term vanishes. Thus, $h_0$ also minimizes the sum of all three terms. In two-group classification, the trivial solution is typically not the solution of the minimization problem because in general $h_0(x_{m+i})$ is not in $\{0, 1\}$.

The main idea behind the design of our algorithm is to exploit the additional information provided in transduction, that is, the position of the unlabeled examples. Our algorithm has two stages. The first stage is based on the position of unlabeled points. For each unlabeled point $x_i$, $i = m+1, \ldots, m+u$, a local estimate label $\tilde y_i$ is determined using the labeled points in the neighborhood of $x_i$. In the second stage, a global hypothesis $h$ is found that best matches all labels, those of the training data and the estimate labels $\tilde y_i$. This second stage is critical and distinguishes our method from other suggested ones. While using local information to determine labels is important (see for example the discussion of Vapnik [1998]), it is not sufficient for a robust prediction. A global estimate of all labels is needed to make predictions less vulnerable to noise.

4.1 Local Estimates

Let $\Phi$ be a feature mapping from $X$ to a vector space $F$ provided with a norm. We fix a radius $r \ge 0$ and consider, for all $x' \in X_u$, the ball of radius $r$ centered at $\Phi(x')$, denoted by $B(\Phi(x'), r)$. This defines the neighborhood of the image of each unlabeled point. A single radius $r$ is used for all neighborhoods to limit the number of parameters of the algorithm. Labeled points $x \in X_m$ whose images $\Phi(x)$ fall within the neighborhood of $\Phi(x')$, $x' \in X_u$, help determine an estimate label of $x'$. With a very large radius $r$, the labels of all training examples contribute to the definition of the local estimates. But, with smaller radii, only a limited number of computations are needed. When no such labeled point exists in the neighborhood of $x' \in X_u$, which depends on the radius $r$ selected, $x'$ is disregarded in both training stages of the algorithm.

There are many possible ways to define the estimate label of $x' \in X_u$ based on the neighborhood points. One simple way consists of defining it as the weighted average of the neighborhood labels $y_x$, where the weights may be defined as the inverses of the distances of $\Phi(x)$ to $\Phi(x')$,
or as similarity measures $K(x, x')$ when a positive definite kernel $K$ is associated to $\Phi$. Thus, when the set of labeled points with images in the neighborhood of $\Phi(x')$ is not empty, $I = \{i \in [1, m] : \Phi(x_i) \in B(\Phi(x'), r)\} \neq \emptyset$, the estimate label $\tilde y_{x'}$ of $x' \in X_u$ can be given by:
$$\tilde y_{x'} = \frac{\sum_{i \in I} w_i y_i}{\sum_{i \in I} w_i}, \quad \text{with } w_i^{-1} = \|\Phi(x') - \Phi(x_i)\| \le r \ \text{ or } \ w_i = K(x', x_i). \qquad (17)$$
The estimate labels can also be obtained as the solution of a local linear or kernel ridge regression, which is what we used in most of our experiments. In practice, with a relatively small radius $r$, the computation of an estimated label $\tilde y_i$ depends only on a limited number of labeled points and their labels, and is quite efficient.

4.2 Global Optimization

The second stage of our algorithm consists of selecting a hypothesis $h$ that best fits the labels of the training points and the estimate labels provided in the first stage. As suggested by Corollary 1, hypothesis spaces with a smaller number of equivalence classes guarantee a better generalization error. The bound also suggests reducing the empirical error. This leads us to consider the following objective function
$$G = \|w\|^2 + C \sum_{i=1}^{m} (h(x_i) - y_i)^2 + C' \sum_{i=m+1}^{m+u} (h(x_i) - \tilde y_i)^2, \qquad (18)$$
where $h$ is a linear function with weight vector $w \in F$: $\forall x \in X$, $h(x) = w \cdot \Phi(x)$, and where $C \ge 0$ and $C' \ge 0$ are regularization parameters. The first two terms of the objective function coincide with those used in standard (kernel) ridge regression. The third term, which restricts the estimate error, can be viewed as imposing a smaller number of equivalence classes on the hypothesis space, as suggested by the error bound of Corollary 1. The constraint explicitly exploits knowledge about the location of all the test points and limits the range of the hypothesis at these locations, thereby reducing the number of equivalence classes. Our algorithm can be viewed as a generalization of (kernel) ridge regression to the transductive setting. In the following, we will show that this generalized optimization problem admits a closed-form solution and a natural kernel-based solution.

4.2.1 Primal solution

Let $N$ be the dimension of the feature space and let $W \in \mathbb{R}^{N \times 1}$ denote the column matrix whose components are the coordinates of $w$, $Y \in \mathbb{R}^{m \times 1}$ the column matrix whose components are the labels $y_i$ of the training examples, and $\tilde Y \in \mathbb{R}^{u \times 1}$ the column matrix whose components are the estimated labels $\tilde y_i$ of the test examples. Let $X = [\Phi(x_1), \ldots, \Phi(x_m)] \in \mathbb{R}^{N \times m}$ denote the matrix whose columns are the components of the images by $\Phi$ of the training examples, and similarly $X' = [\Phi(x_{m+1}), \ldots, \Phi(x_{m+u})] \in \mathbb{R}^{N \times u}$ the matrix corresponding to the test examples. $G$ can then be rewritten as:
$$G = \|W\|^2 + C\, \|X^\top W - Y\|^2 + C'\, \|X'^\top W - \tilde Y\|^2. \qquad (19)$$
$G$ is convex and differentiable, and its gradient is given by
$$\nabla G = 2W + 2C\, X (X^\top W - Y) + 2C'\, X' (X'^\top W - \tilde Y). \qquad (20)$$
The matrix $W$ minimizing $G$ is the unique solution of $\nabla G = 0$. Since $(I_N + C X X^\top + C' X' X'^\top)$ is invertible, it is given by the following expression
$$W = (I_N + C X X^\top + C' X' X'^\top)^{-1} (C X Y + C' X' \tilde Y). \qquad (21)$$
This gives a closed-form solution in the primal space based on the inversion of a matrix in $\mathbb{R}^{N \times N}$. Let $T(N)$ be the time complexity of computing the inverse of a matrix in $\mathbb{R}^{N \times N}$. $T(N) = O(N^3)$ using standard methods, or $T(N) = O(N^{2.376})$ with the method of Coppersmith and Winograd. The time complexity of the computation of $W$ from $X$, $X'$, $Y$, and $\tilde Y$ is thus in $O(T(N) + (m+u)N^2)$.
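The two stages translate directly into a short NumPy sketch; this is our own reading of (17) and (21) with rows-as-examples conventions (so the paper's $X^\top$ is our feature matrix), not the authors' code.

```python
import numpy as np

def local_estimates(X_lab, y_lab, X_unl, r):
    """Stage 1 sketch (eq. 17): inverse-distance weighted average of labeled
    points within radius r; points with empty neighborhoods stay NaN and are
    disregarded in training, as in the text."""
    y_est = np.full(len(X_unl), np.nan)
    for i, x in enumerate(X_unl):
        d = np.linalg.norm(X_lab - x, axis=1)
        mask = d <= r
        if mask.any():
            w = 1.0 / np.maximum(d[mask], 1e-12)
            y_est[i] = w @ y_lab[mask] / w.sum()
    return y_est

def primal_solution(X_lab, y_lab, X_unl, y_est, C, Cp):
    """Stage 2 sketch (eq. 21): W = (I + C X X^T + C' X' X'^T)^{-1} (C X Y + C' X' Yhat)."""
    keep = ~np.isnan(y_est)
    Xl, Xu, yu = X_lab.T, X_unl[keep].T, y_est[keep]   # columns = feature vectors
    N = Xl.shape[0]
    A = np.eye(N) + C * (Xl @ Xl.T) + Cp * (Xu @ Xu.T)
    b = C * (Xl @ y_lab) + Cp * (Xu @ yu)
    return np.linalg.solve(A, b)                        # weight vector W

# predictions on the test points: X_unl @ W
```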
When the dimension $N$ of the feature space is small compared to the number of examples $m + u$, which is typical in modern learning applications where $u$ is large, this method remains practical and leads to a very efficient computation. The use of the so-called empirical kernel map [Schölkopf and Smola, 2002] also makes this method very attractive. Given a kernel $K$, the empirical kernel feature vector associated to $x$ is the $m$-dimensional vector $\Psi(x) = [K(x, x_1), \ldots, K(x, x_m)]^\top$. Thus, the dimension of the feature space is then $N = m$. For relatively small $m$, even for very large values of $u$ with respect to $m$, the solution is efficiently computable and yet benefits from the use of kernels. This computational advantage is not shared by other methods such as the manifold regularization techniques [Belkin et al., 2004], or even by the regression technique described by [Chapelle et al., 1999], despite the fact that it is based on a primal method (we have derived a dual version of that method as well, see Section 5), since it requires among other things the inversion of a matrix in $\mathbb{R}^{u \times u}$. Once $W$ is computed, prediction can be done by computing $X'^\top W$ in time $O(uN)$.

4.2.2 Dual solution

The computation can also be done in the dual space, which is useful in the case of very high-dimensional feature spaces. Let $M_X \in \mathbb{R}^{N \times (m+u)}$ and $M_Y \in \mathbb{R}^{(m+u) \times 1}$ be the matrices defined by:
$$M_X = \big[\, \sqrt{C}\, X \;\; \sqrt{C'}\, X' \,\big], \qquad M_Y = \begin{bmatrix} \sqrt{C}\, Y \\ \sqrt{C'}\, \tilde Y \end{bmatrix}. \qquad (22)$$
Then, Equation 21 can be rewritten as $W = (I_N + M_X M_X^\top)^{-1} M_X M_Y$. To determine the dual solution, observe that
$$M_X^\top (M_X M_X^\top + \eta I_N)^{-1} = (M_X^\top M_X + \eta I_{m+u})^{-1} M_X^\top, \qquad (23)$$
where $I_{m+u}$ denotes the identity matrix of $\mathbb{R}^{(m+u) \times (m+u)}$. This can be derived without difficulty from a series expansion of $(M_X M_X^\top + \eta I_N)^{-1}$. Thus, $W$ can also be computed via:
$$W = M_X (I_{m+u} + \mathbf{K})^{-1} M_Y, \qquad (24)$$
where $\mathbf{K}$ is the Gram matrix $\mathbf{K} = M_X^\top M_X$. Let $K_{21} \in \mathbb{R}^{u \times m}$ and $K_{22} \in \mathbb{R}^{u \times u}$ be the sub-matrices of the Gram matrix defined by $K_{21} = \big(K(x_{m+i}, x_j)\big)_{1 \le i \le u,\, 1 \le j \le m}$ and $K_{22} = \big(K(x_{m+i}, x_{m+j})\big)_{1 \le i, j \le u}$, and let $K_2 \in \mathbb{R}^{u \times (m+u)}$ be the matrix defined by:
$$K_2 = \big[\, \sqrt{C}\, K_{21} \;\; \sqrt{C'}\, K_{22} \,\big] = X'^\top M_X. \qquad (25)$$
(Im+u + K)?1 MY can be computed in O(T (m + u) + (m + u)2 tK ) and predictions are computed in time O(u (m + u)), where tK is the time complexity of the computation of K(x, x), x, x? ? X . As already pointed in the description of the local estimates, in practice, some unlabeled points are disregarded in the training phases because no labeled point falls in their neighborhood. Thus, instead of u, a smaller number of unlabeled examples u? ? u determines the computational cost. 5 Experimental Results This section reports the results of our experiments with the transductive regression algorithm just presented with several data sets. For comparison, we also implemented the algorithm of Chapelle et al. [1999] and that of Belkin et al. [2004], which are among the very few algorithms described in the literature dealing specifically with the problem of transductive regression. For the algorithm of Chapelle et al. [1999], we in fact derived and implemented a dual solution not described in the original paper. With the notation used in that paper, it can be shown that ?K ? ? (K ?K ? ? + ?I)?1 . C=I?K (27) Our comparisons were made using several publicly available regression data sets: Boston Housing, kin-32fh a data set in the Kinematics family with high unpredictability or noise, California Housing, and Elevators [Torgo, 2006]. For the Boston Housing data set, we used the same partitioning of the training and test sets as in [Chapelle et al., 1999]: 481 training examples and 25 test examples. The input variables were normalized to have mean zero and a variance one. For the kin-32fh, California Housing, and Elevators data sets, 25 training examples were used with varying (large) amounts of test examples: 2,500 and 8,000 for kin-32fh; from 500 up to 20,000 for California Housing; and from 500 to 15,000 for Elevators. The experiments were repeated for 100 random partitions of training and test sets. The kernels used with all algorithms were Gaussian kernels. To measure the improvement produced by the transductive inference algorithms, we used kernel ridge regression as a baseline. The optimal values for the width of the Gaussian ? and the ridge C1 were determined using cross-validation. These parameters were then fixed at these values. The remaining parameters for our algorithm, r and C ? , were determined using a grid search and cross-validation. The parameters of the algorithms of Chapelle et al. [1999] and Belkin et al. [2004] were determined in the same way. Alternatively, the parameters could be selected using the explicit VC-dimension generalization bound of Corollary 1. For our algorithm, we found the best values of r to be typically among the 2.5% smallest distances between training and test points. Thus, each estimate label was determined by only a small number of labeled points. For our algorithm, we experimented both with the dual solution using Gaussian kernels, and the primal solution with an empirical Gaussian kernel map as described in Section 4.2.1. The results obtained were very similar, however the primal method was dramatically faster since it required the inversion of relatively small-dimensional matrices even for a large number of unlabeled examples. For consistency, all the results reported for our method relate to the dual solution, except from those with very large u, e.g. u ? 10,000, where the dual method was too time-consuming. Table 1 shows the results of our experiments. 
5 Experimental Results

This section reports the results of our experiments with the transductive regression algorithm just presented on several data sets. For comparison, we also implemented the algorithm of Chapelle et al. [1999] and that of Belkin et al. [2004], which are among the very few algorithms described in the literature dealing specifically with the problem of transductive regression. For the algorithm of Chapelle et al. [1999], we in fact derived and implemented a dual solution not described in the original paper. With the notation used in that paper, it can be shown that
$$C = I - \tilde K^\top (\tilde K \tilde K^\top + \lambda I)^{-1} \tilde K. \qquad (27)$$
Our comparisons were made using several publicly available regression data sets: Boston Housing, kin-32fh (a data set in the Kinematics family with high unpredictability or noise), California Housing, and Elevators [Torgo, 2006]. For the Boston Housing data set, we used the same partitioning of the training and test sets as in [Chapelle et al., 1999]: 481 training examples and 25 test examples. The input variables were normalized to have mean zero and variance one. For the kin-32fh, California Housing, and Elevators data sets, 25 training examples were used with varying (large) amounts of test examples: 2,500 and 8,000 for kin-32fh; from 500 up to 20,000 for California Housing; and from 500 to 15,000 for Elevators. The experiments were repeated for 100 random partitions of training and test sets. The kernels used with all algorithms were Gaussian kernels. To measure the improvement produced by the transductive inference algorithms, we used kernel ridge regression as a baseline. The optimal values for the width $\sigma$ of the Gaussian and the ridge $C_1$ were determined using cross-validation. These parameters were then fixed at these values. The remaining parameters for our algorithm, $r$ and $C'$, were determined using a grid search and cross-validation. The parameters of the algorithms of Chapelle et al. [1999] and Belkin et al. [2004] were determined in the same way. Alternatively, the parameters could be selected using the explicit VC-dimension generalization bound of Corollary 1. For our algorithm, we found the best values of $r$ to be typically among the 2.5% smallest distances between training and test points. Thus, each estimate label was determined by only a small number of labeled points. For our algorithm, we experimented both with the dual solution using Gaussian kernels and with the primal solution with an empirical Gaussian kernel map as described in Section 4.2.1. The results obtained were very similar; however, the primal method was dramatically faster since it required the inversion of relatively small-dimensional matrices even for a large number of unlabeled examples. For consistency, all the results reported for our method relate to the dual solution, except for those with very large $u$, e.g., $u \ge 10{,}000$, where the dual method was too time-consuming. Table 1 shows the results of our experiments.

Table 1: Transductive regression experiments; relative improvement in MSE (%). The number in brackets after the name indicates the input dimensionality of the data set. The number of training examples was m = 481 for the Boston Housing data set and m = 25 for the other tasks. The number of unlabeled examples was u = 25 for the Boston Housing data set and varied from u = 500 to the maximum of 20,000 examples for the California Housing data set. For u >= 10,000, the algorithms of Chapelle et al. [1999] and Belkin et al. [2004] did not terminate within the time period of our experiments.

Dataset                 | No. of unlab. points | Our algorithm | Chapelle et al. [1999] | Belkin et al. [2004]
Boston Housing [13]     | 25     | 20.2 ± 14.7 | 4.3 ± 11.3 | 2.4 ± 5.4
California Housing [8]  | 500    | 8.4 ± 6.9   | 2.7 ± 3.0  | 3.9 ± 12.3
California Housing [8]  | 2,500  | 25.9 ± 8.3  | 0.2 ± 0.3  | 0.0 ± 0.0
California Housing [8]  | 5,000  | 17.2 ± 8.7  | 0.0 ± 0.0  | 0.0 ± 0.0
California Housing [8]  | 20,000 | 22.0 ± 11.0 | -          | -
kin-32fh [32]           | 2,500  | 9.4 ± 3.7   | 2.2 ± 2.6  | 2.7 ± 3.1
kin-32fh [32]           | 8,000  | 18.4 ± 5.9  | 0.5 ± 0.5  | 0.9 ± 0.7
Elevators [18]          | 500    | 14.4 ± 10.4 | 1.5 ± 2.7  | 2.6 ± 7.7
Elevators [18]          | 2,500  | 9.0 ± 6.9   | 2.2 ± 2.9  | 0.0 ± 0.0
Elevators [18]          | 15,000 | 9.7 ± 5.8   | -          | -

For each data set and each algorithm, the relative improvement in mean squared error (MSE) with respect to the baseline, averaged over the random partitions, is indicated, followed by its standard deviation. Some improvements were small or not statistically significant. In general, we observed no significant performance improvement over the baseline on any of these data sets using the Laplacian regularized least squares method of Belkin et al. [2004]. We note that, while positive classification results have been previously reported for this algorithm, no transductive regression experimental result seems to have been published for it. Our results for the method of Chapelle et al. [1999] match those reported by the authors for the Boston Housing data set (both absolute and relative MSE). Our algorithm achieved a significant improvement of the MSE in all data sets and for different amounts of unlabeled data, and was shown to be practical for large data sets of 20,000 test examples. This matches many real-world situations where the amount of unlabeled data is orders of magnitude larger than that of labeled data.

6 Conclusion

We presented a general study of transductive regression. We gave new and general explicit error bounds for transductive regression and described a simple and general algorithm, inspired by our bound, that can scale to relatively large data sets. The results of experiments show that our algorithm achieves a smaller error in several tasks compared to other previously published algorithms for transductive regression. The problem of transductive regression arises in a variety of learning contexts, in particular for learning node labels of very large graphs such as the web graph. This leads to computational problems that may require approximations or new algorithms. We hope that our study will be useful for dealing with these and other similar transductive regression problems.

References

Mikhail Belkin, Partha Niyogi, and Vikas Sindhwani. Manifold regularization: a geometric framework for learning from examples. Technical Report TR-2004-06, University of Chicago, 2004.
Kristin Bennett and Ayhan Demiriz. Semi-supervised support vector machines. NIPS 11, pages 368-374, 1998.
Olivier Chapelle, Vladimir Vapnik, and Jason Weston. Transductive inference for estimating values of functions. NIPS 12, pages 421-427, 1999.
Adrian Corduneanu and Tommi Jaakkola. On information regularization. In Christopher Meek and Uffe Kjaerulff, editors, Proceedings of the Nineteenth Annual Conference on Uncertainty in Artificial Intelligence, pages 151-158, 2003.
Corinna Cortes and Mehryar Mohri. On transductive regression. Technical Report TR2006-883, Courant Institute of Mathematical Sciences, New York University, November 2006.
Philip Derbeko, Ran El-Yaniv, and Ron Meir. Explicit learning curves for transduction and application to clustering and compression algorithms. J. Artif. Intell. Res. (JAIR), 22:117-142, 2004.
Thore Graepel, Ralf Herbrich, and Klaus Obermayer. Bayesian transduction. NIPS 12, 1999.
Thorsten Joachims. Transductive inference for text classification using support vector machines. In Ivan Bratko and Saso Dzeroski, editors, Proceedings of ICML-99, 16th International Conference on Machine Learning, pages 200-209. Morgan Kaufmann Publishers, San Francisco, US, 1999.
Gert R. G. Lanckriet, Nello Cristianini, Peter Bartlett, Laurent El Ghaoui, and Michael I. Jordan. Learning the kernel matrix with semidefinite programming. J. Mach. Learn. Res., 5:27-72, 2004.
Bernhard Schölkopf and Alex Smola. Learning with Kernels. MIT Press, Cambridge, MA, 2002.
Dale Schuurmans and Finnegan Southey. Metric-based methods for adaptive model selection and regularization. Machine Learning, 48:51–84, 2002.
Luís Torgo. Regression datasets, 2006. http://www.liacc.up.pt/~ltorgo/Regression/DataSets.html.
Vladimir N. Vapnik. Estimation of Dependences Based on Empirical Data. Springer, Berlin, 1982.
Vladimir N. Vapnik. Statistical Learning Theory. Wiley-Interscience, New York, 1998.
Dengyong Zhou, Jiayuan Huang, and Bernhard Schölkopf. Learning from labeled and unlabeled data on a directed graph. In L. De Raedt and S. Wrobel, editors, Proceedings of ICML-05, pages 1041–1048, 2005.
Xiaojin Zhu, Jaz Kandola, Zoubin Ghahramani, and John Lafferty. Nonparametric transforms of graph kernels for semi-supervised learning. NIPS 17, 2004.
Correcting Sample Selection Bias by Unlabeled Data

Jiayuan Huang (School of Computer Science, Univ. of Waterloo, Canada, jhuang@cs.uwaterloo.ca)
Alexander J. Smola (NICTA, ANU, Canberra, Australia, Alex.Smola@anu.edu.au)
Arthur Gretton (MPI for Biological Cybernetics, Tübingen, Germany, arthur@tuebingen.mpg.de)
Karsten M. Borgwardt (Ludwig-Maximilians-University, Munich, Germany, kb@dbs.ifi.lmu.de)
Bernhard Schölkopf (MPI for Biological Cybernetics, Tübingen, Germany, bernhard.schoelkopf@tuebingen.mpg.de)

Abstract

We consider the scenario where training and test data are drawn from different distributions, commonly referred to as sample selection bias. Most algorithms for this setting try to first recover the sampling distributions and then make appropriate corrections based on the distribution estimate. We present a nonparametric method which directly produces resampling weights without distribution estimation. Our method works by matching distributions between training and testing sets in feature space. Experimental results demonstrate that our method works well in practice.

1 Introduction

The default assumption in many learning scenarios is that training and test data are independently and identically (iid) drawn from the same distribution. When the distributions on training and test set do not match, we are facing sample selection bias or covariate shift. Specifically, given a domain of patterns X and labels Y, we obtain training samples Z = {(x_1, y_1), …, (x_m, y_m)} ⊆ X × Y from a Borel probability distribution Pr(x, y), and test samples Z′ = {(x′_1, y′_1), …, (x′_{m′}, y′_{m′})} ⊆ X × Y drawn from another such distribution Pr′(x, y). Although there exists previous work addressing this problem [2, 5, 8, 9, 12, 16, 20], sample selection bias is typically ignored in standard estimation algorithms. Nonetheless, in reality the problem occurs rather frequently: while the available data have been collected in a biased manner, the test is usually performed over a more general target population. Below, we give two examples; similar situations occur in many other domains.

1. Suppose we wish to generate a model to diagnose breast cancer. Suppose, moreover, that most women who participate in the breast screening test are middle-aged and likely to have attended the screening in the preceding 3 years. Consequently our sample includes mostly older women and those who have low risk of breast cancer because they have been tested before. The examples do not reflect the general population with respect to age (which amounts to a bias in Pr(x)) and they only contain very few diseased cases (i.e. a bias in Pr(y|x)).

2. Gene expression profile studies using DNA microarrays are used in tumor diagnosis. A common problem is that the samples are obtained using certain protocols, microarray platforms and analysis techniques. In addition, they typically have small sample sizes. The test cases are recorded under different conditions, resulting in a different distribution of gene expression values.

In this paper, we utilize the availability of unlabeled data to direct a sample selection de-biasing procedure for various learning methods. Unlike previous work, we infer the resampling weights directly by distribution matching between training and testing sets in feature space, in a non-parametric manner. We do not require the estimation of biased densities or selection probabilities [20, 2, 12], or the assumption that probabilities of the different classes are known [8]. Rather, we account for the difference between Pr(x, y) and Pr′
(x, y) by reweighting the training points such that the means of the training and test points in a reproducing kernel Hilbert space (RKHS) are close. We call this reweighting process kernel mean matching (KMM). When the RKHS is universal [14], the population solution to this minimization is exactly the ratio Pr′(x, y)/Pr(x, y); however, we also derive a cautionary result, which states that even granted this ideal population reweighting, the convergence of the empirical means in the RKHS depends on an upper bound on the ratio of distributions (but not on the dimension of the space), and will be extremely slow if this ratio is large.

The required optimization is a simple QP problem, and the reweighted sample can be incorporated straightforwardly into several different regression and classification algorithms. We apply our method to a variety of regression and classification benchmarks from UCI and elsewhere, as well as to the classification of microarrays from prostate and breast cancer patients. These experiments demonstrate that KMM greatly improves learning performance compared with training on unweighted data, and that our reweighting scheme can in some cases outperform reweighting using the true sample bias distribution.

Key Assumption 1: In general, the estimation problem with two different distributions Pr(x, y) and Pr′(x, y) is unsolvable, as the two terms could be arbitrarily far apart. In particular, for arbitrary Pr(y|x) and Pr′(y|x), there is no way we could infer a good estimator based on the training sample. Hence we make the simplifying assumption that Pr(x, y) and Pr′(x, y) differ only via Pr(x, y) = Pr(y|x) Pr(x) and Pr′(x, y) = Pr(y|x) Pr′(x). In other words, the conditional probabilities of y|x remain unchanged (this particular case of sample selection bias has been termed covariate shift [12]). However, we will see experimentally that even in situations where our key assumption is not valid, our method can nonetheless perform well (see Section 4).

2 Sample Reweighting

We begin by stating the problem of regularized risk minimization. In general a learning method minimizes the expected risk

R[Pr, θ, l(x, y, θ)] = E_{(x,y)∼Pr}[l(x, y, θ)]  (1)

of a loss function l(x, y, θ) that depends on a parameter θ. For instance, the loss function could be the negative log-likelihood −log Pr(y|x, θ), a misclassification loss, or some form of regression loss. However, since typically we only observe examples (x, y) drawn from Pr(x, y) rather than Pr′(x, y), we resort to computing the empirical average

R_emp[Z, θ, l(x, y, θ)] = (1/m) Σ_{i=1}^m l(x_i, y_i, θ).  (2)

To avoid overfitting, instead of minimizing R_emp directly we often minimize a regularized variant R_reg[Z, θ, l(x, y, θ)] := R_emp[Z, θ, l(x, y, θ)] + λΩ[θ], where Ω[θ] is a regularizer.

2.1 Sample Correction

The problem is more involved if Pr(x, y) and Pr′(x, y) are different. The training set is drawn from Pr; however, what we would really like is to minimize R[Pr′, θ, l], as we wish to generalize to test examples drawn from Pr′. An observation from the field of importance sampling is that

R[Pr′, θ, l(x, y, θ)] = E_{(x,y)∼Pr′}[l(x, y, θ)] = E_{(x,y)∼Pr}[ (Pr′(x, y)/Pr(x, y)) l(x, y, θ) ]  (3)
 = R[Pr, θ, β(x, y) l(x, y, θ)],  (4)

where β(x, y) := Pr′(x, y)/Pr(x, y), provided that the support of Pr′ is contained in the support of Pr. Given β(x, y), we can thus compute the risk with respect to Pr′ using Pr. Similarly, we can estimate the risk with respect to Pr′ by computing R_emp[Z, θ, β(x, y) l(x, y, θ)].
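As a concrete illustration of the identity (3)–(4), the sketch below estimates the Pr′-risk from training samples via importance weights. It assumes the two densities are available, which is exactly what KMM will later avoid; the function and variable names are ours.

```python
import numpy as np

def importance_weighted_risk(losses, p_train, p_test):
    """Estimate E_{Pr'}[l] from samples drawn under Pr (eqs. (3)-(4)).

    losses[i]  = l(x_i, y_i, theta) evaluated on training points.
    p_train[i] = Pr(x_i, y_i),  p_test[i] = Pr'(x_i, y_i).
    """
    beta = p_test / p_train          # beta(x, y) = Pr'(x, y) / Pr(x, y)
    return float(np.mean(beta * losses))
```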
The key problem is that the coefficients β(x, y) are usually unknown, and we need to estimate them from the data. When Pr and Pr′ differ only in Pr(x) and Pr′(x), we have β(x, y) = Pr′(x)/Pr(x), where β is a reweighting factor for the training examples. We thus reweight every observation (x, y) such that observations that are under-represented in Pr obtain a higher weight, whereas over-represented cases are downweighted.

Now we could estimate Pr and Pr′ and subsequently compute β based on those estimates. This is closely related to the methods in [20, 8], as they have to either estimate the selection probabilities or have prior knowledge of the class distributions. Although intuitive, this approach has two major problems: first, it only works whenever the density estimates for Pr and Pr′ (or potentially, the selection probabilities or class distributions) are good. In particular, small errors in estimating Pr can lead to large coefficients β and consequently to a serious overweighting of the corresponding observations. Second, estimating both densities just for the purpose of computing reweighting coefficients may be overkill: we may be able to directly estimate the coefficients β_i := β(x_i, y_i) without having to estimate the two distributions. Furthermore, we can regularize β_i directly with more flexibility, taking prior knowledge into account similarly to learning methods for other problems.

2.2 Using the sample reweighting in learning algorithms

Before we describe how we will estimate the reweighting coefficients β_i, let us briefly discuss how to minimize the reweighted regularized risk

R_reg[Z, θ, l(x, y, θ)] := (1/m) Σ_{i=1}^m β_i l(x_i, y_i, θ) + λΩ[θ]  (5)

in the classification and regression settings (an additional classification method is discussed in the accompanying technical report [7]).

Support Vector Classification: Utilizing the setting of [17] we obtain the following minimization problem (the original SVMs can be formulated in the same way):

minimize_{θ,ξ} (1/2)‖θ‖² + C Σ_{i=1}^m β_i ξ_i  (6a)
subject to ⟨Φ(x_i, y_i) − Φ(x_i, y), θ⟩ ≥ 1 − ξ_i/Δ(y_i, y) for all y ∈ Y, and ξ_i ≥ 0.  (6b)

Here, Φ(x, y) is a feature map from X × Y into a feature space F, where θ ∈ F, and Δ(y, y′) denotes a discrepancy function between y and y′. The dual of (6) is given by

minimize_α (1/2) Σ_{i,j=1}^m Σ_{y,y′∈Y} α_{iy} α_{jy′} k(x_i, y, x_j, y′) − Σ_{i=1}^m Σ_{y∈Y} α_{iy}  (7a)
subject to α_{iy} ≥ 0 for all i, y, and Σ_{y∈Y} α_{iy}/Δ(y_i, y) ≤ β_i C.  (7b)

Here k(x, y, x′, y′) := ⟨Φ(x, y), Φ(x′, y′)⟩ denotes the inner product between the feature maps. This generalizes the observation-dependent binary SV classification described in [10]. Modifications of existing solvers, such as SVMStruct [17], are straightforward.

Penalized LMS Regression: Assume l(x, y, θ) = (y − ⟨Φ(x), θ⟩)² and Ω[θ] = ‖θ‖². Here we minimize

Σ_{i=1}^m β_i (y_i − ⟨Φ(x_i), θ⟩)² + λ‖θ‖².  (8)

Denote by β̄ the diagonal matrix with diagonal (β_1, …, β_m) and let K ∈ R^{m×m} be the kernel matrix with K_ij = k(x_i, x_j). In this case minimizing (8) is equivalent to minimizing (y − Kα)⊤ β̄ (y − Kα) + λ α⊤Kα with respect to α. Assuming that K and β̄ have full rank, the minimization yields α = (λβ̄^{-1} + K)^{-1} y. The advantage of this formulation is that it can be solved as easily as the standard penalized regression problem. Essentially, we rescale the regularizer depending on the pattern weights: the higher the weight of an observation, the less we regularize.

3 Distribution Matching

3.1 Kernel Mean Matching and its relation to importance sampling

Let Φ : X →
F be a map into a feature space F and denote by μ : P → F the expectation operator

μ(Pr) := E_{x∼Pr(x)}[Φ(x)].  (9)

Clearly μ is a linear operator mapping the space of all probability distributions P into feature space. Denote by M(Φ) := {μ(Pr) : Pr ∈ P} the image of P under μ. This set is also often referred to as the marginal polytope. We have the following theorem (proved in [7]):

Theorem 1. The operator μ is bijective if F is an RKHS with a universal kernel k(x, x′) = ⟨Φ(x), Φ(x′)⟩ in the sense of Steinwart [15].

The use of feature space means to compare distributions is further explored in [3]. The practical consequence of this (rather abstract) result is that if we know μ(Pr′), we can infer a suitable β by solving the following minimization problem:

minimize_β ‖μ(Pr′) − E_{x∼Pr(x)}[β(x)Φ(x)]‖ subject to β(x) ≥ 0 and E_{x∼Pr(x)}[β(x)] = 1.  (10)

This is the kernel mean matching (KMM) procedure. For a proof of the following (and further results in the paper) see [7].

Lemma 2. The problem (10) is convex. Moreover, assume that Pr′ is absolutely continuous with respect to Pr (so Pr(A) = 0 implies Pr′(A) = 0). Finally assume that k is universal. Then the solution β(x) of (10) satisfies Pr′(x) = β(x)Pr(x).

3.2 Convergence of reweighted means in feature space

Lemma 2 shows that in principle, if we knew Pr and μ[Pr′], we could fully recover Pr′ by solving a simple quadratic program. In practice, however, neither μ(Pr′) nor Pr is known. Instead, we only have samples X and X′ of size m and m′, drawn iid from Pr and Pr′ respectively. Naively we could just replace the expectations in (10) by empirical averages and hope that the resulting optimization problem provides us with a good estimate of β. However, it is to be expected that empirical averages will differ from each other due to finite sample size effects. In this section, we explore two such effects. First, we demonstrate that in the finite sample case, for a fixed β, the empirical estimate of the expectation of β is normally distributed: this provides a natural limit on the precision with which we should enforce the constraint ∫β(x) dPr(x) = 1 when using empirical expectations (we will return to this point in the next section).

Lemma 3. If β(x) ∈ [0, B] is some fixed function of x ∈ X, then given x_i ∼ Pr iid such that β(x_i) has finite mean and non-zero variance, the sample mean (1/m) Σ_i β(x_i) converges in distribution to a Gaussian with mean ∫β(x) dPr(x) and standard deviation bounded by B/(2√m).

This lemma is a direct consequence of the central limit theorem [1, Theorem 5.5.15]. Alternatively, it is straightforward to get a large deviation bound that likewise converges as 1/√m [6]. Our second result demonstrates the deviation between the empirical means of Pr′ and β(x)Pr in feature space, given that β(x) is chosen perfectly in the population sense. In particular, this result shows that convergence of these two means will be slow if there is a large difference in the probability mass of Pr′ and Pr (and thus the bound B on the ratio of probability masses is large).

Lemma 4. In addition to the conditions of Lemma 3, assume that we draw X′ := {x′_1, …, x′_{m′}} iid from X using Pr′ = β(x)Pr, and ‖Φ(x)‖ ≤ R for all x ∈ X. Then with probability at least 1 − δ,

‖(1/m) Σ_{i=1}^m β(x_i)Φ(x_i) − (1/m′) Σ_{i=1}^{m′} Φ(x′_i)‖ ≤ (1 + √(2 log(2/δ))) R √(B²/m + 1/m′).  (11)

Note that this lemma shows that for a given β(x), which is correct in the population sense, we can bound the deviation between the feature space mean of Pr′
and the reweighted feature space mean of Pr. It is not a guarantee that we will find coefficients β_i that are close to β(x_i), but it gives us a useful upper bound on the outcome of the optimization. Lemma 4 implies that we have O(B √(1/m + 1/(B² m′))) convergence in m, m′ and B. This means that, for very different distributions, we need a large equivalent sample size to get reasonable convergence. Our result also implies that it is unrealistic to assume that the empirical means (reweighted or not) should match exactly.

3.3 Empirical KMM optimization

To find suitable values of β ∈ R^m we want to minimize the discrepancy between means subject to the constraints β_i ∈ [0, B] and |(1/m) Σ_{i=1}^m β_i − 1| ≤ ε. The former limits the scope of discrepancy between Pr and Pr′ whereas the latter ensures that the measure β(x)Pr(x) is close to a probability distribution. The objective function is given by the discrepancy term between the two empirical means. Using K_ij := k(x_i, x_j) and κ_i := (m/m′) Σ_{j=1}^{m′} k(x_i, x′_j), one may check that

‖(1/m) Σ_{i=1}^m β_i Φ(x_i) − (1/m′) Σ_{i=1}^{m′} Φ(x′_i)‖² = (1/m²) β⊤Kβ − (2/m²) κ⊤β + const.

We now have all necessary ingredients to formulate a quadratic problem to find suitable β via

minimize_β (1/2) β⊤Kβ − κ⊤β subject to β_i ∈ [0, B] and |Σ_{i=1}^m β_i − m| ≤ mε.  (12)

In accordance with Lemma 3, we conclude that a good choice of ε should be O(B/√m). Note that (12) is a quadratic program which can be solved efficiently using interior point methods or any other successive optimization procedure. We also point out that (12) resembles Single Class SVM [11] using the ν-trick. Besides the approximate equality constraint, the main difference is the linear correction term by means of κ. Large values of κ_i correspond to particularly important observations x_i and are likely to lead to large β_i.

4 Experiments

4.1 Toy regression example

Our first experiment is on toy data, and is intended mainly to provide a comparison with the approach of [12]. This method uses an information criterion to optimize the weights, under certain restrictions on Pr and Pr′ (namely, Pr′ must be known, while Pr can be either known exactly, Gaussian with unknown parameters, or approximated via kernel density estimation). Our data is generated according to the polynomial regression example from [12, Section 2], for which Pr ∼ N(0.5, 0.5²) and Pr′ ∼ N(0, 0.3²) are two normal distributions. The observations are generated according to y = −x + x³, and are observed in Gaussian noise with standard deviation 0.3 (see Figure 1(a); the blue curve is the noise-free signal). We sampled 100 training (blue circles) and testing (red circles) points from Pr and Pr′ respectively. We attempted to model the observations with a degree 1 polynomial. The black dashed line is a best-case scenario, which is shown for reference purposes: it represents the model fit using ordinary least squares (OLS) on the labeled test points. The red line is a second reference result, derived only from the training data via OLS, and predicts the test data very poorly. The other three dashed lines are fit with weighted ordinary least squares (WOLS), using one of three weighting schemes: the ratio of the underlying training and test densities, KMM, and the information criterion of [12]. A summary of the performance over 100 trials is shown in Figure 1(b). Our method outperforms the two other reweighting methods.
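To make (12) and its use in (8) concrete, here is a minimal end-to-end sketch: the KMM weights are obtained from the QP with a generic convex solver (CVXPY is our choice, not the paper's), then plugged into the weighted kernel ridge closed form α = (λβ̄^{-1} + K)^{-1}y. Kernel matrices are PSD only up to numerical noise, so a small jitter is added before quad_form.

```python
import numpy as np
import cvxpy as cp

def gaussian_kernel(A, Bm, sigma):
    # exp(-sigma * ||a - b||^2), matching the kernel form used in the experiments.
    sq = ((A[:, None, :] - Bm[None, :, :]) ** 2).sum(-1)
    return np.exp(-sigma * sq)

def kmm_weights(X_train, X_test, sigma, B=1000.0, eps=None):
    m, m_p = len(X_train), len(X_test)
    if eps is None:
        eps = B / np.sqrt(m)                 # the O(B / sqrt(m)) choice from Lemma 3
    K = gaussian_kernel(X_train, X_train, sigma)
    kappa = (m / m_p) * gaussian_kernel(X_train, X_test, sigma).sum(axis=1)
    beta = cp.Variable(m)
    # QP (12): 0.5 beta'K beta - kappa'beta, box and near-normalization constraints.
    obj = cp.Minimize(0.5 * cp.quad_form(beta, K + 1e-8 * np.eye(m)) - kappa @ beta)
    cons = [beta >= 0, beta <= B, cp.abs(cp.sum(beta) - m) <= m * eps]
    cp.Problem(obj, cons).solve()
    return np.asarray(beta.value)

def weighted_kernel_ridge(X_train, y, beta, sigma, lam):
    # Closed form from (8): alpha = (lam * diag(beta)^{-1} + K)^{-1} y.
    K = gaussian_kernel(X_train, X_train, sigma)
    return np.linalg.solve(lam * np.diag(1.0 / np.maximum(beta, 1e-8)) + K, y)
```

Predictions on new points then follow as gaussian_kernel(X_new, X_train, sigma) @ alpha, exactly as in unweighted kernel ridge regression.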
[Figure 1 about here; panel (a) plots the training/test points and the OLS/WOLS fits against the true curve, panel (b) plots the sum-of-squares loss for the ratio, KMM, min IC, and OLS methods.]
Figure 1: (a) Polynomial models of degree 1 fit with OLS and WOLS; (b) average performances of three WOLS methods and OLS on the test data in (a). Labels are: Ratio for the ratio of test to training density; KMM for our approach; min IC for the approach of [12]; and OLS for the model trained on the labeled test points.

4.2 Real world datasets

We next test our approach on real world data sets, from which we select training examples using a deliberately biased procedure (as in [20, 9]). To describe our biased selection scheme, we need to define an additional random variable s_i for each point in the pool of possible training samples, where s_i = 1 means the i-th sample is included, and s_i = 0 indicates an excluded sample. Two situations are considered: the selection bias corresponds to our assumption regarding the relation between the training and test distributions, and P(s_i = 1|x_i, y_i) = P(s_i|x_i); or s_i is dependent only on y_i, i.e. P(s_i|x_i, y_i) = P(s_i|y_i), which potentially creates a greater challenge since it violates our key assumption 1. In the following, we compare our method (labeled KMM) against two others: a baseline unweighted method (unweighted), in which no modification is made, and a weighting by the inverse of the true sampling distribution (importance sampling), as in [20, 9]. We emphasize, however, that our method does not require any prior knowledge of the true sampling probabilities. In our experiments, we used a Gaussian kernel exp(−σ‖x_i − x_j‖²) in our kernel classification and regression algorithms, and parameters ε = (√m − 1)/√m and B = 1000 in the optimization (12).

[Figure 2 about here, with four panels comparing test error for the unweighted, importance sampling, and KMM approaches: (a) simple bias on features, (b) joint bias on features, (c) bias on labels, and (d) the weights β versus the inverse of the true sampling probabilities.]
Figure 2: Classification performance analysis on the breast cancer dataset from UCI.

4.2.1 Breast Cancer Dataset

This dataset is from the UCI Archive, and is a binary classification task. It includes 699 examples from 2 classes: benign (positive label) and malignant (negative label). The data are randomly split into training and test sets, where the proportion of examples used for training varies from 10% to 50%. Test results are averaged over 30 trials, and were obtained using a support vector classifier with kernel size σ = 0.1. First, we consider a biased sampling scheme based on the input features, of which there are nine, with integer values from 0 to 9. Since smaller feature values predominate in the unbiased data, we sample according to P(s = 1|x ≤ 5) = 0.2 and P(s = 1|x > 5) = 0.8, repeating the experiment for each of the features in turn. Results are an average over 30 random training/test splits, with 1/4 of the data used for training and 3/4 for testing.
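As an illustration of this feature-based selection scheme, a small sketch (synthetic; the integer feature matrix and the RNG usage are our own stand-ins for the UCI attributes):

```python
import numpy as np

def biased_subsample(X, y, feature, rng):
    # P(s = 1 | x <= 5) = 0.2 and P(s = 1 | x > 5) = 0.8, as described above.
    p_keep = np.where(X[:, feature] <= 5, 0.2, 0.8)
    s = rng.random(len(X)) < p_keep
    return X[s], y[s]

rng = np.random.default_rng(0)
X = rng.integers(0, 10, size=(699, 9))   # nine integer features with values in [0, 9]
y = rng.choice([-1, 1], size=699)
X_tr, y_tr = biased_subsample(X, y, feature=0, rng=rng)
```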
Performance is shown in Figure 2(a): we consistently outperform the unweighted method, and match or exceed the performance obtained using the known distribution ratio. Next, we consider a sampling bias that operates jointly across multiple features. We select samples less often when they are further from the sample mean x̄ over the training data, i.e. P(s_i|x_i) ∝ exp(−σ‖x_i − x̄‖²) where σ = 1/20. The performance of our method in Figure 2(b) is again better than in the unweighted case, and as good as or better than reweighting using the sampling model. Finally, we consider a simple biased sampling scheme which depends only on the label y: P(s = 1|y = 1) = 0.1 and P(s = 1|y = −1) = 0.9 (the data have on average twice as many positive as negative examples when uniformly sampled). Average performance for different training/test split proportions is shown in Figure 2(c); remarkably, despite our assumption regarding the difference between the training and test distributions being violated, our method still improves the test performance, and outperforms the reweighting by density ratio for large training set sizes. Figure 2(d) shows that the weights β are proportional to the inverse of the true sampling probabilities: positive examples have higher weights and negative ones have lower weights.

4.2.2 Further Benchmark Datasets

We next compare the performance on further benchmark datasets¹ by selecting training data via various biased sampling schemes. Specifically, for the sampling distribution bias on labels, we use P(s = 1|y) = exp(a + by)/(1 + exp(a + by)) (datasets 1 to 5), or the simple step distribution P(s = 1|y = 1) = a, P(s = 1|y = −1) = b (datasets 6 and 7). For the remaining datasets, we generate biased sampling schemes over their features. We first do PCA, selecting the first principal component of the training data and the corresponding projection values. Denoting the minimum value of the projection as m and the mean as m̄, we apply a normal distribution with mean m + (m̄ − m)/a and variance (m̄ − m)/b as the biased sampling scheme. Please refer to [7] for detailed parameter settings. We use penalized LMS for regression problems and SVM for classification problems. To evaluate generalization performance, we use the normalized mean square error (NMSE), given by

NMSE = (1/n) Σ_{i=1}^n (y_i − μ_i)² / var(y),

where μ_i is the prediction for y_i, for regression problems, and the average test error for classification problems. In 13 out of 23 experiments, our reweighting approach is the most accurate (see Table 1), despite having no prior information about the bias of the test sample (and, in some cases, despite the additional fact that the data reweighting does not conform to our key assumption 1). In addition, KMM always improves test performance compared with the unweighted case. Two additional points should be borne in mind: first, we use the same σ for the kernel mean matching and the SVM, as listed in Table 1. Performance might be improved by decoupling these kernel sizes: indeed, we employ kernels that are somewhat large, suggesting that the KMM procedure is helpful in the case of relatively smooth classification/regression functions. Second, we did not find a performance improvement in the case of data sets with smaller sample sizes. This is not surprising, since a reweighting would further reduce the effective number of points used for training, resulting in insufficient data for learning.

Table 1: Test results for three methods on 18 datasets with different sampling schemes.
The results are averages over 10 trials for regression problems (marked *) and 30 trials for classification problems. We used a Gaussian kernel of size σ for both the kernel mean matching and the SVM/LMS regression, and set B = 1000. The last three columns report NMSE / test error.

DataSet                  σ       n_tr    selected  n_tst   unweighted       importance samp.  KMM
1.  Abalone*             1e-1    2000    853       2177    1.00 ± 0.08      1.1 ± 0.2         0.6 ± 0.1
2.  CA Housing*          1e-1    16512   3470      4128    2.29 ± 0.01      1.72 ± 0.04       1.24 ± 0.09
3.  Delta Ailerons(1)*   1e3     4000    1678      3129    0.51 ± 0.01      0.51 ± 0.01       0.401 ± 0.007
4.  Ailerons*            1e-5    7154    925       6596    1.50 ± 0.06      0.7 ± 0.1         1.2 ± 0.2
5.  haberman(1)          1e-2    150     52        156     0.50 ± 0.09      0.37 ± 0.03       0.30 ± 0.05
6.  USPS(6vs8)(1)        1/128   500     260       1042    0.13 ± 0.18      0.1 ± 0.2         0.1 ± 0.1
7.  USPS(3vs9)(1)        1/128   500     252       1145    0.016 ± 0.006    0.012 ± 0.005     0.013 ± 0.005
8.  Bank8FM*             1e-1    4500    654       3692    0.5 ± 0.1        0.45 ± 0.06       0.47 ± 0.05
9.  Bank32nh*            1e-2    4500    740       3692    23 ± 4.0         19 ± 2            19 ± 2
10. cpu-act*             1e-12   4000    1462      4192    10 ± 1           4.0 ± 0.2         1.9 ± 0.2
11. cpu-small*           1e-12   4000    1488      4192    9 ± 2            4.0 ± 0.2         2.0 ± 0.5
12. Delta Ailerons(2)*   1e3     4000    634       3129    2 ± 2            1.5 ± 1.5         1.7 ± 0.9
13. Boston house*        1e-4    300     108       206     0.8 ± 0.2        0.74 ± 0.09       0.76 ± 0.07
14. kin8nm*              1e-1    5000    428       3192    0.85 ± 0.2       0.81 ± 0.1        0.81 ± 0.2
15. puma8nh*             1e-1    4499    823       3693    1.1 ± 0.1        0.77 ± 0.05       0.83 ± 0.03
16. haberman(2)          1e-2    150     90        156     0.27 ± 0.01      0.39 ± 0.04       0.25 ± 0.2
17. USPS(6vs8)(2)        1/128   500     156       1042    0.23 ± 0.2       0.23 ± 0.2        0.16 ± 0.08
18. USPS(6vs8)(3)        1/128   500     104       1042    0.54 ± 0.0002    0.5 ± 0.2         0.16 ± 0.04
19. USPS(3vs9)(2)        1/128   500     252       1145    0.46 ± 0.09      0.5 ± 0.2         0.2 ± 0.1
20. Breast Cancer        1e-1    280     96        419     0.05 ± 0.01      0.036 ± 0.005     0.033 ± 0.004
21. India diabetes       1e-4    200     97        568     0.32 ± 0.02      0.30 ± 0.02       0.30 ± 0.02
22. ionosphere           1e-1    150     64        201     0.32 ± 0.06      0.31 ± 0.07       0.28 ± 0.06
23. German credit        1e-4    400     214       600     0.283 ± 0.004    0.282 ± 0.004     0.280 ± 0.004

4.2.3 Tumor Diagnosis using Microarrays

Our next benchmark is a dataset of 102 microarrays from prostate cancer patients [13]. Each of these microarrays measures the expression levels of 12,600 genes. The dataset comprises 50 samples from normal tissues (positive label) and 52 from tumor tissues (negative label). We simulate the realistic scenario that two sets of microarrays A and B are given with dissimilar proportions of tumor samples, and we want to perform cancer diagnosis via classification, training on A and predicting on B. We select training examples via the biased selection scheme P(s = 1|y = 1) = 0.85 and P(s = 1|y = −1) = 0.15. The remaining data points form the test set. We then perform SVM classification for the unweighted, KMM, and importance sampling approaches. The experiment was repeated over 500 independent draws from the dataset according to our biased scheme; the 500 resulting test errors are plotted in [7]. The KMM achieves much higher accuracy levels than the unweighted approach, and is very close to the importance sampling approach. We study a very similar scenario on two breast cancer microarray datasets from [4] and [19], measuring the expression levels of 2,166 common genes for normal and cancer patients [18]. We train an SVM on one of them and test on the other. Our reweighting method achieves a significant improvement in classification accuracy over the unweighted SVM (see [7]).

¹ Regression data from http://www.liacc.up.pt/~ltorgo/Regression/DataSets.html; classification data from UCI. Sets with numbers in brackets are examined by different sampling schemes.
Hence our method promises to be a valuable tool for cross-platform microarray classification.

Acknowledgements: The authors thank Patrick Warnat (DKFZ, Heidelberg) for providing the microarray datasets, and Olivier Chapelle and Matthias Hein for helpful discussions. The work is partially supported by the BMBF under grant 031U112F within the BFAM project, which is part of the German Genome Analysis Network. NICTA is funded through the Australian Government's Backing Australia's Ability initiative, in part through the ARC. This work was supported in part by the IST Programme of the EC, under the PASCAL Network of Excellence, IST-2002-506778.

References
[1] G. Casella and R. Berger. Statistical Inference. Duxbury, Pacific Grove, CA, 2nd edition, 2002.
[2] M. Dudik, R. E. Schapire, and S. J. Phillips. Correcting sample selection bias in maximum entropy density estimation. In Advances in Neural Information Processing Systems 17, 2005.
[3] A. Gretton, K. Borgwardt, M. Rasch, B. Schölkopf, and A. Smola. A kernel method for the two-sample problem. In NIPS. MIT Press, 2006.
[4] S. Gruvberger, M. Ringner, Y. Chen, S. Panavally, L. H. Saal, C. Peterson, A. Borg, M. Ferno, and P. S. Meltzer. Estrogen receptor status in breast cancer is associated with remarkably distinct gene expression patterns. Cancer Research, 61, 2001.
[5] J. Heckman. Sample selection bias as a specification error. Econometrica, 47(1):153–161, 1979.
[6] W. Hoeffding. Probability inequalities for sums of bounded random variables. Journal of the American Statistical Association, 58:13–30, 1963.
[7] J. Huang, A. Smola, A. Gretton, K. Borgwardt, and B. Schölkopf. Correcting sample selection bias by unlabeled data. Technical Report CS-2006-44, University of Waterloo, 2006.
[8] Y. Lin, Y. Lee, and G. Wahba. Support vector machines for classification in nonstandard situations. Machine Learning, 46:191–202, 2002.
[9] S. Rosset, J. Zhu, H. Zou, and T. Hastie. A method for inferring label sampling mechanisms in semi-supervised learning. In Advances in Neural Information Processing Systems 17, 2004.
[10] M. Schmidt and H. Gish. Speaker identification via support vector classifiers. In Proc. ICASSP '96, pages 105–108, Atlanta, GA, May 1996.
[11] B. Schölkopf, J. Platt, J. Shawe-Taylor, A. J. Smola, and R. C. Williamson. Estimating the support of a high-dimensional distribution. Neural Computation, 13(7):1443–1471, 2001.
[12] H. Shimodaira. Improving predictive inference under covariate shift by weighting the log-likelihood function. Journal of Statistical Planning and Inference, 90, 2000.
[13] D. Singh, P. Febbo, K. Ross, D. Jackson, J. Manola, C. Ladd, P. Tamayo, A. Renshaw, A. D'Amico, and J. Richie. Gene expression correlates of clinical prostate cancer behavior. Cancer Cell, 1(2), 2002.
[14] I. Steinwart. On the influence of the kernel on the consistency of support vector machines. Journal of Machine Learning Research, 2:67–93, 2002.
[15] I. Steinwart. Support vector machines are universally consistent. J. Compl., 18:768–791, 2002.
[16] M. Sugiyama and K.-R. Müller. Input-dependent estimation of generalization error under covariate shift. Statistics and Decisions, 23:249–279, 2005.
[17] I. Tsochantaridis, T. Joachims, T. Hofmann, and Y. Altun. Large margin methods for structured and interdependent output variables. Journal of Machine Learning Research, 2005.
[18] P. Warnat, R. Eils, and B. Brors. Cross-platform analysis of cancer microarray data improves gene expression based classification of phenotypes.
BMC Bioinformatics, 6:265, Nov 2005.
[19] M. West, C. Blanchette, H. Dressman, E. Huang, S. Ishida, R. Spang, H. Zuzan, J. A. Olson Jr, J. R. Marks, and J. R. Nevins. Predicting the clinical status of human breast cancer by using gene expression profiles. PNAS, 98(20), 2001.
[20] B. Zadrozny. Learning and evaluating classifiers under sample selection bias. In International Conference on Machine Learning ICML'04, 2004.
A Complexity-Distortion Approach to Joint Pattern Alignment

Andrea Vedaldi, Stefano Soatto
Department of Computer Science, University of California at Los Angeles, Los Angeles, CA 90035
{vedaldi,soatto}@cs.ucla.edu

Abstract

Image Congealing (IC) is a non-parametric method for the joint alignment of a collection of images affected by systematic and unwanted deformations. The method attempts to undo the deformations by minimizing a measure of complexity of the image ensemble, such as the averaged per-pixel entropy. This enables alignment without an explicit model of the aligned dataset, as required by other methods (e.g. transformed component analysis). While IC is simple and general, it may introduce degenerate solutions when the transformations allow minimizing the complexity of the data by collapsing them to a constant. Such solutions need to be explicitly removed by regularization. In this paper we propose an alternative formulation which solves this regularization issue on more principled grounds. We make the simple observation that alignment should simplify the data while preserving the useful information carried by them. Therefore we trade off fidelity and complexity of the aligned ensemble rather than minimizing the complexity alone. This eliminates the need for an explicit regularization of the transformations, and has a number of other useful properties such as noise suppression. We show the modeling and computational benefits of the approach on some of the problems on which IC has been demonstrated.

1 Introduction

Joint pattern alignment attempts to remove from an ensemble of patterns the effect of nuisance transformations of a systematic nature. The aligned patterns then have a simpler structure and can be processed more easily. Joint pattern alignment is not the same problem as aligning one pattern to another; instead, all the patterns are projected to a common "reference" (usually a subspace) which is unknown and needs to be discovered in the process. Joint pattern alignment is useful in many applications and has been addressed by several authors. Here we review only the methods that are most closely related to the present work.

Transformed Component Analysis [7] (TCA) explicitly models the aligned ensemble as a Gaussian linear subspace of patterns. In fact, TCA is a direct extension of Probabilistic Principal Component Analysis (PPCA) [10]: patterns are generated as in standard PPCA, and additional hidden layers model the nuisance deformations. Expectation-maximization is used to learn the model from the data, which results in their alignment. Unfortunately, the method requires the space of transformations to be quantized, and it is not clear how well the approach could scale to complex scenarios.

Image Congealing (IC) [9] takes a different perspective. The idea is that, as the nuisance deformations should increase the complexity of the data, one should be able to identify and undo them by contrasting this effect. Thus IC transforms the data to minimize an appropriate measure of the "complexity" of the ensemble. With respect to TCA, IC results in a lighter formulation which enables addressing more complex transformations and makes fewer assumptions on the aligned ensemble. An issue with the standard formulation of IC is that it does not require the aligned data to be a faithful representation of the original data. Thus simplifying the data might not only remove the nuisance factors, but also the useful information carried by the patterns.
For example, if entropy is used to measure complexity, a typical degenerate solution is obtained by mapping all the data to a constant, which results in minimum (null) entropy. Such solutions are avoided by explicitly regularizing the transformations, in ways that are however rather arbitrary [9]. One should instead search for an optimal compromise between the complexity of the simplified data and the preservation of the useful information (Sect. 2). This approach is not only more direct, but also conceptually more straightforward, as no ad hoc regularization needs to be introduced. We illustrate some of its relationships with rate-distortion theory (Sect. 2.1) and information bottleneck [2] (Sect. 2.2), and we contrast it to IC (Sect. 2.4). In Sect. 3 we specialize our model to the problem of image alignment as done in [9]. For this case, we show that the new model has the same computational complexity as IC (Sect. 3.1). We also show that a Gauss-Newton based algorithm is possible, which is useful to converge quickly during the final stage of the optimization (Sect. 3.2; in a similar context a descent based algorithm was introduced in [1]). In Sect. 4 we illustrate the practical behavior of the algorithm, showing how the complexity-distortion compromise affects the final solution. In particular, our results compare favorably with the ones of [9], with added simplicity and other benefits, such as noise suppression.

2 Problem formulation

We formulate joint pattern alignment as the problem of finding a deformed pattern ensemble which is simpler than, but faithful to, the original data. This is similar to a lossy compression problem [5, 4, 3] and is in fact equivalent to it in some cases (Sect. 2.1). A pattern (or data) ensemble x ∈ X is a random variable with density p(x). Similarly, an aligned ensemble or alignment y ∈ X of the ensemble x is another variable y with conditional statistic p(y|x). We seek an alignment that is "simpler" than x but "faithful" to x. The complexity R of the alignment y is measured by an operator R = H(y), such as, for example, the entropy of the random variable y (but we will see other options). The cost of representing x by y is expressed by a distortion function d(x, y) ∈ R₊, and the faithfulness of the alignment y is quantified as the expected distortion D = E[d(x, y)].

Consider a class W of deformations w : X → X acting on the patterns X. In order for the alignment y to factor out W, we consider a distortion function which is invariant to the action of W; in particular, given a base distortion d₀(x, y), we consider the deformation-invariant distortion

d(x, y) = min_{w∈W} d₀(x, w(y)).

Thus an aligned pattern y is faithful to a deformed pattern x if it is possible to map y to x by a nuisance deformation w. Figuring out the best alignment y boils down to optimizing p(y|x) for complexity and distortion. However, this requires trading off complexity and distortion, and there is no unique way of doing so. The distortion-complexity function D(R) gives the best distortion D that can be achieved by alignments of complexity R. All such distortion-optimal alignments are equally good in principle, and it is the application that poses an upper bound on the acceptable distortion. D(R) can be computed by optimizing the distortion D w.r.t. p(y|x) while keeping the complexity R constant. However, it is usually easier to optimize the Lagrangian

min_{p(y|x)} D + λR,  (1)

whose optimum is attained where the derivative of D(R) equals −λ. Then by varying λ
one spans the graph of D(R) and finds all the optimal alignments for given complexities.

2.1 Relation to rate-distortion and entropy constrained vector quantization

If one chooses the mutual information I(x, y) as the complexity measure H(y) in eq. (1), then (1) becomes a rate-distortion problem and the function D(R) a rate-distortion function [5]. The formulation is valid both for discrete and continuous spaces X, but yields a mapping p(y|x) that is genuinely stochastic. Therefore the alignment y of a pattern x is in general not unique. This is because in rate-distortion theory y is an auxiliary variable used to derive a deterministic code for long sequences (x₁, …, x_n) of data, not for data x in isolation. In contrast, entropy-constrained vector quantization [4, 3] assumes that y is finite (i.e. that it spans a finite subset of X) and that it is functionally determined by x (i.e. y = y(x)). It then measures the complexity of y as the (discrete) entropy H(y). This is analogous to a rate-distortion problem, except that one searches for a "single letter" optimal coding y of x rather than an optimal coding for long sequences (x₁, …, x_n). Unlike rate-distortion, however, the aligned ensemble y is discrete even if the ensemble x is continuous.

2.2 Relation to information bottleneck

Information Bottleneck (IB) [2] is a special rate-distortion problem in which one compresses a variable x while preserving the information carried by x about another variable z, representing the task of interest. In this sense IB is similar to the idea proposed here. By designing an appropriate distribution p(x, z) it may also be possible to obtain an alignment effect similar to the one we seek here. For example, if W is a group of transformations, one may define z = z(x) = {w(x) : w ∈ W}, for which z is indifferent exactly to the deformations w of x.

2.3 Alternative measures of complexity

Instead of the entropy H(y) or the mutual information I(x, y) we can use alternative measures of complexity that yield more convenient computations. An example is the averaged per-pixel entropy introduced by IC [9] and discussed in Sect. 3. Generalizing this idea, we assume that the aligned data y depend functionally on the patterns x (i.e. y = y(x)) and we express the complexity of y as the total entropy of lower-dimensional projections π₁(y), …, π_M(y), π_i : X → R^k of the ensemble. Distortions and entropies are estimated empirically and non-parametrically. Concretely, given an ensemble x₁, …, x_K ∈ X of patterns, we recover transformations w₁, …, w_K ∈ W and aligned patterns y₁, …, y_K ∈ X that minimize

(1/K) Σ_{i=1}^K d(x_i, w_i(y_i)) − λ (1/K) Σ_{i=1}^K Σ_{j=1}^M log p_j(π_j(y_i)),

where the densities p_j(π_j(y)) are estimated from the samples π_j(y₁), …, π_j(y_K) by histogramming (discrete case) or by a Parzen estimator [6] with Gaussian kernel g_σ(y) of variance σ (continuous case¹), i.e.

p_j(π_j(y)) = (1/N) Σ_{i=1}^N g_σ(π_j(y) − π_j(y_i)).

(¹ The Parzen estimator implies that the differential entropy of the distributions p_j is always lower bounded by the entropy of the kernel g_σ. This prevents the differential entropy from taking arbitrarily small negative values.)

2.4 Comparison to image congealing

In IC [9], given data x₁, …, x_K ∈ X, one looks for transformations v : X → X, x ↦ y, such that the density p(y) estimated from the samples y₁ = v₁(x₁), …, y_K = v_K(x_K) has minimum entropy. If the transformations make it possible, one can minimize the entropy by mapping all the patterns to a constant; to avoid this one considers the regularized cost function

H(y) + λ Σ_i R(v_i),  (2)
2.4 Comparison to image congealing

In IC [9], given data x_1, ..., x_K ∈ X, one looks for transformations v : X → X, x ↦ y, such that the density p(y) estimated from the samples y_1 = v_1(x_1), ..., y_K = v_K(x_K) has minimum entropy. If the transformations allow it, one can minimize the entropy by mapping all the patterns to a constant; to avoid this, one considers the regularized cost function

    H(y) + λ Σ_i R(v_i),    (2)

where R(v) is a term penalizing unacceptable deformations. Compared to IC, in our formulation:

- The distortion term E[d(x, y)] substitutes the arbitrary regularization R(v).
- The aligned patterns y are not obtained by deforming the patterns x; instead, y is obtained as a simplification of x within an acceptable level of distortion. This fact induces a noise-cancellation effect (Sect. 4).
- The transformations w can be rather general, even non-invertible. IC can use complex transformations too, but these would most likely need to be heavily regularized, as they would tend to annihilate the patterns.

3 Application to joint image alignment

We apply our model to the problem of removing a family of geometric distortions from images. This is the same application for which IC [7] was proposed in the first place. We are given a set I_1(x), ..., I_K(x) of digital images (the pattern ensemble) defined on a regular lattice x ∈ Ω ⊂ R² and with range in [0, 1]. The images may be affected by parametric transformations w_i(·) = w(·; q_i) : R² → R², so that

    I_i(x) = T_i(w_i x) + n_i(x),  x ∈ Ω,

for templates T_i(y), y ∈ Ω, and residuals n_i(x). (With respect to Sect. 2, the patterns x_i are now the images I_i and the alignments y are the templates T_i.) Here q_i is the vector of parameters of the transformation w_i (for example, w_i might be a 2-D affine transformation y = Lx + l and q_i the vector q = [L11 L21 L12 L22 l1 l2]). The templates T_i(y), y ∈ Ω are digital images themselves. In order to define T_i(wx) when wx ∉ Ω, bilinear interpolation and zero-padding are used. Therefore the symbol T_i(w_i x) really denotes the quantity

    T(w_i x) = A(x; w_i) T_i,  x ∈ Ω,

where A(x; w_i) is a row vector of mixing coefficients determined by w_i and the interpolation method being used, and T_i is the vector obtained by stacking the pixels of the template T_i(y), y ∈ Ω. We will also use the notation w_i ∘ T_i = A(w_i) T_i, where the left hand side is the stacking of the warped template T(w_i x), x ∈ Ω, and A(w_i) is the matrix whose rows are the vectors A(x; w_i) for x ∈ Ω.

The distortion is defined to be the squared l2 norm of the residual, d(I_i, w ∘ T_i) = Σ_{x∈Ω} (I_i(x) − T_i(w_i x))². The complexity of the aligned ensemble T(y), y ∈ Ω is computed as in Sect. 2.3 by projecting on the image pixels and averaging their entropies (this is equivalent to assuming that the pixels are statistically independent). For each pixel y ∈ Ω a density p(T(y) = t), t ∈ [0, 1], is estimated non-parametrically from the data {T_1(y), ..., T_K(y)} (we use a Parzen window as explained in Sect. 2.3). The complexity of a pixel is thus

    H(T(y)) = −(1/K) Σ_{i=1}^K log p(T_i(y)).

Finally the overall cost function is obtained by summing over all pixels and averaging over all images:

    L(w_1, ..., w_K, T_1, ..., T_K) = (1/K) Σ_{i=1}^K Σ_{x∈Ω} (I_i(x) − T_i(w_i x))² − (λ/K) Σ_{i=1}^K Σ_{y∈Ω} log p(T_i(y)).    (3)

3.1 Basic search

In this section we show how the optimization algorithm from [7] can be adapted to work with the new formulation. This algorithm is a simple coordinate descent in the dimensions of the search space (a sketch of the loop is given after the steps):

1: Estimate the probabilities p(T(y)), y ∈ Ω, from the templates {T_i(y) : i = 1, ..., K}.
2: For each pattern i = 1, ..., K and for each component q_{ji} of the parameter vector q_i, try a few values of q_{ji}. For each value re-compute the cost function (3) and keep the best.
3: Repeat, refining the sampling step of the parameters.
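To make the control flow concrete, here is a minimal Python sketch of the coordinate loop. The function cost, the step schedule (step, shrink), and the array layout of the warp parameters are illustrative assumptions; in particular, cost stands for an implementation of eq. (3).

    import numpy as np

    def basic_search(images, q, cost, n_sweeps=10, step=0.1, shrink=0.5):
        """Coordinate search over warp parameters (Sect. 3.1, steps 1-3).

        images : the K input images; q : (K, P) array of warp parameters;
        cost   : callable cost(images, q) implementing eq. (3) (assumed given).
        """
        best = cost(images, q)
        for _ in range(n_sweeps):
            for i in range(q.shape[0]):           # each pattern ...
                for j in range(q.shape[1]):       # ... and each parameter q_ji
                    for delta in (-step, step):   # try a few nearby values
                        trial = q.copy()
                        trial[i, j] += delta
                        c = cost(images, trial)
                        if c < best:
                            best, q = c, trial
            step *= shrink                        # refine the sampling step
        return q, best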
This algorithm is appropriate if the dimensionality of the parameter vector q is reasonably small. Here we consider affine transformations for the sake of illustration, so that q is six-dimensional. In steps (1) and (2), estimating the probabilities p(T_i(y)) and the cost function L(w_1, ..., w_K, T_1, ..., T_K) requires knowing T_i(y). As a first order approximation (the final result will be refined by Gauss-Newton, as explained in the next section), we bypass this problem and simply set T_i = w_i^{-1} ∘ I_i, exploiting the fact that the affine transformations w_i are invertible. (Our criterion implicitly avoids non-invertible affine transformations, as they yield highly distorted codes.) Eventually all we do is substitute the regularization term Σ_i R(v_i) of [9] with the expected distortion

    (1/K) Σ_{i=1}^K Σ_{x∈Ω} (I_i(x) − w_i ∘ (w_i^{-1} ∘ I_i)(x))² = (1/K) Σ_{i=1}^K Σ_{x∈Ω} (I_i(x) − A(x; w_i) A(w_i^{-1}) I_i)².

Note that warping and un-warping the image I_i is a lossy operation even if w_i is bijective, because the transformation, applied to digital images, introduces aliasing. Thus the new algorithm simply avoids those transformations w_i that would introduce an excessive loss of fidelity.

3.2 Gauss-Newton search

With respect to IC, where only the transformations w_1, ..., w_K are estimated, here we compute the templates T_1, ..., T_K as well. While this might not be so important when a coarse approximation to the solution has to be found (for which the algorithm of Sect. 3.1 can be used), it must be taken into account to get refined results. This can be done (with a bit of numerical care) by Gauss-Newton (GN). Applying Gauss-Newton requires taking derivatives with respect to the pixel values T_i(y). We exploit the fact that the variables T(y) are continuous, as opposed to [9]. We still process a single image at a time, reiterating several times across the whole ensemble {I_1(x), ..., I_K(x)}. For a given image I_i we update the warp parameters q_i and the template T_i simultaneously. We exploit the fact that, as the number K of images is usually big, the density p(T(y)) does not change significantly when only one of the templates T_i is being changed. Therefore p(T(y)) can be assumed constant in the computation of the gradient and the Hessian of the cost function (3). The gradient is given (up to the common factor 1/K) by

    ∂L/∂T_i(y) = Σ_{x∈Ω} 2 ε_i(x) (A(x; w_i) δ_y) − λ ṗ(T_i(y)) / p(T_i(y)),
    ∂L/∂q_i = Σ_{x∈Ω} 2 ε_i(x) ∇T_i(w_i x) (∂w_i/∂q_i⊤)(x),

where ε_i(x) = T_i(w_i x) − I_i(x) is the reconstruction residual, A(x; w_i) is the linear map introduced in Sect. 3, and δ_y = δ(z − y) is the 2-D discrete delta function centered on y, encoded as a vector. The approximated Hessian of the cost function (3) can be obtained as follows. First, we use the Gauss-Newton approximation for the derivative w.r.t. the transformation parameters q_i:

    ∂²L/∂q_i∂q_i⊤ ≈ Σ_{x∈Ω} 2 (∂w_i⊤/∂q_i)(x) ∇⊤T_i(w_i x) ∇T_i(w_i x) (∂w_i/∂q_i⊤)(x).

We then have

    ∂²L/∂T_i(y)² = Σ_{x∈Ω} 2 (A(x; w_i) δ_y)² − λ [p̈(T_i(y)) p(T_i(y)) − ṗ(T_i(y))²] / p(T_i(y))²,
    ∂²L/∂T_i(y)∂T_i(z) = Σ_{x∈Ω} 2 (A(x; w_i) δ_y)(A(x; w_i) δ_z),
    ∂²L/∂T_i(y)∂q_i⊤ = Σ_{x∈Ω} 2 (A(x; w_i) δ_y) ∇T_i(w_i x) (∂w_i/∂q_i⊤)(x) + Σ_{x∈Ω} 2 ε_i(x) A(x; w_i) [D1 δ_y  D2 δ_y] (∂w_i/∂q_i⊤)(x),

where D1 is the discrete linear operator used to compute the derivative of T_i(y) along its first dimension and D2 the analogous operator for the second dimension. The second term of the last equation gives a very small contribution and can be dropped. The equations are all straightforward and result in a linear system

    (∂²L/∂θ∂θ⊤) δθ = −∂L/∂θ⊤,

where the vector θ⊤ = [q⊤ T(y_1) ... T(y_n)] has size in the order of the number of pixels of the template T(y), y ∈ Ω. While this system is large, it is also extremely sparse and can be solved rather efficiently by standard methods [8].
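The resulting update can be summarized in a few lines. The sketch below shows the shape of one Gauss-Newton step, assuming the gradient and the approximated Hessian have already been assembled as above; the Levenberg-style damping term is our own numerical safeguard, not part of the derivation.

    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    def gauss_newton_step(theta, grad, hess, damping=1e-6):
        """One damped Gauss-Newton update for theta = [q; T(y_1); ...; T(y_n)].

        grad : gradient dL/dtheta as a dense vector, shape (n,);
        hess : approximated Hessian as a scipy.sparse matrix, shape (n, n).
        """
        H = (hess + damping * sp.eye(theta.size)).tocsc()
        delta = spla.spsolve(H, -grad)    # exploit the sparsity of the system
        return theta + delta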
Figure 1: Toy example. Top left: we distort the patterns by applying translations drawn uniformly from the 8-shaped region (the center corresponds to the null translation). Top: the gradient based algorithm gradually aligns the patterns by reducing the complexity of the alignment y. Dark areas correspond to high values of the density of the alignment; we also superimpose the trajectory of one of the patterns. Unfortunately the gradient based algorithm, being a local technique, gets trapped in two local modes (the modes can, however, be fused in a post-processing stage). Bottom: the basic algorithm completely eliminates the effect of the nuisance transformations, doing a better job of avoiding local minima. Although for this simple problem the basic search is more effective, on more difficult scenarios the extra complexity of the Gauss-Newton search pays off (see Sect. 4).

4 Experiments

The first experiment (Fig. 1) is a toy problem illustrating our method. We collect K patterns x_i, i = 1, ..., K, which are arrays of M 2-D points x_i = (x_{1i}, ..., x_{Mi}). Such points are generated by drawing M i.i.d. samples from a 2-D Gaussian distribution and adding a random translation w_i ∈ R² to them. The distribution of the translations w_i is generic (in the example w_i is drawn uniformly from an 8-shaped region of the plane): this is not a problem, as we do not need to make any particular assumptions on w besides that it is a translation. The distortion d(x_i, y_i) is simply the sum of the squared Euclidean distances Σ_{j=1}^M ||y_{ji} + w_i − x_{ji}||² between the patterns x_i and the transformed codes w_i(y_i) = (y_{1i} + w_i, ..., y_{Mi} + w_i). The distribution p(y_i) of the codes is assumed to factorize as p(y_i) = Π_{j=1}^M p(y_{ji}), where the p(y_{ji}) are identical densities estimated by Parzen window from all the available samples {y_{ji}, j = 1, ..., M, i = 1, ..., K}.

In the second experiment (Fig. 2) we align hand-written digits extracted from the NIST Special Database 19. The results (Fig. 3) should be compared to the ones from [9]: they are of analogous quality, but they were achieved without regularizing the class of admissible transformations. Despite this, we did not observe any of the aligned patterns collapse. In Fig. 4 we show the effect of choosing different values of the parameter λ in the cost function (3). As λ is increased, the alignment complexity is reduced and the fidelity of the alignment is degraded. By an appropriate choice of λ, the alignment can be regarded as a "restoration" or "canonization" of the pattern which abstracts from the details of the specific instance.
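For readers who want to reproduce the flavor of the first experiment, here is a minimal sketch of the data generation and the distortion term. Replacing the 8-shaped translation region by a noisy ring is our own simplification for brevity.

    import numpy as np

    rng = np.random.default_rng(0)
    K, M = 200, 5                             # K patterns of M 2-D points each

    def nuisance_translation():
        # stand-in for the paper's 8-shaped region: a noisy unit ring
        a = rng.uniform(0.0, 2.0 * np.pi)
        r = 1.0 + 0.1 * rng.standard_normal()
        return r * np.array([np.cos(a), np.sin(a)])

    W = np.stack([nuisance_translation() for _ in range(K)])   # (K, 2)
    X = rng.standard_normal((K, M, 2)) + W[:, None, :]         # translated clouds

    def distortion(X, Y, W):
        """d = sum_i sum_j ||y_ji + w_i - x_ji||^2 (first experiment, Sect. 4)."""
        return float(np.sum((Y + W[:, None, :] - X) ** 2))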
[Figure 2 shows, for each algorithm, image panels of the expected value per pixel, the entropy per pixel (as a 2-D map and a 3-D plot), the distortion-rate diagram, p(T(y)) along the middle scan line, and the distortion per pixel; only the caption is reproduced here.]

Figure 2: Basic vs. GN image alignment algorithms. Left: results of applying the basic image alignment algorithm of Sect. 3.1. The patterns are zeroes from the NIST Special Database 19. We show, in reading order: the expected value E[T(y)]; the per-pixel entropy H(T(y)) (it can be negative as it is differential); a 3-D plot of the same function H(T(y)); the distortion-complexity diagram as the algorithm minimizes the function D + λR (in green we show some lines of constant cost); the probability p(T(y) = l) as l ∈ [0, 1] and y varies along the middle scan line; and the per-pixel distortion D(x) = E[(I(x) − T(wx))²]. Right: the GN algorithm of Sect. 3.2. The algorithm achieves a significantly better solution in terms of the cost function (3). Moreover, GN converges in only two sweeps of the dataset, while the basic algorithm is still moving slowly after 10 sweeps. This is because GN selects both the best search direction and the step size, resulting in a more efficient search strategy.

Figure 3: Aligned patterns. Left: a few patterns from the NIST Special Database 19. Middle: basic algorithm; results are very similar to [9], except that no regularization on the transformations is used. Right: GN algorithm; the patterns achieve a better alignment due to the more efficient search strategy, and they also appear much more "regular" due to the noise-cancellation effect discussed in Fig. 4. Bottom: more examples of patterns before and after GN alignment.

5 Conclusions

IC is a useful algorithm for joint pattern alignment, both robust and flexible. In this paper we showed that the original formulation can be improved by realizing that alignment should result in a simplified representation of the useful information carried by the patterns, rather than a simplification of the patterns themselves. This results in a formulation that does not require inventing regularization terms in order to prevent degenerate solutions. We also showed that Gauss-Newton can be successfully applied to this problem in the case of image alignment, and that this is in some regards more effective than the original IC algorithm.

[Figure 4 shows three panels: (a) the distortion-complexity diagram, (b) the unaligned patterns, and (c) the aligned patterns; only the caption is reproduced here.]

Figure 4: Distortion-complexity balance. We illustrate the effect of varying the parameter λ in (3). (a) Estimated distortion-complexity function D(R). The green (dashed) lines have slope equal to λ and should be tangent to D(R) (Sect. 2). (b) We show the alignment T(w_i x) of eight patterns (rows) as λ is increased (columns). In order to reduce the entropy of the alignment, the algorithm "forgets" specific details of each glyph. (c) The same as (b), but aligned.

Acknowledgments

We would like to acknowledge the support of AFOSR FA9550-06-1-0138 and ONR N00014-03-10850.

References
[1] P. Ahammad, C. L. Harmon, A. Hammonds, S. S. Sastry, and G. M. Rubin. Joint nonparametric alignment for analyzing spatial gene expression patterns in Drosophila imaginal discs. In Proc. CVPR, 2005.
[2] K. Branson. The information bottleneck method. Lecture slides, 2003.
[3] J. Buhmann and H. Kühnel. Vector quantization with complexity costs. IEEE Trans. on Information Theory, 39, 1993.
[4] P. A. Chou, T. Lookabaugh, and R. M. Gray. Entropy-constrained vector quantization. IEEE Trans. on Acoustics, Speech, and Signal Processing, 37(1), 1989.
[5] T. M. Cover and J. A. Thomas. Elements of Information Theory. Wiley, 2006.
[6] R. O. Duda, P. E. Hart, and D. G. Stork. Pattern Classification. Wiley-Interscience, 2001.
[7] B. J. Frey and N. Jojic. Transformation-invariant clustering and dimensionality reduction using EM. PAMI, 2000.
[8] G. H. Golub and C. F. Van Loan. Matrix Computations. The Johns Hopkins University Press, 1996.
[9] E. G. Learned-Miller. Data driven image models through continuous joint alignment. PAMI, 28(2), 2006.
[10] M. E. Tipping and C. M. Bishop. Probabilistic principal component analysis. Journal of the Royal Statistical Society, Series B, 61(3), 1999.
Learning annotated hierarchies from relational data

Daniel M. Roy, Charles Kemp, Vikash K. Mansinghka, and Joshua B. Tenenbaum
CSAIL, Dept. of Brain & Cognitive Sciences, MIT, Cambridge, MA 02139
{droy, ckemp, vkm, jbt}@mit.edu

Abstract

The objects in many real-world domains can be organized into hierarchies, where each internal node picks out a category of objects. Given a collection of features and relations defined over a set of objects, an annotated hierarchy includes a specification of the categories that are most useful for describing each individual feature and relation. We define a generative model for annotated hierarchies and the features and relations that they describe, and develop a Markov chain Monte Carlo scheme for learning annotated hierarchies. We show that our model discovers interpretable structure in several real-world data sets.

1 Introduction

Researchers in AI and cognitive science [1, 7] have proposed that hierarchies are useful for representing and reasoning about the objects in many real-world domains. One of the reasons that hierarchies are valuable is that they compactly specify categories at many levels of resolution, each node representing the category of objects at the leaves below the node. Consider, for example, the simple hierarchy shown in Figure 1a, which picks out five categories relevant to a typical university department: employees, staff, faculty, professors, and assistant professors. Suppose that we are given a large data set describing the features of these employees and the interactions among these employees. Each of the five categories will account for some aspects of the data, but different categories will be needed for understanding different features and relations. "Faculty," for example, is the single most useful category for describing the employees that publish papers (Figure 1b), but three categories may be needed to describe the social interactions among the employees (Figure 1c). In order to understand the structure of the department, it is important not only to understand the hierarchical organization of the employees, but to understand which levels in the hierarchy are appropriate for describing each feature and each relation.

Suppose, then, that an annotated hierarchy is a hierarchy along with a specification of the categories in the hierarchy that are relevant to each feature and relation. The idea of an annotated hierarchy is one of the oldest proposals in cognitive science, and researchers including Collins and Quillian [1] and Keil [7] have argued that semantic knowledge is organized into representations with this form. Previous treatments of annotated hierarchies, however, usually suffer from two limitations. First, annotated hierarchies are usually hand-engineered, and there are few proposals describing how they might be learned from data. Second, annotated hierarchies typically capture knowledge only about the features of objects: relations between objects are rarely considered. We address both problems by defining a generative model for objects, features, relations, and hierarchies, and showing how it can be used to recover an annotated hierarchy from raw data.

Our generative model for feature data assumes that the objects are located at the leaves of a rooted tree, and that each feature is generated from a partition of the objects "consistent" with the hierarchy. A tree-consistent partition (henceforth, t-c partition) of the objects is a partition of the objects into disjoint categories, i.e.
each class in the partition is exactly the set of leaves descending from some node in the tree. Therefore, a t-c partition can be uniquely encoded as the set of these nodes whose leaf descendants comprise the classes (Figure 1a,b). The simplest t-c partition is the singleton set containing the root node, which places all objects into a single class. The most complex t-c partition is the set of all leaves, which assigns each object to its own class.

[Figure 1 shows (a) the department hierarchy with the labeled categories Employees (E), Staff (S), Faculty (F), Professors (P), and Assistant Profs (A); (b) three binary feature matrices (Publishes, Has Direct-Deposit, Tenured); and (c) three binary relation matrices (friends with, works with, orders around); only the caption is reproduced here.]

Figure 1: (a) A hierarchy over 15 members of a university department: 5 staff members, 5 professors and 5 assistant professors. (b) Three binary features, each of which is associated with a different t-c partition of the objects. Each class in each partition is labeled with the corresponding node in the tree. (c) Three binary relations, each of which is associated with a different t-c partition of the set of object pairs. Each class in each partition is labeled with the corresponding pair of nodes.

We assume that the features of objects in different classes are independent, but that objects in the same class tend to have similar features. Therefore, finding the categories in the tree most relevant to a feature can be formalized as finding the simplest t-c partition that best accounts for the distribution of the feature (Figure 1b). We define an annotated hierarchy as a hierarchy together with a t-c partition for each feature.

Although most discussions of annotated hierarchies focus on features, much of the data available to human learners comes in the form of relations. Understanding the structure of social groups, for instance, involves inferences about relations like admires(·, ·), friend-of(·, ·) and brother-of(·, ·). Like the feature case, our generative model for relational data assumes that each (binary) relation is generated from a t-c partition of the set of all pairs of objects. Each class in a t-c partition now corresponds to a pair of categories (i.e. a pair of nodes) (Figure 1c), and we assume that all pairs in a given class tend to take similar values. As in the feature case, finding the categories in the tree most relevant to a relation can be formalized as finding the t-c partition that best accounts for the distribution of the relation. The t-c partition for each relation can be viewed as an additional annotation of the tree.

The final piece of our generative model is a prior over rooted trees representing hierarchies. Roughly speaking, the best hierarchy will then be the one that provides the best categories with which to summarize all the features and relations. Like other methods for discovering structure in data, our approach may be useful both as a tool for data analysis and as a model of human learning. After describing our approach, we apply it to several data sets inspired by problems faced by human learners. Our first analysis suggests that the model recovers coherent domains given objects and features from several domains (animals, foods, tools and vehicles). Next we show that the model discovers interpretable structure in kinship data, and in data representing relationships between ontological kinds.
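Since a t-c partition is just a set of tree nodes whose leaf sets tile the objects, it is easy to represent and validate in code. The sketch below uses a minimal, hypothetical node class; nothing here is prescribed by the model itself.

    class Node:
        """Minimal tree node; objects sit at the leaves (hypothetical helper)."""
        def __init__(self, name, children=()):
            self.name, self.children = name, list(children)

        def leaves(self):
            if not self.children:
                return {self.name}
            out = set()
            for c in self.children:
                out |= c.leaves()
            return out

    def is_tc_partition(root, nodes):
        """True iff the leaf sets of `nodes` are disjoint and cover root's leaves.

        E.g., for the department tree of Figure 1a, {Staff, Faculty} is a t-c
        partition, while {Staff, Professors} is not (assistant profs uncovered).
        """
        leaf_sets = [n.leaves() for n in nodes]
        union = set().union(*leaf_sets) if leaf_sets else set()
        disjoint = sum(len(s) for s in leaf_sets) == len(union)
        return disjoint and union == root.leaves()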
2 A generative model for features and relations

Our approach is organized around a generative model for feature data and relational data. For simplicity, we present our model for feature and relational data separately, focusing on the case where we have a single binary feature or a single binary relation. After presenting our generative model, we describe how it can be used to recover annotated hierarchies from data.

We begin with the case of a single binary feature and define a joint distribution over three entities: a rooted, weighted, binary tree T with O objects at the leaves; a t-c partition of the objects; and feature observations, d. For a feature, a t-c partition π is a set of nodes {n_1, n_2, ..., n_k} such that each object is a descendant of exactly one node in π. We will identify each node with the category of objects descending from it. We denote the data for all objects in the category n as d_n. If o is a leaf (a single-object category), then d_o is the value of the feature for object o. In Figure 1b, three t-c partitions associated with the hierarchy are represented, and each class in each partition is labeled with the corresponding category. The joint distribution P(T, w, π, d | λ, α_f) is induced by the following generative process:

i. Sample a tree T from a uniform distribution over rooted binary trees with O leaves (each leaf will represent an object and there are O objects). Each node n represents a category.

ii. For each category n, sample its weight w_n according to an exponential distribution with parameter λ, i.e. p(w_n | λ) = λ e^{−λ w_n}.

iii. Sample a t-c partition π_f = {n_1, n_2, ..., n_k} ∼ ψ(root-of(T)), where ψ(n) is a stochastic, set-valued function:

    ψ(n) = {n}  if n is a leaf, or with probability φ(w_n);  ∪_i ψ(n_i)  otherwise,    (1)

where φ(x) = 1 − e^{−x} and the n_i are the children of n. Intuitively, categories with large weight are more likely to be classes in the partition. For the publishes feature in Figure 1b, the t-c partition is {F, S}.

iv. For each category n ∈ π_f, sample θ_n ∼ Beta(α_f, α_f), where θ_n is the probability that objects in category n exhibit the feature f. Returning to the publishes example in Figure 1b, two parameters, θ_F and θ_S, would be drawn for this feature.

v. For each object o, sample its feature value d_o ∼ Bernoulli(θ_n), where n ∈ π_f is the category containing o.

Consider now the case where we have a single binary relation defined over all ordered pairs of objects {(o_i, o_j)}. In the relational case, our joint distribution is defined over a rooted, weighted, binary tree; a t-c partition of ordered pairs of objects; and observed relational data represented as a matrix D, where D_{i,j} = 1 if the relation holds between o_i and o_j. Given a pair of categories (n_i, m_j), let n_i × m_j be the set of all pairs of objects (o_i, o_j) such that o_i is an object in the category n_i and o_j is an object in the category m_j. With respect to pairs of trees, a t-c partition π is a set of pairs of categories {(n_1, m_1), (n_2, m_2), ..., (n_k, m_k)} such that, for every pair of objects (o_i, o_j), there exists exactly one pair (n_k, m_k) ∈ π such that (o_i, o_j) ∈ n_k × m_k. To help visualize these 2-D t-c partitions, we can reorder the columns and rows of the matrix D according to an in-order traversal of the binary tree T. Each t-c partition then splits the matrix into contiguous, rectangular blocks (see Figure 1c, where each rectangular block is labeled with its category pair).
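The feature-level generative process (steps ii-v) is short enough to transcribe directly. The sketch below reuses the Node helper from the earlier sketch and stores each node's weight on the node; these representation choices are ours, not the paper's.

    import numpy as np

    rng = np.random.default_rng(0)

    def sample_weights(node, lam=1.0):
        """Step ii: w_n ~ Exponential(lam), i.e. p(w | lam) = lam * exp(-lam * w)."""
        node.w = rng.exponential(1.0 / lam)   # numpy parameterizes by the mean 1/lam
        for c in node.children:
            sample_weights(c, lam)

    def sample_tc_partition(node):
        """Step iii: the stochastic, set-valued function psi of eq. (1)."""
        if not node.children or rng.random() < 1.0 - np.exp(-node.w):
            return [node]                     # stop here w.p. phi(w_n), or at a leaf
        return [m for c in node.children for m in sample_tc_partition(c)]

    def sample_feature(root, alpha=0.5):
        """Steps iv-v: theta_n ~ Beta(alpha, alpha), then Bernoulli feature values."""
        data = {}
        for n in sample_tc_partition(root):
            theta = rng.beta(alpha, alpha)
            for o in n.leaves():
                data[o] = int(rng.random() < theta)
        return data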
Assuming we have already generated a rooted, weighted binary tree, we now specify the generative process for a single binary relation (cf. steps iii through v in the feature case):

iii. Sample a t-c partition π_r = {(n_1, m_1), ..., (n_k, m_k)} ∼ ψ(root-of(T), root-of(T)), where ψ(n, m) is a stochastic, set-valued function:

    ψ(n, m) = {(n, m)}  with probability φ(w_n) · φ(w_m);  ∪_i ψ(n_i, m)  otherwise, with probability 1/2;  ∪_j ψ(n, m_j)  otherwise,    (2)

where the n_i / m_j are the children of n / m. To handle special cases: if both n and m are leaves, ψ(n, m) = {(n, m)}; if only one of the nodes is a leaf, we default to the feature case on the remaining tree, halting with probability φ(w_n) · φ(w_m). Intuitively, if a pair of categories (n, m) both have large weight, the process is more likely to group all pairs of objects in n × m into a single class. In Figure 1c, the t-c partition for the works with relation is {(S, S), (S, F), (F, S), (F, F)}.

iv. For each pair of categories (n, m) ∈ π_r, sample θ_{n,m} ∼ Beta(α_r, α_r), where θ_{n,m} is the probability that the relation holds between any pair of objects in n × m. For the works with relation in Figure 1c, parameters would be drawn for each of the four classes in the t-c partition.

v. For each pair of objects (o_i, o_j), sample the relation D_{i,j} ∼ Bernoulli(θ_{n,m}), where (n, m) ∈ π_r and (o_i, o_j) ∈ n × m. That is, the probability that the relation holds is the same for all pairs in a given class.

For data sets with multiple relations and features, we assume that all relations and features are conditionally independent given the weighted tree T.

2.1 Inference

Given observations of features and relations, we can use the generative model to ask various questions about the latent hierarchy and its annotations. We start by determining the posterior distribution over the weighted tree topologies (T, w), given data D = ({d^(f)}_{f=1}^F, {D^(r)}_{r=1}^R) over O objects, F features and R relations, and hyperparameters λ and α = ({α_f}_{f=1}^F, {α_r}_{r=1}^R). By Bayes' rule,

    P(T, w | D, λ, α) ∝ P(T) P(w | T, λ) P(D | T, w, α) = P(T) [Π_n λ e^{−λ w_n}] [Π_{f=1}^F P(d^(f) | T, w, α_f)] [Π_{r=1}^R P(D^(r) | T, w, α_r)].

But P(d^(f) | T, w, α_f) = Σ_π P(π | T, w) P(d^(f) | π, α_f), where P(π | T, w) is the distribution over t-c partitions induced by the stochastic function ψ and P(d^(f) | π, α_f) is the likelihood given the partition, marginalizing over the feature probabilities θ_n. Because the classes are independent, P(d^(f) | π, α_f) = Π_{n∈π} M_f(n), where M_f(n) = P(d_n^(f) | n ∈ π, α_f) is the marginal likelihood for d_n^(f), the features for objects in category n. For our binary-valued data sets, M_f(n) is the standard marginal likelihood for the beta-binomial model. Because there are an exponential number of t-c partitions, we present an efficient dynamic program for calculating T_f(n) = P(d_n^(f) | T, w, α_f). Then T_f(root-of(T)) = P(d^(f) | T, w, α_f) is the desired quantity.

First observe that, for all objects (i.e. leaf nodes) o, T_f(o) = M_f(o). Let n be a node and assume no ancestor of n is in π. With probability φ(w_n) = 1 − e^{−w_n}, category n will be a single class and the contribution to T_f will be M_f(n). Otherwise, ψ(n) splits category n into its children, n_1 and n_2. Now the possible partitions of the objects in category n are every t-c partition of the objects below n_1 paired with every t-c partition below n_2. By independence, this contributes T_f(n_1) T_f(n_2). Hence,

    T_f(n) = φ(w_n) M_f(n) + (1 − φ(w_n)) T_f(n_1) T_f(n_2)  if n is an internal node;  T_f(n) = M_f(n)  otherwise.
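The dynamic program for T_f(n) is a one-pass tree recursion. Here is a minimal version with the beta-binomial marginal likelihood; computing in log space is our own addition for numerical stability.

    import numpy as np
    from scipy.special import betaln

    def log_marginal(ones, total, alpha=0.5):
        """log M_f(n): beta-binomial marginal likelihood of binary observations."""
        return betaln(alpha + ones, alpha + total - ones) - betaln(alpha, alpha)

    def log_Tf(node, data, alpha=0.5):
        """log T_f(n) = log P(d_n | T, w), marginalizing over t-c partitions."""
        vals = [data[o] for o in node.leaves()]
        log_M = log_marginal(sum(vals), len(vals), alpha)
        if not node.children:
            return log_M                       # T_f(o) = M_f(o) at the leaves
        phi = 1.0 - np.exp(-node.w)
        log_split = sum(log_Tf(c, data, alpha) for c in node.children)
        return np.logaddexp(np.log(phi) + log_M, np.log1p(-phi) + log_split)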
For the relational case, we describe a dynamic program T_r(n, m) that calculates P(D_{n×m}^(r) | T, w, α_r), the probability of all relations between objects in n × m, conditioned on the tree, having marginalized out the t-c partitions and relation probabilities. Let M_r(n, m) = P(D_{n×m}^(r) | (n, m) ∈ π, α_r) be the marginal likelihood of the relations in n × m; for relations, M_r(n, m) is also beta-binomial. If n and m are both leaves, then T_r(n, m) = M_r(n, m). Otherwise,

    T_r(n, m) = φ(w_n) φ(w_m) M_r(n, m) + (1 − φ(w_n) φ(w_m)) · { T_r(n, m_1) T_r(n, m_2)  if n is a leaf;  T_r(n_1, m) T_r(n_2, m)  if m is a leaf;  (1/2)(T_r(n, m_1) T_r(n, m_2) + T_r(n_1, m) T_r(n_2, m))  otherwise }.

The above dynamic programs have linear and quadratic complexity in the number of objects, respectively. Because we can efficiently compute the posterior density of a weighted tree, we can search for the maximum a posteriori (MAP) weighted tree. Conditioned on the MAP tree, we can efficiently compute the MAP t-c partition for each feature and relation. We find the MAP tree first, rather than jointly optimizing for both the topology and the partitions, because marginalizing over the t-c partitions produces more robust trees; marginalization has a (Bayesian) "Occam's razor" effect and helps avoid overfitting. MAP t-c partitions can be computed by a straightforward modification of the above dynamic programs, replacing sums with max operations and maintaining a list of nodes representing the MAP t-c partition at each node in the tree.

We chose to implement global search by building a Markov chain Monte Carlo (MCMC) algorithm with the posterior as the stationary distribution and keeping track of the best tree as the chain mixes. For all the results in this paper, we fixed the hyperparameters of all beta distributions to α = 0.5 (i.e. the asymptotically least informative prior) and report the (empirical) MAP tree and the MAP t-c partitions conditioned on that tree. The MCMC algorithm searches for the MAP tree by cycling through three Metropolis-Hastings (MH) moves adapted from [14]:

i. Subtree pruning and regrafting: Choose a node n uniformly at random (except the root). Choose a non-descendant node m. Detach n from its parent and collapse the parent (remove the node, attaching the remaining child to the parent's parent and adding the parent's weight to the child's). Sample u ∼ Uniform(0, 1), then insert a new node m' between m and its parent. Attach n to m', set w_{m'} := (1 − u) w_m, and set w_m := u w_m.

ii. Edge weight adjustment: Choose a node n uniformly at random (including the root) and propose a new weight w_n (e.g. let x ∼ Normal(log(w_n), 1) and let the new weight be e^x).

iii. Subtree swapping: Choose a node n uniformly at random (except the root). Choose another node n' such that neither n nor n' is a descendant of the other, and swap n and n'.

The first two moves suffice to make the chain ergodic; subtree swapping is included to improve mixing. The first and last moves are symmetric. We initialized the chain on a random tree with all weights set to one, ran the chain for approximately one million iterations, and assessed convergence by comparing separate chains started from multiple random initial states.
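For concreteness, here is the shape of the edge weight adjustment move (move ii). The function log_posterior stands for a zero-argument callable returning the unnormalized log posterior of the current tree state (Sect. 2.1); this interface and the Jacobian bookkeeping for proposing in log space are our own presentation choices.

    import numpy as np

    rng = np.random.default_rng(0)

    def edge_weight_move(node, log_posterior):
        """MH move ii: propose w' = exp(Normal(log w, 1)) for a single node."""
        old_w, old_lp = node.w, log_posterior()
        node.w = np.exp(rng.normal(np.log(old_w), 1.0))
        # The proposal is a random walk in log space, so the acceptance ratio
        # picks up the Jacobian term log(w'/w) relative to the density over w.
        log_accept = log_posterior() - old_lp + np.log(node.w) - np.log(old_w)
        if np.log(rng.random()) >= log_accept:
            node.w = old_w                     # reject: restore the old weight
        return node.w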
2.2 Related Work

There are several methods that discover hierarchical structure in feature data. Hierarchical clustering [4] has been successfully used for analyzing both biological data [18] and psychological data, but cannot learn the annotated hierarchies that we consider. Bayesian hierarchical clustering (BHC) [6] is a recent alternative which constructs a tree as a byproduct of approximate inference in a flat clustering model, but lacks any notion of annotations. It is possible that a BHC-inspired algorithm could be derived to find approximate MAP annotated hierarchies. Our model for feature data is most closely related to methods for Bayesian phylogenetics [14]. These methods typically assume that features are generated directly by a stochastic process over a tree. Our model adds an intervening layer of abstraction by assuming that partitions are generated by a stochastic process over a tree, and that features are generated from these partitions. By introducing a partition for each feature, we gain the ability to annotate a hierarchy with the levels most relevant to each feature.

There are several methods for discovering hierarchical structure in relational data [5, 13], but none of these methods provides a general purpose solution to the problem we consider. Most of these methods take a single relation as input, and assume that the hierarchy captures an underlying community structure: objects that are often paired in the input are assumed to lie nearby in the tree. Our approach handles multiple relations simultaneously, and allows a more flexible mapping between each relation and the underlying hierarchy. Different relations may depend on very different regions of the hierarchy, and some relations may establish connections between categories that are quite distant in the tree (see Figure 4). Many non-hierarchical methods for relational clustering have also been developed [10, 16, 17]. One family of approaches is based on the stochastic blockmodel [15], of which the Infinite Relational Model (IRM) [9] is perhaps the most flexible. The IRM handles multiple relations simultaneously, and does not assume that each relation has an underlying community structure. The IRM, however, does not discover hierarchical structure; instead it partitions the objects into a set of non-overlapping categories. Our relational model is an extension of the blockmodel that discovers a nested set of categories, as well as which categories are useful for understanding each relation in the data set.

3 Results

We applied our model to three problems inspired by tasks that human learners are required to solve. Our first application used data collected in a feature-listing task by Cree and McRae [2]. Participants in this task listed the features that came to mind when they thought about a given object: when asked to think about a lemon, for example, subjects listed features like "yellow," "sour," and "grows on trees." (Note that some of the features are noisy: according to these data, onions are not edible, since none of the participants chose to list this feature for onion.) We analyzed a subset of the full data set including 60 common objects and the 100 features most commonly listed for these objects. The 60 objects are shown in Figure 2, and were chosen to represent four domains: animals, food, vehicles and tools. Figure 2 shows the MAP tree identified by our algorithm. The model discovers the four domains as well as superordinate categories (e.g. "living things", including fruits, vegetables, and animals) and subordinate categories (e.g. "wheeled vehicles"). Figure 2 also shows MAP partitions for 10 representative features.
[Figure 2 shows the MAP tree over the 60 objects (fruits, vegetables, tools, vehicles, and animals) along with MAP partitions for features such as "a tool," "an animal," "made of metal," "is edible," "has a handle," "is juicy," "is fast," "eaten in salads," "is white," "has 4 wheels," and "has 4 legs"; only the caption is reproduced here.]

Figure 2: MAP tree recovered from a data set including 60 objects from four domains. MAP partitions for several features are shown: the model discovers, for example, that "is juicy" is associated with only one part of the tree. The weight of each edge in the tree is proportional to its vertical extent.

[Figure 3 shows the MAP tree over 30 biomedical entities with MAP partitions for the relations affects, process of, and causes, together with the single partition for causes found by the IRM; only the caption is reproduced here.]

Figure 3: MAP tree recovered from 49 relations between entities in a biomedical data set. Four relations are shown (rows and columns permuted to match an in-order traversal of the MAP tree). Consider the circled subset of the t-c partition for causes: this block captures the knowledge that "chemicals" cause "diseases." The Infinite Relational Model (IRM) does not capture the appropriate structure in the relation causes because it does not model the latent hierarchy, instead choosing a single partition to describe the structure across all relations.

The model discovers that some features are associated only with certain parts of the tree: "is juicy" is associated with the fruits, and "is metal" is associated with the man-made items. Discovering domains is a fundamental cognitive problem that may be solved early in development [11], but that is ignored by many cognitive models, which consider only carefully chosen data from a single domain (e.g. data including only animals and only biological features). By organizing the 60 objects into domains and identifying a subset of features that are associated with each domain, our model begins to suggest how infants may parse their environment into coherent domains of objects and features.

Our second application explores the acquisition of ontological knowledge, a problem that has been previously discussed by Keil [7]. We demonstrate that our model discovers a simple biomedical ontology given data from the Unified Medical Language System (UMLS) [12]. The full data set includes 135 entities and 49 binary relations, where the entities are ontological categories like "Sign or Symptom", "Cell", and "Disease or Syndrome," and the relations include verbs like causes, analyzes and affects. We applied our model to a subset of the data including the 30 entities shown in Figure 3.
[Figure 4 shows the kinship relation matrices over 64 individuals, each labeled by age, gender, and section (e.g. OM1 = older male, section 1; YF2 = younger female, section 2), with panels for the relations Agngiya, Adiaya, Umbaidya, and Anowadya, plus the Anowadya partition found by the IRM; only the caption is reproduced here.]

Figure 4: MAP tree recovered from kinship relations between 64 members of the Alyawarra tribe. Individuals have been labelled with their age, gender and kinship section (e.g. "YF1" is a young female from section 1). MAP partitions are shown for four representative relations: the model discovers that different relations depend on the tree in very different ways; the hierarchical structure allows for a compact representation (cf. the IRM).

The MAP tree is an ontology that captures several natural groupings, including a category for "living things" (plant, bird, animal and mammal), a category for "chemical substances" (amino acid, lipid, antibiotic, enzyme, etc.) and a category for abnormalities. The MAP partitions for each relation identify the relevant categories in the tree relatively cleanly: the model discovers, for example, that the distinction between "living things" and "abnormalities" is irrelevant to the first place of the relation causes, since neither of these categories can cause anything (according to the data set). This distinction, however, is relevant to the second place of causes: substances can cause abnormalities and dysfunctions, but cannot cause "living things". Note that the MAP partitions for causes and analyzes are rather different: one of the reasons why discovering a separate t-c partition for each relation is important is that different relations can depend on very different parts of an ontology.

Our third application is inspired by the problem children face when learning the kinship structure of their social group. This problem is especially acute for children growing up in Australian tribes, which have kinship systems that are more complicated in many ways than Western kinship systems, but which nevertheless display some striking regularities. We focus here on data from the Alyawarra tribe [3]. Denham [3] collected a large data set by asking 104 tribe members to provide kinship terms for each other. Twenty-six different terms were mentioned in total, and four of them are represented in Figure 4. More than one kinship term may describe the relationship between a pair of individuals; since the data set includes only one term per pair, some of the zeros in each matrix represent missing data rather than relationships that do not hold. For simplicity, however, we assume that relationships that were never mentioned do not exist.

The Alyawarra tribe is divided into four kinship sections, and these sections are fundamental to the social structure of the tribe. Each individual, for instance, is permitted only to marry individuals from one of the other sections. Whether a kinship term applies between a pair of individuals depends on their sections, ages and genders [3, 8]. We analyzed a subset of the full data set including 64 individuals chosen to equally represent all four sections, both genders, and people young and old. The MAP tree divides the individuals perfectly according to kinship section, and discovers additional structure within each section. Group three, for example, is split by age and then by gender. The MAP partitions for each relation indicate that different relations depend very differently on the structure of the tree.
Adiadya refers to a younger member of one's own kinship section. The MAP partition for this relation contains fine-level structure only along the diagonal, indicating that the model has discovered that the term applies only between individuals from the same kinship section. Umbaidya can be used only between members of sections 1 and 3, and members of sections 2 and 4. Again, the MAP partition indicates that the model has discovered this structure. In some places the MAP partitions appear to overfit the data: the partition for Umbaidya, for example, appears to capture some of the noise in this relation. This result may reflect the fact that our generative process is not quite right for these data: in particular, it does not capture the idea that some of the zeros in each relation represent missing data.

4 Conclusions

We developed a probabilistic model that assumes that features and relations are generated over an annotated hierarchy, and showed how this model can be used to recover annotated hierarchies from raw data. Three applications of the model suggested that it is able to recover interpretable structure in real-world data, and may help to explain the computational principles which allow human learners to acquire hierarchical representations of real-world domains.

Our approach opens up several avenues for future work. A hierarchy specifies a set of categories, and annotations indicate which of these categories are important for understanding specific features and relations. A natural extension is to learn sets of categories that possess other kinds of structure, such as factorial structure [17]. For example, the kinship data we analyzed may be well described by three sets of overlapping categories, where each individual belongs to a kinship section, a gender, and an age group. We have already extended the model to handle continuous data and can imagine other extensions, including higher-order relations, multiple trees, and relations between distinct sets of objects (e.g. given information about the book-buying habits of a set of customers, this extension of our model could discover a hierarchical representation of the customers and a hierarchical representation of the books, and discover the categories of books that tend to be preferred by different kinds of customers). We are also actively exploring variants of our model that permit accurate online approximations for inference; e.g., by placing an exchangeable prior over tree structures based on a Polya-urn scheme, we can derive an efficient particle filter.

We have shown that formalizing the intuition behind annotated hierarchies in terms of a prior on trees and partitions and a noise-robust likelihood enabled us to discover interesting structure in real-world data. We expect that a fruitful area of research going forward will involve similar marriages between intuitions about structured representation from classical AI and cognitive science and modern inferential machinery from Bayesian statistics and machine learning.

References

[1] A. M. Collins and M. R. Quillian. Retrieval time from semantic memory. JVLVB, 8:240-248, 1969.
[2] G. Cree and K. McRae. Analyzing the factors underlying the structure and computation of the meaning of chipmunk, chisel, cheese, and cello (and many other concrete nouns). JEP Gen., 132:163-201, 2003.
[3] W. Denham. The detection of patterns in Alyawarra nonverbal behaviour. PhD thesis, U. of Wash., 1973.
[4] R. O. Duda and P. E. Hart. Pattern Classification and Scene Analysis. Wiley, 2001.
[5] M. Girvan and M. E. J. Newman. Community structure in social and biological networks. Proceedings of the National Academy of Sciences, 99(12):7821-7826, 2002.
[6] K. Heller and Z. Ghahramani. Bayesian hierarchical clustering. In ICML, 2005.
[7] F. C. Keil. Semantic and Conceptual Development. Harvard University Press, Cambridge, MA, 1979.
[8] C. Kemp, T. L. Griffiths, and J. B. Tenenbaum. Discovering latent classes in relational data. Technical Report AI Memo 2004-019, MIT, 2004.
[9] C. Kemp, J. B. Tenenbaum, T. L. Griffiths, T. Yamada, and N. Ueda. Learning systems of concepts with an infinite relational model. In AAAI, 2006.
[10] J. Kubica, A. Moore, J. Schneider, and Y. Yang. Stochastic link and group detection. In NCAI, 2002.
[11] J. M. Mandler and L. McDonough. Concept formation in infancy. Cog. Devel., 8:291-318, 1993.
[12] A. T. McCray. An upper level ontology for the biomedical domain. Comp. Func. Genom., 4:80-84, 2001.
[13] J. Neville, M. Adler, and D. Jensen. Clustering relational data using attribute and link information. In Proc. of the Text Mining and Link Analysis Workshop, IJCAI, 2003.
[14] D. L. Swofford, G. J. Olsen, P. J. Waddell, and D. M. Hillis. Phylogenetic inference. In Molecular Systematics, 2nd edition, 1996.
[15] Y. J. Wang and G. Y. Wong. Stochastic blockmodels for directed graphs. JASA, 82:8-19, 1987.
[16] S. Wasserman and K. Faust. Social Network Analysis: Methods and Applications. Cambridge University Press, 1994.
[17] A. P. Wolfe and D. Jensen. Playing multiple roles: discovering overlapping roles in social networks. In Proc. of the Workshop on Statistical Relational Learning and its Connections to Other Fields, ICML, 2004.
[18] K. Y. Yeung, M. Medvedovic, and R. E. Bumgarner. Clustering gene-expression data with repeated measurements. Genome Biology, 2003.
Modeling Human Motion Using Binary Latent Variables

Graham W. Taylor, Geoffrey E. Hinton and Sam Roweis
Dept. of Computer Science, University of Toronto, Toronto, M5S 2Z9 Canada
{gwtaylor,hinton,roweis}@cs.toronto.edu

Abstract

We propose a non-linear generative model for human motion data that uses an undirected model with binary latent variables and real-valued "visible" variables that represent joint angles. The latent and visible variables at each time step receive directed connections from the visible variables at the last few time-steps. Such an architecture makes on-line inference efficient and allows us to use a simple approximate learning procedure. After training, the model finds a single set of parameters that simultaneously capture several different kinds of motion. We demonstrate the power of our approach by synthesizing various motion sequences and by performing on-line filling in of data lost during motion capture. Website: http://www.cs.toronto.edu/~gwtaylor/publications/nips2006mhmublv/

1 Introduction

Recent advances in motion capture technology have fueled interest in the analysis and synthesis of complex human motion for animation and tracking. Models based on the physics of masses and springs have produced some impressive results by using sophisticated "energy-based" learning methods [1] to estimate physical parameters from motion capture data [2]. But if we want to generate realistic human motion, we need to model all the complexities of the real dynamics and this is so difficult to do analytically that learning is likely to be essential.

The simplest way to generate new motion sequences based on data is to concatenate parts of training sequences [3]. Another method is to transform motion in the training data to new sequences by learning to adjust its style or other characteristics [4, 5, 6]. In this paper we focus on model-driven analysis and synthesis but avoid the complexities involved in imposing physics-based constraints, relying instead on a "pure" learning approach in which all the knowledge in the model comes from the data.

Data from modern motion capture systems is high-dimensional and contains complex non-linear relationships between the components of the observation vector, which usually represent joint angles with respect to some skeletal structure. Hidden Markov models cannot model such data efficiently because they rely on a single, discrete K-state multinomial to represent the history of the time series. To model N bits of information about the past history they require 2^N hidden states. To avoid this exponential explosion, we need a model with distributed (i.e. componential) hidden state that has a representational capacity which is linear in the number of components. Linear dynamical systems satisfy this requirement, but they cannot model the complex non-linear dynamics created by the non-linear properties of muscles, contact forces of the foot on the ground and myriad other factors.

2 An energy-based model for vectors of real-values

In general, using distributed binary representations for hidden state in directed models of time series makes inference difficult. If, however, we use a Restricted Boltzmann Machine (RBM) to model the probability distribution of the observation vector at each time frame, the posterior over latent variables factorizes completely, making inference easy. Typically, RBMs use binary logistic units for both the visible data and hidden variables, but in our application the data (comprised of joint angles) is continuous.
We thus use a modified RBM in which the "visible units" are linear, real-valued variables that have Gaussian noise [7, 8]. The graphical model has a layer of visible units v and a layer of hidden units h; there are undirected connections between layers but no connections within a layer. For any setting of the hidden units, the distribution of each visible unit is defined by a parabolic log likelihood function that makes extreme values very improbable:¹

−log p(v, h) = Σ_i (v_i − c_i)² / (2σ_i²) − Σ_j b_j h_j − Σ_{i,j} (v_i / σ_i) h_j w_{ij} + const,   (1)

where σ_i is the standard deviation of the Gaussian noise for visible unit i. (In practice, we rescale our data to have zero mean and unit variance. We have found that fixing σ_i at 1 makes the learning work well even though we would expect a good model to predict the data with much higher precision.) The main advantage of using this undirected, "energy-based" model rather than a directed "belief net" is that inference is very easy because the hidden units become conditionally independent when the states of the visible units are observed. The conditional distributions (assuming σ_i = 1) are:

p(h_j = 1 | v) = f(b_j + Σ_i v_i w_{ij}),   (2)
p(v_i | h) = N(c_i + Σ_j h_j w_{ij}, 1),   (3)

where f(·) is the logistic function, N(μ, V) is a Gaussian, b_j and c_i are the "biases" of hidden unit j and visible unit i respectively, and w_{ij} is the symmetric weight between them. Maximum likelihood learning is slow in an RBM but learning still works well if we approximately follow the gradient of another function called the contrastive divergence [9]. The learning rule is:

Δw_{ij} ∝ ⟨v_i h_j⟩_data − ⟨v_i h_j⟩_recon,   (4)

where the first expectation (over hidden unit activations) is with respect to the data distribution and the second expectation is with respect to the distribution of "reconstructed" data. The reconstructions are generated by starting a Markov chain at the data distribution, updating all the hidden units in parallel by sampling (Eq. 2) and then updating all the visible units in parallel by sampling (Eq. 3). For both expectations, the states of the hidden units are conditional on the states of the visible units, not vice versa. The learning rule for the hidden biases is just a simplified version of Eq. 4:

Δb_j ∝ ⟨h_j⟩_data − ⟨h_j⟩_recon.   (5)

¹For any setting of the parameters, the gradient of the quadratic log likelihood will always overwhelm the gradient due to the weighted input from the binary hidden units provided the value v_i of a visible unit is far enough from its bias, c_i.

Figure 1: In a trained model, probabilities of each feature being "on" conditional on the data at the visible units. Shown is a 100-hidden unit model, and a sequence which contains (in order) walking, sitting/standing (three times), walking, crouching, and running. Rows represent features, columns represent sequential frames.
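To make Eqs. 2-5 concrete, here is a minimal sketch of one contrastive-divergence (CD-1) update for this Gaussian-visible RBM, following the sampling choices described later in Sec. 2.2. The function and array names (W, b, c) and the learning rate are illustrative choices, not taken from the paper.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_update(v_data, W, b, c, lr=1e-3, rng=np.random):
    """One CD-1 step for an RBM with Gaussian visibles (sigma_i fixed at 1).

    v_data: (batch, n_vis) real-valued frames; W: (n_vis, n_hid);
    b: hidden biases (n_hid,); c: visible biases (n_vis,).
    """
    # Up pass: hidden probabilities given the data (Eq. 2), then binary samples.
    h_prob = sigmoid(v_data @ W + b)
    h_samp = (rng.rand(*h_prob.shape) < h_prob).astype(float)
    # Down pass: mean-field reconstruction of the visibles (mean of Eq. 3).
    v_recon = h_samp @ W.T + c
    # Hidden probabilities under the reconstruction (expected values, per Sec. 2.2).
    h_recon = sigmoid(v_recon @ W + b)
    # Contrastive-divergence gradients (Eqs. 4 and 5); the visible-bias rule is
    # the standard analogue of Eq. 5, not stated explicitly in the text.
    W += lr * (v_data.T @ h_samp - v_recon.T @ h_recon) / len(v_data)
    b += lr * (h_samp - h_recon).mean(axis=0)
    c += lr * (v_data - v_recon).mean(axis=0)
```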
2.1 The conditional RBM model

The RBM we have described above models static frames of data, but does not incorporate any temporal information. We can model temporal dependencies by treating the visible variables in the previous time slice as additional fixed inputs [10]. Fortunately, this does not complicate inference. We add two types of directed connections (Figure 2): autoregressive connections from the past n configurations (time steps) of the visible units to the current visible configuration, and connections from the past m visibles to the current hidden configuration. The addition of these directed connections turns the RBM into a conditional RBM (CRBM). In our experiments, we have chosen n = m = 3. These are, however, tunable parameters and need not be the same for both types of directed connections. To simplify discussion, we will assume n = m and refer to n as the order of the model.

Inference in the CRBM is no more difficult than in the standard RBM. Given the data at time t, t−1, ..., t−n, the hidden units at time t are conditionally independent. We can still use contrastive divergence for training the CRBM. The only change is that when we update the visible and hidden units, we implement the directed connections by treating data from previous time steps as a dynamically changing bias. The contrastive divergence learning rule for hidden biases is given in Eq. 5 and the equivalent learning rule for the temporal connections that determine the dynamically changing hidden unit biases is:

Δd_{ij}^{(t−q)} ∝ v_i^{t−q} (⟨h_j^t⟩_data − ⟨h_j^t⟩_recon),   (6)

where d_{ij}^{(t−q)} is the log-linear parameter (weight) connecting visible unit i at time t−q to hidden unit j, for q = 1..n. Similarly, the learning rule for the autoregressive connections that determine the dynamically changing visible unit biases is:

Δa_{ki}^{(t−q)} ∝ v_k^{t−q} (v_i^t − ⟨v_i^t⟩_recon),   (7)

where a_{ki}^{(t−q)} is the weight from visible unit k at time t−q to visible unit i.

The autoregressive weights can model short-term temporal structure very well, leaving the hidden units to model longer-term, higher level structure. During training, the states of the hidden units are determined by both the input they receive from the observed data and the input they receive from the previous time slices. The learning rule for W remains the same as a standard RBM, but has a different effect because the states of the hidden units are now influenced by the previous visible units. We do not attempt to model the first n frames of each sequence.

Figure 2: Architecture of our model (in our experiments, we use three previous time steps).

While learning a model of motion, we do not need to proceed sequentially through the training data sequences. The updates are only conditional on the past n time steps, not the entire sequence. As long as we isolate "chunks" of frames (the size depending on the order of the directed connections), these can be mixed and formed into mini-batches. To speed up the learning, we assemble these chunks of frames into "balanced" mini-batches of size 100. We randomly assign chunks to different mini-batches so that the chunks in each mini-batch are as uncorrelated as possible. To save computer memory, time frames are not actually replicated in mini-batches; we simply use indexing to simulate the "chunking" of frames.
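As an illustration of the dynamically changing biases, the following sketch computes the effective biases and the hidden activation probabilities of a CRBM given the past n visible frames. The weight-matrix names A (autoregressive, Eq. 7) and D (visible-to-hidden, Eq. 6) and their layout are my own conventions, not the paper's.

```python
import numpy as np

def crbm_effective_biases(v_past, A, D, b, c):
    """Effective (dynamic) biases of a CRBM of order n.

    v_past: (n, n_vis) visible frames at t-1..t-n, most recent first;
    A[q]: (n_vis, n_vis) autoregressive weights for lag q+1 (A[q][k, i] is
    a_{ki}^{(t-q-1)}); D[q]: (n_vis, n_hid) visible-to-hidden weights for lag q+1;
    b: static hidden biases; c: static visible biases.
    """
    c_dyn = c + sum(v_past[q] @ A[q] for q in range(len(A)))
    b_dyn = b + sum(v_past[q] @ D[q] for q in range(len(D)))
    return b_dyn, c_dyn

def crbm_hidden_probs(v_t, v_past, W, A, D, b, c):
    # With the dynamic biases held fixed, inference is exactly as in a
    # static RBM (Eq. 2): the hiddens are conditionally independent.
    b_dyn, _ = crbm_effective_biases(v_past, A, D, b, c)
    return 1.0 / (1.0 + np.exp(-(v_t @ W + b_dyn)))
```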
2.2 Approximations

Our training procedure relies on several approximations, most of which are chosen based on experience training similar networks. While training the CRBM, we replaced v_i in Eq. 4 and Eq. 7 by its expected value and we also used the expected value of v_i when computing the probability of activation of the hidden units. However, to compute the one-step reconstructions of the data, we used stochastically chosen binary values of the hidden units. This prevents the hidden activities from transmitting an unbounded amount of information from the data to the reconstruction [11]. While updating the directed visible-to-hidden connections (Eq. 6) and the symmetric undirected connections (Eq. 4) we used the stochastically chosen binary values of the hidden units in the first term (under the data), but replaced h_j by its expected value in the second term (under the reconstruction). We took this approach because the reconstruction of the data depends on the binary choices made when selecting hidden state. Thus when we infer the hiddens from the reconstructed data, the probabilities are highly correlated with the binary hidden states inferred from the data. On the other hand, we stop after one reconstruction, so the binary choice of hiddens from the reconstruction doesn't correlate with any other terms, and there is no point including this extra noise.

Lastly, we note that the fine-tuning procedure as a whole is making a crude approximation in addition to the one made by contrastive divergence. The inference step, conditional on past visible states, is approximate because it ignores the future (it does not do smoothing). Because of the directed connections, exact inference within the model should include both a forward and backward pass through each sequence (we currently perform only a forward pass). We have avoided a backward pass because missing values create problems in undirected models, so it is hard to perform learning efficiently using the full posterior. Compared with an HMM, the lack of smoothing is a loss, but this is more than offset by the exponential gain in representational power.

3 Data gathering and preprocessing

We used data from the CMU Graphics Lab Motion Capture Database as well as from [12] (see acknowledgments). The processed data consists of 3D joint angles derived from 30 (CMU) or 17 (MIT) markers plus a root (coccyx, near the base of the back) orientation and displacement. For both datasets, the original data was captured at 120Hz; we have downsampled it to 30Hz. Six of the joint angle dimensions in the original CMU data had constant values, so they were eliminated. Each of the remaining joint angles had between one and three degrees of freedom. All of the joint angles and the root orientation were converted from Euler angles to the "exponential map" parameterization [13]. This was done to avoid "gimbal lock" and discontinuities. (The MIT data was already expressed in exponential map form and did not need to be converted.)

We treated the root specially because it encodes a transformation with respect to a fixed global coordinate system. In order to respect physics, we wanted our final representation to be invariant to ground-plane translation and to rotation about the gravitational vertical. We represented each ground-plane translation by an incremental "forwards" vector and an incremental "sideways" vector relative to the direction the person was currently facing, but we represented height non-incrementally by the distance above the ground plane. We represented orientation around the gravitational vertical by the incremental change, but we represented the other two rotational degrees of freedom by the absolute pitch and roll relative to the direction the person was currently facing. The final dimensionality of our data vectors was 62 (for the CMU data) and 49 (for the MIT data). Note that we eliminated exponential map dimensions that were constant zero (corresponding to joints with a single degree of freedom). As mentioned in Sec. 2, each component of the data was normalized to have zero mean and unit variance. One advantage of our model is the fact that the data does not need to be heavily preprocessed or dimensionality reduced.
Brand and Hertzmann [4] apply PCA to reduce noise and dimensionality. The autoregressive connections in our model can be thought of as doing a kind of "whitening" of the data. Urtasun et al. [6] manually segment data into cycles and sample at regular time intervals using quaternion spherical interpolation. Dimensionality reduction becomes problematic when a wider range of motions is to be modeled.

4 Experiments

After training our model using the updates described above, we can demonstrate in several ways what it has learned about the structure of human motion. Perhaps the most direct demonstration, which exploits the fact that it is a probability density model of sequences, is to use the model to generate de-novo a number of synthetic motion sequences. Video files of these sequences are available on the website mentioned in the abstract; these motions have not been retouched by hand in any motion editing software. Note that we also do not have to keep a reservoir of training data sequences around for generation - we only need the weights of the model and a few valid frames for initialization.

Causal generation from a learned model can be done on-line with no smoothing, just like the learning procedure. The visible units at the last few time steps determine the effective biases of the visible and hidden units at the current time step. We always keep the previous visible states fixed and perform alternating Gibbs sampling to obtain a joint sample from the conditional RBM. This picks new hidden and visible states that are compatible with each other and with the recent (visible) history. Generation requires initialization with n time steps of the visible units, which implicitly determine the "mode" of motion in which the synthetic sequence will start. We used randomly drawn consecutive frames from the training data as an initial configuration.

4.1 Generation of walking and running sequences from a single model

In our first demonstration, we train a single model on data containing both walking and running motions; we then use the learned model to generate both types of motion, depending on how it is initialized. We trained² on 23 sequences of walking and 10 sequences of jogging (from subject 35 in the CMU database). After downsampling to 30Hz, the training data consisted of 2813 frames.

Figure 3: After training, the same model can generate walking (top) and running (bottom) motion (see videos on the website). Each skeleton is 4 frames apart.

Figure 3 shows a walking sequence and a running sequence generated by the same model, using alternating Gibbs sampling (with the probability of hidden units being "on" conditional on the current and previous three visible vectors). Since the training data does not contain any transitions between walking and running (and vice-versa), the model will continue to generate walking or running motions depending on where it is initialized.

²A 200 hidden-unit CRBM was trained for 4000 passes through the training data, using a third-order model (for directed connections). Weight updates were made after each mini-batch of size 100. The order of the sequences was randomly permuted such that walking and running sequences were distributed throughout the training data.
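A minimal sketch of the alternating Gibbs generation loop described above, reusing the helper functions from the earlier sketches; the model container and the choice of 30 Gibbs iterations per frame are assumptions for illustration, not values from the paper.

```python
import numpy as np

def generate(v_init, model, n_frames, n_gibbs=30, rng=np.random):
    """Generate frames from a trained CRBM by alternating Gibbs sampling.

    v_init: (n, n_vis) seed frames that select the motion "mode";
    model: object exposing W, A, D, b, c as in the earlier sketches.
    """
    history = list(v_init)
    for _ in range(n_frames):
        # Past visibles are clamped; they only shift the biases.
        v_past = np.stack(history[-len(model.A):][::-1])  # most recent first
        b_dyn, c_dyn = crbm_effective_biases(v_past, model.A, model.D,
                                             model.b, model.c)
        v = history[-1].copy()  # initialise the new frame at the last one
        for _ in range(n_gibbs):
            h_prob = 1.0 / (1.0 + np.exp(-(v @ model.W + b_dyn)))
            h = (rng.rand(*h_prob.shape) < h_prob).astype(float)
            v = h @ model.W.T + c_dyn  # mean-field visible update
        history.append(v)
    return np.stack(history[len(v_init):])
```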
4.2 Learning transitions between various styles

In our second demonstration, we show that our model is capable of learning not only several homogeneous motion styles but also the transitions between them, when the training data itself contains examples of such transitions. We trained on 9 sequences (from the MIT database, file Jog1 M) containing long examples of running and jogging, as well as a few transitions between the two styles. After downsampling to 30Hz, this provided us with 2515 frames. Training was done as before, except that after the model was trained, an identical 200 hidden-unit model was trained on top of the first model (see Sec. 5). The resulting two-level model was used to generate data. A video available on the website demonstrates our model's ability to stochastically transition between various motion styles during a single generated sequence.

4.3 Introducing transitions using noise

In our third demonstration, we show how transitions between motion styles can be generated even when such transitions are absent in the data. We use the same model and data as described in Sec. 4.1, where we have learned on separate sequences of walking and running. To generate, we use the same sampling procedure as before, except that each time we stochastically choose the hidden states (given the current and previous three visible vectors) we add a small amount of Gaussian noise to the hidden state biases. This encourages the model to explore more of the hidden state space without deviating too far from the current motion. Applying this "noisy" sampling approach, we see that the generated motion occasionally transitions between learned styles. These transitions appear natural (see the video on the website).

4.4 Filling in missing data

Due to the nature of the motion capture process, which can be adversely affected by lighting and environmental effects, as well as noise during recording, motion capture data often contains missing or unusable data. Some markers may disappear ("dropout") for long periods of time due to sensor failure or occlusion. The majority of motion editing software packages contain interpolation methods to fill in missing data, but this leaves the data unnaturally smooth. These methods also rely on the starting and end points of the missing data, so if a marker goes missing until the end of a sequence, naïve interpolation will not work. Such methods often only use the past and future data from the single missing marker to fill in that marker's missing values, but since joint angles are highly correlated, substantial information about the placement of one marker could be gained from the others. Our trained model has the ability to easily fill in such missing data, regardless of where the dropouts occur in a sequence. Due to its approximate inference method which does not rely on a backward pass through the sequence, it also has the ability to fill in such missing data on-line.

Filling in missing data with our model is very similar to generation. We simply clamp the known data to the visible units, initialize the missing data to something reasonable (for example, the value at the previous frame), and alternate between stochastically updating the hidden and visible units, with the known visible states held fixed.
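The clamped sampling just described differs from generation only in which visible dimensions are held fixed; a sketch under the same assumptions as the generation loop above, with a hypothetical boolean mask marking the observed dimensions:

```python
import numpy as np

def fill_in(frame, known_mask, v_past, model, n_gibbs=30, rng=np.random):
    """Fill in missing joint angles in one frame by clamped Gibbs sampling.

    frame: (n_vis,) observed values, with missing entries holding an initial
    guess (e.g. the previous frame); known_mask: boolean, True where observed.
    """
    b_dyn, c_dyn = crbm_effective_biases(v_past, model.A, model.D,
                                         model.b, model.c)
    v = frame.copy()
    for _ in range(n_gibbs):
        h_prob = 1.0 / (1.0 + np.exp(-(v @ model.W + b_dyn)))
        h = (rng.rand(*h_prob.shape) < h_prob).astype(float)
        v_new = h @ model.W.T + c_dyn
        v = np.where(known_mask, frame, v_new)  # keep known visibles clamped
    return v
```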
To demonstrate filling in, we trained a model exactly as described in Sec. 4.1 except that one walking and one running sequence were left out of the training data to be used as test data. For each of these walking and running test sequences, we erased two different sets of joint angles, starting halfway through the test sequence. These sets were the joints in (1) the left leg, and (2) the entire upper body. As seen in the video files on the website, the quality of the filled-in data is excellent and is hardly distinguishable from the original ground truth of the test sequence. Figure 4 demonstrates the model's ability to predict the three angles of rotation of the left hip.

For the walking sequence (of length 124 frames), we compared our model's performance to nearest neighbor interpolation, a simple method where for each frame, the values on known dimensions are compared to each example in the training set to find the closest match (measured by Euclidean distance in the normalized angle space). The unknown dimensions are then filled in using the matched example. As reconstruction from our model is stochastic, we repeated the experiment 100 times and report the mean. For the missing leg, mean squared reconstruction error per joint using our model was 8.78, measured in normalized joint angle space, and summed over the 62 frames of interest. Using nearest neighbor interpolation, the error was greater: 11.68. For the missing upper body, mean squared reconstruction error per joint using our model was 20.52. Using nearest neighbor interpolation, again the error was greater: 22.20.

Figure 4: The model successfully fills in missing data using only the previous values of the joint angles (through the temporal connections) and the current angles of other joints (through the RBM connections). Shown are two of the three angles of rotation for the left hip joint (the plot of the third is similar to the first). The original data is shown on a solid line, the model's prediction is shown on a dashed line, and the results of nearest neighbor interpolation are shown on a dotted line (see a video on the website).

5 Higher level models

Once we have trained the model, we can add layers like in a Deep Belief Network [14]. The previous layer CRBM is kept, and the sequence of hidden state vectors, while driven by the data, is treated as a new kind of "fully observed" data. The next level CRBM has the same architecture as the first (though we can alter the number of its units) and is trained in the exact same way. Upper levels of the network can then model higher-order structure. This greedy procedure is justified using a variational bound [14]. A two-level model is shown in Figure 5.

We can also consider two special cases of the higher-level model. If we keep only the visible layer, and its n-th order directed connections, we have a standard AR(n) model with Gaussian noise. If we take the two-hidden-layer model and delete the first-level autoregressive connections, as well as both sets of visible-to-hidden directed connections, we have a simplified model that can be trained in 2 stages: first learning a static (iid) model of pairs or triples of time frames, then using the inferred hidden states to train a "fully-observed" sigmoid belief net that captures the temporal structure of the hidden states.

Figure 5: Higher-level models.

6 Discussion

We have introduced a generative model for human motion based on the idea that local constraints and global dynamics can be learned efficiently by a conditional Restricted Boltzmann Machine. Once trained, our models are able to efficiently capture complex non-linearities in the data without sophisticated pre-processing or dimensionality reduction.
The model has been designed with human motion in mind, but should lend itself well to other high-dimensional time series. In relatively low-dimensional or unstructured data (for example if we were to model a single isolated joint) a single-layer model might be expected to have difficulty since such cyclic time series contain several subsequences which are locally very similar but occur in different phases of the overall cycle. It would be possible to preserve the global phase information by using a much higher order model, but for higher dimensional data such as full body motion capture this is unnecessary because the whole configuration of joint angles and angular velocities never has any phase ambiguity. So the single-layer version of our model actually performs much better on higher-dimensional data. Models with more hidden layers are able to implicitly model longer-term temporal information, and thus will mitigate this effect.

We have demonstrated that our model can effectively learn different styles of motion, as well as the transitions between these styles. This differentiates our approach from PCA-based approaches which only accurately model cyclic motion, and additionally must build separate models for each type of motion. The ability of the model to transition smoothly, however, is dependent on having sufficient examples of such transitions in the training data. We plan to train on larger datasets encompassing such transitions between various styles of motion. If we augment the data with some static skeletal and identity parameters (in essence mapping a person's unique identity to a set of features), we should be able to use the same generative model for many different people, and generalize individual characteristics from one type of motion to another. Finally, our model is not limited to a single source of data. In the future, we hope to integrate low-level vision data captured at the same time as motion; we could then learn the correlations between the vision stream and the joint angles.

Acknowledgments

The first data set used in this project was obtained from mocap.cs.cmu.edu. This database was created with funding from NSF EIA-0196217. The second data set used in this project was obtained from http://people.csail.mit.edu/ehsu/work/sig05stf/. For Matlab playback of motion and generation of videos, we have used Neil Lawrence's motion capture toolbox (http://www.dcs.shef.ac.uk/~neil/mocap/).

References

[1] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, "Gradient-based learning applied to document recognition," Proceedings of the IEEE, vol. 86, pp. 2278-2324, November 1998.
[2] C. K. Liu, A. Hertzmann, and Z. Popovic, "Learning physics-based motion style with nonlinear inverse optimization," ACM Trans. Graph., vol. 24, no. 3, pp. 1071-1081, 2005.
[3] O. Arikan, D. A. Forsyth, and J. F. O'Brien, "Motion synthesis from annotations," in Proc. SIGGRAPH, 2002.
[4] M. Brand and A. Hertzmann, "Style machines," in Proc. SIGGRAPH, pp. 183-192, 2000.
[5] Y. Li, T. Wang, and H.-Y. Shum, "Motion texture: a two-level statistical model for character motion synthesis," in Proc. SIGGRAPH, pp. 465-472, 2002.
[6] R. Urtasun, P. Glardon, R. Boulic, D. Thalmann, and P. Fua, "Style-based Motion Synthesis," Computer Graphics Forum, vol. 23, no. 4, pp. 1-14, 2004.
[7] M. Welling, M. Rosen-Zvi, and G. E. Hinton, "Exponential family harmoniums with an application to information retrieval," in Proc. NIPS 17, 2005.
[8] Y. Freund and D. Haussler, "Unsupervised learning of distributions of binary vectors using 2-layer networks," in Proc. NIPS 4, 1992.
[9] G. E. Hinton, "Training products of experts by minimizing contrastive divergence," Neural Computation, vol. 14, pp. 1771-1800, Aug 2002.
[10] I. Sutskever and G. E. Hinton, "Learning multilevel distributed representations for high-dimensional sequences," Tech. Rep. UTML TR 2006-003, University of Toronto, 2006.
[11] Y. W. Teh and G. E. Hinton, "Rate-coded restricted Boltzmann machines for face recognition," in Proc. NIPS 13, 2001.
[12] E. Hsu, K. Pulli, and J. Popović, "Style translation for human motion," ACM Trans. Graph., vol. 24, no. 3, pp. 1082-1089, 2005.
[13] F. S. Grassia, "Practical parameterization of rotations using the exponential map," J. Graph. Tools, vol. 3, no. 3, pp. 29-48, 1998.
[14] G. E. Hinton, S. Osindero, and Y.-W. Teh, "A fast learning algorithm for deep belief nets," Neural Computation, vol. 18, no. 7, pp. 1527-1554, 2006.
TrueSkill™: A Bayesian Skill Rating System

Ralf Herbrich ([email protected]), Tom Minka ([email protected]), Thore Graepel ([email protected])
Microsoft Research Ltd., Cambridge, UK

Abstract

We present a new Bayesian skill rating system which can be viewed as a generalisation of the Elo system used in Chess. The new system tracks the uncertainty about player skills, explicitly models draws, can deal with any number of competing entities and can infer individual skills from team results. Inference is performed by approximate message passing on a factor graph representation of the model. We present experimental evidence on the increased accuracy and convergence speed of the system compared to Elo and report on our experience with the new rating system running in a large-scale commercial online gaming service under the name of TrueSkill.

1 Introduction

Skill ratings in competitive games and sports serve three main functions. First, they allow players to be matched with other players of similar skill leading to interesting, balanced matches. Second, the ratings can be made available to the players and to the interested public and thus stimulate interest and competition. Thirdly, ratings can be used as criteria of qualification for tournaments. With the advent of online gaming, the interest in rating systems has increased dramatically because the quality of the online experience of millions of players each day are at stake.

In 1959, Arpad Elo developed a statistical rating system for Chess, which was adopted by the World Chess Federation FIDE in 1970 [4]. The key idea behind the Elo system [2] is to model the probability of the possible game outcomes as a function of the two players' skill ratings s1 and s2. In a game each player i exhibits a performance p_i ~ N(p_i; s_i, β²), normally distributed around their skill s_i with fixed variance β². The probability that player 1 wins is given by the probability that his performance p1 exceeds the opponent's performance p2,

P(p1 > p2 | s1, s2) = Φ( (s1 − s2) / (√2 β) ),   (1)

where Φ denotes the cumulative density of a zero-mean unit-variance Gaussian. After the game, the skill ratings s1 and s2 are updated such that the observed game outcome becomes more likely and s1 + s2 = const. is maintained. Let y = +1 if player 1 wins, y = −1 if player 2 wins and y = 0 if a draw occurs. Then the resulting (linearised) Elo update is given by s1 ← s1 + yΔ, s2 ← s2 − yΔ, and

Δ = αβ√π · [ (y + 1)/2 − Φ( (s1 − s2) / (√2 β) ) ],

where αβ√π plays the role of the K factor and 0 < α < 1 determines the weighting of the new evidence versus the old estimate. Most currently used Elo variants use a logistic distribution instead of a Gaussian because it is argued to provide a better fit for Chess data. From the point of view of statistics, the Elo system addresses the problem of estimating from paired comparison data [1], with the Gaussian variant corresponding to the Thurstone Case V model and the logistic variant to the Bradley-Terry model.

In the Elo system, a player's rating is regarded as provisional as long as it is based on less than a fixed number of, say, 20 games. This problem was addressed by Mark Glickman's Bayesian rating system Glicko [5] which introduced the idea of modeling the belief about a player's skill as a Gaussian belief distribution characterised by a mean μ and a variance σ².
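A sketch of the Elo-style update above, written in the standard expected-score form S − Φ(·) with score S = (y + 1)/2 so that a draw also moves the ratings; the default values of β and α are illustrative, not taken from the paper.

```python
from math import sqrt, pi
from statistics import NormalDist

PHI = NormalDist().cdf  # standard Gaussian CDF

def elo_update(s1, s2, score, beta=25/6, alpha=0.07):
    """One linearised Elo step in the Gaussian (Thurstone) variant of Eq. 1.

    score is player 1's result: 1.0 for a win, 0.5 for a draw, 0.0 for a loss.
    """
    expected = PHI((s1 - s2) / (sqrt(2) * beta))        # P(player 1 wins), Eq. 1
    delta = alpha * beta * sqrt(pi) * (score - expected)  # K factor times surprise
    return s1 + delta, s2 - delta

# Example: an upset win by the weaker player moves both ratings noticeably.
s1, s2 = elo_update(20.0, 30.0, score=1.0)
```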
An important new application of skill rating systems are multiplayer online games that greatly benefit from the ability to create online matches in which the participating players have roughly even skills and hence enjoyable, fair and exciting game experiences. Multiplayer online games provide the following challenges:
1. Game outcomes often refer to teams of players yet a skill rating for individual players is needed for future matchmaking.
2. More than two players or teams compete such that the game outcome is a permutation of teams or players rather than just a winner and a loser.
In this paper we present a new rating system, TrueSkill, that addresses both these challenges in a principled Bayesian framework. We express the model as a factor graph (Section 2) and use approximate message passing (Section 3) to infer the marginal belief distribution over the skill of each player. In Section 4 we present experimental results on real-world data generated by Bungie Studios during the beta testing of the Xbox title Halo 2 and we report on our experience with the rating system running in the Xbox Live service.

2 Factor Graphs for Ranking

From among a population of n players {1, ..., n} in a game let k teams compete in a match. The team assignments are specified by k non-overlapping subsets A_j ⊆ {1, ..., n} of the player population, A_i ∩ A_j = ∅ if i ≠ j. The outcome r := (r_1, ..., r_k) ∈ {1, ..., k}^k is specified by a rank r_j for each team j, with r = 1 indicating the winner and with the possibility of draws when r_i = r_j. The ranks are derived from the scoring rules of the game. We model the probability P(r | s, A) of the game outcome r given the skills s of the participating players and the team assignments A := {A_1, ..., A_k}. From Bayes' rule we obtain the posterior distribution

p(s | r, A) = P(r | s, A) p(s) / P(r | A).   (2)

We assume a factorising Gaussian prior distribution, p(s) := ∏_{i=1}^{n} N(s_i; μ_i, σ_i²). Each player i is assumed to exhibit a performance p_i ~ N(p_i; s_i, β²) in the game, centred around their skill s_i with fixed variance β². The performance t_j of team j is modeled as the sum of the performances of its members, t_j := Σ_{i∈A_j} p_i. Let us reorder the teams in ascending order of rank, r_(1) ≤ r_(2) ≤ ... ≤ r_(k). Disregarding draws, the probability of a game outcome r is modeled as

P(r | {t_1, ..., t_k}) = P(t_(1) > t_(2) > ... > t_(k)),

that is, the order of performances generates the order in the game outcome. If draws are permitted the winning outcome r_(j) < r_(j+1) requires t_(j) > t_(j+1) + ε and the draw outcome r_(j) = r_(j+1) requires |t_(j) − t_(j+1)| ≤ ε, where ε > 0 is a draw margin that can be calculated from the assumed probability of draw.¹

We need to be able to report skill estimates after each game and will therefore use an online learning scheme referred to as Gaussian density filtering [8]. The posterior distribution is approximated to be Gaussian and is used as the prior distribution for the next game. If the skills are expected to vary over time, a Gaussian dynamics factor N(s_{i,t+1}; s_{i,t}, γ²) can be introduced which leads to an additive variance component of γ² in the subsequent prior.

¹The transitive relation "team 1 draws with team 2" is not modelled exactly by the relation |t1 − t2| ≤ ε, which is non-transitive: if |t1 − t2| ≤ ε and |t2 − t3| ≤ ε, then the model generates a draw among the three teams despite the possibility that |t1 − t3| > ε.
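As a didactic check of this generative model (not part of the TrueSkill inference itself), the following sketch estimates P(r | s, A) for fixed skills by Monte Carlo simulation of performances; all names and the sample count are illustrative.

```python
import numpy as np

def outcome_probability(skills, teams, ranks, beta, eps,
                        n_samples=10_000, rng=np.random.default_rng(0)):
    """Monte Carlo estimate of P(r | s, A) for fixed skills.

    skills: dict player -> s_i; teams: list of player lists (A_1..A_k);
    ranks: rank r_j per team (1 = winner, equal ranks = draw); eps: draw margin.
    """
    hits = 0
    order = np.argsort(ranks)  # teams in ascending order of rank
    for _ in range(n_samples):
        perf = {i: rng.normal(skills[i], beta) for i in skills}  # p_i ~ N(s_i, beta^2)
        t = [sum(perf[i] for i in team) for team in teams]       # team performances
        ok = True
        for a, b in zip(order[:-1], order[1:]):
            if ranks[a] < ranks[b]:
                ok = ok and (t[a] > t[b] + eps)       # strict win needs margin eps
            else:
                ok = ok and (abs(t[a] - t[b]) <= eps)  # draw needs |diff| <= eps
        hits += ok
    return hits / n_samples

# Example: the three-team game below, r = (1, 2, 2), with all skills equal.
p = outcome_probability({1: 25, 2: 25, 3: 25, 4: 25},
                        [[1], [2, 3], [4]], [1, 2, 2], beta=25/6, eps=1.0)
```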
j jt1 t2 j  ", then the model generates a draw among N (s ;  ;  ) N (s ;  ;  ) N (s ;  ;  ) N (s ;  ;  ) s1 s2 s3 s4 1 1 2 1 2 N (p ; s ; ) 1 3 2 p1 p2 3 1 3 t2 I(d1 = t1 t2 ) 2 4 p4 I(t3 = p4 ) t3 I(d2 = t2 4 t3 ) d2 2 5 I(d1 > ") skills all teams 4 6 d1 the N (p ; s ; ) I(t2 = p2 + p3 ) t1 2 4 4 2 3 p3 I(t1 = p1 ) Figure 1: 4 N (p ; s ; ) 2 2 2 3 3 N (p ; s ; ) 2 1 2 2 2 I(jd2j  ") An example TrueSkill factor graph. si for performances of There are four types of variables: pi for the performances of all and dj for the team performance dierences. of all players, players, ti for the The rst row of factors encode the (product) prior; the product of the remaining factors characterises the likelihood for the game outcome Team 1 > Team 2 = Team 3. The arrows indicate the optimal message passing schedule: First, all light arrow messages are updated from top to bottom. In the following, the schedule over the team performance (dierence) nodes are iterated in the order of the numbers. Finally, the posterior over the skills is computed by updating all the dark arrow messages from bottom to top. skills are expected to vary over time, a Gaussian dynamics factor introduced which leads to an additive variance component of 2 N (si;t ; si;t ; ) can be 2 +1 in the subsequent prior. k = 3 teams with team assignments A = f1g, A = f2; 3g = f4g. Let us further assume that team 1 is the winner and that teams 2 and 3 draw, i.e., r := (1; 2; 2). We can represent the resulting joint distribution p (s; p; tjr; A) by Let us consider a game with and 1 A3 2 the factor graph depicted in Figure 1. A factor graph is a bi-partite graph consisting of variable and factor nodes, shown in Figure 1 as gray circles and black squares, respectively. The function represented by a factor graph in our case the joint distribution p (s; p; tjr; A)is given by the product of all the (potential) functions associated with each factor. The structure of the factor graph gives information about the dependencies of the factors involved and is the basis of ecient inference algorithms. Returning to Bayes rule (2), the quantities of interest are the posterior distribution p (si jr; A) over skills given game outcome r and team associations A. The p (si jr; A) are fpi g and calculated from the joint distribution integrating out the individual performances the team performances fti g, p (sjr; A) = Z 1 1  Z 1 1 p (s; p; tjr; A) dp dt : Factor f () = I( > ") (win) 12 Vf 2 v (t,?,?) v (t,?,?) ? = 0.50 ? = 1.00 ? = 4.00 4 8 6 0 4 ?2 2 ?4 0 ?6 f () = I(j j  ") (draw) 6 ? = 0.50 ? = 1.00 ? = 4.00 10 Function Factor ?4 ?2 0 2 4 ?6 ?6 6 ?4 ?2 t ? = 0.50 ? = 1.00 ? = 4.00 4 6 ? = 0.50 ? = 1.00 ? = 4.00 1 0.8 w (t,?,?) w (t,?,?) Wf 0.8 Function 2 t 1 0.6 0.4 0.2 0 ?6 0 0.6 0.4 0.2 ?4 ?2 0 2 4 0 ?6 6 ?4 ?2 t 0 2 4 6 t Figure 2: Update rules for the the approximate marginals for dierent values of the draw margin ": For a two-team game, the parameter t represents the dierence of team perfor- mances between winner and loser. Hence, in the win column (left) negative values of indicate a surprise outcome leading to a large update. t In the draw column (right) any stark deviation of team performances is surprising and leads to a large update. 3 Approximate Message Passing The sum-product algorithm in its formulation for factor graphs [7] exploits the sparse connection structure of the graph to perform ecient inference of single-variable marginals by message passing. 
The message passing for continuous variables is characterised by the following equations (these follow directly from the distributive law):

p(v_k) = ∏_{f ∈ F_{v_k}} m_{f→v_k}(v_k),   (3)

m_{f→v_j}(v_j) = ∫ ... ∫ f(v) ∏_{i≠j} m_{v_i→f}(v_i) dv_{\j},   (4)

m_{v_k→f}(v_k) = ∏_{f̃ ∈ F_{v_k}\{f}} m_{f̃→v_k}(v_k),   (5)

where F_{v_k} denotes the set of factors connected to variable v_k and v_{\j} denotes the components of the vector v except for its j-th component. If the factor graph is acyclic and the messages can be calculated and represented exactly then each message needs to be calculated only once and the marginals p(v_k) can be calculated from the messages by virtue of (3).

As can be seen from Figure 1 the TrueSkill factor graph is in fact acyclic and the majority of messages can be represented compactly as 1-dimensional Gaussians. However, (4) shows that messages 2 and 5 in Figure 1, from the comparison factors I(· > ε) or I(|·| ≤ ε) to the performance differences d_i, are non-Gaussian; in fact, the true message would be the (non-Gaussian) factor itself. Following the Expectation Propagation algorithm [8], we approximate these messages as well as possible by approximating the marginal p(d_i) with a Gaussian p̂(d_i) having the same mean and variance as p(d_i), via moment matching. For Gaussian distributions, moment matching is known to minimise the Kullback-Leibler divergence. Then, we exploit the fact that from (3) and (5) we have

p̂(d_i) = m̂_{f→d_i}(d_i) · m_{d_i→f}(d_i)  ⟺  m̂_{f→d_i}(d_i) = p̂(d_i) / m_{d_i→f}(d_i).   (6)

Table 1 gives all the update equations necessary for performing inference in the TrueSkill factor graph.

Table 1: The update equations for the (cached) marginals p(x) and the messages m_{f→x} for all factor types of a TrueSkill factor graph: the prior factor N(x; m, v²), the Gaussian likelihood factor N(x; y, c²), the weighted sum factors I(x = a^T y) and I(y_n = a^T [y_1, ..., y_{n−1}, x]), and the comparison factors I(x > ε) and I(|x| ≤ ε). We represent Gaussians N(μ, σ²) in terms of their canonical parameters: precision π := σ^{−2}, and precision adjusted mean τ := πμ. The missing update equation for the message or the marginal follows from (6).

The top four rows of Table 1 result from standard Gaussian integrals. The bottom rule is the result of the moment matching procedure described above. The four functions are the additive and multiplicative correction terms for the mean and variance of a (doubly) truncated Gaussian and are given by (see also Figure 2):

V_{I(·>ε)}(t, ε) := N(t − ε) / Φ(t − ε),
W_{I(·>ε)}(t, ε) := V_{I(·>ε)}(t, ε) · (V_{I(·>ε)}(t, ε) + t − ε),

V_{I(|·|≤ε)}(t, ε) := (N(−ε − t) − N(ε − t)) / (Φ(ε − t) − Φ(−ε − t)),
W_{I(|·|≤ε)}(t, ε) := V²_{I(|·|≤ε)}(t, ε) + ((ε − t) N(ε − t) + (ε + t) N(ε + t)) / (Φ(ε − t) − Φ(−ε − t)).

Since the messages 2 and 5 are approximate, we need to iterate over all messages that are on the shortest path between any two approximate marginals p̂(d_i) until the approximate marginals do not change anymore. The resulting optimal message passing schedule can be found in Figure 1 (arrows and caption).
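To make the truncated-Gaussian corrections concrete, here is a sketch of the win-case v and w functions and the closed-form two-player, no-draw update they imply. Treating the opponent's message as exactly Gaussian and folding both performance noises into a single scale c is the standard two-player simplification; the function and variable names are mine, not the paper's.

```python
from math import sqrt, exp, pi, erf

def gauss_pdf(x):
    return exp(-0.5 * x * x) / sqrt(2 * pi)

def gauss_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def v_win(t, eps):
    # Additive mean correction of a Gaussian truncated at t > eps.
    return gauss_pdf(t - eps) / gauss_cdf(t - eps)

def w_win(t, eps):
    # Multiplicative variance correction; lies in (0, 1).
    v = v_win(t, eps)
    return v * (v + t - eps)

def trueskill_update_win(mu_w, sig_w, mu_l, sig_l, beta, eps):
    """Two-player TrueSkill update when the first player wins (no draw)."""
    c = sqrt(2 * beta**2 + sig_w**2 + sig_l**2)
    t = (mu_w - mu_l) / c
    mu_w_new = mu_w + sig_w**2 / c * v_win(t, eps / c)
    mu_l_new = mu_l - sig_l**2 / c * v_win(t, eps / c)
    sig_w_new = sig_w * sqrt(1 - sig_w**2 / c**2 * w_win(t, eps / c))
    sig_l_new = sig_l * sqrt(1 - sig_l**2 / c**2 * w_win(t, eps / c))
    return (mu_w_new, sig_w_new), (mu_l_new, sig_l_new)
```

Note how a surprising outcome (t well below zero, i.e. the nominal underdog won) makes v large and hence produces a large mean shift, exactly the behaviour shown in Figure 2.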
4 Experiments and Online Service

4.1 Halo 2 Beta Test

In order to assess the performance of the TrueSkill algorithm we performed experiments on the game outcome data set generated by Bungie Studios during the beta testing of the Xbox title Halo 2.² The data set consists of thousands of game outcomes for four different types of games: 8 players against each other (Free for All), 4 players vs. 4 players (Small Teams), 1 player vs. 1 player (Head to Head), and 8 players vs. 8 players (Large Teams). The draw margin ε for each factor node was set by counting the fraction of draws between teams (empirical draw probability) and relating the draw margin ε to the chance of drawing by

draw probability = Φ( ε / (√(n1 + n2) β) ) − Φ( −ε / (√(n1 + n2) β) ) = 2 Φ( ε / (√(n1 + n2) β) ) − 1,

where n1 and n2 are the number of players in each of the two teams compared by an I(· > ε) or I(|·| ≤ ε) node (see Figure 1). The performance variance β² and the dynamics variance γ² were set to the standard values (see next section). We compared the TrueSkill algorithm to Elo with a Gaussian performance distribution (1) and a K factor of α = 0.07; this corresponds to a K factor of 24 on the Elo scale which is considered a good and stable dynamics (see [4]). When we had to process a team game or a game with more than two teams we used the so-called duelling heuristic: For each player, compute the Δ's in comparison to all other players based on the team outcome of the player and every other player and perform an update with the average of the Δ's. The approximate message passing algorithm described in the last section is extremely efficient; in all our experiments the runtime of the ranking algorithm was within twice the runtime of the simple Elo update.

Predictive Performance

The following table presents the prediction error (fraction of teams that were predicted in the wrong order before the game) for both algorithms (columns 2 and 3). This measure is difficult to interpret because of the interplay of ranking and matchmaking: Depending on the (unknown) true skills of all players, the smallest achievable prediction error could be as big as 50%. In order to compensate for this latent, unknown variable, we arranged a competition between Elo and TrueSkill: We let each system predict which games it considered most tightly matched and presented them to the other algorithm. The algorithm that predicts more game outcomes correctly has a better ability to identify tight matches. For TrueSkill we used the matchmaking criterion (7) and for Elo we used the difference in Elo scores, s1 − s2.

                  ELO (full)   TrueSkill (full)   ELO (challenged)   TrueSkill (challenged)
Free for All        34.92%         30.82%             38.30%               35.64%
Small Teams         32.14%         32.44%             42.55%               37.17%
Head to Head        33.24%         38.15%             40.57%               30.83%
Large Teams         39.49%         35.23%             44.12%               29.94%

It can be seen from columns 4 and 5 of this table that TrueSkill is significantly better at predicting the tight matches (the challenge set was always 20% of the total number of games in each game mode).

²Available for download at http://research.microsoft.com/mlp/apg/downloads.htm
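The draw-margin calibration at the start of this subsection inverts directly; a minimal sketch, with the β value and the 10% example draw rate purely illustrative:

```python
from math import sqrt
from statistics import NormalDist

def draw_margin(draw_probability, beta, n1, n2):
    """Solve 2*Phi(eps / (sqrt(n1 + n2) * beta)) - 1 = draw_probability for eps."""
    return NormalDist().inv_cdf((draw_probability + 1.0) / 2.0) * sqrt(n1 + n2) * beta

# Example: a 10% empirical draw rate between two single players (Head to Head).
eps = draw_margin(0.10, beta=25/6, n1=1, n2=1)
```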
Match Quality  One of the main applications of a rating system is to be able to match players of similar skill. In order to compare the ability of Elo and TrueSkill on this task, we sorted the games based on the match quality assigned by both systems to each game. If the match was truly tight then it would be very likely to observe a draw. Thus, we plot the fraction of draws (out of all possible draws) accumulating over the match quality order assigned by each system. In the graph on the right (x-axis: % of games; y-axis: % of pairwise draws; curves for the Free for All, Small Teams, and Head to Head modes under Elo and TrueSkill) we see that TrueSkill is significantly better than Elo for both the Free for All and Head to Head game modes but fails in Small Teams. This is possibly due to the violation of the additive team performance model, as most games in this mode are Capture-the-Flag games.

Win Probability  The perceived quality of a rating system for players is in terms of their winning ratio: if the winning ratio is high then the player was erroneously assigned too weak opposition by the ranking system (and vice versa). In a second experiment we processed the Halo 2 dataset but rejected games that did not meet a certain match quality threshold. For the games thus selected, we computed the winning ratio of each player and, depending on the minimal number of games played by each player, measured the average deviation of the winning probability from 50% (the optimal winning ratio). The resulting plot (x-axis: minimal number of games; y-axis: average deviation of win probability from 50%; Head to Head game mode) shows that with TrueSkill even players with very few games got mostly fair matches (with a winning probability within 35% to 65%).

Convergence Properties  Finally, we plotted two exemplary convergence trajectories for two of the highest rated players in the Free for All game mode (solid line: TrueSkill; dashed line: Elo). As can be seen, TrueSkill automatically chooses the correct learning rate whereas Elo only slowly converges to the target skill. In fact, TrueSkill comes close to the information theoretic limit of n log(n) bits to encode a ranking of n players. For 8 player games, the information theoretic limit is log(n)/log(8) ≈ 5 games per player on average, and the observed convergence for these two players is ≈ 10 games!

4.2 TrueSkill in Xbox 360 Live
Xbox Live is Microsoft's console online gaming service. It lets players play together across the world in hundreds of different titles. As of September 2005, Xbox Live had over 2 million subscribed users who had accrued over 1.3 billion hours on the service. The new and improved Xbox 360 Live service offers automatic player rating and matchmaking using the TrueSkill algorithm. The system processes hundreds of thousands of games per day, making it one of the largest applications of Bayesian inference to date. In Xbox Live we use a scale given by a prior μ₀ = 25 and σ₀² = (25/3)², corresponding to a probability for positive skills of approximately 99%. The variance of performance is given by β² = (σ₀/2)² and the dynamics variance is chosen to be τ² = (σ₀/100)². The TrueSkill skill of a player is currently displayed as a conservative skill estimate given by the lower 1% quantile, μᵢ − 3σᵢ. This choice ensures that the top of the leaderboards (a listing of all players according to μ − 3σ) is only populated by players that are highly skilled with high certainty, having worked their way up from μ₀ − 3σ₀. Pairwise matchmaking of players is performed using a match quality criterion derived as the draw probability relative to the highest possible draw probability in the limit ε → 0,

    q_draw(β², μᵢ, μⱼ, σᵢ, σⱼ) := √( 2β² / (2β² + σᵢ² + σⱼ²) ) · exp( −(μᵢ − μⱼ)² / (2 (2β² + σᵢ² + σⱼ²)) ).   (7)

Note that the matchmaking process can be viewed as a process of sequential experimental design [3]. Since the quality of a match is determined by the unpredictability of its outcome, the goals of matchmaking and finding the most informative matches are aligned!
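Criterion (7) transcribes directly into code; a sketch (ours, not the authors' code), where the example values are just the Xbox Live defaults quoted above:

    import math

    def match_quality(beta, mu_i, sigma_i, mu_j, sigma_j):
        # draw-probability based match quality criterion (7)
        denom = 2.0 * beta ** 2 + sigma_i ** 2 + sigma_j ** 2
        return math.sqrt(2.0 * beta ** 2 / denom) * \
               math.exp(-((mu_i - mu_j) ** 2) / (2.0 * denom))

    # two unrated players under the Xbox Live scale (mu0 = 25, sigma0 = 25/3,
    # beta = sigma0 / 2); q is about 0.447 here
    q = match_quality(25.0 / 6.0, 25.0, 25.0 / 3.0, 25.0, 25.0 / 3.0)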
As a fascinating by-product we have the opportunity to study TrueSkill in action with player populations of hundreds of thousands of players. While we are only just beginning to analyse the vast amount of resulting data, we have already made some interesting observations.
1. Games differ in the number of effective skill levels. Games of chance (e.g., single game Backgammon or UNO) have a narrow skill distribution while games of skill (e.g., semi-realistic racing games) have a wide skill distribution.
2. Matchmaking and skill display result in a feedback loop back to the players, who often view their skill estimate as a reward or punishment for performance. Some players try to protect or boost their skill rating by either stopping to play, by carefully choosing their opponents, or by cheating.
3. The total skill distribution is shifted to below the prior distribution if players new to the system consistently lose their first few games. When a skill reset was initiated, we found that the effect disappeared with tighter matchmaking enforced.

5 Conclusion
TrueSkill is a globally deployed Bayesian skill rating algorithm based on approximate message passing in factor graphs. It has many theoretical and practical advantages over the Elo system and has been demonstrated to work well in practice. While we specifically focused on the TrueSkill algorithm, many more interesting models can be developed within the factor graph framework presented here. In particular, the factor graph formulation is applicable to the family of constraint classification models [6] that encompass a wide range of multiclass and ranking problems. Also, instead of ranking individual entities one can use feature vectors to build a ranking function, e.g., for web pages represented as bags-of-words. Finally, we are planning to run a full time-independent EP analysis across chess games to obtain TrueSkill ratings for chess masters of all times.

Acknowledgements
We would like to thank Patrick O'Kelley, David Shaw and Chris Butcher for interesting discussions. We also thank Bungie Studios for providing the data.

References
[1] H. A. David. The Method of Paired Comparisons. Charles Griffin and Company, London, 1969.
[2] A. E. Elo. The rating of chess players: Past and present. Arco Publishing, New York, 1978.
[3] V. V. Fedorov. Theory of optimal experiments. Academic Press, New York, 1972.
[4] M. E. Glickman. A comprehensive guide to chess ratings. Amer. Chess Journal, 3:59-102, 1995.
[5] M. E. Glickman. Parameter estimation in large dynamic paired comparison experiments. Applied Statistics, 48:377-394, 1999.
[6] S. Har-Peled, D. Roth, and D. Zimak. Constraint classification: A new approach to multiclass classification and ranking. In NIPS 15, pages 785-792, 2002.
[7] F. R. Kschischang, B. Frey, and H.-A. Loeliger. Factor graphs and the sum-product algorithm. IEEE Trans. Inform. Theory, 47(2):498-519, 2001.
[8] T. Minka. A family of algorithms for approximate Bayesian inference. PhD thesis, MIT, 2001.
A Framework for the Cooperation of Learning Algorithms

Leon Bottou    Patrick Gallinari
Laboratoire de Recherche en Informatique
Universite de Paris XI
91405 Orsay Cedex, France

Abstract
We introduce a framework for training architectures composed of several modules. This framework, which uses a statistical formulation of learning systems, provides a unique formalism for describing many classical connectionist algorithms as well as complex systems where several algorithms interact. It allows the design of hybrid systems which combine the advantages of connectionist algorithms with those of other learning algorithms.

1 INTRODUCTION
Many recent achievements in the connectionist area have been carried out by designing systems where different algorithms interact. For example (Bourlard & Morgan, 1991) have mixed a Multi-Layer Perceptron (MLP) with a Dynamic Programming algorithm. Another impressive application (Le Cun, Boser & al., 1990) uses a very complex multilayer architecture, followed by some statistical decision process. Also, in speech or image recognition systems, input signals are sequentially processed through different modules. Modular systems are the most promising way to achieve such complex tasks. They can be built using simple components and therefore can be easily modified or extended; they also allow structural a priori knowledge about the task decomposition to be incorporated into their architecture. Of course, this is also true for connectionism, and important progress in this field could be achieved if we were able to train multi-module architectures.

In this paper, we introduce a formal framework for designing and training such cooperative systems. It provides a unique formalism for describing both the different modules and the global system. We show that it is suitable for many connectionist algorithms, which allows them to cooperate in an optimal way according to the goal of learning. It also allows the training of hybrid systems where connectionist and classical algorithms interact. Our formulation is based on a probabilistic approach to the problem of learning, which is described in section 2. One of the advantages of this approach is to provide a formal definition of the goal of learning. In section 3, we introduce modular architectures where each module can be described using this framework, and we derive explicit formulas for training the global system through a stochastic gradient descent algorithm. Section 4 is devoted to examples, including the case of hybrid algorithms combining MLP and Learning Vector Quantization (Bollivier, Gallinari & Thiria, 1990).

2 LEARNING SYSTEMS
The probabilistic formulation of the problem of learning has been extensively studied for three decades (Tsypkin 1971), and applied to control, pattern recognition and adaptive signal processing. We recall here the main ideas and refer to (Tsypkin 1971) for a detailed presentation.

2.1 EXPECTED COST
Let x be an instance of the concept to learn. In the case of a pattern recognition problem for example, x would be a pair (pattern, class). The concept is mathematically defined by an unknown probability density function p(x) which measures the likelihood of instance x. We shall use a system parameterized by w to perform some task that depends on p(x). Given an example x, we can define a local cost, J(x,w), that measures how well our system behaves on that example.
For instance, for classification J would be zero if the system puts a pattern in the correct class, and positive in case of misclassification. Learning consists in finding a parameter w* that optimises some functional of the model parameters. For instance, one would like to minimize the expected cost (1):

    C(w) = ∫ J(x,w) p(x) dx.   (1)

The expected cost cannot be explicitly computed, because the density p(x) is unknown. Our only knowledge of the process comes from a series of observations {x₁ ... xₙ} drawn from the unknown density p(x). Therefore, the quality of our system can only be measured through the realisations J(x,w) of the local cost function for the different observations.

2.2 STOCHASTIC GRADIENT DESCENT
Gradient descent algorithms are the simplest minimization algorithms. We cannot, however, compute the gradient of the expected cost (1), because p(x) is unknown. Estimating these derivatives on a training set {x₁ ... xₙ} gives the gradient algorithm (2), where ∇J denotes the gradient of J(x,w) with respect to w, and εₜ a small positive constant, the "learning rate":

    wₜ₊₁ = wₜ − εₜ (1/n) Σ_{i=1}^n ∇J(xᵢ, wₜ).   (2)

The stochastic gradient descent algorithm (3) is an alternative to algorithm (2). At each iteration, an example xₜ is drawn at random, and a new value of w is computed:

    wₜ₊₁ = wₜ − εₜ ∇J(xₜ, wₜ).   (3)

Algorithm (3) is faster and more reliable than (2); it is the only solution for training adaptive systems like neural networks (NN). Such stochastic approximations have been extensively studied in adaptive signal processing (Benveniste, Metivier & Priouret, 1987; Ljung & Söderström, 1983). Under certain conditions, algorithm (3) converges almost surely (Bottou, 1991; White, 1991) and allows the system to reach an optimal state.
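As an illustration, a minimal sketch (ours, not from the paper) of update (3) with a constant learning rate; lms_grad is a hypothetical local-cost gradient for a least-mean-squares task:

    import random

    def stochastic_gradient_descent(examples, w0, grad_J, lr, steps):
        # update (3): draw one example x_t at random, step along -grad J(x_t, w)
        w = list(w0)
        for _ in range(steps):
            x = random.choice(examples)
            g = grad_J(x, w)
            w = [wi - lr * gi for wi, gi in zip(w, g)]
        return w

    # illustrative local cost: J((x, y), w) = (w . x - y)^2
    def lms_grad(example, w):
        x, y = example
        err = sum(wi * xi for wi, xi in zip(w, x)) - y
        return [2.0 * err * xi for xi in x]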
3 MODULAR LEARNING SYSTEMS
Most often, when the goal of learning is complex, it can be achieved more easily by using a decomposition of the global task into several simpler subtasks, which for instance reflect some a priori knowledge about the structure of the problem. One can use this decomposition to build modular architectures, where each module corresponds to one of the subtasks. Within this framework, we will use the expected risk (1) as the goal of learning. The problem now is to change the analytical formulation of the functional (1) so as to introduce the modular decomposition of the global task. In (1), the analytic expression of the local cost J(x,w) has two meanings: it describes a parametric relationship between the inputs and the outputs of the system, and it measures the quality of the system. To introduce the decomposition, one may write this local cost J(x,w) as the composition of several functions. One of them will take into account the local error and therefore measure the quality of the system; the others will correspond to the decomposition of the parametric relationship between the inputs and the outputs of the system (Figure 1). Each of the modules will therefore receive some inputs from other modules or the external world and produce some outputs which will be sent to other modules.

Figure 1: A modular system.

In classical systems, these modules correspond to well defined processing stages like e.g. signal processing, filtering, feature extraction, classification. They are trained sequentially and then linked together to build a complete processing system which takes some inputs (e.g. raw signals) and produces some outputs (e.g. classes). Neither the assumed decomposition nor the behavior of the different modules is guaranteed to optimally contribute to the global goal of learning. We will show in the following that it is possible to optimally train such systems.

3.1 TRAINING MODULAR SYSTEMS
Each function in the above composition defines a local processing stage or module whose outputs are defined by a parametric function of its inputs:

    ∀ j ∈ Y⁻¹(n),  y_j = f_j( (x_k)_{k ∈ X⁻¹(n)}, (w_i)_{i ∈ W⁻¹(n)} ).   (4)

Y⁻¹(n) (resp. X⁻¹(n) and W⁻¹(n)) denotes the set of subscripts associated to the outputs y (resp. inputs x and parameters w) of module n. Conversely, output y_j (resp. input x_k and parameter w_i) belongs to module Y(j) (resp. X(k) and W(i)). Modules are linked so as to build a feed-forward topology which is expressed by a function φ:

    ∀ k,  x_k = y_{φ(k)}.   (5)

We shall consider that the first module only feeds the system with examples and that the last module only computes y_last = J(x,w). Following (Le Cun, 1988), we can compute the derivatives of J with a Lagrangian method. Let α and β be the Lagrange coefficients for constraints (4) and (5):

    L = J − Σ_k β_k ( x_k − y_{φ(k)} ) − Σ_j α_j ( y_j − f_j( (x_k)_{k ∈ X⁻¹Y(j)}, (w_i)_{i ∈ W⁻¹Y(j)} ) ).   (6)

By equating the derivatives of L with respect to x and y to zero, we get recursive formulas for computing α and β in a single backward pass along the acyclic graph φ:

    α_last = 1,   α_j = Σ_{k ∈ φ⁻¹(j)} β_k,   β_k = Σ_{j ∈ Y⁻¹X(k)} α_j ∂f_j/∂x_k.   (7)

Then, the derivatives of J with respect to the weights are:

    ∂J/∂w_i = ∂L/∂w_i = Σ_{j ∈ Y⁻¹W(i)} α_j ∂f_j/∂w_i.   (8)

Once we have computed the derivatives of the local cost J(x,w), we can apply the stochastic gradient descent algorithm (3) for the minimization of the expected cost C(w). We shall say that each module is defined by the equations in (7) and (8) that characterize its behavior. These equations are:
- a forward equation (F):  y_j = f_j( (x_k)_{k ∈ X⁻¹(n)}, (w_i)_{i ∈ W⁻¹(n)} )
- a backward equation (B):  β_k = Σ_{j ∈ Y⁻¹X(k)} α_j ∂f_j/∂x_k
- a gradient equation (G):  g_i = ∂J/∂w_i = Σ_{j ∈ Y⁻¹W(i)} α_j ∂f_j/∂w_i

The remaining equations do not depend on the nature of the modules. They describe how modules interact during training. Like back-propagation, they address the credit assignment problem between modules by globally minimizing a single cost function. Training such a complex system actually consists in cooperatively training its components.

4 EXAMPLES
Most learning algorithms, as well as new algorithms, may be expressed as modular learning systems. Here are some simple examples of modules and systems.

4.1 LINEAR AND QUASI-LINEAR SYSTEMS

    MODULE             SYMBOL      FORWARD                             BACKWARD                          GRADIENT
    Matrix product     Wx          y_i = Σ_k w_ik x_k                  β_k = Σ_i α_i w_ik                g_ik = α_i x_k
    Mean square error  MSE         J = Σ_k (d_k − x_k)²                β_k = −2 (d_k − x_k)
    Perceptron error   Perceptron  J = −Σ_k (d_k − 1_{R+}(x_k)) x_k    β_k = −(d_k − 1_{R+}(x_k))
    Sigmoid            sigmoid     y_k = f(x_k)                        β_k = f′(x_k) α_k

A few basic modules are defined in the above table. Figure 2 gives examples of linear and quasi-linear algorithms derived by combining these modules.

Figure 2: An Adaline, a Perceptron, and a 2-Layer Perceptron (each built by chaining Wx, sigmoid, MSE and Perceptron-error modules).

Some MLP architectures, Time Delay Networks for instance, use local connections and shared weights. Such complex architectures may be constructed by defining either quasi-linear unit modules or complex matrix operation modules like convolutions.
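As a concrete illustration of the (F), (B), and (G) equations, here is a minimal Python sketch (ours; the paper gives no code) of two of the modules in the table, chained into an Adaline and trained with one stochastic gradient step:

    import numpy as np

    class MatrixProduct:
        def __init__(self, W):
            self.W = np.asarray(W, dtype=float)
        def forward(self, x):                    # (F) y_i = sum_k w_ik x_k
            self.x = np.asarray(x, dtype=float)
            return self.W @ self.x
        def backward(self, alpha):               # (B) beta_k = sum_i alpha_i w_ik
            self.grad = np.outer(alpha, self.x)  # (G) g_ik = alpha_i x_k
            return self.W.T @ alpha

    class MSE:
        def forward(self, x, d):                 # (F) J = sum_k (d_k - x_k)^2
            self.x, self.d = np.asarray(x, float), np.asarray(d, float)
            return float(np.sum((self.d - self.x) ** 2))
        def backward(self):                      # (B) beta_k = -2 (d_k - x_k)
            return -2.0 * (self.d - self.x)

    # one stochastic gradient step on an Adaline (matrix product + MSE)
    lin, cost = MatrixProduct(np.zeros((1, 3))), MSE()
    x, d, lr = np.array([1.0, 0.5, -1.0]), np.array([1.0]), 0.1
    J = cost.forward(lin.forward(x), d)
    lin.backward(cost.backward())   # the cost's beta is the linear module's alpha
    lin.W -= lr * lin.grad          # gradient descent on the weights

The key design point is that each module only knows its own (F), (B), and (G) rules; the chaining of β into α is what lets arbitrary module graphs cooperate on a single cost.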
The latter solution leads to more efficient implementations. Figure 3 gives an example of a convolution module, composed of several matrix product modules.

Figure 3: A convolution module, composed of several matrix product modules.

4.2 EUCLIDIAN DISTANCE BASED ALGORITHMS
A wide class of learning systems are based on the measure of euclidian distances. Again, defining an euclidian distance module and some adequate cost functions allows for handling most euclidian distance based algorithms. Here are some examples:

    MODULE              SYMBOL  FORWARD                    BACKWARD                          GRADIENT
    Euclidian distance  (x−w)²  y_j = Σ_k (w_jk − x_k)²    β_k = −2 Σ_j α_j (w_jk − x_k)     g_jk = 2 α_j (w_jk − x_k)
    Minimum             Min     J = x_{k₀} = Min{x_k}      β_{k₀} = 1,  β_{k≠k₀} = 0
    LVQ1 error          LVQ1    If the nearest reference x_{k₀} is associated to the correct
                                class: J = x_{k₀} = Min{x_k}, β_{k₀} = 1, β_{k≠k₀} = 0;
                                else: J = −x_{k₀} = −Min{x_k}, β_{k₀} = −1, β_{k≠k₀} = 0.

Combining an euclidian distance module with a "minimum" error module gives a K-means algorithm; combining it with a LVQ1 error module gives the LVQ1 algorithm (Figure 4).

Figure 4: K-Means and Learning Vector Quantization.

4.3 HYBRID ALGORITHMS
Hybrid algorithms which may combine classical and connectionist learning algorithms are easily defined by chaining appropriate modules. Figure 5, for instance, depicts an algorithm combining an MLP layer and LVQ1. This algorithm has been described and empirically compared to other pattern recognition algorithms in (Bollivier, Gallinari & Thiria, 1990).

Figure 5: A hybrid algorithm combining an MLP and LVQ.

Cooperative training gives a framework and a possible implementation for such algorithms. Nevertheless, there are still specific problems (e.g. convergence, initialization) which require a careful study. More complex hybrid systems, including combinations of Markov Models and Time Delay Networks, have been described within this framework in (Bottou, 1991).

5 CONCLUSION
Cooperative training of modular systems provides a unified view of many learning algorithms, as well as hybrid systems which combine classical or connectionist algorithms. Our formalism provides a way to define specific modules and to combine them into a cooperative system. This allows the design and implementation of complex learning systems which eventually incorporate structural a priori knowledge about the task.

Acknowledgements
During this work, L.B. was supported by DRET grant no. 87/808/19.

References
Benveniste A., Metivier M., Priouret P. (1987) Algorithmes adaptatifs et approximations stochastiques, Masson.
Bollivier M. de, Gallinari P. & Thiria S. (1990) Cooperation of neural nets for robust classification, Proceedings of IJCNN 90, San Diego, vol. 1, 113-120.
Bottou L. (1991) Une approche théorique de l'apprentissage connexionniste; applications à la reconnaissance de la parole. PhD Thesis, Université de Paris XI.
Bourlard H., Morgan N. (1991) A Continuous Speech Recognition System Embedding MLP into HMM. In Touretzky D.S., Lippmann R. (eds.) Advances in Neural Information Processing Systems 3 (this volume). Morgan-Kaufmann.
Le Cun Y. (1988) A theoretical framework for back-propagation. In Touretzky D., Hinton G. & Sejnowski T. (eds.) Proceedings of the 1988 Connectionist Models Summer School, 21-28. Morgan Kaufmann.
Le Cun Y., Boser B., & al. (1990) Handwritten Digit Recognition with a Back-Propagation Network. In D. Touretzky (ed.) Advances in Neural Information Processing Systems 2, 396-404. Morgan Kaufmann.
Ljung L. & Söderström T. (1983) Theory and Practice of Recursive Identification. MIT Press.
Tsypkin Ya. (1971) Adaptation and Learning in Automatic Systems. Mathematics in Science and Engineering, vol. 73. Academic Press.
White H. (1991) An Overview of Representation and Convergence Results for Multilayer Feed-forward Networks. In Touretzky D.S., Lippmann R. (eds.) Advances in Neural Information Processing Systems 3 (this volume). Morgan-Kaufmann.
PG-means: learning the number of clusters in data

Yu Feng    Greg Hamerly
Computer Science Department
Baylor University
Waco, Texas 76798
{yu feng, greg hamerly}@baylor.edu

Abstract
We present a novel algorithm called PG-means which is able to learn the number of clusters in a classical Gaussian mixture model. Our method is robust and efficient; it uses statistical hypothesis tests on one-dimensional projections of the data and model to determine if the examples are well represented by the model. In so doing, we are applying a statistical test for the entire model at once, not just on a per-cluster basis. We show that our method works well in difficult cases such as non-Gaussian data, overlapping clusters, eccentric clusters, high dimension, and many true clusters. Further, our new method provides a much more stable estimate of the number of clusters than existing methods.

1 Introduction
The task of data clustering is important in many fields such as artificial intelligence, data mining, data compression, computer vision, and others. Many different clustering algorithms have been developed. However, most of them require that the user know the number of clusters (k) beforehand, while an appropriate value for k is not always clear. It is best to choose k based on prior knowledge about the data, but this information is often not available. Without prior knowledge it can be especially difficult to choose k when the data have high dimension, making exploratory data analysis difficult.

In this paper, we present an algorithm called PG-means (PG stands for projected Gaussian) which is able to discover an appropriate number of Gaussian clusters and their locations and orientations. Our method is a wrapper around the standard and widely used Gaussian mixture model. The paper's primary contribution is a novel method of determining if a whole mixture model fits its data well, based on projections and statistical tests. We show that the new approach works well not only in simple cases in which the clusters are well separated, but also in situations where the clusters are overlapped, eccentric, in high dimension, and even non-Gaussian. We show that where some other methods tend to severely overfit, our method does not, and that our method is comparable to but much faster than a recent variational Bayes-based approach for learning k.

2 Related work
Several algorithms have been proposed to determine k automatically. Most of these algorithms wrap around either k-means or Expectation-Maximization for fixed k. As they proceed, they use splitting or merging rules to increase or decrease k until a proper value is reached.

Pelleg and Moore [9] proposed the X-means algorithm, which is a regularization framework for learning k with k-means. This algorithm tries many values for k and obtains a model for each k value. Then X-means uses the Bayesian Information Criterion (BIC) to score each model [5, 12], and chooses the model with the highest BIC score. Besides the BIC, other scoring criteria could also be applied, such as the Akaike Information Criterion [1] or Minimum Description Length [10]. One drawback of the X-means algorithm is that the cluster covariances are all assumed to be spherical and of the same width. This can cause X-means to overfit when it encounters data that arise from non-spherical clusters.

Hamerly and Elkan [4] proposed the G-means algorithm, a wrapper around the k-means algorithm.
G-means uses projection and a statistical test for the hypothesis that the data in a cluster come from a Gaussian distribution. The algorithm grows k starting with a small number of centers. It applies a statistical test to each cluster, and those which are not accepted as Gaussian are split into two clusters. Interleaved with k-means, this procedure repeats until every cluster's data are accepted as Gaussian. While this method does not assume spherical clusters and works well if the true clusters are well-separated, it has difficulties when true clusters overlap, since the hard assignment of k-means can clip data into subsets that look non-Gaussian.

Sand and Moore [11] proposed an approach based on repairing faults in a Gaussian mixture model. Their approach modifies the learned model at the regions where the residual is large between the model's predicted density and the empirical density. Each modification adds or removes a cluster center. They use a hill-climbing algorithm to seek a model which maximizes a model fitness scoring function. However, calculating the empirical density and comparing it to the model density is difficult, especially in high dimension.

Tibshirani et al. [13] proposed the Gap statistic, which compares the likelihood of a learned model with the distribution of the likelihood of models trained on data drawn from a null distribution. Our experience has shown that this method works well for finding a small number of clusters, but has difficulty as the true k increases.

Welling and Kurihara [15] proposed Bayesian k-means, which uses Maximization-Expectation (ME) to learn a mixture model. ME maximizes over the hidden variables (assignment of examples to clusters), and computes an expectation over model parameters (center locations and covariances). It is a special case of variational Bayesian methods. Bayesian k-means works well but is slower than our method.

None of these prior approaches perform well in all situations; they tend to overfit, underfit, or are too computationally costly. These issues form the motivation for our new approach.

3 Methodology
Our approach is called PG-means, where PG stands for projected Gaussian and refers to the fact that the method applies projections to the clustering model as well as the data, before performing each hypothesis test for model fitness. PG-means uses the standard Gaussian mixture model with Expectation-Maximization training, but any underlying algorithm for training a Gaussian mixture might be used. Our algorithm starts with a simple model and increases k by one at each iteration until it finds a model that fits the data well. Each iteration of PG-means uses the EM algorithm to learn a model containing k centers. Each time EM learning converges, PG-means projects both the dataset and the learned model to one dimension, and then applies the Kolmogorov-Smirnov (KS) test to determine whether the projected model fits the projected dataset. PG-means repeats this projection and test step several times for a single learned model. If any test rejects the null hypothesis that the data follows the model's distribution, then it adds one cluster and starts again with EM learning. If every test accepts the null hypothesis for a given model, then the algorithm terminates. Algorithm 1 describes the algorithm more formally.

Algorithm 1 PG-means (dataset X, confidence α, number of projections p)
1: Let k ← 1. Initialize the cluster with the mean and covariance of X.
2: for i = 1 ... p do
3:   Project X and the model to one dimension with the same projection.
4:   Use the KS test at significance level α to test if the projected model fits the projected dataset.
5:   If the test rejects the null hypothesis, then break out of the loop.
6: end for
7: if any test rejected the null hypothesis then
8:   for i = 1 ... 10 do
9:     Initialize k + 1 clusters as the k previously learned plus one new cluster.
10:    Run EM on the k + 1 clusters.
11:  end for
12:  Retain the model of k + 1 clusters with the best likelihood.
13:  Let k ← k + 1, and go to step 2.
14: end if
15: Every test accepts the null hypothesis; stop and return the model.

When adding a new cluster PG-means preserves the k clusters it has learned and adds a new cluster. This preservation helps EM converge more quickly on the new model. To find the best new model, PG-means runs EM 10 times each time it adds a cluster, with a different initial location for the new cluster. The mean of each new cluster is chosen from a set of randomly chosen examples, and also from points with low model-assigned probability density. The initial covariance of the new cluster is based on the average of the existing clusters' covariances, and the new cluster prior is assigned 1/k and all priors are re-normalized. More than 10 EM applications could be used, as well as deterministic annealing [14], to ensure finding the best new model. In our tests, deterministic annealing did not improve the results of PG-means.
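As one plausible rendering (ours, not the authors' MATLAB implementation) of the cluster-addition step just described: the new mean is drawn from the examples with a bias toward points of low model density (one reading of the text above), the new covariance is the average of the existing ones, and the new prior is 1/k before renormalization:

    import numpy as np

    def add_cluster(means, covs, weights, X, log_density):
        # means: (k, d); covs: (k, d, d); weights: (k,); log_density: model
        # log-density at each example. Returns parameters for k + 1 clusters.
        k = len(weights)
        # bias the choice of the new mean toward poorly explained points
        # (our interpretation of "points with low model-assigned density")
        p = np.exp(-(log_density - log_density.max()))
        p /= p.sum()
        new_mean = X[np.random.choice(len(X), p=p)]
        new_cov = covs.mean(axis=0)            # average of existing covariances
        means = np.vstack([means, new_mean[None]])
        covs = np.concatenate([covs, new_cov[None]], axis=0)
        weights = np.append(weights, 1.0 / k)  # new prior 1/k, then renormalize
        return means, covs, weights / weights.sum()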
To find the best new model, PG-means runs EM 10 times each time it adds a cluster with a different initial location for the new cluster. The mean of each new cluster is chosen from a set of randomly chosen examples, and also points with low model-assigned probability density. The initial covariance of the new cluster is based on the average of the existing clusters? covariances, and the new cluster prior is assigned 1/k and all priors are re-normalized. More than 10 EM applications could be used, as well as deterministic annealing [14], to ensure finding the best new model. In our tests, deterministic annealing did not improve the results of PG-means. As stated earlier, any training algorithm (not just EM) may be Algorithm 1 PG-means (dataset X, confidence ?, number of projections p) 1: 2: 3: 4: 5: 6: 7: 8: 9: 10: 11: 12: 13: 14: 15: Let k ? 1. Initialize the cluster with the mean and covariance of X. for i = 1 . . . p do Project X and the model to one dimension with the same projection. Use the KS test at significance level ? to test if the projected model fits the projected dataset. If the test rejects the null hypothesis, then break out of the loop. end for if any test rejected the null hypothesis then for i = 1 . . . 10 do Initialize k + 1 clusters as the k previously learned plus one new cluster. Run EM on the k + 1 clusters. end for Retain the model of k + 1 clusters with the best likelihood. Let k ? k + 1, and go to step 2. end if Every test accepts the null hypothesis; stop and return the model. used to fit a particular set of k Gaussian models. For example, one might use k-means if more speed is desired. 3.1 Projection of the model and the dataset PG-means is novel because it applies projection to the learned model as well as to the dataset prior to testing for model fitness. There are several reasons to project both the examples and the model. First, a mixture of Gaussians remains a mixture of Gaussians after being linearly projected. Second, there are many effective and efficient tests for model fitness in one dimension, but in higher dimensions such testing is more difficult. Assume some data X is sampled from a single Gaussian cluster with distribution X ? N (?, ?) in d dimensions. So ? = E[X] is the d ? 1 mean vector and ? = Cov[X] is the d ? d covariance matrix. Given a d ? 1 projection vector P of unit length (||P || = 1), we can project X along P as X 0 = P T X. Then X 0 ? N (?0 , ? 2 ), where ?0 = P T ? and ? 2 = P T ?P . We can project each cluster model to obtain a one-dimensional projection of an entire mixture along P . Then we wish to test whether the projected model fits the projected data. The G-means and X-means algorithms both perform statistical tests for each cluster individually. This makes sense because each algorithm is a wrapper around k-means, and k-means uses hard assignment (each example has membership in only one cluster). However, this approach is problematic when clusters overlap, since the hard assignment results in ?clipped? clusters, making them appear very non-Gaussian. PG-means tests all clusters and all data at once. Then if two true clusters overlap, the additive probability of the learned Gaussians representing those clusters will correctly model the increased density in the overlapping region. 3.2 The Kolmogorov-Smirnov test and critical values After projection, PG-means uses the univariate Kolmogorov-Smirnov [7] test for model fitness. The KS test statistic is D = maxX |F (X) ? S(X)| ? 
3.2 The Kolmogorov-Smirnov test and critical values
After projection, PG-means uses the univariate Kolmogorov-Smirnov [7] test for model fitness. The KS test statistic is D = max_X |F(X) − S(X)|, the maximum absolute difference between the true CDF F(X) and the sample CDF S(X). The KS test is only applicable if F(X) is fully specified; however, PG-means estimates the model with EM, so F(X) cannot be specified a priori. The best we can do is use the parameter estimates, but this will cause us to accept the model too readily. In other words, the probability of a Type I error will be too low and PG-means will tend to choose models with too few clusters. Lilliefors [6] gave a table of smaller critical values for the KS test which correct for estimated parameters of a single univariate Gaussian. These values come from Monte Carlo calculations. Along this vein, we create our own test critical values for a mixture of univariate Gaussians.

To generate the critical values for the KS test statistic, we use the projected one-dimensional model that has been learned to generate many different datasets, and then measure the KS test statistic for each dataset. Then we find the KS test statistic that is in the α range we desire, which is the critical value we want. Fortunately, this can be done efficiently and does not dominate the running time of our algorithm. It is much more efficient than if we were to generate datasets from the full-dimensional model and project these to obtain the statistic distribution, yet the two approaches are equivalent. Further optimization is possible when we follow Lilliefors' observation that the critical value decreases approximately as 1/√n for sufficiently large n, which we have also observed in our simulations with mixtures of Gaussians. Therefore, we can use Monte Carlo simulations with n₀ ≪ n points, and scale the chosen critical value by √(n₀/n). A more accurate scaling given by Dallal and Wilkinson [2] did not offer additional benefit in our tests. We use at most n₀ = 3/α, which is 3000 points for α = 0.001. The Monte Carlo simulations can be easily parallelized, and our implementation uses two computational threads.
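Continuing the sketch above (same assumptions; the number of Monte Carlo trials is our own knob, and 1000 trials gives only a rough estimate of a 1 − α = 0.999 quantile), the critical-value simulation with the √(n₀/n) scaling might look like:

    import numpy as np
    from scipy.stats import norm, kstest

    def ks_critical_value(mu, sd, w, n, alpha=0.001, trials=1000):
        # Monte Carlo critical value for the KS statistic of a 1-D Gaussian
        # mixture (means mu, stds sd, weights w) with estimated parameters;
        # simulate n0 = min(n, 3/alpha) points and rescale by sqrt(n0 / n).
        n0 = min(n, int(3 / alpha))
        mix_cdf = lambda t: np.sum(w * norm.cdf(np.asarray(t)[:, None], mu, sd), axis=1)
        stats = np.empty(trials)
        for i in range(trials):
            comp = np.random.choice(len(w), size=n0, p=w)    # sample components
            sample = np.random.normal(mu[comp], sd[comp])    # then observations
            stats[i] = kstest(sample, mix_cdf).statistic
        return np.quantile(stats, 1.0 - alpha) * np.sqrt(n0 / n)

Together with projected_ks from the previous sketch, this is all that Algorithm 1's test step needs: reject the model for projection P whenever projected_ks(X, gmm, P) exceeds ks_critical_value(mu, sd, w, n, alpha).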
3.3 Number of projections
We wish to use a small but sufficient number of projections and tests to discover when a model does not fit the data well. Each projection provides a different view of model fitness along that projection's direction. However, a projection can cause the data from two or more true clusters to be collapsed together, so that the test cannot see that there should be multiple densities used to model them. Therefore multiple projections are necessary to see these model and data discrepancies. We can choose the projections in several different ways. Random projection [3] provides a useful framework, which is what we use in this paper. Other possible methods include using the leading directions from principal components analysis, which gives a stable set of vectors which can be re-used, or choosing k − 1 vectors that span the same subspace spanned by the k cluster centers.

Consider two cluster centers μ₁ and μ₂ in d dimensions and the vector which connects them, m = μ₂ − μ₁. We assume for simplicity that the two clusters have the same spherical covariance Σ and are c-separated, that is, ||m|| ≥ c √(trace(Σ)). We follow Dasgupta's conclusion that c-separation is the natural measure for Gaussians [3]. Now consider the projection of m along some randomly chosen vector P ~ N(0, (1/d) I). We use this distribution because in high dimension P will be approximately unit-length. The probability that P is a "good" projection, i.e. that it maintains c-separation between the cluster means when projected, is

    Pr( |Pᵀm| ≥ c √(PᵀΣP) ) = 1 − Erf( c √(trace(Σ)) / (√2 ||m||) ) ≥ 1 − Erf( √(1/2) ),

where Erf is the standard Gaussian error function. Here we have used the relation PᵀΣP = trace(Σ)/d when Σ is spherical and ||P|| = 1. If Σ is not spherical, then this is true in an expected sense, i.e. E[PᵀΣP] = trace(Σ)/d when ||P|| = 1. If we perform p random projections, we wish the probability that all p projections are "bad" to be less than some δ:

    Pr(p bad projections) = Erf(√(1/2))^p < δ.

Therefore we need approximately p ≥ log(δ)/log(Erf(√(1/2))) ≈ −2.6198 log(δ) projections to find a projection that keeps the two cluster means c-separated. For δ = 0.01, this is only 12 projections, and for δ = 0.001, this is only 18 projections.

3.4 Algorithm complexity
PG-means converges as fast as EM on any given k, and it repeats EM every time it adds a cluster. Let K be the final learned number of clusters on n data points. PG-means runs in O(K²nd²l + Kn log(n)) time, where l is the number of iterations required for EM convergence. The n log(n) term comes from the sort required for each KS test, and the d² comes from using full covariance matrices. PG-means uses a fixed number of projections for each model and each projection is linear in n, d, and k; therefore the projections do not increase the algorithm's asymptotic run time. Note also that EM starts with k learned centers and one new randomly initialized center, so EM convergence is much faster in practice than if all k + 1 clusters were randomly initialized. We must also factor in the cost of the Monte Carlo simulations for determining the KS test critical value, which are O(Kd²n log(n)/α) for each simulation. For fixed α, this does not increase the runtime significantly, and in practice the simulations are a minor part of the running time.

Figure 1: Each point represents the average number of clusters learned for various types of synthetic datasets (panels: eccentricity 1 and 4; c-separation 2, 4, and 6; x-axis: dimension 2 to 16; curves: PG-means, G-means, X-means, BKM). The true number of clusters is 20. The error bars denote the standard errors for the experiments (except for BKM, which was run once for each dataset type).

Figure 2: Each point represents the average VI metric comparing the learned clustering to the correct labels for various types of synthetic datasets (same panels and curves as Figure 1). Lower values are better. For each algorithm except BKM we provide standard error bars (BKM was run once for each dataset type).

4 Experimental evaluation
We perform several experiments on synthetic and real-world datasets to illustrate the utility of PG-means and compare it with G-means, X-means, and Bayesian k-means (BKM). For synthetic datasets, we experiment with Gaussian and non-Gaussian data. We use α = 0.001 for both PG-means and G-means. For each model, PG-means uses 12 projections and tests, corresponding to an error rate of δ < 0.01 that it incorrectly accepts. All our experiments use MATLAB on Linux 2.4 on a dual-processor dual-hyperthreaded Intel Xeon 3.06 GHz computer with 2 gigabytes of memory. Figure 1 shows the number of clusters found by running PG-means, G-means, X-means and BKM on many synthetic datasets. Each of these datasets has 4000 points in d = 2, 4, 8 and 16 dimensions.
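The choice of 12 projections follows from the bound in Section 3.3; a two-line check (ours):

    import math

    def projections_needed(delta):
        # p >= log(delta) / log(Erf(sqrt(1/2)))  ~  -2.6198 * log(delta)
        return math.log(delta) / math.log(math.erf(math.sqrt(0.5)))

    print(round(projections_needed(0.01)), round(projections_needed(0.001)))  # 12 18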
Each of these datasets has 4000 points in d = 2, 4, 8 and 16 dimensions. PG?means G?means X?means Figure 3: The leftmost dataset has 10 true clusters with significant overlap (c = 1). Though PGmeans finds only 4 clusters, the model is very reasonable. On the right are the results for PG-means, G-means, and X-means on a dataset with 5 true eccentric and overlapping clusters. PG-means finds the correct model, while the others overfit with 15 and 19 clusters. All of the data are drawn from a mixture of 20 true Gaussians. The centers of the clusters in each dataset are chosen randomly, and each cluster generates the same number of points. Each Gaussian mixture dataset is specified by the average c-separation between each cluster center and its nearest neighbor (either 2, 4 or 6) and each cluster?s eccentricity (either 1 or 4). The eccentricity of is defined p as Ecc = ?max /?min where ?max and ?min are the maximum and minimum eigenvalues of the cluster covariance. An eccentricity of 1 indicates a spherical Gaussian. We generate 10 datasets of each type and run PG-means, G-means and X-means on each, and we run BKM on only one of them due to the running time of BKM. Each algorithm starts with one center, and we do not place an upper-bound on the number of clusters. It is clear that PG-means performs better than G-means and X-means when the data are eccentric (Ecc=4), especially when the clusters overlap (c = 2). In this situation G-means and X-means tend to overestimate the number of clusters. The rightmost plots in Figure 3 further illustrate this overfitting. PG-means is much more stable in its estimate of the number of clusters, unlike Gmeans and X-means which can dramatically overfit depending on the type of data. BKM generally does very well, but is less efficient than PG-means. For example, on a set of 24 different datasets each having 4000 points from 10 clusters, 2-16 dimensions and varying separations/eccentricities, PG-means was three times faster than BKM. Figure 1 only gives the information regarding the learned number of clusters, which is not enough to measure the true quality of learned models. In order to better evaluate the approaches, we use Meila?s VI (Variation of Information) metric [8] to compare the induced clustering to the true labels. The VI metric is non-negative and lower values are better. It is zero when the two compared clusterings are identical (modulo clusters being relabeled). Figure 2 shows the average VI metric obtained by running PG-means, G-means, X-means, and BKM on the same synthetic datasets as in Figure 1. PG-means does about as well as the other algorithms when the data are spherical and well-separated (see the top-right plot). However, the top-left plot shows that PG-means does not perform as well as G-means, X-means and BKM for spherical and overlapping data. The reason is that two spherical clusters overlap, they can look like a single eccentric cluster. Since PG-means can capture eccentric clusters effectively, it will accept these two overlapped spherical clusters as one cluster. But for the same case, G-means and X-means will probably recognize them as two different clusters. Therefore, although PG-means gives fewer clusters for spherical and overlapping data, the models it learns are reasonable. Figure 3 shows how 10 true overlapping clusters may look like far fewer clusters, and that PG-means can find an appropriate model with only 4 clusters. 
High dimensional data of any finite-variance distribution looks more Gaussian when linearly projected to a randomly chosen lower-dimensional space. Projection is a weighted sum of the original dimensions, and the sum of many random variables with finite variance tends to be Gaussian, according to the central limit theorem. Thus PG-means should be useful for high-dimensional data which are not Gaussian. To test this, we perform experiments on high-dimensional non-Gaussian synthetic datasets. These datasets are generated in a similar way to our synthetic Gaussian datasets, except that each true cluster has a uniform distribution. Each cluster is not necessarily axis-aligned or square; it is scaled for eccentricity and rotated. Each dataset has 4000 points in 8 dimensions equally distributed among 20 clusters. The eccentricity and c-separation values for the datasets are both 4. We run PG-means, G-means and X-means on 10 different datasets and BKM on one of them.
We proved that only a small number of these fast tests are required to have good performance at finding model differences. In the future we will investigate methods of finding better projections for our task. We also hope to develop approximations to the critical values of the KS test on Gaussian mixtures, to avoid the cost of Monte Carlo simulations. PG-means finds better models than G-means and X-means when the true clusters are eccentric or overlap, especially in low-dimension. On high-dimensional data PG-means also performs very well. PG-means gives far more stable estimates of the number of clusters than the other two methods over many different types of data. Compared with Bayesian k-means, we showed that PG-means performs comparably, though PG-means is several times faster in our tests and uses less memory. Though PG-means looks for general Gaussian clusters, we showed that PG-means works well on high-dimensional non-Gaussian data, due to the central limit theorem and our use of projection. Our techniques would also be applicable as a wrapper around the k-means algorithm, which is really just a mixture of spherical Gaussians, or any other mixture of Gaussians with limited covariance. On the real-world handwritten digits dataset PG-means finds a very good clustering with nearly the correct number of classes, and PG-means and BKM are equally close to identifying the original labels among the algorithms we tested. We believe that the project-and-test procedure that PG-means uses is a useful method for determining fitness of a given mixture of Gaussians. However, the underlying standard EM clustering algorithm dominates the runtime and is difficult to initialize well, which are well-known problems. The project-and-test framework of PG-means does not depend on EM in any way, and could be wrapped around any other better method of finding a Gaussian mixture. Acknowledgements: We thank Dennis Johnston, Sanjoy Dasgupta, Charles Elkan, and the anonymous reviewers for helpful suggestions. We also thank Dan Pelleg and Ken Kurihara for sending us their source code. References [1] Hirotugu Akaike. A new look at the statistical model identification. IEEE Transactions on Automatic Control, 19:716?723, 1974. [2] Gerard E. Dallal and Leland Wilkinson. An analytic approximation to the distribution of Lilliefors? test for normality. The American Statistician, 40:294?296, 1986. [3] Sanjoy Dasgupta. Experiments with random projection. In Proceedings of the Sixteenth Conference on Uncertainty in Artificial Intelligence (UAI-2000), pages 143?151. Morgan Kaufmann Publishers, 2000. [4] Greg Hamerly and Charles Elkan. Learning the k in k-means. In Proceedings of the seventeenth annual conference on neural information processing systems (NIPS), pages 281?288, 2003. [5] Robert E. Kass and Larry Wasserman. A reference Bayesian test for nested hypotheses and its relationship to the schwarz criterion. Journal of the American Statistical Association, 90(431):928?934, 1995. [6] Hubert W. Lilliefors. On the Kolmogorov-Smirnov test of normality with mean and variance unknown. Journal of the American Statistical Association, 62(318):399?402, 1967. [7] Frank J. Massey, Jr. The Kolmogorov-Smirnov test for goodness of fit. Journal of the American Statistical Association, 46(253):68?78, 1951. [8] Marina Meila. Comparing clusterings by the variation of information. In COLT, pages 173?187, 2003. [9] Dan Pelleg and Andrew Moore. X-means: Extending k-means with efficient estimation of the number of clusters. 
Efficient Methods for Privacy Preserving Face Detection

Shai Avidan
Mitsubishi Electric Research Labs
201 Broadway, Cambridge, MA 02139
[email protected]

Moshe Butman
Department of Computer Science, Bar Ilan University
Ramat-Gan, Israel
[email protected]

Abstract

Bob offers a face-detection web service where clients can submit their images for analysis. Alice would very much like to use the service, but is reluctant to reveal the content of her images to Bob. Bob, for his part, is reluctant to release his face detector, as he spent a lot of time, energy and money constructing it. Secure Multi-Party computations use cryptographic tools to solve this problem without leaking any information. Unfortunately, these methods are slow to compute, and we introduce a couple of machine learning techniques that allow the parties to solve the problem while leaking a controlled amount of information. The first method is an information-bottleneck variant of AdaBoost that lets Bob find a subset of features that are enough for classifying an image patch, but not enough to actually reconstruct it. The second machine learning technique is active learning, which allows Alice to construct an online classifier based on a small number of calls to Bob's face detector. She can then use her online classifier as a fast rejector before using a cryptographically secure classifier on the remaining image patches.

1 Introduction

The Internet triggered many opportunities for cooperative computing in which buyers and sellers can meet to buy and sell goods, information or knowledge. Placing classifiers on the Internet allows buyers to enjoy the power of a classifier without having to train it themselves. This benefit is hindered by the fact that the seller, who owns the classifier, learns a great deal about the buyers' data, needs or goals. This raised the need for privacy in Internet transactions. While it is now common to assume that the buyer and the seller can secure their data exchange from the rest of the world, we are interested in a stronger level of security that allows the buyer to hide his data from the seller as well. Of course, the same can be said about the seller, who would like to maintain the privacy of his hard-earned classifier.

Secure Multi-Party Computation (SMC) is based on cryptographic tools that let two parties, Alice and Bob, engage in a protocol that will allow them to achieve a common goal, without revealing the content of their input. For example, Alice might be interested in classifying her data using Bob's classifier without revealing anything to Bob, not even the classification result, and without learning anything about Bob's classifier, other than a binary answer to her query. Recently, Avidan & Butman introduced Blind Vision [1], a method for securely evaluating a Viola-Jones type face detector [12]. Blind Vision uses standard cryptographic tools and is painfully slow to compute, taking a couple of hours to scan a single image. The purpose of this work is to explore machine learning techniques that can speed up the process, at the cost of a controlled leakage of information. In our hypothetical scenario Bob has a face-detection web service where clients can submit their images to be analyzed. Alice would very much like to use the service, but is reluctant to reveal the content of the images to Bob. Bob, for his part, is reluctant to release his face detector, as he spent a lot of time, energy and money constructing it.
In our face detection protocol Alice raster-scans the image and sends every image patch to Bob to be classified. We would like to replace cryptographically-based SMC methods with machine learning algorithms that might leak some information but are much faster to execute. The challenge is to design protocols that can explicitly control the amount of information leaked. To this end we propose two well-known machine learning techniques, one based on the information bottleneck and the other on active learning.

The first method is a privacy-preserving feature selection, a variant of the information-bottleneck principle that finds features which are useful for classification but not for signal reconstruction. In this case, Bob can use his training data to construct different classifiers that offer different trade-offs of information leakage versus classification accuracy. Alice can then choose the trade-off that suits her best and send only those features to Bob for classification. This method can be used, for example, as a filtering step that rejects a large number of the image patches as containing no face, followed by an SMC method that securely classifies the remaining image patches using the full classifier that is known only to Bob.

The second method is active learning, and it helps Alice choose which image patches to send to Bob for classification. It can be used either with the previous method or directly with an SMC protocol. The idea is that instead of sending all image patches to Bob for classification, Alice tries to learn as much as she can from the interaction, and uses her online trained classifier to reject some of the image patches herself. This can minimize the amount of information revealed to Bob, if the parties use the privacy-preserving features, or the computational load, if the parties are using cryptographically-based SMC methods.

2 Background

Secure multi-party computation originated from the work of Yao [14], who gave a solution to the millionaire problem: two parties want to find which one has a larger number, without revealing anything else about the numbers themselves. Later, Goldreich et al. [5] showed that any function can be computed in such a secure manner. However, the theoretical construct was still too demanding to be of practical use. An easy introduction to cryptography is given in [9] and a more advanced and theoretical treatment is given in [4]. Since then many secure protocols have been reported for various data mining applications [7, 13, 1]. A common assumption in SMC is that the parties are honest but curious, meaning that they will follow the agreed-upon protocol but will try to learn as much as possible from the data flow between the two parties. We follow this assumption here.

The information bottleneck principle [10] shows how to compress a signal while preserving its information with respect to a target signal. We offer a variant of the self-consistent equations used to solve this problem and derive a greedy feature selection algorithm that satisfies privacy constraints, represented as a percentage of the power spectrum of the original signal.

Active learning methods assume that the student (Alice, in our case) has access to an oracle (Bob) for labeling. The usual motivation in active learning is that the oracle is assumed to be a human operator, and having him label data is a time-consuming task that should be avoided.
Our motivation is similar: Alice would like to avoid using Bob because of the high computational cost involved in cryptographically secure protocols, or for fear of leaking information in case non-cryptographic methods are used. Typical active learning applications assume that the class sizes are roughly balanced [2, 11]. A notable exception is the work of [8], which proposes an active learning method for anomaly detection. Our case is similar, as image patches that contain faces are rare in an image.

3 Privacy-preserving Feature Selection

Feature selection aims at finding a subset of the features that optimizes some objective function, typically a classification task [6]. However, feature selection does not concern itself with the correlation of the feature subset with the original signal. This is handled by the information bottleneck method [10], which takes a joint distribution p(x, y) and finds a compressed representation of X, denoted by T, that is as informative as possible about Y. This is achieved by minimizing the following functional:

    min_{p(t|x)} L :  L ≡ I(X; T) − βI(T; Y)                                    (1)

where β is a trade-off parameter that controls the trade-off between compressing X and maintaining information about Y. The functional L admits a set of self-consistent equations that allows one to find a suitable solution.

We map the information bottleneck idea to a feature selection algorithm to obtain a Privacy-preserving Feature Selection (PPFS) and describe how Bob can construct such a feature set. Let Bob have a training set of image patches, their associated labels, and a weight associated with every feature (pixel), denoted {x_n, y_n, s_n}_{n=1}^N. Bob's goal is to find a feature subset I ≡ {i_1, ..., i_k} such that a classifier F(x(I)) minimizes the classification error, where x(I) denotes a sample x that uses only the features in the set I. Formally, Bob needs to minimize

    min_F Σ_{n=1}^N (F(x_n(I)) − y_n)²   subject to   Σ_{i∈I} s_i < θ           (2)

where θ is a user-defined threshold that bounds the amount of information that can be leaked. We found it useful to use the PCA spectrum to measure the amount of information. Specifically, Bob computes the PCA space of all the face images in his database and maps all the data to that space, without reducing dimensionality. The weights {s_n}_{n=1}^N are then set to the eigenvalues associated with each dimension in the PCA space. This avoids the need to compute the mutual information between pixels, by assuming that features do not carry mutual information with other features beyond second-order statistics.

Algorithm 1 Privacy-Preserving Feature Selection
Input: {x_n, y_n, s_n}_{n=1}^N, threshold θ, number of iterations T
Output: A privacy-preserving strong classifier F(x)
- Start with weights w_n = 1/N for n = 1, 2, ..., N; F(x) = 0; I = ∅
- Repeat for t = 1, 2, ..., T:
  - Set the working index set J = I ∪ {j | s_j + Σ_{i∈I} s_i < θ}
  - Repeat for j ∈ J:
    - Fit a regression stump g_j(x(j)) ≡ a_j(x(j) > θ_j) + b_j to the j-th feature, x(j)
    - Compute the error e_j = Σ_{n=1}^N w_n (y_n − (a_j(x_n(j) > θ_j) + b_j))² / Σ_{n=1}^N w_n
  - Set f_t = g_i, where e_i ≤ e_j for all j ∈ J
  - Update:
      F(x) ← F(x) + f_t(x)                                                      (3)
      w_n ← w_n e^{−y_n f_t(x_n)}                                               (4)
      I ← I ∪ {i}                                                               (5)

Boosting was used for feature selection before [12], and Bob takes a similar approach here. He uses a variant of the gentleBoost algorithm [3] to find a greedy solution to (2). Specifically, Bob uses gentleBoost with "stumps" as the weak classifiers, where each stump works on only one feature. The only difference from gentleBoost is in the choice of the features to be selected. In the original algorithm all the features are evaluated in every iteration, but here Bob can only use a subset of the features: in each iteration he may use features that were already selected, or features whose addition does not increase the total weight of selected features beyond the threshold θ.
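To make the selection loop concrete, here is a minimal Python sketch of Algorithm 1. It is our illustration, not the authors' code: the stump fit is a simplified weighted least-squares search over a fixed grid of candidate thresholds, and the PCA eigenvalue weights `s` are assumed precomputed.

```python
import numpy as np

def fit_stump(x, y, w, n_thresholds=16):
    """Fit a weighted regression stump a*(x > theta) + b to one feature."""
    best = None
    for theta in np.quantile(x, np.linspace(0.05, 0.95, n_thresholds)):
        above = x > theta
        # Weighted means on each side give the least-squares a and b.
        b = np.average(y[~above], weights=w[~above]) if (~above).any() else 0.0
        a = (np.average(y[above], weights=w[above]) if above.any() else 0.0) - b
        err = np.sum(w * (y - (a * above + b)) ** 2) / np.sum(w)
        if best is None or err < best[0]:
            best = (err, a, b, theta)
    return best  # (error, a, b, theta)

def ppfs_boost(X, y, s, budget, n_rounds):
    """Greedy gentleBoost-style selection under a PCA-weight budget (y in {-1,+1})."""
    n, d = X.shape
    w = np.full(n, 1.0 / n)
    selected, stumps, used = set(), [], 0.0
    for _ in range(n_rounds):
        candidates = [j for j in range(d)
                      if j in selected or used + s[j] < budget]
        if not candidates:
            break
        errs = {j: fit_stump(X[:, j], y, w) for j in candidates}
        j = min(errs, key=lambda j: errs[j][0])
        _, a, b, theta = errs[j]
        f_t = a * (X[:, j] > theta) + b
        w *= np.exp(-y * f_t)            # gentleBoost-style reweighting
        w /= w.sum()
        if j not in selected:
            used += s[j]
            selected.add(j)
        stumps.append((j, a, b, theta))
    return stumps, sorted(selected)
```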
Once Bob has computed the privacy-preserving feature subset, the amount of information it leaks, and its classification accuracy, he publishes this information on the web. Alice then needs to map her image patches to this low-dimensional privacy-preserving feature space and send the data to Bob for classification.

4 Privacy-Preserving Active Learning

In our face detection example Alice needs to submit many image patches to Bob for classification. This is computationally expensive if SMC methods are used, and it reveals information if the privacy-preserving feature selection method discussed earlier is used. Hence, it would be beneficial if Alice could minimize the number of image patches she needs to send Bob for classification. This is where she can use active learning. Instead of raster-scanning the image and submitting every image patch for classification, she sends a small number of randomly selected image patches and, based on their labels, determines the next group of image patches to be sent for classification. We found that substantial gains can be made this way.

Specifically, Alice maintains an RBF network that is trained online, based on the list of labeled prototypes. Let {c_j, y_j}_{j=1}^M be the list of M prototypes that were labeled so far. Alice constructs a kernel matrix K, where K_ij = k(c_i, c_j), and solves the least-squares equation Ku = y, where y = [y_1, ..., y_M]^T. The kernel Alice uses is a Gaussian kernel whose width, in each dimension, is set to the range of the prototype coordinates. The score of each image patch x is given by h(x) = [k(x, c_1), ..., k(x, c_M)]u. For the next round of classification Alice chooses the image patches with the highest h(x) scores. This is in line with [2, 11, 8], which consider choosing the examples about which one has the least amount of information. In our case, Alice is interested in finding image patches that contain faces (which we assume are labeled +1), but most of the prototypes will be labeled −1, because faces are a rare event in an image. As long as Alice does not sample a face image patch, she will keep exploring the space of image patches in her image by sampling patches that are farthest from the current set of prototypes. Once an image patch that contains a face is sampled, her online classifier h(x) will assign similar image patches a high score, guiding the search towards other image patches that might contain a face. To avoid large overlap between patches, we force a minimum distance, in the image plane, between selected patches. The method is given in Algorithm 2; a code sketch follows it.

Algorithm 2 Privacy-Preserving Active Learning
Input: {x_i}_{i=1}^N unlabeled samples; number M of classification calls allowed; number s of samples to classify in each iteration
Output: Online classifier h(x)
- Choose s random samples {x_i}_{i=1}^s, set C = [x_1, ..., x_s], and obtain their labels y = [y_1, ..., y_s] from Bob.
- Repeat for m = 1, 2, ..., M times:
  - Construct the kernel matrix K_ij = k(c_i, c_j) and solve for the weight vector u through least squares: Ku = y.
  - Evaluate h(x_i) = [k(x_i, c_1), ..., k(x_i, c_m)]u for all i = 1, ..., N.
  - Choose the top s samples with the highest h(x) scores, send them to Bob for classification, and add them and their labels to C and y, respectively.
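The following Python sketch illustrates Algorithm 2 under our own simplifying assumptions: `query_bob` is a hypothetical stand-in for Bob's (privacy-preserving or SMC-based) classifier returning ±1, the kernel uses a single bandwidth rather than per-dimension ranges, and the minimum-distance constraint between patches is omitted.

```python
import numpy as np

def gaussian_kernel(A, B, width):
    # k(a, b) = exp(-||a - b||^2 / (2 * width^2)) for all pairs of rows.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * width ** 2))

def active_learn(X, query_bob, rounds, s, width):
    """Alice's online RBF rejector, trained from a few calls to Bob."""
    idx = list(np.random.choice(len(X), size=s, replace=False))
    C = X[idx]
    y = np.array([query_bob(x) for x in C], dtype=float)
    for _ in range(rounds):
        K = gaussian_kernel(C, C, width)
        u, *_ = np.linalg.lstsq(K, y, rcond=None)      # solve Ku = y
        h = gaussian_kernel(X, C, width) @ u           # score every patch
        h[idx] = -np.inf                               # don't re-query prototypes
        new = np.argsort(h)[-s:]                       # most face-like patches
        idx.extend(new)
        C = np.vstack([C, X[new]])
        y = np.concatenate([y, [query_bob(x) for x in X[new]]])
    K = gaussian_kernel(C, C, width)
    u, *_ = np.linalg.lstsq(K, y, rcond=None)          # final fit
    return lambda Z: gaussian_kernel(Z, C, width) @ u  # final classifier h
```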
5 Experiments

We conducted a couple of experiments to validate both methods.

Figure 1: Privacy-preserving feature selection. ROC curves of four strong classifiers, each trained with 100 weak "stump" classifiers but with different levels of information leakage. The information leakage is defined as the amount of the PCA spectrum captured by the features used in each classifier; the number in parentheses shows how much of the eigenspectrum each classifier captures.

The first experiment evaluates the privacy-preserving feature selection method. The training set consisted of 9666 image patches of size 24 × 24 pixels each, split evenly into face/no-face images. The test set was of similar size. We ran Algorithm 1 with different levels of the threshold θ and created a strong classifier with 100 weak, stump-based classifiers. The ROC curves of several such classifiers are shown in Figure 1. We found that, for this particular dataset, setting θ = 0.1 gives results identical to a full classifier without any privacy constraints. Reducing θ to 0.01 hurt the classification performance somewhat.

The second experiment tests the active learning approach. We assume that Alice and Bob use the classifier with θ = 0.05 from the previous experiment, and measure how effective the online classifier that Alice constructs is at rejecting no-face image patches. Recall that there are three classifiers at play: the full classifier that Bob owns, the privacy-preserving classifier that Bob owns, and the online classifier that Alice constructs. Alice uses the labels of Bob's privacy-preserving classifier to construct her online classifier, and the question is: how many image patches can she reject without rejecting image patches that would be classified as faces by the full classifier (about which she knows nothing)?

Before running the experiment, we performed the following pre-processing step: for each image, we found the scale at which the largest number of faces is detected by Bob's full classifier, and used only the image at that scale. The experiment proceeds as follows. Alice chooses 5 image patches in each round, maps them to the reduced PCA space, and sends them to Bob for classification using his privacy-preserving classifier. Based on his labels, Alice then picks the next 5 image patches according to Algorithm 2, and so on. Alice repeats the process 10 times, resulting in 50 patches that are sent to Bob for classification; the first 5 patches are chosen at random. Figure 2 shows, for several test images, the 50 patches selected by Alice, the online classifier h, and the corresponding rejection/recall curve. The rejection/recall curve shows how many image patches Alice can safely reject, based on h, without rejecting a face that would be detected by Bob's full classifier. For example, in the top row of Figure 2 we see that rejecting the bottom 40% of image patches based on the online classifier h will not reject any face that can be detected with the full classifier. Thus 50 image patches that can be quickly labeled while leaking very little information help Alice reject thousands of image patches. Next, we conducted the same experiment on a larger set of images, consisting of 65 of the CMU+MIT database images.¹
¹ We used the 65 images in the newtest directory of the CMU+MIT dataset.

Figure 2: Examples of privacy-preserving feature selection and active learning. (a) The input images and the image patches (marked with white rectangles) selected by the active learning algorithm. (b) The response image computed by the online classifier (the black spots correspond to the positions of the selected image patches); brighter means a higher score. (c) The rejection/recall curve showing how many image patches can be safely rejected. For example, panel (c-1) shows that Alice can reject almost 50% of the image patches, based on her online classifier (i.e., the response image), and not miss a face that can be detected by the full classifier (which is known to Bob and not to Alice).

Figure 3: Privacy-preserving active learning. Results on a dataset of 65 images. The figure shows how many image patches can be rejected, based on the online classifier that Alice owns, without rejecting a face. The horizontal axis shows how many image patches are rejected, based on the online classifier, and the vertical axis shows how many faces are maintained. For example, the figure shows (dashed line) that rejecting 20% of all image patches, based on the online classifier, will maintain 80% of all faces. The solid line shows that rejecting 40% of all image patches, based on the online classifier, will not miss a face in at least half (i.e., the median) of the images in the dataset.

Figure 3 shows the results. We found that, on average (dashed line), using only 50 labeled image patches Alice can reject up to about 20% of the image patches in an image while keeping 80% of the faces in that image (i.e., Alice will reject 20% of the image patches that Bob's full classifier would classify as a face). If we look at the median (solid line), we see that for at least half the images in the dataset, Alice can reject a little more than 40% of the image patches without erroneously rejecting a face. We found that increasing the number of labeled examples from 50 to a few hundred does not greatly improve results until many thousands of samples are labeled, at which point too much information might be leaked.

6 Conclusions

We described two machine learning methods to accelerate cryptographically secure classification protocols. The methods greatly accelerate the performance of the system while leaking a controlled amount of information. The two methods are a privacy-preserving feature selection that is similar to the information bottleneck, and an active learning technique that was found to be useful in learning a rejector from an extremely small number of labeled examples. We plan to keep investigating these methods, apply them to classification tasks in other domains, and develop new methods to make secure classification faster to use.

References

[1] S. Avidan and M. Butman. Blind vision. In Proc. of European Conference on Computer Vision, 2006.
[2] Y. Baram, R. El-Yaniv, and K. Luz. Online choice of active learning algorithms. Journal of Machine Learning Research, 5:255–291, March 2004.
[3] J. Friedman, T. Hastie, and R. Tibshirani. Additive logistic regression: a statistical view of boosting, 1998.
[4] O. Goldreich. Foundations of Cryptography: Volume 1, Basic Tools. Cambridge University Press, New York, 2001.
[5] O. Goldreich, S. Micali, and A. Wigderson. How to play any mental game or a completeness theorem for protocols with honest majority.
In ACM Symposium on Theory of Computing, pages 218–229, 1987.
[6] I. Guyon and A. Elisseeff. An introduction to variable and feature selection. Journal of Machine Learning Research, 3:1157–1182, 2003.
[7] Y. Lindell and B. Pinkas. Privacy preserving data mining. In CRYPTO: Proceedings of Crypto, 2000.
[8] D. Pelleg and A. Moore. Active learning for anomaly and rare-category detection. In Advances in Neural Information Processing Systems 18, 2004.
[9] B. Schneier. Applied Cryptography. John Wiley & Sons, New York, 1996.
[10] N. Tishby, F. Pereira, and W. Bialek. The information bottleneck method. In Proc. of 37th Allerton Conference on Communication and Computation, 1999.
[11] S. Tong and D. Koller. Support vector machine active learning with applications to text classification. Journal of Machine Learning Research, 2:45–66, 2001.
[12] P. Viola and M. Jones. Rapid object detection using a boosted cascade of simple features. In Conference on Computer Vision and Pattern Recognition (CVPR), 2001.
[13] R. N. Wright and Z. Yang. Privacy-preserving Bayesian network structure computation on distributed heterogeneous data. In KDD '04: Proceedings of the tenth ACM SIGKDD international conference on Knowledge discovery in data mining, pages 22–25, 2004.
[14] A. C. Yao. Protocols for secure computations. In Proc. 23rd IEEE Symp. on Foundations of Comp. Science, pages 160–164, Chicago, 1982. IEEE.
No-regret Algorithms for Online Convex Programs

Geoffrey J. Gordon
Department of Machine Learning
Carnegie Mellon University
Pittsburgh, PA 15213
[email protected]

Abstract

Online convex programming has recently emerged as a powerful primitive for designing machine learning algorithms. For example, OCP can be used for learning a linear classifier, dynamically rebalancing a binary search tree, finding the shortest path in a graph with unknown edge lengths, solving a structured classification problem, or finding a good strategy in an extensive-form game. Several researchers have designed no-regret algorithms for OCP. But, compared to algorithms for special cases of OCP such as learning from expert advice, these algorithms are not very numerous or flexible. In learning from expert advice, one tool which has proved particularly valuable is the correspondence between no-regret algorithms and convex potential functions: by reasoning about these potential functions, researchers have designed algorithms with a wide variety of useful guarantees, such as good performance when the target hypothesis is sparse. Until now, there has been no such recipe for the more general OCP problem, and therefore no ability to tune OCP algorithms to take advantage of properties of the problem or data. In this paper we derive a new class of no-regret learning algorithms for OCP. These Lagrangian Hedging algorithms are based on a general class of potential functions, and are a direct generalization of known learning rules like weighted majority and external-regret matching. In addition to proving regret bounds, we demonstrate our algorithms learning to play one-card poker.

1 Introduction

In a sequence of trials we must pick hypotheses y_1, y_2, ... ∈ Y. After we choose y_t, the correct answer is revealed as a convex loss function ℓ_t(y_t).¹ Just before seeing the t-th example, our total loss is therefore L_t = Σ_{i=1}^{t−1} ℓ_i(y_i). If we had predicted using some fixed hypothesis y instead, then our loss would have been Σ_{i=1}^{t−1} ℓ_i(y). Our total regret at time t is the difference between these two losses, with positive regret meaning that we would have preferred y to our actual plays:

    ρ_t(y) = L_t − Σ_{i=1}^{t−1} ℓ_i(y)        ρ_t = sup_{y∈Y} ρ_t(y)

We assume that Y is a compact convex subset of R^d that has at least two elements. In classical no-regret algorithms such as weighted majority, Y is a simplex: the corners of Y represent pure actions, the interior points of Y represent probability distributions over pure actions, and the number of corners n is the same as the number of dimensions d.

¹ Many problems use loss functions of the form ℓ_t(y_t) = ℓ(y_t, y_t^true), where ℓ is a fixed function such as squared error and y_t^true is a target output. The more general notation allows for problems where there may be more than one correct prediction.

In a more general OCP, Y may have
cn of Y, this transformation would lose the connections among hypotheses (with a corresponding loss in runtime and generalization ability). Our algorithms below are stated in terms of linear loss functions, ?t (y) = ct ? y. If ?t is nonlinear but convex, we can substitute the derivative at the current prediction, ??t (yt ), for ct , and our regret bounds will still hold (see [1, p. 53]). We will write C for the set of possible gradient vectors ct . 2 Related Work A large number of researchers have studied online prediction in general and OCP in particular. The OCP problem dates back to Hannan in 1957 [2]. The name ?online convex programming? is due to Zinkevich [3], who gave a clever gradient-descent algorithm. A similar algorithm and a weaker bound were presented somewhat earlier in [1]: that paper?s GGD algorithm, using potential function ?0 (w) = kkwk22 , is equivalent to Zinkevich?s ?lazy projection? with a fixed learning rate. Another clever algorithm for OCP was presented by Kalai and Vempala [4]. Compared to the above papers, the most important contribution of the current paper is its generality: no previous family of OCP algorithms can use as flexible a class of potential functions. As an illustration of the importance of this generality, consider the problem of learning from expert advice. Well-known regret bounds for this problem are logarithmic in the number of experts (e.g., [5]); no previous bounds for general OCP algorithms are sublinear in the number of experts, but logarithmic bounds follow directly as a special case of our results [6, sec. 8.1.2]. Despite this generality, our core result, Thm. 4 below, takes only half a dozen short equations to prove. From the online prediction literature, the closest related work is that of Cesa-Bianchi and Lugosi [7], which follows in the tradition of an algorithm and proof by Blackwell [8]. Cesa-Bianchi and Lugosi consider choosing predictions from an essentially-arbitrary decision space and receiving outcomes from an essentially-arbitrary outcome space. Together a decision and an outcome determine how a marker Rt ? Rd will move. Given a potential function G, they present algorithms which keep G(Rt ) from growing too quickly. This result is similar in flavor to our Thm. 4, and both Thm. 4 and the results of Cesa-Bianchi and Lugosi are based on Blackwell-like conditions. In fact, our Thm. 4 can be thought of as the first generalization of well-known online learning results such as Cesa-Bianchi and Lugosi?s to online convex programming. The main differences between the Cesa-Bianchi?Lugosi results and ours are the restrictions on their potential functions. They write P their potential function as G(u) = f (?(u)); they require ? to be additive (that is, ?(u) = i ?i (ui ) for one-dimensional functions ?i ), nonnegative, and twice differentiable, and they require f : R+ 7? R+ to be increasing, concave, and twice differentiable. These restrictions rule out many of the potential functions used here, and in fact they rule out most online convex programming problems. The most restrictive requirement is additivity; for example, when defining potentials for OCPs via Eq. (7) below, unless the set Y? can be factored as Y?1 ? Y?2 ? . . . ? Y?N the potentials are generally not expressible as f (?(u)) for additive ?. During the preparation of this manuscript, we became aware of the recent work of Shalev-Shwartz and Singer [9]. 
This work generalizes some of the theorems in [6] and provides a very simple and elegant proof technique for algorithms based on convex potential functions. However, it does not consider the problem of defining appropriate potential functions for the feasible regions of OCPs (as discussed in Sec. 5 below and in more detail in [6]); finding such functions is an important requirement for applying potential-based algorithms to OCPs.

In addition to the general papers above, there are many no-regret algorithms for specific OCPs, such as predicting as well as the best pruning of a decision tree [10], reorganizing a binary search tree so that frequent items are near the root [4], and picking paths in a graph with unknown edge costs [11].

Figure 1: A set Y = {y | y_1 + y_2 = 1, y ≥ 0} (thick dark line) and its safe set S (light shaded region).

Figure 2: The gradient form of the Lagrangian Hedging algorithm:
    s_1 ← 0
    for t = 1, 2, ...
        ŷ_t ← f(s_t)                                    (*)
        if ŷ_t · u > 0 then y_t ← ŷ_t / (ŷ_t · u)
        else y_t ← arbitrary element of Y
        observe c_t, compute s_{t+1} from (1)
    end

3 Regret Vectors

Lagrangian Hedging algorithms maintain their state in a regret vector, s_t, defined by the recursion

    s_{t+1} = s_t + (y_t · c_t)u − c_t                    (1)

with the base case s_1 = 0. Here u is an arbitrary vector which satisfies y · u = 1 for all y ∈ Y. (If necessary we can append a constant element to each y so that such a u exists.) The regret vector contains information about our actual losses and the gradients of our loss functions: from s_t we can find our regret versus any y as follows. (This property justifies the name "regret vector.")

    y · s_t = Σ_{i=1}^{t−1} (y_i · c_i)(y · u) − Σ_{i=1}^{t−1} y · c_i = L_t − Σ_{i=1}^{t−1} y · c_i = ρ_t(y)

We can define a safe set, in which our regret is guaranteed to be nonpositive:

    S = {s | (∀y ∈ Y) y · s ≤ 0}                          (2)

The goal of the Lagrangian Hedging algorithm is to keep its regret vector s_t near the safe set S. S is a convex cone: it is closed under positive linear combinations of its elements. And, it is polar [12] to the cone of unnormalized hypotheses:

    S° = Ȳ ≡ {λy | y ∈ Y, λ ≥ 0}                          (3)

4 The Main Algorithm

We will present the general LH algorithm first, then (in Sec. 5) a specialization which is often easier to implement. The two versions are called the gradient form and the optimization form. The gradient form is shown in Fig. 2. At each step it chooses its play based on the current regret vector s_t (Eq. (1)) and a closed convex potential function F(s) : R^d → R with subgradient f(s) : R^d → R^d. This potential function is what distinguishes one instance of the LH algorithm from another. F(s) should be small when s is in the safe set, and large when s is far from the safe set.

For example, suppose that Y is the probability simplex in R^d, so that S is the negative orthant in R^d. (This choice of Y would be appropriate for playing a matrix game or predicting from expert advice.) For this Y, two possible potential functions are

    F_1(s) = ln Σ_i e^{ηs_i} − ln d        F_2(s) = Σ_i [s_i]₊² / 2

where η > 0 is a learning rate and [s]₊ = max(s, 0). The potential F_1 leads to the Hedge [5] and weighted majority [13] algorithms, while the potential F_2 results in external-regret matching [14, Theorem B]. For more examples of useful potential functions, see [6].

To ensure the LH algorithm chooses legal hypotheses y_t ∈ Y, we require the following (note the constant 0 is arbitrary; any other k would work as well):

    F(s) ≤ 0    ∀s ∈ S                                    (4)
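As a sanity check of Fig. 2, here is a minimal Python sketch of the gradient form for the simplex case (Y is the probability simplex, so u is the all-ones vector and y · u = 1). The potential gradient `f` is a parameter; everything else, including the function names, is our own illustrative scaffolding, not the paper's code.

```python
import numpy as np

def lagrangian_hedging(costs, f, d):
    """Gradient form of the LH algorithm on the d-simplex (Fig. 2).

    costs: iterable of cost vectors c_t in R^d (linear losses c_t . y).
    f:     subgradient of the potential, mapping s in R^d to f(s) in R^d.
    """
    s = np.zeros(d)                      # s_1 = 0
    u = np.ones(d)                       # y . u = 1 for all y in the simplex
    plays = []
    for c in costs:
        y_hat = f(s)                     # line (*)
        if y_hat @ u > 0:
            y = y_hat / (y_hat @ u)      # normalize into the simplex
        else:
            y = u / d                    # arbitrary element of Y
        plays.append(y)
        s = s + (y @ c) * u - c          # regret-vector update, Eq. (1)
    return plays
```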
Theorem 1 The LH algorithm is well-defined: define S as in (2) and fix a finite convex potential function F. If F(s) ≤ 0 for all s ∈ S, then the LH algorithm picks hypotheses y_t ∈ Y for all t.

(Omitted proofs are given in [6].) We can also define a version of the LH algorithm with an adjustable learning rate: replacing F(s) with F(ηs) is equivalent to updating s_t with learning rate η. Adjustable learning rates will help us obtain regret bounds for some classes of potentials.

5 The Optimization Form

Even if we have a convenient representation of our hypothesis space Y, it may not be easy to work directly with the safe set S. In particular, it may be difficult to define, evaluate, and differentiate a potential function F which has the necessary properties. To avoid these difficulties, we can work with an alternate form of the LH algorithm. This form, called the optimization form, defines F in terms of a simpler function W which we will call the hedging function. It uses the same pseudocode as the gradient form (Fig. 2), but on each step it computes F and ∇F by solving an optimization problem involving W and the hypothesis set Y (Eq. (8) below). For example, two possible hedging functions are

    W_1(ỹ) = Σ_i ỹ_i ln ỹ_i + ln d   if ỹ ≥ 0 and Σ_i ỹ_i = 1;  ∞ otherwise      (5)
    W_2(ỹ) = Σ_i ỹ_i² / 2                                                        (6)

If Y is the probability simplex in R^d, it will turn out that W_1(ỹ/η) and W_2(ỹ) correspond to the potentials F_1 and F_2 from Section 4 above. So, these hedging functions result in the weighted majority and external-regret matching algorithms. For an example where the hedging function is easy to write analytically but the potential function is much more complicated, see Sec. 8 or [6].

The optimization form of the LH algorithm using hedging function W is defined to be equivalent to the gradient form using

    F(s) = sup_{ỹ∈Ȳ} (s · ỹ − W(ỹ))                       (7)

Here Ȳ is defined as in (3).² To implement the LH algorithm using the F of Eq. (7), we need an efficient way to compute ∇F. As Thm. 2 below shows, there is always a ỹ which satisfies

    ỹ ∈ arg max_{ỹ∈Ȳ} (s · ỹ − W(ỹ))                      (8)

and any such ỹ is an element of ∂F. So, the optimization form of the LH algorithm uses the same pseudocode as the gradient form (Fig. 2), but uses Eq. (8) with s = s_t to compute ŷ_t in line (*).

To gain an intuition for Eqs. (7-8), consider the example of external-regret matching. Since Y is the unit simplex in R^d, Ȳ is the positive orthant in R^d. So, with W_2(ỹ) = ‖ỹ‖₂²/2, the optimization problem (8) is equivalent to

    ỹ = arg min_{ỹ∈R₊^d} ½‖s − ỹ‖₂²

That is, ỹ is the projection of s onto R₊^d by minimum Euclidean distance. It is not hard to verify that this projection replaces the negative elements of s with zeros, ỹ = [s]₊. Substituting this value for ỹ back into (7) and using the fact that s · [s]₊ = [s]₊ · [s]₊, the resulting potential function is

    F_2(s) = s · [s]₊ − Σ_i [s_i]₊²/2 = Σ_i [s_i]₊²/2

as claimed above. This potential function is the standard one for analyzing external-regret matching.

Theorem 2 Let W be convex, let dom W ∩ Ȳ be nonempty, and let W(ỹ) ≥ 0 for all ỹ. Suppose the sets {ỹ | W(ỹ) + s · ỹ ≤ k} are compact for all s and k. Define F as in (7). Then F is finite and F(s) ≤ 0 for all s ∈ S. And, the optimization form of the LH algorithm using the hedging function W is equivalent to the gradient form of the LH algorithm with potential function F.

² Eq. (7) is similar to the definition of the convex dual W*, but the supremum is over ỹ ∈ Ȳ instead of over all ỹ. As a result, F and W* can be very different functions. As discussed in [6], F can be expressed as the dual of a function related to W.
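For concreteness, here is what the two named instances look like as potential gradients that can be plugged into the `lagrangian_hedging` sketch above. Both expressions follow directly from F_1 and F_2; the code itself is only our illustration.

```python
import numpy as np

def f_hedge(eta):
    # Gradient of F_1(s) = ln(sum_i exp(eta * s_i)) - ln d, up to the positive
    # factor eta: a softmax, which the algorithm renormalizes anyway.
    def f(s):
        z = np.exp(eta * (s - s.max()))   # subtract max for numerical stability
        return z / z.sum()
    return f

def f_regret_matching(s):
    # Gradient of F_2(s) = sum_i [s_i]_+^2 / 2 is the positive part [s]_+,
    # so the normalized play is exactly external-regret matching.
    return np.maximum(s, 0.0)
```

With `f_regret_matching`, the optimization form gives the same answer: the arg max in Eq. (8) is the projection [s]₊, matching the derivation above.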
6 Theoretical Results

Our main theoretical results are regret bounds for the LH algorithm. The bounds depend on the curvature of our potential F, the size of the hypothesis set Y, and the possible slopes C of our loss functions. Intuitively, F must be neither too curved nor too flat on the scale of the updates to s_t from Eq. (1): if F is too curved then ∇F will change too quickly and our hypothesis y_t will jump around a lot, while if F is too flat then we will not react quickly enough to changes in regret.

We will state our results for the gradient form of the LH algorithm. For the optimization form, essentially the same results hold, but the constants are defined in terms of the hedging function instead. Therefore, we never need to work with (or even be able to write down) the corresponding potential function. For more details, see [6]. One result which is slightly tricky to carry over is tuning learning rates. The choice of learning rate below and the resulting bound are the same as for the gradient form, but the implementation is slightly different: to set a learning rate η > 0, we replace W(ỹ) with W(ỹ/η).

We will need upper and lower bounds on F. We will assume

    F(s + Δ) ≤ F(s) + Δ · f(s) + C‖Δ‖²                    (9)

for all regret vectors s and increments Δ, and

    [F(s) + A]₊ ≥ inf_{s′∈S} B‖s − s′‖^p                   (10)

for all s. Here ‖·‖ is an arbitrary finite norm, and A ≥ 0, B > 0, C > 0, and 1 ≤ p ≤ 2 are constants. Eq. (9), together with the convexity of F, implies that F is differentiable and f is its gradient; the LH algorithm is applicable if F is not differentiable, but its regret bounds are weaker. We will bound the size of Y by assuming that

    ‖y‖* ≤ M                                               (11)

for all y in Y. Here, ‖·‖* is the dual of the norm used in Eq. (9) [12]. The size of our update to s_t (in Eq. (1)) depends on the hypothesis set Y, the cost vector set C, and the vector u. We have already bounded Y; rather than bounding C and u separately, we will assume that there is a constant D so that

    E(‖s_{t+1} − s_t‖² | s_t) ≤ D                          (12)

Here the expectation is taken with respect to our choice of hypothesis, so the inequality must hold for all possible values of c_t. (The expectation is only necessary if we randomize our choice of hypothesis, as would happen if Y is the convex hull of some non-convex set. If interior points of Y are valid plays, we need not randomize, so we can drop the expectation in (12) and below.)

Our theorem then bounds our regret in terms of the above constants. Since the bounds are sublinear in t, they show that Lagrangian Hedging is a no-regret algorithm when we choose an appropriate potential F.

Theorem 3 Suppose the potential function F is convex and satisfies Eqs. (4), (9) and (10). Suppose that the problem definition is bounded according to (11) and (12). Then the LH algorithm (Fig. 2) achieves expected regret

    E(ρ_{t+1}(y)) ≤ M((tCD + A)/B)^{1/p} = O(t^{1/p})

versus any hypothesis y ∈ Y. If p = 1 the above bound is O(t). But, suppose that we know ahead of time the number of trials t we will see. Define G(s) = F(ηs), where

    η = √(A/(tCD))

Then the LH algorithm with potential G achieves regret

    E(ρ_{t+1}(y)) ≤ (2M/B)√(tACD) = O(√t)

for any hypothesis y ∈ Y.

The full proof of Thm. 3 appears in [6]; here, we sketch the proof of one of the most important intermediate results. Thm. 4 shows that, if we can guarantee E(s_{t+1} − s_t) · f(s_t) ≤ 0, then F(s_t) cannot grow too quickly.
This result is analogous to Blackwell's approachability theorem: since the level sets of F are related to S, we will be able to show s_t/t → S, implying no regret.

Theorem 4 (Gradient descent) Let F(s) and f(s) satisfy Equation (9) with seminorm ‖·‖ and constant C. Let x_0, x_1, ... be a sequence of random vectors. Write s_t = Σ_{i=0}^{t−1} x_i, and let D be a constant so that E(‖x_t‖² | s_t) ≤ D. Suppose that, for all t, E(x_t · f(s_t) | s_t) ≤ 0. Then for all t,

    E(F(s_{t+1}) | s_1) − F(s_1) ≤ tCD

PROOF: The proof is by induction: for t ≥ 2, assume E(F(s_t) | s_1) ≤ F(s_1) + (t − 1)CD. (It is obvious that the base case holds for t = 1.) Then:

    F(s_{t+1}) = F(s_t + x_t) ≤ F(s_t) + x_t · f(s_t) + C‖x_t‖²
    E(F(s_{t+1}) | s_t) ≤ F(s_t) + CD
    E(F(s_{t+1}) | s_1) ≤ E(F(s_t) | s_1) + CD
    E(F(s_{t+1}) | s_1) ≤ F(s_1) + (t − 1)CD + CD

which is the desired result. ∎

7 Examples

The classical applications of no-regret algorithms are learning from expert advice and learning to play a repeated matrix game. These two tasks are essentially equivalent, since they both use the probability simplex Y = {y | y ≥ 0, Σ_i y_i = 1} for their hypothesis set. This choice of Y simplifies the required algorithms greatly; with appropriate choices of potential functions, it can be shown that standard no-regret algorithms such as Freund and Schapire's Hedge [5], Littlestone and Warmuth's weighted majority [13], and Hart and Mas-Colell's external-regret matching [14, Theorem B] are all special cases of the LH algorithm.

A large variety of other online prediction problems can also be cast in our framework. These problems include path planning when costs are chosen by an adversary [11], planning in a Markov decision process when costs are chosen by an adversary [15], online pruning of a decision tree [16], and online balancing of a binary search tree [4]. More uses of online convex programming are given in [1, 3, 4]. In each case the bounds for the LH algorithm will be polynomial or better in the dimensionality of the appropriate hypothesis set and sublinear in the number of trials.

8 Experiments

To demonstrate that our theoretical bounds translate to good practical performance, we implemented the LH algorithm with the hedging function W_2 from (6) and used it to learn policies for the game of one-card poker. (The hypothesis space for this learning problem is the set of sequence weight vectors, which is convex because one-card poker is an extensive-form game [17].)

In one-card poker, two players (called the gambler and the dealer) each ante $1 and receive one card from a 13-card deck. The gambler bets first, adding either $0 or $1 to the pot. Then the dealer gets a chance to bet, again either $0 or $1. Finally, if the gambler bet $0 and the dealer bet $1, the gambler gets a second chance to bring her bet up to $1. If either player bets $0 when the other has already bet $1, that player folds and loses her ante. If neither player folds, the higher card wins the pot, resulting in a net gain of either $1 or $2 (equal to the other player's ante plus the bet of $0 or $1). In contrast to the usual practice in poker, we assume that the payoff vector c_t is observable after each hand; the partially-observable extension is beyond the scope of this paper.

One-card poker is a simple game; nonetheless it has many of the elements of more complicated games, including incomplete information, chance events, and multiple stages. And, optimal play requires behaviors like randomization and bluffing.
Figure 3: Performance in self-play (left) and against a fixed opponent (right). Each panel plots the gambler's worst-case bound, the dealer's worst-case bound, the average payoff, and the minimax value of the game over 250 hands.

The biggest strategic difference between one-card poker and larger variants such as draw, stud, or hold-em is the idea of hand potential: while 45679 and 24679 are almost equally strong hands in a showdown (they are both 9-high), holding 45679 early in the game is much more valuable because replacing the 9 with either a 3 or an 8 turns it into a straight.

Fig. 3 shows the results of two typical runs: in both panels the dealer is using our no-regret algorithm. In the left panel the gambler is also using our no-regret algorithm, while in the right panel the gambler is playing a fixed policy. The x-axis shows the number of hands played; the y-axis shows the average payoff per hand from the dealer to the gambler. The value of the game, −$0.064, is indicated with a dotted line. The middle solid curve shows the actual performance of the dealer (who is trying to minimize the payoff). The upper curve measures the progress of the dealer's learning: after every fifth hand we extracted a strategy y_t^avg by taking the average of our algorithm's predictions so far. We then plotted the worst-case value of y_t^avg; that is, we plotted the payoff for playing y_t^avg against an opponent which knows y_t^avg and is optimized to maximize the dealer's losses. Similarly, the lower curve measures the progress of the gambler's learning.

In the right panel, the dealer quickly learns to win against the non-adaptive gambler. The dealer never plays a minimax strategy, as shown by the fact that the upper curve does not approach the value of the game. Instead, she plays to take advantage of the gambler's weaknesses. In the left panel, the gambler adapts and forces the dealer to play more conservatively; in this case, the limiting strategies for both players are minimax.

The curves in the left panel of Fig. 3 show an interesting effect: the small, damped oscillations result from the dealer and the gambler "chasing" each other around a minimax strategy. One player will learn to exploit a weakness in the other, but in doing so will open up a weakness in her own play; then the second player will adapt to try to take advantage of the first, and the cycle will repeat. Each weakness will be smaller than the last, so the sequence of strategies will converge to a minimax equilibrium. This cycling behavior is a common phenomenon when two learning players play against each other. Many learning algorithms will cycle so strongly that they fail to achieve the value of the game, but our regret bounds eliminate this possibility.
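The self-play dynamic in the left panel is easy to reproduce in miniature. The sketch below, entirely our own illustration, pits two regret-matching LH learners against each other in an arbitrary zero-sum matrix game (not the one-card poker sequence-form game itself, which needs the machinery of [17]); the averaged strategies approach a minimax equilibrium, as the cycling argument above predicts.

```python
import numpy as np

def f_rm(s):
    return np.maximum(s, 0.0)            # external-regret-matching gradient

def play(s, d):
    y = f_rm(s)
    return y / y.sum() if y.sum() > 0 else np.full(d, 1.0 / d)

def self_play(A, rounds=5000):
    """Two LH (regret-matching) learners on a zero-sum game with payoff A."""
    m, n = A.shape                       # row player maximizes y1' A y2
    s1, s2 = np.zeros(m), np.zeros(n)
    avg1, avg2 = np.zeros(m), np.zeros(n)
    for _ in range(rounds):
        y1, y2 = play(s1, m), play(s2, n)
        avg1 += y1; avg2 += y2
        c1 = -A @ y2                     # row player's cost vector
        c2 = A.T @ y1                    # column player's cost vector
        s1 += (y1 @ c1) * np.ones(m) - c1    # Eq. (1) with u = all-ones
        s2 += (y2 @ c2) * np.ones(n) - c2
    return avg1 / rounds, avg2 / rounds

# Example: matching pennies; both average strategies approach (1/2, 1/2).
A = np.array([[1.0, -1.0], [-1.0, 1.0]])
y1, y2 = self_play(A)
```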
9 Discussion

We have presented the Lagrangian Hedging algorithms, a family of no-regret algorithms for OCP based on general potential functions. We have proved regret bounds for LH algorithms and demonstrated experimentally that these bounds lead to good predictive performance in practice. The regret bounds for LH algorithms have low-order dependences on d, the number of dimensions in the hypothesis set Y. This low-order dependence means that the LH algorithms can learn well in prediction problems with complicated hypothesis sets; these problems would otherwise require an impractical amount of training data and computation time.

Our work builds on previous work in online learning and online convex programming. Our contributions include a new, deterministic algorithm; a simple, general proof; the ability to build algorithms from a more general class of potential functions; and a new way of building good potential functions from simpler hedging functions, which allows us to construct potential functions for arbitrary convex hypothesis sets. Future work includes a no-internal-regret version of the LH algorithm, as well as a bandit-style version. The former will guarantee convergence to a correlated equilibrium in nonzero-sum games, while the latter will allow us to work from incomplete observations of the cost vector (e.g., as might happen in an extensive-form game such as poker).

Acknowledgments

Thanks to Amy Greenwald, Martin Zinkevich, and Sebastian Thrun, as well as Yoav Shoham and his research group. This work was supported by NSF grant EF-0331657 and DARPA contracts F30602-01-C-0219, NBCH-1020014, and HR0011-06-0023. The opinions and conclusions are the authors' and do not reflect those of the US government or its agencies.

References

[1] Geoffrey J. Gordon. Approximate Solutions to Markov Decision Processes. PhD thesis, Carnegie Mellon University, 1999.
[2] James F. Hannan. Approximation to Bayes risk in repeated play. In M. Dresher, A. Tucker, and P. Wolfe, editors, Contributions to the Theory of Games, volume 3, pages 97–139. Princeton University Press, 1957.
[3] Martin Zinkevich. Online convex programming and generalized infinitesimal gradient ascent. In Proceedings of the Twentieth International Conference on Machine Learning. AAAI Press, 2003.
[4] Adam Kalai and Santosh Vempala. Geometric algorithms for online optimization. Technical Report MIT-LCS-TR-861, Massachusetts Institute of Technology, 2002.
[5] Yoav Freund and Robert E. Schapire. A decision-theoretic generalization of on-line learning and an application to boosting. In EuroCOLT 95, pages 23–37. Springer-Verlag, 1995.
[6] Geoffrey J. Gordon. No-regret algorithms for structured prediction problems. Technical Report CMU-CALD-05-112, Carnegie Mellon University, 2005.
[7] Nicolò Cesa-Bianchi and Gábor Lugosi. Potential-based algorithms in on-line prediction and game theory. Machine Learning, 51:239–261, 2003.
[8] David Blackwell. An analogue of the minimax theorem for vector payoffs. Pacific Journal of Mathematics, 6(1):1–8, 1956.
[9] Shai Shalev-Shwartz and Yoram Singer. Convex repeated games and Fenchel duality. In B. Schölkopf, J.C. Platt, and T. Hofmann, editors, Advances in Neural Information Processing Systems, volume 19, Cambridge, MA, 2007. MIT Press.
[10] David P. Helmbold and Robert E. Schapire. Predicting nearly as well as the best pruning of a decision tree. In Proceedings of COLT, pages 61–68, 1995.
[11] Eiji Takimoto and Manfred Warmuth. Path kernels and multiplicative updates. In COLT, 2002.
[12] R. Tyrrell Rockafellar. Convex Analysis. Princeton University Press, New Jersey, 1970.
[13] Nick Littlestone and Manfred Warmuth. The weighted majority algorithm. Technical Report UCSC-CRL-91-28, University of California Santa Cruz, 1992.
[14] Sergiu Hart and Andreu Mas-Colell. A simple adaptive procedure leading to correlated equilibrium. Econometrica, 68(5):1127–1150, 2000.
[15] H. Brendan McMahan, Geoffrey J. Gordon, and Avrim Blum. Planning in the presence of cost functions controlled by an adversary. In Proceedings of the Twentieth International Conference on Machine Learning, 2003.
[16] David P. Helmbold and Robert E. Schapire.
Predicting nearly as well as the best pruning of a decision tree. In COLT, 1995.
[17] D. Koller, N. Megiddo, and B. von Stengel. Efficient computation of equilibria for extensive two-person games. Games and Economic Behavior, 14(2), 1996.
Fast Discriminative Visual Codebooks using Randomized Clustering Forests

Frank Moosmann*, Bill Triggs and Frederic Jurie
GRAVIR-CNRS-INRIA, 655 avenue de l'Europe, Montbonnot 38330, France
[email protected]

* Current address: Institute of Measurement and Control, University of Karlsruhe, Germany. Contact: [email protected]

Abstract

Some of the most effective recent methods for content-based image classification work by extracting dense or sparse local image descriptors, quantizing them according to a coding rule such as k-means vector quantization, accumulating histograms of the resulting "visual word" codes over the image, and classifying these with a conventional classifier such as an SVM. Large numbers of descriptors and large codebooks are needed for good results, and this becomes slow using k-means. We introduce Extremely Randomized Clustering Forests (ensembles of randomly created clustering trees) and show that these provide more accurate results, much faster training and testing, and good resistance to background clutter in several state-of-the-art image classification tasks.

1 Introduction

Many of the most popular current methods for image classification represent images as collections of independent patches characterized by local visual descriptors. Patches can be sampled densely [18, 24], randomly [15], or at selected salient points [14]. Various local descriptors exist with different degrees of geometric and photometric invariance, but all encode the local patch appearance as a numerical vector, and the more discriminant ones tend to be high-dimensional. The usual way to handle the resulting set of descriptor vectors is to vector quantize them to produce so-called textons [12] or visual words [5, 22]. The introduction of such visual codebooks has allowed significant advances in image classification, especially when combined with bag-of-words models inspired by text analysis [5, 7, 22, 24, 25].

There are various methods for creating visual codebooks. K-means clustering is currently the most common [5, 22], but mean-shift [9] and hierarchical k-means [17] clusterers have some advantages. These methods are generative, but some recent approaches focus on building more discriminative codebooks [20, 24]. The above methods give impressive results, but they are computationally expensive owing to the cost of assigning visual descriptors to visual words during training and use. Tree based coders [11, 17, 23] are quicker but (so far) somewhat less discriminative. It seems to be difficult to achieve both speed and good discrimination.

This paper contributes two main ideas. One is that (small) ensembles of trees eliminate many of the disadvantages of single tree based coders without losing the speed advantages of trees. The second is that classification trees contain a lot of valuable information about locality in descriptor space that is not apparent in the final class labels. One can exploit this by training them for classification, then ignoring the class labels and using them as "clustering trees": simple spatial partitioners that assign a distinct region label (visual word) to each leaf.

Figure 1: Using ERC-Forests as visual codebooks in bag-of-feature image classification.

Combining these ideas, we introduce Extremely Randomized Clustering Forests (ERC-Forests): ensembles of randomly created clustering trees.
We show that these have good resistance to background clutter and that they provide much faster training and testing and more accurate results than conventional k-means in several state-of-the-art image classification tasks. In the rest of the paper, we first explain how decision trees can provide good visual vocabularies, then we describe our approach and present experimental results and conclusions.

2 Tree Structured Visual Dictionaries

Our overall goal is to classify images according to the object classes that they contain (see Figure 1). We will do this by selecting or sampling patches from the image, characterizing them by vectors of local visual descriptors, and coding (quantizing) the vectors using a learned visual dictionary, i.e. a process that assigns discrete labels to descriptors, with similar descriptors having a high probability of being assigned the same label. As in text categorization, the occurrences of each label ("visual word") are then counted to build a global histogram ("bag of words") summarizing the image ("document") contents. The histogram is fed to a classifier to estimate the image's category label. Unlike text, visual "words" are not intrinsic entities, and different quantization methods can lead to very different performances. Computational efficiency is important because a typical image yields 10^3-10^4 local descriptors and data sets often contain thousands of images. Also, many of the descriptors generally lie on the background, not the object being classified, so the coding method needs to be able to learn a discriminative labelling despite considerable background "noise".

K-means and tree structured codes. Visual coding based on K-means vector quantization is effective but slow because it relies on nearest neighbor search, which remains hard to accelerate in high dimensional descriptor spaces despite years of research on spatial data structures (e.g. [21]). Nearest neighbour assignments can also be somewhat unstable: in high dimensions, concentration of measure [2] tends to ensure that there are many centres with similar distances to any given point. Component-wise decision trees offer logarithmic-time coding, but individual trees can rarely compete with a good K-means coding: each path through the tree typically accesses only a few of the feature dimensions, so there is little scope for producing a consensus over many different dimensions. Nistér et al. [17] introduced a tree coding based on hierarchical K-means. This uses all components and gives a good compromise between speed and loss of accuracy.

Random forests. Despite their popularity, we believe that K-means codes are not the best compromise. No single data structure can capture the diversity and richness of high dimensional descriptors. To do this an ensemble approach is needed. The theoretical and practical performance of ensemble classifiers is well documented [1]. Ensembles of random trees [4] seem particularly suitable for visual dictionaries owing to their simplicity, speed and performance [11]. Sufficiently diverse trees can be constructed using randomized data splits or samples [4]. Extremely Randomized Trees (see below) take this further by randomizing both attribute choices and quantization thresholds, obtaining even better results [8]. Compared to standard approaches such as C4.5, ER tree construction is rapid, depends only weakly on the dimensionality, and requires relatively little memory.

Clustering forests.
Methods such as [11, 8] classify descriptors by majority voting over the tree-assigned class labels. There are typically many leaves that assign a given class label. Our method works differently after the trees are built. It uses the trees as spatial partitioning methods, not classifiers, assigning each leaf of each tree a distinct region label (visual word). For the overall image classification tasks studied here, histograms of these leaf labels are then accumulated over the whole image and a global SVM classifier is applied. Our approach is thus related to clustering trees: decision trees whose leaves define a spatial partitioning or grouping [3, 13]. Such trees are able to find natural clusters in high dimensional spaces. They can be built without external class labels, but if labels are available they can be used to guide the tree construction. Ensemble methods, and particularly forests of extremely randomized trees, again offer considerable performance advantages here. The next section shows how such Extremely Randomized Clustering Forests can be used to produce efficient visual vocabularies for image classification tasks.

3 Extremely Randomized Clustering Forests (ERC-Forests)

Our goal is to build a discriminative coding method. Our method starts by building randomized decision trees that predict class labels y from visual descriptor vectors d = (f_1, ..., f_D), where f_i, i = 1, ..., D are elementary scalar features. For notational simplicity we assume that all of the descriptors from a given image share the same label y. We train the trees using a labeled (for now) training set L = {(d_n, y_n), n = 1, ..., N}. However we use the trees only for spatial coding, not classification per se. During a query, for each descriptor tested, each tree is traversed from the root down to a leaf and the returned label is the unique leaf index, not the (set of) descriptor label(s) y associated with the leaf.

ERC-Trees. The trees are built recursively top down. At each node t corresponding to descriptor space region R_t, two children l, r are created by choosing a boolean test T_t that divides R_t into two disjoint regions, R_t = R_l ∪ R_r with R_l ∩ R_r = ∅. Recursion continues until further subdivision is impossible: either all surviving training examples belong to the same class or all have identical values for all attributes. We use thresholds on elementary features as tests, T_t = {f_{i(t)} ≥ θ_t} for some feature index i(t) and threshold θ_t. The tests are selected randomly as follows. A feature index i(t) is chosen randomly, a threshold θ_t is sampled randomly from a uniform distribution, and the resulting node is scored over the surviving points using Shannon entropy [8]:

    Sc(C, T) = 2 I(C, T) / (H_C + H_T),

where H_C denotes the entropy of the class label distribution, H_T the entropy of the partition induced by the test, and I(C, T) their mutual information. High scores indicate that the split separates the classes well. This procedure is repeated until the score is higher than a fixed threshold S_min or until a fixed maximum number T_max of trials have been made. The test T_t that achieved the highest score is adopted and the recursion continues. The parameters (S_min, T_max) control the strength and randomness of the generated trees. High values (e.g. (1, D) for normal ID3 decision tree learning) produce highly discriminant trees with little diversity, while S_min = 0 or T_max = 1 produce completely random trees.
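As a concrete illustration of the node-splitting procedure just described, here is a minimal Python sketch of growing a single ERC-tree. It is our reading of the text, not the authors' code; all function and variable names are hypothetical, and it assumes integer class labels.

```python
import numpy as np

def entropy_bits(x):
    """Shannon entropy (bits) of the empirical distribution of x."""
    _, counts = np.unique(x, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def split_score(y, mask):
    """Sc(C, T) = 2 I(C, T) / (H_C + H_T) for integer labels y and a
    boolean split mask, the normalized score used to rate random tests."""
    h_c, h_t = entropy_bits(y), entropy_bits(mask)
    h_joint = entropy_bits(y * 2 + mask)       # encodes (class, side) pairs
    i_ct = h_c + h_t - h_joint                 # mutual information
    return 2.0 * i_ct / (h_c + h_t) if h_c + h_t > 0 else 0.0

def grow_erc_tree(X, y, s_min=0.5, t_max=50, rng=None):
    """Grow one Extremely Randomized Clustering tree on descriptors X
    (N x D array) with integer class labels y. Leaves are later treated
    as visual words, so they carry no class information themselves."""
    rng = rng or np.random.default_rng()
    if len(np.unique(y)) == 1 or np.all(X == X[0]):
        return {"leaf": True}                  # no further subdivision
    best = None
    for _ in range(t_max):                     # up to T_max random trials
        i = int(rng.integers(X.shape[1]))      # random feature index i(t)
        theta = rng.uniform(X[:, i].min(), X[:, i].max())
        mask = X[:, i] >= theta                # boolean test f_i >= theta
        if mask.all() or not mask.any():
            continue                           # degenerate split, retry
        sc = split_score(y, mask)
        if best is None or sc > best[0]:
            best = (sc, i, theta, mask)
        if sc >= s_min:
            break                              # score good enough: stop early
    if best is None:
        return {"leaf": True}
    _, i, theta, mask = best                   # adopt the highest-scoring test
    return {"leaf": False, "feature": i, "theta": theta,
            "right": grow_erc_tree(X[mask], y[mask], s_min, t_max, rng),
            "left": grow_erc_tree(X[~mask], y[~mask], s_min, t_max, rng)}
```

A forest is then just a list of such trees grown with independent random draws; pruning back to a target number of leaves, as described above, is omitted from the sketch.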
ERC-Forests. Compared to standard decision tree learning, the trees built using random decisions are larger and have higher variance. Class label variance can be reduced by voting over the ensemble of trees (e.g. [15]), but here, instead of voting, we treat each leaf in each tree as a separate visual word and stack the leaf indices from each tree into an extended code vector for each input descriptor, leaving the integration of votes to the final classifier. The resulting process is reminiscent of spatial search algorithms based on random line projections (e.g. [10]), with each tree being responsible for distributing the data across its own set of clusters. Classifier forests are characterized by Breiman's bound on the asymptotic generalization error [4],

    PE* ≤ ρ̄ (1 − s²) / s²,

where s measures the strength of the individual trees and ρ̄ measures the correlation between them in terms of the raw margin. It would be interesting to optimize S_min and T_max to minimize the bound but we have not yet tried this. Experimentally, the trees appear to be rather diverse while still remaining relatively strong, which should lead to good error bounds.

Application to visual vocabularies. In the experiments below, local features are extracted from the training images by sampling sub-windows at random positions and scales(1) and coding them using a visual descriptor function. An ERC-Forest is then built using the given class labels. To control the codebook size, we grow the trees fully then prune them back bottom up, recursively removing the node with the lowest gain, until either a specified threshold on the gain or a specified number of leaves is reached. One can also prune during construction, which is faster but does not allow the number of leaf nodes to be controlled directly. In use, the trees transform each descriptor into a set of leaf node indices with one element from each tree. Votes for each index are accumulated into a global histogram and used for classification as in any other bag of features approach. Independently of the codebook, the denser the sampling the better the results, so typically we sample images more densely during testing than during codebook training, cf. [18].

(1) For image classification, dense enough random sampling eventually outperforms keypoint based sampling [18].

Figure 2: Left: example images from GRAZ-02. The rows are respectively Bikes (B), Cars (C) and background (N). Right: some test patches that were assigned to a particular "car" leaf (left) and a particular "bike" one (right).

Computational complexity. The worst-case complexity for building a tree is O(T_max N k), where N is the number of patches and k is the number of clusters/leaf nodes before pruning. With adversarial data the method cannot guarantee balanced trees so it can not do better than this, but in our experiments on real data we always obtained well balanced trees at a practical complexity of around O(T_max N log k). The dependence on data dimensionality D is hidden in the constant T_max, which needs to be set large enough to filter out irrelevant feature dimensions, thus providing better coding and more balanced trees. A value of T_max ~ O(√D) has been suggested [8], leading to a total complexity of O(√D N log k). In contrast, k-means has a complexity of O(D N k), which is more than 10^4 times larger for our 768-D wavelet descriptor with N = 20 000 image patches and k = 5000 clusters, not counting the number of iterations that k-means has to perform. Our method is also faster in use,
a useful property given that reliable image classification requires large numbers of subwindows to be labelled [18, 24]. Labeling a descriptor with a balanced tree requires O(log k) operations, whereas k-means costs O(kD).

4 Experiments

We present detailed results on the GRAZ-02 test set, http://www.emt.tugraz.at/~pinz/data/. Similar conclusions hold for two other sets that we tested, so we comment only briefly on these. GRAZ-02 (Figure 2, left) contains three object categories, bicycles (B), cars (C) and persons (P), and negatives (N, meaning that none of B, C, P are present). It is challenging in the sense that the illumination is highly variable and the objects appear at a wide range of different perspectives and scales and are sometimes partially hidden. It is also neutral with respect to background, so it is not possible to detect objects reliably based on context alone.

We tested various visual descriptors. The best choice turns out to depend on the database. Our color descriptor uses raw HSL color pixels to produce a 768-D feature vector (16x16 pixels x 3 colors). Our color wavelet descriptor transforms this into another 768-D vector using a 16x16 Haar wavelet transform. Finally, we tested the popular grayscale SIFT descriptor [14], which returns 128-D vectors (4x4 histograms of 8 orientations). We measure performance with ROC curves and classification rates at equal error rate (EER). The method is randomized so we report means and variances over 10 learning runs. We use S_min = 0.5 but the exact value is not critical. In contrast, T_max has a significant influence on performance, so it
Unless otherwise stated, 20 000 features (67 per image) were used to learn 1000 spatial bins per tree for 5 trees, and 8000 patches were sampled per image to build the resulting 5000-D histograms. The histograms are binarized using trivial thresholding at count 1 before being fed to the global linear SVM image classifier. We also tested with histograms normalized to total sum 1, and with thresholding by maximizing the mutual information of each dimension, but neither yielded better results for ERC-Forests. Fig. 4 gives some quantitative results on the bikes category (B vs. N). Fig. 4(a) shows the clear difference between our method and classical k-means for vocabulary construction. Note that we were not able to extend the k-means curve beyond 20 000 windows per image owing to prohibitive execution times. The figure also shows results for ?unsupervised trees? ? ERC-Forests built without using the class labels during tree construction. The algorithm remains the same, but the node scoring function is defined as the ratio between the splits so as to encourage balanced trees similar to randomized KD-trees. If only a few patches are sampled this is as good as k-means and much faster. However the spatial partition is so bad that with additional test windows, the binarized histogram vectors become almost entirely filled with ones, so discrimination suffers. As the dotted line shows, using binarization thresholds that maximize the mutual information can fix this problem but the results are still far below ERC-Forests. This comparison clearly shows the advantages of using supervision during clustering. Fig. 4(b) shows that codebooks with around 5000 entries (1000 per tree) suffice for good results. Fig. 4(c) shows that when the number of features used to build the codebooks is increased, the Figure 4: Evaluation of the parameters for B vs. N in setting 2: classification rate at the EER, averaged over trials. The error bars indicate standard deviations. See the text for further explanations. optimal codebook size also increases slightly. Also, if the trees are pruned too heavily they lose discriminative power: it is better to grow them fully and do without pruning. Fig. 4(d) shows that increasing the number of trees from 1 to 5 reduces the variance and increases the accuracy, with little improvement beyond this. Here, the number of leaves per tree was kept constant at 1000, so doubling the number of trees effectively doubles the vocabulary size. We also tested our method on the 2005 Pascal Challenge dataset, http://www.pascalnetwork.org/challenges/VOC/voc2005. This contains four categories, motorbikes, bicycles, people and cars. The goal is to distinguish each category from the others. Just 73 patches per image (50 000 in total over the 648 training images) were used to build the codebook. The maximum patch size was 30% of the image size. SIFT descriptors gave the best results for coding. The chosen forest contained four 7500-leaf trees, producing a 30 000-D histogram. The results (M:95.8%, B:90.1%, P:94%, C:96%) were either similar to or up to 2% better than the frontrunners in the 2005 Pascal Challenge [6], but used less information and had much faster processing times. A 2.8GHz P4 took around 20 minutes to build the codebook. Building the histograms for the 684 training and 689 test images with 10 000 patches per image took only a few hours. All times include both feature extraction and coding. We also compared our results with those of Mar?ee et al. [15]. 
We also compared our results with those of Marée et al. [15]. They use the same kind of tree structures to classify images directly, without introducing the vocabulary layer that we propose. Our EER error rates are consistently 5-10% better than theirs. Finally, we tested the horse database from http://pascal.inrialpes.fr/data/horses. The task is difficult because the images were taken randomly from the internet and are highly variable regarding subject size, pose and visibility. Using SIFT descriptors we get an EER classification rate of 85.3%, which is significantly better than the other methods that we are aware of. 100 patches per image were used to build a codebook with 4 trees. 10 000 patches per image were used for testing.

5 Conclusions

Bag of local descriptor based image classifiers give state-of-the-art results but require the quantization of large numbers of high-dimensional image descriptors into many label classes. Extremely Randomized Clustering Forests provide a rapid and highly discriminative approach to this that outperforms k-means based coding in training time and memory, testing time, and classification accuracy. The method can use unlabelled data but it benefits significantly from labels when they are available. It is also resistant to background clutter, giving relatively clean segmentation and "popout" of foreground classes even when trained on images that contain significantly more background features than foreground ones. Although trained as classifiers, the trees are used as descriptor-space quantization rules, with the final classification being handled by a separate SVM trained on the leaf indices. This seems to be a promising approach for visual recognition, and may be beneficial in other areas such as object detection and segmentation.

References

[1] E. Bauer and R. Kohavi. An empirical comparison of voting classification algorithms: Bagging, boosting, and variants. Machine Learning Journal, 36(1-2):105–139, 1999.
[2] K. Beyer, J. Goldstein, R. Ramakrishnan, and U. Shaft. When is nearest neighbors meaningful? In Int. Conf. Database Theory, pages 217–235, 1999.
[3] H. Blockeel, L. De Raedt, and J. Ramon. Top-down induction of clustering trees. In ICML, pages 55–63, 1998.
[4] L. Breiman. Random forests. ML Journal, 45(1):5–32, 2001.
[5] G. Csurka, C. Dance, L. Fan, J. Williamowski, and C. Bray. Visual categorization with bags of keypoints. In ECCV'04 workshop on Statistical Learning in CV, pages 59–74, 2004.
[6] M. Everingham et al. (33 authors). The 2005 PASCAL visual object classes challenge. In F. d'Alché-Buc, I. Dagan, and J. Quinonero, editors, Proc. 1st PASCAL Challenges Workshop. Springer LNAI, 2006.
[7] R. Fergus, L. Fei-Fei, P. Perona, and A. Zisserman. Learning object categories from Google's image search. In ICCV, pages II: 1816–1823, 2005.
[8] P. Geurts, D. Ernst, and L. Wehenkel. Extremely randomized trees. Machine Learning Journal, 63(1), 2006.
[9] F. Jurie and B. Triggs. Creating efficient codebooks for visual recognition. In ICCV, 2005.
[10] H. Lejsek, F.H. Asmundsson, B. Thór Jónsson, and L. Amsaleg. Scalability of local image descriptors: A comparative study. In ACM Int. Conf. on Multimedia, Santa Barbara, 2006.
[11] V. Lepetit, P. Lagger, and P. Fua. Randomized trees for real-time keypoint recognition. In CVPR '05 Vol. 2, pages 775–781, 2005.
[12] T. Leung and J. Malik. Representing and recognizing the visual appearance of materials using three-dimensional textons. IJCV, 43(1):29–44, June 2001.
[13] Bing Liu, Yiyuan Xia, and Philip S. Yu. Clustering through decision tree construction.
In CIKM '00, pages 20–29, 2000.
[14] D.G. Lowe. Distinctive image features from scale-invariant keypoints. IJCV, 60(2), 2004.
[15] R. Marée, P. Geurts, J. Piater, and L. Wehenkel. Random subwindows for robust image classification. In CVPR, volume 1, pages 34–40, 2005.
[16] F. Moosmann, D. Larlus, and F. Jurie. Learning saliency maps for object categorization. In ECCV'06 Workshop on the Representation and Use of Prior Knowledge in Vision, 2006.
[17] D. Nistér and H. Stewénius. Scalable recognition with a vocabulary tree. In CVPR, 2006.
[18] E. Nowak, F. Jurie, and B. Triggs. Sampling strategies for bag-of-features image classification. In ECCV'06, 2006.
[19] A. Opelt and A. Pinz. Object localization with boosting and weak supervision for generic object recognition. In SCIA, 2005.
[20] F. Perronnin, C. Dance, G. Csurka, and M. Bressan. Adapted vocabularies for generic visual categorization. In ECCV, 2006.
[21] U. Shaft, J. Goldstein, and K. Beyer. Nearest neighbor query performance for unstable distributions. Technical Report TR 1388, Dept. of Computer Science, Univ. of Wisconsin, 1998.
[22] J. Sivic and A. Zisserman. Video Google: A text retrieval approach to object matching in videos. In ICCV, volume 2, pages 1470–1477, October 2003.
[23] J. Winn and A. Criminisi. Object class recognition at a glance. In CVPR'06 video tracks, 2006.
[24] J. Winn, A. Criminisi, and T. Minka. Object categorization by learned universal visual dictionary. In ICCV, pages II: 1800–1807, 2005.
[25] J. Zhang, M. Marszalek, S. Lazebnik, and C. Schmid. Local features and kernels for classification of texture and object categories: A comprehensive study. Int. J. Computer Vision. To appear, 2006.
Bayesian Ensemble Learning

Hugh A. Chipman
Department of Mathematics and Statistics, Acadia University, Wolfville, NS, Canada

Edward I. George
Department of Statistics, The Wharton School, University of Pennsylvania, Philadelphia, PA 19104-6302

Robert E. McCulloch
Graduate School of Business, University of Chicago, Chicago, IL 60637

Abstract

We develop a Bayesian "sum-of-trees" model, named BART, where each tree is constrained by a prior to be a weak learner. Fitting and inference are accomplished via an iterative backfitting MCMC algorithm. This model is motivated by ensemble methods in general, and boosting algorithms in particular. Like boosting, each weak learner (i.e., each weak tree) contributes a small amount to the overall model. However, our procedure is defined by a statistical model: a prior and a likelihood, while boosting is defined by an algorithm. This model-based approach enables a full and accurate assessment of uncertainty in model predictions, while remaining highly competitive in terms of predictive accuracy.

1 Introduction

We consider the fundamental problem of making inference about an unknown function f that predicts an output Y using a p-dimensional vector of inputs x when

    Y = f(x) + ε,   ε ~ N(0, σ²).

To do this, we consider modelling, or at least approximating, f(x) = E(Y | x), the mean of Y given x, by a sum of m regression trees:

    f(x) ≈ g_1(x) + g_2(x) + ... + g_m(x),

where each g_i denotes a binary regression tree. The sum-of-trees model is fundamentally an additive model with multivariate components. It is vastly more flexible than a single tree model, which does not easily incorporate additive effects. Because multivariate components can easily account for high order interaction effects, a sum-of-trees model is also much more flexible than typical additive models that use low dimensional smoothers as components.

Our approach is fully model based and Bayesian. We specify a prior, and then obtain a sequence of draws from the posterior using Markov chain Monte Carlo (MCMC). The prior plays two essential roles. First, with m chosen large, it restrains the fit of each individual g_i so that the overall fit is made up of many small contributions in the spirit of boosting (Freund & Schapire (1997), Friedman (2001)). Each g_i is a "weak learner". Second, it "regularizes" the model by restraining the overall fit to achieve a good bias-variance tradeoff. The prior specification is kept simple and a default choice is shown to have good out-of-sample predictive performance. Inferential uncertainty is naturally quantified in the usual Bayesian way: variation in the MCMC draws of f = ∑_i g_i (evaluated at a set of x of interest) and σ indicates our beliefs about plausible values given the data. Note that the depth of each tree is not fixed, so we infer the level of interaction. Our point estimate of f is the average of the draws. Thus, our procedure captures ensemble learning (in which many trees are combined) both in the fundamental sum-of-trees specification and in the model-averaging used to obtain the estimate.

2 The Model

The model consists of two parts: a sum-of-trees model, which we have named BART (Bayesian Additive Regression Trees), and a regularization prior.

2.1 A Sum-of-Trees Model

To elaborate the form of a sum-of-trees model, we begin by establishing notation for a single tree model. Let T denote a binary tree consisting of a set of interior node decision rules and a set of terminal nodes, and let M =
{μ_1, μ_2, ..., μ_B} denote a set of parameter values associated with each of the B terminal nodes of T. Prediction for a particular value of input vector x is accomplished as follows: if x is associated with terminal node b of T by the sequence of decision rules from top to bottom, it is then assigned the μ_b value associated with this terminal node. We use g(x; T, M) to denote the function corresponding to (T, M) which assigns a μ_b ∈ M to x. Using this notation, and letting g_i(x) = g(x; T_i, M_i), our sum-of-trees model can more explicitly be expressed as

    Y = g(x; T_1, M_1) + g(x; T_2, M_2) + ... + g(x; T_m, M_m) + ε,   (1)
    ε ~ N(0, σ²).   (2)

Unlike the single tree model, when m > 1 the terminal node parameter μ_i given by g(x; T_j, M_j) is merely part of the conditional mean of Y given x. Such terminal node parameters will represent interaction effects when their assignment depends on more than one component of x (i.e., more than one variable). Because (1) may be based on trees of varying sizes, the sum-of-trees model can incorporate both direct effects and interaction effects of varying orders. In the special case where every terminal node assignment depends on just a single component of x, the sum-of-trees model reduces to a simple additive function.

With a large number of trees, a sum-of-trees model gains increased representation flexibility, which, when coupled with our regularization prior, gives excellent out-of-sample predictive performance. Indeed, in the examples in Section 4, we set m as large as 200. Note that with m large there are hundreds of parameters of which only σ is identified. This is not a problem for our Bayesian analysis. Indeed, this lack of identification is the reason our MCMC mixes well. Even when m is much larger than needed to capture f (effectively, we have an "overcomplete basis") the procedure still works well.

2.2 A Regularization Prior

The complexity of the prior specification is vastly simplified by letting the T_i be i.i.d., the μ_{i,b} (node b of tree i) be i.i.d. given the set of T, and σ be independent of all T and μ. Given these independence assumptions we need only choose priors for a single tree T, a single μ, and σ. Motivated by our desire to make each g(x; T_i, M_i) a small contribution to the overall fit, we put prior weight on small trees and small μ_{i,b}.

For the tree prior, we use the same specification as in Chipman, George & McCulloch (1998). In this prior, the probability that a node is nonterminal is α(1 + d)^(−β), where d is the depth of the node. In all examples we use the same prior corresponding to the choice α = 0.95 and β = 2. With this choice, trees with 1, 2, 3, 4, and ≥ 5 terminal nodes receive prior probability of 0.05, 0.55, 0.28, 0.09, and 0.03, respectively. Note that even with this prior, trees with many terminal nodes can be grown if the data demands it. At any non-terminal node, the prior on the associated decision rule puts equal probability on each available variable and then equal probability on each available rule given the variable.

For the prior on a μ, we start by simply shifting and rescaling Y so that we believe the prior probability that E(Y | x) ∈ (−0.5, 0.5) is very high. We let μ ~ N(0, σ_μ²). Given the T_i and an x, E(Y | x) is the sum of m independent μ's. The standard deviation of the sum is √m σ_μ.

[Figure 1: Three priors on σ when σ̂ = 2 (conservative: df=10, quantile=.75; default: df=3, quantile=.9; aggressive: df=3, quantile=.99).]

We choose σ_μ so
that 0.5 is within k standard deviations of zero: k √m σ_μ = 0.5. For example, if k = 2 there is a 95% (conditional) prior probability that the mean of Y is in (−0.5, 0.5). k = 2 is our default choice, and in practice we typically rescale the response y so that its observed values range from −0.5 to 0.5. Note that this prior increases the shrinkage of μ_{i,b} (toward zero) as m increases.

For the prior on σ we start from the usual inverted-chi-squared prior: σ² ~ ν λ / χ²_ν. To choose the hyperparameters ν and λ, we begin by obtaining a "rough overestimate" σ̂ of σ. We then pick a degrees-of-freedom value ν between 3 and 10. Finally, we pick a value of q such as 0.75, 0.90 or 0.99, and set λ so that the qth quantile of the prior on σ is located at σ̂, that is, P(σ < σ̂) = q. Figure 1 illustrates priors corresponding to three (ν, q) settings when the rough overestimate is σ̂ = 2. We refer to these three settings, (ν, q) = (10, 0.75), (3, 0.90), (3, 0.99), as conservative, default and aggressive, respectively. For automatic use, we recommend the default setting (ν, q) = (3, 0.90), which tends to avoid extremes. Simple data-driven choices of σ̂ we have used in practice are the estimate from a linear regression or the sample standard deviation of Y. Note that this prior choice can be influential. Strong prior beliefs that σ is very small could lead to over-fitting.

3 A Backfitting MCMC Algorithm

Given the observed data y, our Bayesian setup induces a posterior distribution p((T_1, M_1), ..., (T_m, M_m), σ | y) on all the unknowns that determine a sum-of-trees model. Although the sheer size of this parameter space precludes exhaustive calculation, the following backfitting MCMC algorithm can be used to sample from this posterior.

At a general level, our algorithm is a Gibbs sampler. For notational convenience, let T_(i) be the set of all trees in the sum except T_i, and similarly define M_(i). The Gibbs sampler here entails m successive draws of (T_i, M_i) conditionally on (T_(i), M_(i), σ):

    (T_1, M_1) | T_(1), M_(1), σ, y
    (T_2, M_2) | T_(2), M_(2), σ, y
    ...
    (T_m, M_m) | T_(m), M_(m), σ, y,   (3)

followed by a draw of σ from the full conditional:

    σ | T_1, ..., T_m, M_1, ..., M_m, y.   (4)

Hastie & Tibshirani (2000) considered a similar application of the Gibbs sampler for posterior sampling for additive and generalized additive models with σ fixed, and showed how it was a stochastic generalization of the backfitting algorithm for such models. For this reason, we refer to our algorithm as backfitting MCMC. In contrast with the stagewise nature of most boosting algorithms (Freund & Schapire (1997), Friedman (2001), Meek, Thiesson & Heckerman (2002)), the backfitting MCMC algorithm repeatedly resamples the parameters of each learner in the ensemble.

The idea is that given (T_(i), M_(i)) and σ we may subtract the fit from (T_(i), M_(i)) from both sides of (1), leaving us with a single tree model with known error variance. This draw may be made following the approach of Chipman et al. (1998) or the refinement of Wu, Tjelmeland & West (2007). These methods draw (T_i, M_i) | T_(i), M_(i), σ, y as T_i | T_(i), M_(i), σ, y followed by M_i | T_i, T_(i), M_(i), σ, y. The first draw is done by the Metropolis-Hastings algorithm after integrating out M_i, and the second is a set of normal draws. The draw of σ is easily accomplished by subtracting all the fit from both sides of (1) so that the ε are considered to be observed. The draw is then a standard inverted chi-squared.
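The control flow of the sampler can be sketched as follows. The code below is our own drastically simplified instance, not the authors' implementation: every "tree" is a single terminal node (the chain's initial state), so the Metropolis-Hastings tree moves described next are omitted, and each draw in (3) reduces to a conjugate normal draw. The backfitting pattern of subtracting the fit of all other learners, and the inverted-chi-squared draw (4), are as in the text; all names are ours.

```python
import numpy as np

def backfit_mcmc(y, m=200, sigma_mu=None, nu=3.0, lam=1.0, n_iter=1000,
                 rng=np.random.default_rng(0)):
    """Backfitting Gibbs sampler for the stripped-down model
        y_j = mu_1 + ... + mu_m + eps_j,  eps_j ~ N(0, sigma^2),
    i.e. every 'tree' is a single node, with mu_i ~ N(0, sigma_mu^2)
    and sigma^2 ~ nu*lam/chi2_nu as in Section 2.2."""
    n = len(y)
    if sigma_mu is None:
        sigma_mu = 0.5 / (2.0 * np.sqrt(m))   # k = 2 default of Section 2.2
    mu = np.zeros(m)                          # start: m single-node trees
    sigma2 = 1.0
    draws = []
    for _ in range(n_iter):
        for i in range(m):
            # backfitting: subtract the fit of all other learners (eq. 3)
            resid = y - (mu.sum() - mu[i])
            # conjugate normal draw for mu_i given everything else
            prec = n / sigma2 + 1.0 / sigma_mu**2
            mean = (resid.sum() / sigma2) / prec
            mu[i] = rng.normal(mean, 1.0 / np.sqrt(prec))
        # draw sigma^2 from its full conditional (eq. 4):
        # scaled inverse chi-squared with nu + n degrees of freedom
        eps = y - mu.sum()
        sigma2 = (nu * lam + (eps**2).sum()) / rng.chisquare(nu + n)
        draws.append((mu.sum(), np.sqrt(sigma2)))
    return np.array(draws)
```

In the full algorithm the inner loop would instead propose a tree move (grow, prune, change, or swap, as described next), accept or reject it by Metropolis-Hastings with M_i integrated out, and then draw the terminal node values; the residual bookkeeping is unchanged.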
The Metropolis-Hastings draw of T_i | T_(i), M_(i), σ, y is complex and lies at the heart of our method. The algorithm of Chipman et al. (1998) proposes a new tree based on the current tree using one of four moves. The moves and their associated proposal probabilities are: growing a terminal node (0.25), pruning a pair of terminal nodes (0.25), changing a non-terminal rule (0.40), and swapping a rule between parent and child (0.10). Although the grow and prune moves change the implicit dimensionality of the proposed tree in terms of the number of terminal nodes, by integrating out M_i from the posterior, we avoid the complexities associated with reversible jumps between continuous spaces of varying dimensions (Green 1995).

We initialize the chain with m single node trees, and then iterations are repeated until satisfactory convergence is obtained. At each iteration, each tree may increase or decrease the number of terminal nodes by one, or change one or two decision rules. Each μ will change (or cease to exist or be born), and σ will change. It is not uncommon for a tree to grow large and then subsequently collapse back down to a single node as the algorithm iterates. The sum-of-trees model, with its abundance of unidentified parameters, allows for "fit" to be freely reallocated from one tree to another. Because each move makes only small incremental changes to the fit, we can imagine the algorithm as analogous to sculpting a complex figure by adding and subtracting small dabs of clay.

Compared to the single tree model MCMC approach of Chipman et al. (1998), our backfitting MCMC algorithm mixes dramatically better. When only single tree models are considered, the MCMC algorithm tends to quickly gravitate toward a single large tree and then gets stuck in a local neighborhood of that tree. In sharp contrast, we have found that restarts of the backfitting MCMC algorithm give remarkably similar results even in difficult problems. Consequently, we run one long chain rather than multiple starts.

In some ways backfitting MCMC is a stochastic alternative to boosting algorithms for fitting linear combinations of trees. It is distinguished by the ability to sample from a posterior distribution. At each iteration, we get a new draw

    f* = g(x; T_1, M_1) + g(x; T_2, M_2) + ... + g(x; T_m, M_m)   (5)

corresponding to the draw of the T_j and M_j. These draws are a (dependent) sample from the posterior distribution on the "true" f. Rather than pick the "best" f* from these draws, the set of multiple draws can be used to further enhance inference. We estimate f by the posterior mean of f, which is approximated by averaging the f* over the draws. Further, we can gauge our uncertainty about the actual underlying f by the variation across the draws. For example, we can use the 5% and 95% quantiles of f*(x) to obtain 90% posterior intervals for f(x).

4 Examples

In this section we illustrate the potential of our Bayesian ensemble procedure BART in a large experiment using 42 datasets. The data are a subset of 52 sets considered by Kim, Loh, Shih & Chaudhuri (2007).

Table 1: Operational parameters for the various competing models.

Method             Parameter                                     Values considered
Lasso              shrinkage (in range 0-1)                      0.1, 0.2, ..., 1.0
Gradient Boosting  # of trees                                    50, 100, 200
                   shrinkage (multiplier of each tree added)     0.01, 0.05, 0.10, 0.25
                   max depth permitted for each tree             1, 2, 3, 4
Neural Nets        # hidden units                                see text
                   weight decay                                  .0001, .001, .01, .1, 1, 2, 3
Random Forests     # of trees                                    500
                   % variables sampled to grow each node         10, 25, 50, 100
BART-cv            sigma prior: (ν, q) combinations              (3, 0.90), (3, 0.99), (10, 0.75)
                   # trees                                       50, 200
                   μ prior: k value for σ_μ                      2, 3, 5
Values considered 0.1, 0.2, ..., 1.0 50, 100, 200 0.01, 0.05, 0.10, 0.25 1, 2, 3, 4 see text .0001,.001, .01, .1, 1, 2, 3 500 10, 25, 50, 100 (3,0.90), (3,0.99), (10,0.75) 50, 200 2, 3, 5 Table 1: Operational parameters for the various competing models. (2007). Ten datasets were excluded either because Random Forests was unable to use over 32 categorical predictors, or because a single train/test split was used in the original paper. All datasets correspond to regression problems with between 3 and 28 numeric predictors and 0 to 6 categorical predictors. Categorical predictors were converted into 0/1 indicator variables corresponding to each level. Sample sizes vary from 96 to 6806 observations. As competitors we considered linear regression with L1 regularization (the Lasso) (Efron, Hastie, Johnstone & Tibshirani 2004) and four black-box models: Friedman?s (2001) gradient boosting, random forests (Breiman 2001), and neural networks with one layer of hidden units. Implementation details are given in Chipman, George & McCulloch (2006). Tree models were not considered, since they tend to sacrifice predictive performance for interpretability. We considered two versions of our Bayesian ensemble procedure BART. In BART-cv, the prior hyperparameters (?, q, k, m) were treated as operational parameters to be tuned via cross-validation. In BART-default, we set (?, q, k, m) = (3, 0.90, 2, 200). For both BART-cv and BART-default, all specifications of the quantile q were made relative to the least squares linear regression estimate ? ? , and the number of burn-in steps and MCMC iterations used were determined by inspection of a single long run. Typically 200 burn-in steps and 1000 iterations were used. With the exception of BART-default (which has no tuning parameters), all free parameters in learners were chosen via 5-fold cross-validation within the training set. The parameters considered and potential levels are given in Table 1. The levels used were chosen with a sufficientlly wide range that the optimal value was not at an extreme of the candidate values in most problems. Neural networks are the only model whose operational parameters need additional explanation. In that case, the number of hidden units was chosen in terms of the implied number of weights, rather than the number of units. This design choice was made because of the widely varying number of predictors across problems, which directly impacts the number of weights. A number of hidden units was chosen so that there was a total of roughly u weights, with u = 50, 100, 200, 500 or 800. In all cases, the number of hidden units was further constrained to fall between 3 and 30. For example, with 20 predictors we used 3, 8 and 21 as candidate values for the number of hidden units. The models were compared with 20 replications of the following experiment. For each replication, we randomly chose 5/6 of the data as a training set and the remaining 1/6 was used for testing. As mentioned above, 5-fold cv was used within each training set. In each of the 42 datasets, the response was minimally preprocessed, applying a log or square root transformation if this made the histogram of observed responses more bell-shaped. In about half the cases, a log transform was used to reduce a right tail. In one case (Fishery) a square root transform was most appropriate. 
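The evaluation protocol above amounts to a short replication loop. The sketch below is a hypothetical harness, not any particular library's API: `learners` maps each method name to a caller-supplied tune_and_fit function that performs 5-fold cross-validation internally (or nothing, for BART-default) and returns a fitted object with a .predict method; X and y are NumPy arrays, with y already preprocessed as described in the text.

```python
import numpy as np

def run_replications(X, y, learners, n_reps=20, seed=0):
    """20 replications of a random 5/6 train, 1/6 test split per dataset.

    Returns a dict mapping learner name -> array of test-set RMSEs,
    one entry per replication.
    """
    rng = np.random.default_rng(seed)
    n = len(y)
    rmse = {name: np.empty(n_reps) for name in learners}
    for r in range(n_reps):
        perm = rng.permutation(n)
        cut = (5 * n) // 6                      # 5/6 of the data for training
        tr, te = perm[:cut], perm[cut:]
        for name, tune_and_fit in learners.items():
            model = tune_and_fit(X[tr], y[tr])  # 5-fold CV happens inside
            pred = model.predict(X[te])
            rmse[name][r] = np.sqrt(np.mean((pred - y[te]) ** 2))
    return rmse
```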
Finally, in order to enable performance comparisons across all datasets, after possible nonlinear transformation, the resultant response was scaled to have sample mean 0 and standard deviation 1 prior to any train/test splitting. A total of 42 × 20 = 840 experiments were carried out. Results across these experiments are summarized in Table 2, which gives mean RMSE values, and Figure 2, which summarizes relative performance using boxplots.

Table 2: Average test set RMSE values for each learner, combined across 20 train/test replicates of 42 datasets. The only statistically significant difference is Lasso versus the other methods.

Method          RMSE
BART-cv         0.5042
Boosting        0.5089
BART-default    0.5093
Random Forest   0.5097
Neural Net      0.5160
Lasso           0.5896

Figure 2: Test set RMSE performance relative to best (ratio of 1 means minimum RMSE test error). Results are across 20 replicates in each of 42 datasets. Boxes indicate middle 50% of runs. Each learner has the following percentage of ratios larger than 2.0, which are not plotted above: Neural net: 5%, BART-cv: 6%, BART-default and Boosting: 7%, Random forests 10% and Lasso 21%.

In Figure 2, the relative performances are calculated as follows: In each of the 840 experiments, the learner with smallest RMSE was identified. The relative ratio for each learner is the raw RMSE divided by the smallest RMSE. Thus a relative RMSE of 1 means that the learner had the best performance in a particular experiment. The central box gives the middle 50% of the data, with the median indicated by a vertical line. The "whiskers" of the plot extend to 1.5 times the box width, or the range of values, whichever comes first. Extremes outside the whiskers are given by individual points. As noted in the caption, relative RMSE ratios larger than 2.0 are not plotted.

BART has the best performance, although all methods except the Lasso are not significantly different. The strong performance of our "default" ensemble is especially noteworthy, since it requires no selection of operational parameters. That is, cross-validation is not necessary. This results in a huge computational savings, since under cross-validation, the number of times a learner must be trained is equal to the number of settings times the number of folds. This can easily be 50 (e.g. 5 folds by 10 settings), and in this experiment it was 90! BART-default is in some sense the "clear winner" in this experiment. Although average predictive performance was indistinguishable from the other models, it does not require cross-validation. Moreover, the use of cross-validation makes it impossible to interpret the MCMC output as valid uncertainty bounds. Not only is the default version of BART faster, but it also provides valid statistical inference, a benefit not available to any of the other learners considered. To further stress the benefit of uncertainty intervals, we report some more detailed results in the analysis of one of the 42 datasets, the Boston Housing data.
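The relative-performance summary behind Figure 2 reduces to a few lines of array arithmetic. A small sketch, assuming an rmse dict of per-experiment arrays such as the one produced by the harness above (all 840 experiments stacked per learner):

```python
import numpy as np

def relative_ratios(rmse):
    """Per-experiment raw RMSE divided by the smallest RMSE in that experiment.

    rmse maps learner name -> array of raw RMSEs, one per experiment;
    a ratio of 1 means the learner was best in that experiment.
    """
    names = sorted(rmse)
    raw = np.vstack([rmse[name] for name in names])  # (learners, experiments)
    best = raw.min(axis=0)                           # best RMSE per experiment
    ratios = raw / best
    for name, row in zip(names, ratios):
        q25, med, q75 = np.percentile(row, [25, 50, 75])
        print(f"{name:15s} median {med:.2f}  middle 50%: [{q25:.2f}, {q75:.2f}]")
    return dict(zip(names, ratios))
```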
We applied BART to all 506 observations of the Boston Housing data using the default setting (ν, q, k, m) = (3, 0.90, 2, 200) and the linear regression estimate σ̂ to anchor q. At each of the 506 predictor values x, we used 5% and 95% quantiles of the MCMC draws to obtain 90% posterior intervals for f(x). An appealing feature of these posterior intervals is that they widen when there is less information about f(x). To roughly illustrate this, we calculated Cook's distance diagnostic Dx for each x (Cook 1977) based on a linear least squares regression of y on x. Larger Dx indicate more uncertainty about predicting y with a linear regression at x.

Figure 3: Plots from a single run of the Bayesian Ensemble model on the full Boston dataset. (a) Comparison of uncertainty bound widths with Cook's distance measure. (b) Partial dependence plot for the effect of crime on the response (log median property value), with 90% uncertainty bounds.

To see how the width of the 90% posterior intervals corresponded to Dx, we plotted them together in Figure 3(a). Although the linear model may not be strictly appropriate, the plot is suggestive: all points with large Dx values have wider uncertainty bounds. Uncertainty bounds can also be used in graphical summaries such as a partial dependence plot (Friedman 2001), which shows the effect of one (or more) predictor on the response, marginalizing out the effect of other predictors. Since BART provides posterior draws for f(x), calculation of a posterior distribution for the partial dependence function is straightforward. Computational details are provided in Chipman et al. (2006). For the Boston Housing data, Figure 3(b) shows the partial dependence plot for crime, with 90% posterior intervals. The vast majority of data values occur for crime < 5, causing the intervals to widen as crime increases and the data become more sparse.

5 Discussion

Our approach is a fully Bayesian approach to learning with ensembles of tree models. Because of the nature of the underlying tree model, we are able to specify simple, effective priors and fully exploit the benefits of Bayesian methodology. Our prior provides the regularization needed to obtain good predictive performance. In particular, our default prior, which is minimally dependent on the data, performs well compared to other methods which rely on cross-validation to pick model parameters. We obtain inference in the natural Bayesian way from the variation in the posterior draws. While predictive performance is always our first goal, many researchers want to interpret the results. In this case, gauging the inferential uncertainty is essential. No other competitive methods do this in a convenient way. Chipman et al. (2006) and Abreveya & McCulloch (2006) provide further evidence of the predictive performance of our approach. In addition Abreveya & McCulloch (2006) illustrate the ability of our method to uncover interesting interaction effects in a real example. Chipman et al. (2006) and Hill & McCulloch (2006) illustrate the inferential capabilities. Posterior intervals are shown to have good frequentist coverage. Chipman et al. (2006) also illustrates the method's ability to obtain inference in the very difficult "big p, small n" problem, where there are few observations and many potential predictors.
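Both the pointwise intervals and the partial dependence bands discussed above come straight from the retained draws. The following sketch assumes an array f_draws of shape (n_draws, n_points) holding f*(x) over kept MCMC iterations, plus a caller-supplied predict_draws helper (a placeholder returning predictions from the stored sum-of-trees draws):

```python
import numpy as np

def posterior_intervals(f_draws, lo=5, hi=95):
    """90% pointwise posterior intervals plus the posterior-mean estimate.

    f_draws: (n_draws, n_points) array of f*(x) over kept MCMC iterations.
    """
    f_hat = f_draws.mean(axis=0)
    lower, upper = np.percentile(f_draws, [lo, hi], axis=0)
    return f_hat, lower, upper

def partial_dependence(predict_draws, X, j, grid):
    """Posterior draws of the partial dependence function for predictor j.

    For each grid value, set column j everywhere in X and average the
    per-draw predictions over the data rows (Friedman's construction).
    Returns an (n_draws, len(grid)) array whose quantiles give the bands.
    """
    bands = []
    for v in grid:
        Xv = X.copy()
        Xv[:, j] = v
        bands.append(predict_draws(Xv).mean(axis=1))  # average over rows
    return np.column_stack(bands)
```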
A common concern with Bayesian approaches is sensitivity to prior parameters. Chipman et al. (2006) found that results were robust to a reasonably wide range of prior parameters, including ν, q, σ_μ, as well as the number of trees, m. m needs to be large enough to provide enough complexity to capture f, but making m "too large" does not appreciably degrade accuracy (although it does make it slower to run). Chipman et al. (2006) provide guidelines for choosing m. In practice, the stability of the MCMC makes the method easy to use. Typically, it burns in rapidly. If the method is run twice with different seeds, the same results are obtained both for fit and inference. Code is publicly available in the R package BayesTree.

Acknowledgments

The authors would like to thank three anonymous referees, whose comments improved an earlier draft, and Wei-Yin Loh, who generously provided the datasets used in the experiment. This research was supported by the Natural Sciences and Engineering Research Council of Canada, the Canada Research Chairs program, the Acadia Centre for Mathematical Modelling and Computation, the University of Chicago Graduate School of Business, NSF grant DMS 0605102 and by NIH/NIAID award AI056983.

References

Abreveya, J. & McCulloch, R. (2006), Reversal of fortune: a statistical analysis of penalty calls in the national hockey league, Technical report, Purdue University.
Breiman, L. (2001), 'Random forests', Machine Learning 45, 5-32.
Chipman, H. A., George, E. I. & McCulloch, R. E. (1998), 'Bayesian CART model search (C/R: p948-960)', Journal of the American Statistical Association 93, 935-948.
Chipman, H. A., George, E. I. & McCulloch, R. E. (2006), BART: Bayesian additive regression trees, Technical report, University of Chicago.
Cook, R. D. (1977), 'Detection of influential observations in linear regression', Technometrics 19(1), 15-18.
Efron, B., Hastie, T., Johnstone, I. & Tibshirani, R. (2004), 'Least angle regression', Annals of Statistics 32, 407-499.
Freund, Y. & Schapire, R. E. (1997), 'A decision-theoretic generalization of on-line learning and an application to boosting', Journal of Computer and System Sciences 55, 119-139.
Friedman, J. H. (2001), 'Greedy function approximation: A gradient boosting machine', The Annals of Statistics 29, 1189-1232.
Green, P. J. (1995), 'Reversible jump MCMC computation and Bayesian model determination', Biometrika 82, 711-732.
Hastie, T. & Tibshirani, R. (2000), 'Bayesian backfitting (with comments and a rejoinder by the authors)', Statistical Science 15(3), 196-223.
Hill, J. L. & McCulloch, R. E. (2006), Bayesian nonparametric modeling for causal inference, Technical report, Columbia University.
Kim, H., Loh, W.-Y., Shih, Y.-S. & Chaudhuri, P. (2007), 'Visualizable and interpretable regression models with good prediction power', IEEE Transactions: Special Issue on Data Mining and Web Mining. In press.
Meek, C., Thiesson, B. & Heckerman, D. (2002), Staged mixture modelling and boosting, Technical Report MS-TR-2002-45, Microsoft Research.
Wu, Y., Tjelmeland, H. & West, M. (2007), 'Bayesian CART: Prior specification and posterior simulation', Journal of Computational and Graphical Statistics. In press.
Blind Motion Deblurring Using Image Statistics Anat Levin? School of Computer Science and Engineering The Hebrew University of Jerusalem Abstract We address the problem of blind motion deblurring from a single image, caused by a few moving objects. In such situations only part of the image may be blurred, and the scene consists of layers blurred in different degrees. Most of of existing blind deconvolution research concentrates at recovering a single blurring kernel for the entire image. However, in the case of different motions, the blur cannot be modeled with a single kernel, and trying to deconvolve the entire image with the same kernel will cause serious artifacts. Thus, the task of deblurring needs to involve segmentation of the image into regions with different blurs. Our approach relies on the observation that the statistics of derivative filters in images are significantly changed by blur. Assuming the blur results from a constant velocity motion, we can limit the search to one dimensional box filter blurs. This enables us to model the expected derivatives distributions as a function of the width of the blur kernel. Those distributions are surprisingly powerful in discriminating regions with different blurs. The approach produces convincing deconvolution results on real world images with rich texture. 1 Introduction Motion blur is the result of the relative motion between the camera and the scene during image exposure time. This includes both camera and scene objects motion. As blurring can significantly degrade the visual quality of images, photographers and camera manufactures are frequently searching for methods to limit the phenomenon. One solution that reduces the degree of blur is to capture images using shorter exposure intervals. This, however, increases the amount of noise in the image, especially in dark scenes. An alternative approach is to try to remove the blur off-line. Blur is usually modeled as a linear convolution of an image with a blurring kernel, also known as the point spread function (or PSF). Image deconvolution is the process of recovering the unknown image from its blurred version, given a blurring kernel. In most situations, however, the blurring kernel is unknown as well, and the task also requires the estimation of the underlying blurring kernel. Such a process is usually referred to as blind deconvolution. Most of the existing blind deconvolution research concentrates at recovering a single blurring kernel for the entire image. While the uniform blur assumption is valid for a restricted set of camera motions, it?s usually far from being satisfying when the scene contains several objects moving independently. Existing deblurring methods which handle different motions usually rely on multiple frames. In this work, however, we would like to address blind multiple motions deblurring using a single frame. The suggested approach is fully automatic, under the following two assumptions. The first assumption is that the image consists of a small number of blurring layers with the same blurring kernel within each layer. Most of the examples in this paper include a single blurred object and an unblurred background. Our second simplifying assumption is that the motion is in a single direction ? Current address: MIT CSAIL, [email protected] and that the motion velocity is constant, such as in the case of a moving vehicle captured by a static camera. 
As a result, within each blurred layer, the blurring kernel is a simple one dimensional box filter, so that the only unknown parameters are the blur direction and the width of the blur kernel. Deblurring different motions requires the segmentation of the image into layers with different blurs as well as the reconstruction of the blurring kernel in each layer. While image segmentation is an active and challenging research area which utilizes various low level and high level cues, the only segmentation cue used in this work is the degree of blur. In order to discriminate different degrees of blur we use the statistics of natural images. Our observation is that the statistics of derivative responses in images are significantly changed as a result of blur, and that the expected statistics under different blurring kernels can be modeled. Given a model of the derivatives statistics under different blurring kernels, our algorithm searches for a mixture model that will best describe the distribution observed in the input image. This results in a set of 2 (or some other small number) blurring kernels that were used in the image. In order to segment the image into blurring layers we measure the likelihood of the derivatives in small image windows, under each model. We then look for a smooth layer assignment that will maximize the likelihood in each local window.

1.1 Related work

Blind deconvolution is an extensive research area. Research about blind deconvolution given a single image usually concentrates on cases in which the image is uniformly blurred. A summary and analysis of many deconvolution algorithms can be found in [14]. Early deblurring methods treated blurs that can be characterized by a regular pattern of zeros in the frequency domain, such as box filter blurs [26]. This method is known to be very sensitive to noise. Even in the noise-free case, box filter blurs cannot be identified in the frequency domain if different blurs are present. More recent methods make other assumptions about the image model. These include an autoregressive process [22], spatial isotropy [28], power-law distributions [8, 20], and piecewise-smooth edge modeling [3]. In creative recent research which inspired our approach, Fergus et al. [12] use the statistics of natural images to estimate the blurring kernel (again, assuming a uniform blur). Their approach searches for the max-marginal blurring kernel and a deblurred image, using a prior on the derivatives distribution in an unblurred image. They address more than box filters, and present impressive reconstructions of complex blurring kernels. Our approach also relies on natural image statistics, but it takes the opposite direction: search for a kernel that will bring the unblurred distribution close to the observed distribution. Thus, in addition to handling non-uniform blurs, our approach avoids the need to estimate the unblurred image in every step. In [10], Elder and Zucker propose a scale space approach for estimating the scale of an edge. As an edge's scale provides some measure of blur, this is used for segmenting an image into focus and out-of-focus layers. The approach was demonstrated on a rather piecewise-constant image, unlike the rich texture patterns considered in this paper. In [4], blind restoration of spatially-varying blur was studied in the case of astronomical images, which have statistics quite different from the natural scenes addressed in this paper.
Other approaches to motion deblurring include hardware approaches [6, 17, 7], and using multiple frames to estimate blur, e.g. [5, 21, 29]. Another related subject is the research on depth from focus or depth from defocus (see [9, 11] to name a few), in which a scene is captured using multiple focus settings. As a scene point's focus is a function of its depth, the relative blur is used to estimate depth information. Again, most of this research relies on more than a single frame. Recent work in computer vision applied natural image priors to a variety of applications like denoising [25, 24], super resolution [27], video matting [2], inpainting [16] and reflections decomposition [15].

2 Image statistics and blurring

Figure 1(a) presents an image of an outdoor scene, with a passing bus. The bus is blurred horizontally as a result of the bus motion. In fig 1(b) we plot the log histogram of the vertical derivatives of this image, and the horizontal derivatives within the blurred area (marked with a rectangle).

Figure 1: Blurred versus unblurred derivatives histograms. (a) Input image. (b) Horizontal derivatives within the blurred region versus vertical derivatives in the entire image. (c) Simulating different blurs in the vertical direction. (d) Horizontal derivatives within the blurred region matched with blurred verticals (4-tap blur).

As can be seen, the blur changes the shape of the histogram significantly. This suggests that the statistics of derivative filter responses can be used for detecting blurred image areas. How does the degree of blur affect the derivatives histogram? To answer this question we simulate histograms of different blurs. Let f_k denote the horizontal box kernel of size 1 × k (that is, all entries of f_k equal 1/k). We convolve the image with the kernels f_k^T (where k runs from 1 to 30) and compute the vertical derivatives distributions:

p_k ∝ hist(d_y ∗ f_k^T ∗ I) ,    (1)

where d_y = [1 −1]^T. Some of those log histograms are plotted in fig 1(c). As the size of the blurring kernel changes the derivatives distribution, we would also like to use the histograms for determining the degree of blur. For example, as illustrated in fig 1(d), we can match the distribution of horizontal derivatives in the blurred area with p_4, the distribution of vertical derivatives after blurring with a 4-tap kernel.

2.1 Identifying blur using image statistics

Given an image, the direction of motion blur can be selected as the direction with minimal derivatives variation, as in [28]. For the simplicity of the derivation we will assume here that the motion direction is horizontal, and that the image contains a single blurred object plus an unblurred background. Our goal is to determine the size of the blur kernel. That is, to recover the filter f_k which is responsible for the blur observed in the image. For that we compute the histogram of horizontal derivatives in the image. However, not all of the image is blurred. Therefore, without segmenting the blurred areas there is no single blurring model p_k that will describe the observed histogram. Instead, we try to describe the observed histogram with a mixture model.
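The blurred-derivative models p_k of Eq. (1) are straightforward to build. A minimal sketch, assuming a grayscale image as a 2-D NumPy array; blurring is simulated in the vertical direction (kernel f_k^T), so the models can be estimated from the direction perpendicular to the motion:

```python
import numpy as np
from scipy.signal import convolve2d

def blur_model_histograms(img, max_k=30, bins=np.linspace(-0.5, 0.5, 101)):
    """Expected vertical-derivative histograms p_k under k-tap box blurs (Eq. 1)."""
    models = {}
    for k in range(1, max_k + 1):
        f_k = np.ones((k, 1)) / k                      # 1 x k box filter, transposed
        blurred = convolve2d(img, f_k, mode="same", boundary="symm")
        d_y = np.diff(blurred, axis=0)                 # d_y * f_k^T * I
        hist, _ = np.histogram(d_y, bins=bins, density=True)
        models[k] = hist + 1e-8                        # avoid log(0) later
    return models
```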
We define the log-likelihood of the derivatives in a window with respect to each of the blurring models as:

ℓ_k(i) = Σ_{j∈W_i} log p_k(I_x(j)) ,    (2)

where I_x(j) is the horizontal derivative in pixel j, and W_i is a window around pixel i. Thus, ℓ_k(i) measures how well the i-th window is explained by a k-tap blur. For an input image I and a given pair of kernels, we can measure the data log-likelihood by associating each window with the maximum likelihood kernel:

L(I | f_{k1}, f_{k2}) = Σ_{i∈I} max( ℓ_{k1}(i), ℓ_{k2}(i) ) .    (3)

We search for a blurring model p_{k0} such that, when combined with the model p_1 (derivatives of the unblurred image), will maximize the log-likelihood of the observed derivatives:

k_0 = arg max_k L(I | f_1, f_k) .    (4)

One problem we need to address in defining the likelihoods is the fact that uniform areas, or areas with pure horizontal edges (the aperture problem), don't contain any information about the blur. On the other hand, uniform areas receive the highest likelihoods from wide blur kernels (since the derivatives distribution for wide kernels is more concentrated around zero, as can be observed in figure 1(c)). When the image consists of large uniform areas, this biases the likelihood toward wider blur kernels. To overcome this, we start by scanning the image with a simple edge detector and keep only windows with significant vertical edges. In order to make our model consistent, when building the blurred distribution models p_k (eq 1), we also take into account only pixels within a window around a vertical edge. Note that since we deal here with one dimensional kernels, we can estimate the expected blurred histogram p_k (eq 1) from the perpendicular direction of the same image.

2.2 Segmenting blur layers

Once the blurring kernel f_k has been found, we can use it to deconvolve the image, as in fig 2(b). While this significantly improves the image in the blurred areas, serious artifacts are observed in the background. Therefore, in addition to recovering the blurring kernel, we need to segment the image into blurred and unblurred layers. We look for a smooth segmentation that will maximize the likelihood of the derivatives in each region. We define the energy of a segmentation as:

E(x) = Σ_i ρ(x(i), i) + Σ_{<i,j>} e_ij |x(i) − x(j)| ,    (5)

where ρ(x(i), i) = −ℓ_1(i) for x(i) = 0 and ρ(x(i), i) = −ℓ_k(i) for x(i) = 1, <i, j> are neighboring image pixels, and e_ij is a smoothness term:

e_ij = β_1 + β_2 ( |I(i) − I_{f_k}(i)| + |I(j) − I_{f_k}(j)| ) .    (6)

Here I_{f_k} denotes the deconvolved image. The smoothness term is combined from two parts. The first is just a constant penalty for assigning different labels to neighboring pixels, thus preferring smooth segmentations. The second part encodes the fact that it is cheaper to cut the image in places where there is no visual seam between the original and the deconvolved images (e.g. [1]). Given the local likelihood scores and the energy definition, we would like to find the minimal energy segmentation. This reduces to finding a min-cut in a graph. Given the segmentation mask x we convolve it with a Gaussian filter to obtain a smoother seam. The final restored image is computed as:

R(i) = x(i) I_{f_k}(i) + (1 − x(i)) I(i) .    (7)

3 Results

To compute a deconvolved image I_{f_k} given the blurring kernel, we follow [12] in using the matlab implementation (deconvlucy) of the Richardson-Lucy deconvolution algorithm [23, 18]. Figure 2 presents results for several example images. For the doll example the image was segmented into 3 blurring layers.
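The search in Eqs. (2)-(4) can be written directly on top of the histogram models above. The sketch below is a simplified illustration (window extraction and edge detection are assumed done by the caller): each edge-containing window is scored under each model, and the width maximizing the mixture likelihood against the no-blur model is kept.

```python
import numpy as np

def select_blur_width(windows, models, bins):
    """Pick k0 = argmax_k L(I | f_1, f_k) over candidate widths (Eqs. 2-4).

    windows: list of 1-D arrays of horizontal derivatives I_x, one per
    edge-containing window W_i.  models[k]: density p_k over `bins`
    (use the same bins as in blur_model_histograms).
    """
    def loglik(window, hist):            # Eq. (2): sum of log p_k(I_x(j))
        idx = np.clip(np.digitize(window, bins) - 1, 0, len(hist) - 1)
        return np.sum(np.log(hist[idx]))

    ell = {k: np.array([loglik(w, h) for w in windows])
           for k, h in models.items()}
    best_k, best_L = None, -np.inf
    for k in models:
        if k == 1:
            continue
        L = np.sum(np.maximum(ell[1], ell[k]))  # Eq. (3): mixture with no-blur model
        if L > best_L:
            best_k, best_L = k, L
    return best_k
```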
The examples of figure 2 and additional results are available in a high resolution in the supplementary material. The supplementary file also includes examples with non-horizontal blurs. To determine the blur direction in those images we select the direction with minimal derivatives variation, as in [28]. This approach wasn't always robust enough. For each image we show what happens if the segmentation is ignored and the entire image is deconvolved with the selected kernel (for the doll case the wider kernel is shown). While this improves the result in the blurred area, strong artifacts are observed in the rest of the image. In comparison, the third row presents the restored images computed from eq 7 using the blurring layers segmentation. We also show the local MAP labeling of the edges. White pixels are ones for which an unblurred model receives a higher likelihood, that is ℓ_1(i) > ℓ_k(i), and for gray pixels ℓ_1(i) < ℓ_k(i) (for the doll case there are 3 groups, defined in a similar way). The last row presents the segmentation contour. The output contour does not perfectly align with image edges. This is because our goal in the segmentation selection is to produce visually plausible results. The smoothness term of our energy (eq 6) does not aim to output an accurate segmentation, and it does not prefer to align segmentation edges with image edges. Instead it searches for a cut that will make the seam between the layers unobservable.

Figure 2: Deblurring results. (a) Input image. (b) Applying the recovered kernel on the entire image. (c) Our result. (d) Local classification of windows. (e) Segmentation contour.

The recovered blur sizes for those examples were 12 pixels for the bicycles image and 4 pixels for the bus. For the doll image a 9-pixel blur was identified in the skirt segment and a 2-pixel blur in the doll head. We note that while recovering big degrees of blur as in the bicycles example is visually more impressive, discriminating small degrees of blur as in the bus example is more challenging from the statistical aspect. This is because the derivatives distributions in the case of small blurs are much more similar to the distributions of unblurred images. For the bus image the size of the blur kernel found by our algorithm was 4 pixels. To demonstrate the fact that this is actually the true kernel size, we show in figure 3 the deconvolution results with a 3-tap filter and with a 5-tap filter. Stronger artifacts are observed in each of those cases.

Figure 3: Deconvolving the bus image using different filters (input, 3-taps, 4-taps, 5-taps, matlab). The 4-tap filter selected by our algorithm yields the best results.

Next, we consider several simple alternatives to some of the algorithm parts. We start by investigating the need for segmentation and then discuss the usage of the image statistics. Segmentation: As demonstrated in fig 2(b), deconvolving the entire image with the same kernel damages the unblurred parts. One obvious solution is to divide the image into regions and match a separate blur kernel to each region. As demonstrated by fig 2(d), even if we limit the kernel choice in each local window to a small set of 2-3 kernels, the local decision could be wrong. For all the examples in this paper we used 15 × 35 windows. There is some tradeoff in selecting a good window size. While a likelihood measure based on a big window is more reliable, such a window might cover regions from different blurring layers.
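Once the min-cut mask is found, the composite restoration of Eq. (7) above is a two-line blend. A small sketch, assuming `mask` is the binary min-cut labeling x (1 = blurred layer) and `img_deconv` is the Richardson-Lucy output:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def blend_layers(img, img_deconv, mask, sigma=2.0):
    """Eq. (7): R = x * I_deconv + (1 - x) * I, with a smoothed seam.

    Convolving the binary mask with a Gaussian before blending hides
    the seam between the deconvolved layer and the original image.
    """
    x = gaussian_filter(mask.astype(float), sigma=sigma)
    return x * img_deconv + (1.0 - x) * img
```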
Another alternative is to break the image into segments using an unsupervised segmentation algorithm, and match a kernel to each segment. The fact that blur changes the derivatives distributions also suggests that it might be captured as a kind of texture cue. Therefore, it's particularly interesting to try segmenting the image using texture affinities (e.g. [13, 19]). However, as this is an unsupervised segmentation process which does not take into account the grouping goal, it's hard to expect it to yield exactly the blurred layers. Fig 4(b) presents segmentation results using the Ncuts framework of [19]. The output over-segments blur layers, while merging parts of blurred and unblurred objects. Unsurprisingly, the recovered kernels are wrong.

Figure 4: Deblurring using unsupervised segmentation. (a) Input. (b) Unsupervised segmentation and the width of the kernel matched to each segment (widths shown: 1, 18, 3, 1, 13). (c) Result from deblurring each segment independently.

Image statistics: We move to evaluating the contribution of the image statistics. To do that independently of the segmentation, we manually segmented the bus and applied the matlab blind deconvolution function (deconvblind), initialized with a 1 × 7 box kernel. Strong artifacts were introduced, as shown in the last column of fig 3. The algorithm results also depend on the actual histograms used. Derivatives histograms of different natural images usually have common characteristics such as the heavy tail structure. Yet, the histogram structure of different images is not identical, and we found that trying to deblur one image using the statistics of a different image doesn't work that well. For example, figure 5 shows the result of deblurring the bus image using the bicycles image statistics. The selected blur in this case was a 6-tap kernel, but deblurring the image with this kernel introduces artifacts. The classification of pixels into layers using this model is wrong as well. Our solution was to work on each image using the vertical derivatives histograms from the same image. This isn't an optimal solution, as when the image is blurred horizontally some of the vertical derivatives are degraded as well. Yet, it provided better results than using histograms obtained from different images.

Figure 5: Deblurring the bus image using the bicycles image statistics. (a) Applying the recovered kernel on the entire image. (b) Deblurring result. (c) Local classification of windows. (d) Segmentation contour.

Limitations: Our algorithm uses simple derivatives statistics, and the power of such statistics is somewhat surprising. Yet, the algorithm might fail. One failure source is blurs which can't be described as a box filter, or failures in identifying the blur direction. Even when this isn't the case, the algorithm may fail to identify the correct blur size or it may not infer the correct segmentation. Figure 6 demonstrates a failure. In this case the algorithm preferred a model explaining the bushes texture instead of a model explaining the car blur. The bushes area consists of many small derivatives which are explained better by a small blur model than by a no-blur model. On the other hand, the car consists of very few vertical edges. As a result the algorithm selected a 6-pixel blur model. This model might increase the likelihood of the bushes texture and the noise on the road, but it doesn't remove the blur of the car.

Figure 6: Deblurring failure. (a) Input.
(b) Applying the recovered kernel (6 taps) on the entire image. (c) Deblurring result. (d) Local classification. (e) Segmentation contour.

4 Discussion

This paper addresses the problem of blind motion deconvolution without assuming that the entire image undergoes the same blur. Thus, in addition to recovering an unknown blur kernel, we segment the image into layers with different blurs. We treat this highly challenging task using a surprisingly simple approach, relying on the derivatives distribution in blurred images. We model the expected derivatives distributions under different degrees of blur, and those distributions are used for detecting different blurs in image windows. The box filter model used in this work is definitely limiting, and as pointed out by [12, 6], many blurring patterns observed in real images are more complex. A possible future research direction is to try to develop stronger statistical models which can include stronger features in addition to the simple first order derivatives. Stronger models might enable us to identify a wider class of blurring kernels rather than just box filters. Particularly, they could provide a better strategy for identifying the blur direction. A better model might also avoid the need to detect vertical edges and artificially limit the model to windows around edges. In future work, it will also be interesting to try to detect different blurs without assuming a small number of blurring layers. This will require estimating the blurs in the image in a continuous way, and might also provide a depth from focus algorithm that will work on a single image.

References
[1] A. Agarwala et al. Interactive digital photomontage. SIGGRAPH, 2004.
[2] N. Apostoloff and A. Fitzgibbon. Bayesian image matting using learnt image priors. In CVPR, 2005.
[3] L. Bar, N. Sochen, and N. Kiryati. Variational pairing of image segmentation and blind restoration. In ECCV, 2004.
[4] J. Bardsley, S. Jefferies, J. Nagy, and R. Plemmons. Blind iterative restoration of images with spatially-varying blur. Optics Express, 2006.
[5] B. Bascle, A. Blake, and A. Zisserman. Motion deblurring and superresolution from an image sequence. In ECCV, 1996.
[6] M. Ben-Ezra and S. K. Nayar. Motion-based motion deblurring. PAMI, 2004.
[7] 2006 Canon Inc. What is optical image stabilizer? http://www.canon.com/bctv/faq/optis.html.
[8] J. Caron, N. Namazi, and C. Rollins. Noniterative blind data restoration by use of an extracted filter function. Applied Optics, 2002.
[9] T. Darrell and K. Wohn. Pyramid based depth from focus. In CVPR, 1988.
[10] J. H. Elder and S. W. Zucker. Local scale control for edge detection and blur estimation. PAMI, 1998.
[11] P. Favaro, S. Osher, S. Soatto, and L.A. Vese. 3d shape from anisotropic diffusion. In CVPR, 2003.
[12] R. Fergus et al. Removing camera shake from a single photograph. SIGGRAPH, 2006.
[13] T. Hofmann, J. Puzicha, and J. M. Buhmann. Unsupervised texture segmentation in a deterministic annealing framework. PAMI, 1998.
[14] D. Kundur and D. Hatzinakos. Blind image deconvolution. IEEE Signal Processing Magazine, 1996.
[15] A. Levin and Y. Weiss. User assisted separation of reflections from a single image using a sparsity prior. In ECCV, 2004.
[16] A. Levin, A. Zomet, and Y. Weiss. Learning how to inpaint from global image statistics. In ICCV, 2003.
[17] X. Liu and A. Gamal. Simultaneous image formation and motion blur restoration via multiple capture. In Int. Conf. Acoustics, Speech, Signal Processing, 2001.
[18] L. Lucy.
Bayesian-based iterative method of image restoration. Journal of Ast., 1974.
[19] J. Malik, S. Belongie, T. Leung, and J. Shi. Contour and texture analysis for image segmentation. In Perceptual Organization for Artificial Vision Systems. Kluwer Academic, 2000.
[20] R. Neelamani, H. Choi, and R. Baraniuk. ForWaRD: Fourier-wavelet regularized deconvolution for ill-conditioned systems. IEEE Trans. on Signal Processing, 2004.
[21] A. Rav-Acha and S. Peleg. Two motion-blurred images are better than one. Pattern Recognition Letters, 2005.
[22] S. Reeves and R. Mersereau. Blur identification by the method of generalized cross-validation. Transactions on Image Processing, 1992.
[23] W. Richardson. Bayesian-based iterative method of image restoration. J. of the Optical Society of America, 1972.
[24] S. Roth and M.J. Black. Fields of experts: A framework for learning image priors. In CVPR, 2005.
[25] E. P. Simoncelli. Statistical modeling of photographic images. In Handbook of Image and Video Processing, 2005.
[26] T. Stockham, T. Cannon, and R. Ingebretsen. Blind deconvolution through digital signal processing. IEEE, 1975.
[27] M. F. Tappen, B. C. Russell, and W. T. Freeman. Exploiting the sparse derivative prior for super-resolution and image demosaicing. SCTV, 2003.
[28] Y. Yitzhaky, I. Mor, A. Lantzman, and N.S. Kopeika. Direct method for restoration of motion blurred images. JOSA-A, 1998.
[29] M. Shi and J. Zheng. A slit scanning depth of route panorama from stationary blur. In Proc. IEEE Conf. Comput. Vision Pattern Recog., 2005.
Support Vector Machines on a Budget

Ofer Dekel and Yoram Singer
School of Computer Science and Engineering
The Hebrew University
Jerusalem 91904, Israel
{oferd,singer}@cs.huji.ac.il

Abstract

The standard Support Vector Machine formulation does not provide its user with the ability to explicitly control the number of support vectors used to define the generated classifier. We present a modified version of SVM that allows the user to set a budget parameter B and focuses on minimizing the loss attained by the B worst-classified examples while ignoring the remaining examples. This idea can be used to derive sparse versions of both L1-SVM and L2-SVM. Technically, we obtain these new SVM variants by replacing the 1-norm in the standard SVM formulation with various interpolation-norms. We also adapt the SMO optimization algorithm to our setting and report on some preliminary experimental results.

1 Introduction

The L1 Support Vector Machine (L1-SVM or SVM for short) [1, 2, 3] is a powerful technique for learning binary classifiers from examples. Given a training set {(x_i, y_i)}_{i=1}^m and a positive semi-definite kernel K, the SVM solution is a hypothesis of the form

h(x) = sign( Σ_{i∈S} α_i y_i K(x_i, x) + b ) ,

where S is a subset of {1, ..., m}, {α_i}_{i∈S} are real-valued weights, and b is a bias term. The set S defines the support of the classifier, namely, the set of examples that actively participate in the classifier's definition. The examples in this set are called support vectors, and we say that the SVM solution is sparse if the fraction of support vectors (|S|/m) is reasonably small. Our first concern is usually with the accuracy of the classifier. However, in some applications, the size of the support is equally important. Assuming that the kernel operator K can be evaluated in constant time, the time-complexity of evaluating the classifier on a new instance is linear in the size of S. Therefore, a large support defines a slow classifier. Classification speed is often important and plays an especially critical role in real-time systems. For example, a classifier that drives a phoneme detector in a speech recognition system is evaluated hundreds of times a second. If this classifier does not manage to keep up with the rate at which the speech signal is acquired, then its classifications are useless, regardless of their accuracy. The size of the support also naturally determines the amount of memory required to store the classifier. If a classifier is intended to run on a device with a limited memory, such as a mobile telephone, there may be a physical limit on the amount of memory available to store support vectors. The size of S may also affect the time required to train an SVM classifier. Most modern SVM learning algorithms are active set methods, namely, on every step of the training process, only a small set of active training examples is taken into account. Knowing the size of S ahead of time would enable us to optimize the size of the active set and possibly gain a significant speed-up in the training process. The SVM mechanism does not give us explicit control over the size of the support. The user-defined parameters of SVM have some influence on the size of S, but we often require more than this. Specifically, we would like the ability to specify a budget parameter, B, which directly controls the number of support vectors used to define the SVM solution.
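As a concrete illustration of why evaluation cost scales with |S|, here is a minimal sketch of evaluating such a hypothesis; the RBF kernel is just one example choice (the formulation above works for any positive semi-definite K):

```python
import numpy as np

def rbf_kernel(u, v, gamma=1.0):
    """Example kernel choice; any positive semi-definite K works."""
    return np.exp(-gamma * np.sum((u - v) ** 2))

def classify(x, support_x, support_alpha, support_y, b, kernel=rbf_kernel):
    """h(x) = sign( sum_{i in S} alpha_i * y_i * K(x_i, x) + b ).

    One kernel evaluation per support vector, so both the prediction time
    and the memory needed to store the classifier are linear in |S|, the
    quantity the budget parameter B is meant to control.
    """
    score = b
    for x_i, a_i, y_i in zip(support_x, support_alpha, support_y):
        score += a_i * y_i * kernel(x_i, x)
    return np.sign(score)
```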
In this paper, we address this issue and present budget-SVM, a minor modification to the standard L1-SVM formulation that allows the user to set a budget parameter. The budget-SVM optimization problem focuses only on the B worst-classified examples in the training set, ignoring all other examples. The problem of sparsity becomes even more critical when it comes to L2-SVM [3], a variant of the SVM problem that tends to have dense solutions. L2-SVM is sometimes preferred over L1-SVM because it exhibits good generalization properties, as well as other desirable statistical characteristics [4]. We derive the budget-L2-SVM formulation by following the same technique used to derive budget-L1-SVM. The technique used to derive these SVM variants is as follows. We begin by generalizing the L1-SVM formulation by replacing the 1-norm with an arbitrary norm. We obtain a general framework for SVM-type problems, which we nickname Any-Norm-SVM. Next, we turn to the K-method of norm interpolation to obtain the 1-∞ interpolation-norm and the 2-∞ interpolation-norm, and use these norms in the Any-Norm-SVM framework. These norms have the property that they depend only on the absolutely-largest elements of the vector. We rely on this property and show that our SVM variants construct sparse solutions. For each of these norms, we present a simple modification of the SMO algorithm [5], which efficiently solves the respective optimization problem.

Related Work

The problem of approximating the SVM solution using a reduced set of examples has received much previous attention [6, 7, 8, 9]. This technique takes a two-step approach: begin by training a standard SVM classifier, perhaps obtaining a dense solution. Then, try to find a sparse classifier which minimizes the L2 distance to the SVM solution. A potential drawback of this approach is that once the SVM solution has been found, the distribution from which the training set was sampled no longer plays a role in the learning process. This ignores the fact that shifting the SVM classifier by a fixed amount in different directions may have dramatically different consequences on classification accuracy. We overcome this problem by taking the approach of [10] and reformulating the SVM optimization problem itself in a way that promotes sparsity. Another technique used to obtain a sparse kernel-machine takes advantage of the inherent sparsity of linear programming solutions, and formalizes the kernel-machine learning problem as a linear program [11]. This approach, often called LP-SVM or Sparse-SVM, has been shown to generally construct sparse solutions, but still lacks the ability to introduce an explicit budget parameter. Yet another approach involves randomly selecting a subset of the training set to serve as support vectors [12]. The problem of learning a kernel-machine on a budget also appears in the online-learning mistake-bound framework, and it is there where the term "learning on a budget" was coined [13]. Two recent papers [14, 15] propose online kernel-methods on a budget with an accompanying theoretical mistake-bound.

This paper is organized as follows. We present the generalized Any-Norm-SVM framework in Sec. 2. We discuss the K-method of norm interpolation in Sec. 3 and put various interpolation norms to use within the Any-Norm-SVM framework in Sec. 4. Then, in Sec. 5, we present some preliminary experiments that demonstrate how the theoretical properties of our approach translate into practice. We conclude with a discussion in Sec. 6.
Due to lack of space, some of the proofs are omitted from this paper.

2 Any-Norm SVM

Let $\{(x_i, y_i)\}_{i=1}^m$ be a training set, where every $x_i$ belongs to an instance space $X$ and every $y_i \in \{-1, +1\}$. Let $K : X \times X \to \mathbb{R}$ be a positive semi-definite kernel, and let $H$ be its corresponding Reproducing Kernel Hilbert Space (RKHS) [16], with inner product $\langle \cdot, \cdot \rangle_H$. The L1 Support Vector Machine is defined as the solution to the following convex optimization problem:
$$\min_{f \in H,\, b \in \mathbb{R},\, \xi \ge 0} \ \tfrac{1}{2}\langle f, f \rangle_H + C \|\xi\|_1 \quad \text{s.t.} \quad \forall\, 1 \le i \le m: \ y_i\big(f(x_i) + b\big) \ge 1 - \xi_i, \tag{1}$$
where $\xi$ is a vector of $m$ slack variables, and $C$ is a positive constant that controls the tradeoff between the complexity of the learned classifier and how well it fits the training data. The value of $\xi_i$ is sometimes referred to as the hinge-loss attained by the SVM classifier on example $i$. The 1-norm, defined by $\|\xi\|_1 = \sum_{i=1}^m |\xi_i|$, is used to combine the individual hinge-loss values into a single number. L2-SVM is a variant of the optimization problem defined above:
$$\min_{f \in H,\, b \in \mathbb{R},\, \xi \ge 0} \ \tfrac{1}{2}\langle f, f \rangle_H + C \|\xi\|_2^2 \quad \text{s.t.} \quad \forall\, 1 \le i \le m: \ y_i\big(f(x_i) + b\big) \ge 1 - \xi_i.$$
This formulation differs from the L1 formulation in that the 1-norm is replaced by the squared 2-norm, defined by $\|\xi\|_2^2 = \sum_{i=1}^m \xi_i^2$.

In this section, we take this idea even farther, and allow the 1-norm of L1-SVM to be replaced by any norm. Formally, let $\|\cdot\|$ be an arbitrary norm defined on $\mathbb{R}^m$. Recall that a norm is a real valued operator such that for every $v \in \mathbb{R}^m$ and $\lambda \in \mathbb{R}$ it holds that $\|\lambda v\| = |\lambda|\,\|v\|$ (positive homogeneity), $\|v\| \ge 0$ with $\|v\| = 0$ if and only if $v = 0$ (positive definiteness), and that satisfies the triangle inequality. Now consider the following optimization problem:
$$\min_{f \in H,\, b \in \mathbb{R},\, \xi \ge 0} \ \tfrac{1}{2}\langle f, f \rangle_H + C \|\xi\| \quad \text{s.t.} \quad \forall\, 1 \le i \le m: \ y_i\big(f(x_i) + b\big) \ge 1 - \xi_i. \tag{2}$$
L1-SVM is recovered by setting $\|\cdot\|$ to be the 1-norm. Setting $\|\cdot\|$ to be the 2-norm induces an optimization problem which is close in nature to L2-SVM, but not identical to it, since the 2-norm is not squared. Combining the positive homogeneity of $\|\cdot\|$ with the triangle inequality ensures that the objective function of Eq. (2) is convex.

An important class of norms used extensively in our derivation is the family of p-norms, defined for every $p \ge 1$ by $\|v\|_p = (\sum_{j=1}^m |v_j|^p)^{1/p}$. A special member of this family is the $\infty$-norm, defined by $\|v\|_\infty = \lim_{p \to \infty} \|v\|_p$, which can be shown to equal $\max_j |v_j|$. We also use the notion of norm duality. Every norm on $\mathbb{R}^m$ has a dual norm, also defined on $\mathbb{R}^m$. The dual norm of $\|\cdot\|$ is denoted by $\|\cdot\|^*$ and given by
$$\|u\|^* \;=\; \max_{v \in \mathbb{R}^m :\, \|v\| = 1} u \cdot v \;=\; \max_{v \in \mathbb{R}^m} \frac{u \cdot v}{\|v\|}. \tag{3}$$
As its name implies, $\|\cdot\|^*$ also satisfies the requirements of a norm. For example, Hölder's inequality [17] states that the dual of $\|\cdot\|_p$ is the norm $\|\cdot\|_q$, where $q = p/(p-1)$. The dual of the 1-norm is the $\infty$-norm and vice versa. Using the definition of the dual norm, we now state the dual optimization problem of Eq. (2):
$$\max_{\alpha \ge 0} \ \sum_{i=1}^m \alpha_i - \tfrac{1}{2} \sum_{i=1}^m \sum_{j=1}^m \alpha_i \alpha_j y_i y_j K(x_i, x_j) \quad \text{s.t.} \quad \sum_{i=1}^m y_i \alpha_i = 0 \ \text{ and } \ \|\alpha\|^* \le C. \tag{4}$$
As a first sanity check, note that if $\|\cdot\|$ in Eq. (2) is chosen to be the 1-norm, then $\|\cdot\|^*$ is the $\infty$-norm, and the constraint $\|\alpha\|^* \le C$ reduces to the familiar box-constraint of L1-SVM [3]. The proof that Eq. (2) and Eq. (4) are indeed dual optimization problems relies on basic techniques in convex analysis [18], and is omitted due to lack of space. Moreover, it can be shown that the solution to Eq. (2) takes the form $f(\cdot) = \sum_{i=1}^m \alpha_i y_i K(x_i, \cdot)$, and that strong duality holds regardless of the norm used.
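As a quick numerical illustration of Eq. (3) and Hölder duality (our sketch, not part of the paper), the maximizers of $u \cdot v$ over the two unit balls can be written down in closed form, confirming that the 1-norm and the $\infty$-norm are dual to each other:

```python
import numpy as np

u = np.array([3.0, -1.0, 0.5, -4.0])

# Dual of the 1-norm: maximise u.v over ||v||_1 = 1. The maximiser puts
# all mass on the largest-magnitude coordinate, giving ||u||_inf.
j = np.argmax(np.abs(u))
v1 = np.zeros_like(u)
v1[j] = np.sign(u[j])
assert np.isclose(u @ v1, np.max(np.abs(u)))

# Dual of the inf-norm: maximise u.v over ||v||_inf = 1. The maximiser
# is v = sign(u), giving ||u||_1.
vinf = np.sign(u)
assert np.isclose(u @ vinf, np.sum(np.abs(u)))
```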
This allows us to forget about the primal problem in Eq. (2) and focus on solving the dual problem in Eq. (4). As with L1-SVM, the bias term $b$ cannot be directly extracted from the solution of the dual; the standard techniques used to find $b$ in L1-SVM apply here as well [3].

We note that the Any-Norm-SVM formulation is not fundamentally different from the original L1-SVM formulation. Both optimization problems have convex objective functions and linear constraints. More importantly, the only difference between their respective duals is the dual-norm constraint. Specifically, the objective function in Eq. (4) is a concave quadratic function for any choice of $\|\cdot\|$. These facts enable us to efficiently solve the problem in Eq. (4) for any kernel $K$ and any norm, using techniques similar to those used to solve the standard L1-SVM problem.

3 Interpolation Norms

In the previous section, we acquired the ability to replace the 1-norm in the definition of L1-SVM with an arbitrary norm. We now use Peetre's K-method of norm interpolation [19] to obtain norms that promote the sparsity of the generated classifier. The K-method is a technique for smoothly interpolating between a pair of norms. Let $\|\cdot\|_{p_1} : \mathbb{R}^m \to \mathbb{R}_+$ and $\|\cdot\|_{p_2} : \mathbb{R}^m \to \mathbb{R}_+$ be two p-norms, and let $\|\cdot\|_{q_1}$ and $\|\cdot\|_{q_2}$ be their respective duals. Peetre's K-functional with respect to $p_1$ and $p_2$, and with respect to a constant $t > 0$, is defined to be
$$\|v\|_{K(p_1, p_2, t)} = \min_{w, z :\; w + z = v} \big( \|w\|_{p_1} + t\,\|z\|_{p_2} \big). \tag{5}$$
Peetre's J-functional with respect to $q_1$, $q_2$, and a constant $s > 0$ is given by
$$\|u\|_{J(q_1, q_2, s)} = \max\big( \|u\|_{q_1},\ s\,\|u\|_{q_2} \big). \tag{6}$$
The J-functional is obviously a norm: the properties of a norm all follow immediately from the fact that $\|\cdot\|_{q_1}$ and $\|\cdot\|_{q_2}$ possess these properties. $\|\cdot\|_{K(p_1,p_2,t)}$ is also a norm, and moreover, $\|\cdot\|_{K(p_1,p_2,t)}$ and $\|\cdot\|_{J(q_1,q_2,s)}$ are dual to each other when $t = 1/s$. This fact can be proven using elementary calculus; the proof is omitted due to lack of space.

We use the K-method to interpolate between the 1-norm and the $\infty$-norm, and between the 2-norm and the $\infty$-norm. To gain some intuition on the behavior of these interpolation-norms, first note that for any $p \ge 1$ and any $v \in \mathbb{R}^m$ it holds that $\max_i |v_i|^p \le \sum_{i=1}^m |v_i|^p \le m \max_i |v_i|^p$, and therefore $\|v\|_\infty \le \|v\|_p \le m^{1/p}\|v\|_\infty$. An immediate consequence of this is that $\|\cdot\|_{K(p,\infty,t)} \equiv \|\cdot\|_\infty$ when $0 < t \le 1$, and that $\|\cdot\|_{K(p,\infty,t)} \equiv \|\cdot\|_p$ when $m^{1/p} \le t$. In other words, the interesting range of $t$ for the $1$–$\infty$ interpolation-norm is $[1, m]$, and for the $2$–$\infty$ interpolation-norm it is $[1, \sqrt{m}]$.

Next, we prove a theorem which states that interpolating a p-norm with the $\infty$-norm is approximately equivalent to restricting that p-norm to the absolutely-largest components of the vector. Specifically, the $1$–$\infty$ interpolation-norm with parameter $t$ (with $t$ chosen to be an integer $B$ in $[1, m]$) is precisely equivalent to taking the sum of the absolute values of the $B$ absolutely-greatest elements of the vector.

Theorem 1. Let $v$ be an arbitrary vector in $\mathbb{R}^m$ and let $\pi$ be a permutation on $\{1, \ldots, m\}$ such that $|v_{\pi(1)}| \ge \ldots \ge |v_{\pi(m)}|$. Then for any integer $B$ in $\{1, \ldots, m\}$ it holds that $\|v\|_{K(1,\infty,B)} = \sum_{i=1}^B |v_{\pi(i)}|$, and for any $1 \le p < \infty$, if $t = B^{1/p}$ then it holds that
$$\Big( \sum_{i=1}^B |v_{\pi(i)}|^p \Big)^{1/p} \ \le\ \|v\|_{K(p,\infty,t)} \ \le\ \Big( \sum_{i=1}^B |v_{\pi(i)}|^p \Big)^{1/p} + B^{1/p}\,|v_{\pi(B)}|.$$
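The $p = 1$ case of Theorem 1 is easy to check numerically. The following sketch is ours; the soft-threshold split of $v$ into $\tilde w$ and $\tilde z$ comes from the upper-bound part of the proof below, and a scan over other thresholds shows that no split does better than the top-$B$ sum:

```python
import numpy as np

def top_B_sum(v, B):
    # Theorem 1: ||v||_{K(1,inf,B)} equals the sum of the B largest |v_i|.
    return np.sort(np.abs(v))[::-1][:B].sum()

def k_split_value(v, B, theta):
    # ||w||_1 + B * ||z||_inf for the split w_i = sign(v_i) max(0, |v_i| - theta),
    # z_i = sign(v_i) min(|v_i|, theta); signs cancel in the norms.
    w = np.maximum(np.abs(v) - theta, 0.0)
    z = np.minimum(np.abs(v), theta)
    return w.sum() + B * z.max()

v = np.array([5.0, -3.0, 2.0, -2.0, 0.5])
B = 2
theta_star = np.sort(np.abs(v))[::-1][B - 1]     # theta = |v_{pi(B)}| = 3
print(top_B_sum(v, B))                            # 8.0
print(k_split_value(v, B, theta_star))            # 8.0, matching Theorem 1
# Every other threshold gives a value >= the top-B sum:
print(min(k_split_value(v, B, t) for t in np.linspace(0.0, 5.0, 501)))  # 8.0
```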
Proof. Beginning with the lower bound, let $w$ and $z$ be such that $w + z = v$. Then
$$\Big(\sum_{i=1}^B |v_{\pi(i)}|^p\Big)^{1/p} = \Big(\sum_{i=1}^B |w_{\pi(i)} + z_{\pi(i)}|^p\Big)^{1/p} \le \Big(\sum_{i=1}^B |w_{\pi(i)}|^p\Big)^{1/p} + \Big(\sum_{i=1}^B |z_{\pi(i)}|^p\Big)^{1/p} \le \Big(\sum_{i=1}^B |w_{\pi(i)}|^p\Big)^{1/p} + \big(B \max_i |z_i|^p\big)^{1/p} \le \Big(\sum_{i=1}^m |w_i|^p\Big)^{1/p} + t\,\|z\|_\infty,$$
where the first inequality is the triangle inequality for the p-norm. Since the above holds for any $w$ and $z$ such that $w + z = v$, it also holds for the pair which minimizes $(\sum_{i=1}^m |w_i|^p)^{1/p} + t\|z\|_\infty$, and which defines $\|v\|_{K(p,\infty,t)}$. Therefore, we have that
$$\Big(\sum_{i=1}^B |v_{\pi(i)}|^p\Big)^{1/p} \le \|v\|_{K(p,\infty,t)}. \tag{7}$$
Turning to the upper bound, let $\theta = |v_{\pi(B)}|$, and define for all $1 \le i \le m$, $\tilde w_i = \mathrm{sign}(v_i)\max\{0, |v_i| - \theta\}$ and $\tilde z_i = \mathrm{sign}(v_i)\min\{|v_i|, \theta\}$. Note that $\tilde w + \tilde z = v$, and that
$$\sum_{i=1}^B |v_{\pi(i)}| = \|\tilde w\|_1 + B\,\|\tilde z\|_\infty.$$
This proves that $\|v\|_{K(1,\infty,B)} \le \sum_{i=1}^B |v_{\pi(i)}|$, and together with Eq. (7) we have proven our claim for $p = 1$. Moving on to the case of an arbitrary $p$, we have that
$$\|v\|_{K(p,\infty,t)} = \min_{w + z = v}\big(\|w\|_p + t\|z\|_\infty\big) \le \|\tilde w\|_p + t\,\|\tilde z\|_\infty.$$
Since the absolute value of each element in $\tilde w$ is at most as large as the absolute value of the corresponding element of $v$, and since $\tilde w_{\pi(B+1)} = \ldots = \tilde w_{\pi(m)} = 0$, we have that $\|\tilde w\|_p \le (\sum_{i=1}^B |v_{\pi(i)}|^p)^{1/p}$. By definition, $\|\tilde z\|_\infty = \theta = |v_{\pi(B)}|$. This proves that $\|v\|_{K(p,\infty,t)} \le (\sum_{i=1}^B |v_{\pi(i)}|^p)^{1/p} + t\,|v_{\pi(B)}|$, and together with Eq. (7) this concludes our proof for arbitrary $p$.

4 Deriving Concrete Algorithms from the General Framework

Our first concrete algorithm is budget-L1-SVM, obtained by plugging the $1$–$\infty$ interpolation-norm with parameter $B$ into the general Any-Norm-SVM framework. Relying on Thm. 1, we know that this norm takes into account only the $B$ largest values in $\xi$. Since $\xi$ measures how badly each example is misclassified, the budget-L1-SVM problem essentially optimizes the soft-margin with respect to the $B$ worst-classified examples. We now show that this property promotes the sparsity of the budget-L1-SVM solution. If there are fewer than $B$ examples for which $y_i(f(x_i) + b) < 1$, then the KKT conditions of optimality immediately imply that the number of support vectors is less than $B$. This holds true for every instance of the Any-Norm-SVM framework, and is proven for L1-SVM in [3]. Therefore, we focus on the more interesting case, where $y_i(f(x_i) + b) < 1$ for at least $B$ examples.

Theorem 2. Let $B$ be an integer in $\{1, \ldots, m\}$. Let $(f, b, \xi, \alpha)$ be an optimal primal-dual solution of the primal problem in Eq. (2) and the dual problem in Eq. (4), where $\|\cdot\|$ is chosen to be the $1$–$\infty$ interpolation-norm with parameter $B$. Define $\mu_i = y_i(f(x_i) + b)$ and let $\pi$ be a permutation of $\{1, \ldots, m\}$ such that $\mu_{\pi(1)} \le \ldots \le \mu_{\pi(m)}$. Assume that $\mu_{\pi(B)} < 1$. Then $\alpha_k = 0$ if $\mu_{\pi(B)} < \mu_k$.

Proof. We begin the proof by redefining $\xi_i = \max\{1 - \mu_i, 0\}$ for all $1 \le i \le m$ and noting that $(f, b, \xi, \alpha)$ remains a primal-dual solution to our problem. The benefit of starting with this specific solution is that $\xi_{\pi(1)} \ge \ldots \ge \xi_{\pi(m)}$. Let $k$ be an index such that $\mu_{\pi(B)} < \mu_k$ and define $\xi'_k = \tfrac{1}{2}(\xi_k + \xi_{\pi(B)})$. Moreover, let $\xi'$ be the vector obtained by replacing the $k$'th coordinate in $\xi$ with $\xi'_k$, that is, $\xi' = (\xi_1, \ldots, \xi'_k, \ldots, \xi_m)$. Using the assumption that $\mu_{\pi(B)} < 1$, we know that $\xi_{\pi(B)} > 0$, and since $\mu_k > \mu_{\pi(B)}$ we get that $\xi_k < \xi_{\pi(B)}$. We can now draw two conclusions. First, $\xi_{\pi(1)} \ge \ldots \ge \xi_{\pi(B)} > \xi'_k$, and therefore $\|\xi'\|_{K(1,\infty,B)} = \|\xi\|_{K(1,\infty,B)}$. Second, $\xi_k < \xi'_k$, and therefore $\xi'$ satisfies the constraints of Eq. (2). Overall, we obtain that $(f, b, \xi', \alpha)$ is also a primal-dual solution to our problem. Moreover, we know that $1 - \mu_k < \xi'_k$.
Using the KKT complementary slackness condition, it follows that $\alpha_k$, the Lagrange multiplier corresponding to this constraint, must equal 0.

Defining $\mu_i$ and $\pi$ as above, a simple corollary of Thm. 2 is that the number of support vectors is upper bounded by $B$ whenever $\mu_{\pi(B)} \ne \mu_{\pi(B+1)}$. From our discussion in Sec. 3, we know that the dual of the $1$–$\infty$ interpolation-norm is the function $\max\{\|u\|_\infty,\ (1/B)\|u\|_1\}$. Plugging this definition into Eq. (4) gives us the dual optimization problem of budget-L1-SVM: the constraint $\|\alpha\|^* \le C$ simplifies to $\alpha_i \le C$ for all $i$ and $\sum_{i=1}^m \alpha_i \le BC$.

To numerically solve this optimization problem, we turn to the Sequential Minimal Optimization (SMO) technique [5]. We briefly describe SMO and then discuss its adaptation to our setting. SMO is an iterative process which, on every iteration, selects a pair of dual variables, $\alpha_k$ and $\alpha_l$, and optimizes the dual problem with respect to them, leaving all other variables fixed. The choice of the two variables is determined by a heuristic [5], and their optimal values are calculated analytically. Assume that we start with a vector $\alpha$ which is a feasible point of the optimization problem in Eq. (4). When restricted to the two active variables $\alpha_k$ and $\alpha_l$, the constraint $\sum_i \alpha_i y_i = 0$ simplifies to $\alpha_k^{\mathrm{new}} y_k + \alpha_l^{\mathrm{new}} y_l = \alpha_k^{\mathrm{old}} y_k + \alpha_l^{\mathrm{old}} y_l$. Put another way, we can slightly overload our notation and define the linear functions
$$\alpha_k(\tau) = \alpha_k + \tau y_k \quad \text{and} \quad \alpha_l(\tau) = \alpha_l - \tau y_l, \tag{8}$$
and find the single variable $\tau$ which maximizes our constrained optimization problem. Since the constraints in Eq. (4) define a convex and bounded feasible set, the intersection of the linear equalities in Eq. (8) with this feasible set restricts $\tau$ to an interval. The objective function, as a function of the single variable $\tau$, takes the form $O(\tau) = P\tau^2 + Q\tau + c$, where $c$ is a constant,
$$P = K(x_k, x_l) - \tfrac{1}{2}K(x_k, x_k) - \tfrac{1}{2}K(x_l, x_l), \qquad Q = \big(y_k - f(x_k)\big) - \big(y_l - f(x_l)\big),$$
and $f$ is the current function in the RKHS ($f \equiv \sum_{i=1}^m \alpha_i y_i K(x_i, \cdot)$). Maximizing the objective function in Eq. (4) with respect to $\alpha_k$ and $\alpha_l$ is equivalent to maximizing $O(\tau)$ with respect to $\tau$ over an interval. $P$ equals minus one half of the squared distance between the functions $K(x_k, \cdot)$ and $K(x_l, \cdot)$ in the RKHS, and is therefore a negative number whenever these two functions differ. Therefore, $O(\tau)$ is a concave function which attains a single (unconstrained) maximum. This maximum can be found analytically by
$$0 = \frac{\partial O(\tau)}{\partial \tau} = 2P\tau + Q \ \iff\ \tau = \frac{-Q}{2P}. \tag{9}$$
If this unconstrained optimum falls inside the feasible interval, then it is equivalent to the constrained optimum. Otherwise, the constrained optimum falls on one of the two end-points of the interval. Thus, we are left with the task of finding these end-points. To do so, we consider the remaining constraints:
$$\text{(I)}\ \ \alpha_k(\tau) \ge 0,\ \alpha_l(\tau) \ge 0 \qquad \text{(II)}\ \ \alpha_k(\tau) \le C,\ \alpha_l(\tau) \le C \qquad \text{(III)}\ \ \alpha_k(\tau) + \alpha_l(\tau) \le BC - \textstyle\sum_{i \ne k,l} \alpha_i.$$
The constraints in (I) translate to
$$y_k = -1 \Rightarrow \tau \le \alpha_k \qquad y_k = +1 \Rightarrow \tau \ge -\alpha_k \qquad y_l = -1 \Rightarrow \tau \ge -\alpha_l \qquad y_l = +1 \Rightarrow \tau \le \alpha_l. \tag{10}$$
The constraints in (II) translate to
$$y_k = -1 \Rightarrow \tau \ge \alpha_k - C \qquad y_k = +1 \Rightarrow \tau \le C - \alpha_k \qquad y_l = -1 \Rightarrow \tau \le C - \alpha_l \qquad y_l = +1 \Rightarrow \tau \ge \alpha_l - C. \tag{11}$$
Constraint (III) translates to
$$y_k = -1 \wedge y_l = +1 \ \Rightarrow\ \tau \ge \tfrac{1}{2}\Big(\textstyle\sum_{i=1}^m \alpha_i - BC\Big) \qquad y_k = +1 \wedge y_l = -1 \ \Rightarrow\ \tau \le \tfrac{1}{2}\Big(BC - \textstyle\sum_{i=1}^m \alpha_i\Big). \tag{12}$$
(When $y_k = y_l$, the sum $\alpha_k(\tau) + \alpha_l(\tau)$ is unchanged and (III) imposes no restriction on $\tau$.) Finding the end-points of the interval that confines $\tau$ amounts to finding the smallest upper bound and the greatest lower bound in Eqs. (10,11,12). This concludes the analytic derivation of the SMO update for budget-L1-SVM.
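The update just derived is compact enough to write out in full. The following is a minimal sketch of one budget-L1-SVM SMO step (our own construction from Eqs. (8)-(12), not the authors' code); it assumes a precomputed Gram matrix `K` and a feasible starting point, and it simply skips degenerate pairs with $P = 0$:

```python
import numpy as np

def smo_pair_update(alpha, y, K, k, l, C, B):
    # One SMO step on the pair (k, l) for budget-L1-SVM.
    f = K @ (alpha * y)                        # f(x_i) = sum_j alpha_j y_j K(x_j, x_i)
    P = K[k, l] - 0.5 * K[k, k] - 0.5 * K[l, l]
    Q = (y[k] - f[k]) - (y[l] - f[l])
    if P >= 0.0:                               # degenerate pair; no concave maximum
        return alpha
    tau = -Q / (2.0 * P)                       # unconstrained maximum, Eq. (9)

    # Box constraints 0 <= alpha_k(tau), alpha_l(tau) <= C, Eqs. (10)-(11):
    lo = -alpha[k] if y[k] > 0 else alpha[k] - C
    hi = C - alpha[k] if y[k] > 0 else alpha[k]
    lo = max(lo, alpha[l] - C if y[l] > 0 else -alpha[l])
    hi = min(hi, alpha[l] if y[l] > 0 else C - alpha[l])
    # Budget constraint sum_i alpha_i <= B*C, Eq. (12):
    if y[k] > 0 and y[l] < 0:
        hi = min(hi, 0.5 * (B * C - alpha.sum()))
    elif y[k] < 0 and y[l] > 0:
        lo = max(lo, 0.5 * (alpha.sum() - B * C))

    tau = np.clip(tau, lo, hi)                 # constrained maximum
    alpha[k] += tau * y[k]                     # Eq. (8)
    alpha[l] -= tau * y[l]
    return alpha
```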
L2-SVM on a budget: Next, we use the $2$–$\infty$ interpolation-norm with parameter $t = \sqrt{B}$ in the Any-Norm-SVM framework, and obtain the budget-L2-SVM problem. Thm. 1 hints that setting $t = \sqrt{B}$ makes the $2$–$\infty$ interpolation-norm almost equivalent to restricting the 2-norm to the top $B$ elements of the vector $\xi$. The support size of the budget-L2-SVM solution is strongly correlated with the parameter $B$, although the exact relation between the two is not as clear as before. Again we begin with the dual formulation defined in Eq. (4), where the constraint $\|\alpha\|^* \le C$ becomes $\max\{\|\alpha\|_2,\ (1/\sqrt{B})\|\alpha\|_1\} \le C$. The intersection of this constraint with the other constraints defines a convex and bounded feasible set, and its intersection with the linear equalities in Eq. (8) defines an interval. The objective function in Eq. (4) is the same as before, so the unconstrained maximum is once again given by Eq. (9). To obtain the constrained maximum, we must find the end-points of the interval that confines $\tau$. The dual-norm constraint can be written more explicitly as
$$\text{(I)}\ \ \alpha_k(\tau) + \alpha_l(\tau) \le \sqrt{B}\,C - \textstyle\sum_{i \ne k,l} \alpha_i \qquad \text{(II)}\ \ \alpha_k^2(\tau) + \alpha_l^2(\tau) \le C^2 - \textstyle\sum_{i \ne k,l} \alpha_i^2.$$
Constraint (I) is similar to the constraint we had in the budget-L1-SVM case, and is given in terms of $\tau$ by replacing $B$ with $\sqrt{B}$ in Eq. (12). Constraint (II) is new, and can be written in terms of $\tau$ as $\tau^2 + \beta\tau + \delta \le 0$, where $\beta = \alpha_k y_k - \alpha_l y_l$ and $\delta = \tfrac{1}{2}\big(\sum_{i=1}^m \alpha_i^2 - C^2\big)$. It can be written even more explicitly as
$$\tau \le \tfrac{1}{2}\big({-\beta} + \sqrt{\beta^2 - 4\delta}\big) \quad \text{and} \quad \tau \ge \tfrac{1}{2}\big({-\beta} - \sqrt{\beta^2 - 4\delta}\big). \tag{13}$$
In addition, we still have the constraint $\alpha \ge 0$, which is common to every instance of the Any-Norm-SVM framework; it is given in terms of $\tau$ in Eq. (10). Overall, the end-points of the interval we are searching for are found by taking the smallest upper bound and the greatest lower bound in Eqs. (10,13) and in Eq. (12) with $B$ replaced by $\sqrt{B}$.

[Figure 1: Average test error of budget-L1-SVM (left) and budget-L2-SVM (right) for different values of the budget parameter B and the pruning parameter s (all but s weights in α are set to zero). The test error in the darkest region is roughly 50%, and in the lightest region is roughly 5%.]

5 Experiments

Many existing solvers for the standard L1-SVM problem define a positive threshold value close to zero and replace every weight that falls below this threshold with zero. This heuristic significantly reduces the time required for the algorithm to converge. In our setting, a more natural way to speed up the learning process is to run the iterative SMO optimization algorithm for a fixed number of iterations and then keep only the $B$ largest weights, setting the $m - B$ remaining weights to zero. This pruning heuristic (sketched below) enforces the budget constraint in a brute-force way, and can equally be applied to any kernel-machine. However, the natural question is how much the pruning heuristic affects the classification accuracy of the kernel-machine it is applied to. If our technique indeed lives up to its theoretical promise, we expect the pruning heuristic to have little impact on classification accuracy. On the other hand, if we train an L1-SVM and it so happens that the number of large weights exceeds $B$, then applying the pruning heuristic should have a dramatic negative effect on classification accuracy. The goal of our experiments is to demonstrate that this behavior indeed occurs in practice.
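The pruning heuristic itself is a one-liner; here is a sketch (ours) of the operation applied after the fixed SMO iteration budget, keeping the `s` largest dual weights:

```python
import numpy as np

def prune_to_budget(alpha, s):
    # Keep only the s largest |alpha_i|, zeroing the rest. This enforces
    # the budget by brute force and works for any kernel machine.
    pruned = np.zeros_like(alpha)
    keep = np.argsort(np.abs(alpha))[-s:]   # indices of the s largest weights
    pruned[keep] = alpha[keep]
    return pruned
```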
We conducted our experiments using the MNIST dataset, which contains handwritten digits from 10 digit classes. We randomly generated 50 binary classification problems by first randomly partitioning the 10 classes into two equally sized sets, and then randomly choosing a training set of 1000 examples and a test set of 4000 examples. The results reported below are averaged over these 50 problems. Although MNIST is generally thought to induce easy learning problems, the method described above generates moderately difficult learning tasks. For each binary problem, we trained both the L1 and the L2 budget SVMs with $B = 20, 40, \ldots, 1000$. Note that $\|\xi\|_{K(1,\infty,B)}$ grows roughly linearly with $B$, and that $\|\xi\|_{K(2,\infty,\sqrt{B})}$ grows roughly like the square root of $B$. To compensate for this, we set $C = 10/B$ in the L1 case and $C = 10/\sqrt{B}$ in the L2 case. This heuristic choice of $C$ attempts to preserve the relative weight of the regularization term with respect to the norm term in Eq. (2) across the various values of $B$. In all of our experiments, we used a Gaussian kernel with $\sigma = 1$ (after scaling the data to have an average unit norm). For each classifier trained, we pruned away all but the $s$ largest weights, with $s = 20, 40, \ldots, 1000$, and calculated the test error. The average test error for every choice of $B$ (the budget parameter in the optimization problem) and $s$ (the number of non-zero weights kept) is summarized in Fig. 1. In practice, $s$ and $B$ should be equal; we let $s$ take different values in our experiment to illustrate the characteristics of our approach. Note that the test error attained by L1-SVM (without a budget parameter) and by L2-SVM is represented by the top-right corner of the respective plot. As expected, classification accuracy for any value of $B$ deteriorates as $s$ becomes small. However, the accuracy attained by L1-SVM and L2-SVM can be attained equally well using significantly fewer support vectors.

6 Discussion

Using the Any-Norm-SVM framework with interesting norms enabled us to introduce a budget parameter to the SVM formulation. However, the Any-Norm framework can be used for other tasks as well. For example, we can interpolate between L1-SVM and L2-SVM by using the $1$–$2$ interpolation-norm. This gives the user the explicit ability to balance the trade-off between the pros and cons of these two SVM variants. In [20] it is shown that there exists a constant $c$ such that
$$c\,\|v\|_{K(1,2,\sqrt{r})} \ \le\ \sum_{j=1}^r |v_{\pi(j)}| + \sqrt{r}\,\Big(\sum_{j=r+1}^m v_{\pi(j)}^2\Big)^{1/2} \ \le\ \|v\|_{K(1,2,\sqrt{r})},$$
where $\pi$ again orders the coordinates of $v$ by decreasing magnitude. These bounds give some insight into how such an interpolation would behave. Another possible norm that can be used in our framework is the Mahalanobis norm ($\|v\| = (v^\top M v)^{1/2}$, where $M$ is a positive definite matrix), which would define a loss function that takes into account pair-wise relationships between examples. Regarding our experiments, the rule-of-thumb we used to choose the parameter $C$ is not always optimal; it seems preferable to tune $C$ individually for each $B$ using cross-validation. We are currently exploring extensions to our SMO variant that would converge quickly to the sparse solution without the help of the pruning heuristic. We are also considering multiplicative-update optimization algorithms as an alternative to SMO.

References
[1] B. E. Boser, I. M. Guyon, and V. N. Vapnik. A training algorithm for optimal margin classifiers. In Proc. of the Fifth Annual ACM Workshop on Computational Learning Theory, pages 144–152, 1992.
[2] V. N. Vapnik. Statistical Learning Theory. Wiley, 1998.
[3] N. Cristianini and J. Shawe-Taylor.
An Introduction to Support Vector Machines. Cambridge University Press, 2000.
[4] P. Bartlett and A. Tewari. Sparseness vs estimating conditional probabilities: Some asymptotic results. In Proc. of the Seventeenth Annual Conference on Computational Learning Theory, pages 564–578, 2004.
[5] J. C. Platt. Fast training of support vector machines using sequential minimal optimization. In B. Schölkopf, C. Burges, and A. Smola, editors, Advances in Kernel Methods: Support Vector Learning. MIT Press, 1998.
[6] C. J. C. Burges. Simplified support vector decision rules. In Proc. of the Thirteenth International Conference on Machine Learning, pages 71–77, 1996.
[7] E. Osuna and F. Girosi. Reducing the run-time complexity of support vector machines. In B. Schölkopf, C. Burges, and A. Smola, editors, Advances in Kernel Methods: Support Vector Learning, pages 271–284. MIT Press, 1999.
[8] B. Schölkopf, S. Mika, C. J. C. Burges, P. Knirsch, K.-R. Müller, G. Rätsch, and A. J. Smola. Input space versus feature space in kernel-based methods. IEEE Transactions on Neural Networks, 10(5):1000–1017, September 1999.
[9] J.-H. Chen and C.-S. Chen. Reducing SVM classification time using multiple mirror classifiers. IEEE Transactions on Systems, Man and Cybernetics, Part B: Cybernetics, 34(2):1173–1183, April 2004.
[10] M. Wu, B. Schölkopf, and G. Bakir. A direct method for building sparse kernel learning algorithms. Journal of Machine Learning Research, 7:603–624, 2006.
[11] K. P. Bennett. Combining support vector and mathematical programming methods for classification. In Advances in Kernel Methods: Support Vector Learning, pages 307–326. MIT Press, 1999.
[12] Y. Lee and O. L. Mangasarian. RSVM: Reduced support vector machines. In Proc. of the First SIAM International Conference on Data Mining, 2001.
[13] K. Crammer, J. Kandola, and Y. Singer. Online classification on a budget. In Advances in Neural Information Processing Systems 16, 2003.
[14] O. Dekel, S. Shalev-Shwartz, and Y. Singer. The Forgetron: A kernel-based perceptron on a fixed budget. In Advances in Neural Information Processing Systems 18, 2005.
[15] N. Cesa-Bianchi and C. Gentile. Tracking the best hyperplane with a simple budget perceptron. In Proc. of the Nineteenth Annual Conference on Computational Learning Theory, 2006.
[16] N. Aronszajn. Theory of reproducing kernels. Transactions of the American Mathematical Society, 68(3):337–404, May 1950.
[17] R. A. Horn and C. R. Johnson. Matrix Analysis. Cambridge University Press, 1985.
[18] S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, 2004.
[19] C. Bennett and R. Sharpley. Interpolation of Operators. Academic Press, 1998.
[20] T. Holmstedt. Interpolation of quasi-normed spaces. Mathematica Scandinavica, 26:177–190, 1970.
Natural Actor-Critic for Road Traffic Optimisation

Silvia Richter
Albert-Ludwigs-Universität Freiburg, Germany
Douglas Aberdeen
National ICT Australia, Canberra, Australia
Jin Yu
National ICT Australia, Canberra, Australia
[email protected], [email protected], [email protected]

Abstract

Current road-traffic optimisation practice around the world is a combination of hand-tuned policies with a small degree of automatic adaption. Even state-of-the-art research controllers need good models of the road traffic, which cannot be obtained directly from existing sensors. We use a policy-gradient reinforcement learning approach to directly optimise the traffic signals, mapping currently deployed sensor observations to control signals. Our trained controllers are (theoretically) compatible with the traffic system used in Sydney and many other cities around the world. We apply two policy-gradient methods: (1) the recent natural actor-critic algorithm, and (2) a vanilla policy-gradient algorithm for comparison. Along the way we extend natural actor-critic approaches to work for distributed and online infinite-horizon problems.

1 Introduction

Optimising the performance of existing road networks is a cheap way to reduce the environmental, social, and financial impact of ever increasing volumes of traffic. Road traffic optimisation can be naturally cast as a reinforcement learning (RL) problem. Unfortunately it is in the hardest class of RL problems, having a continuous state space and infinite horizon, and being partially observable and difficult to model. We focus on the use of the natural actor-critic (NAC) algorithm [1] to solve this problem through online interaction with the traffic system. The NAC algorithm is an elegant combination of 4 RL methods: (1) policy-gradient actors to allow local convergence under function approximation and partial observability; (2) natural gradients to incorporate curvature statistics into the gradient ascent; (3) temporal-difference critics to reduce the variance of gradient estimates; (4) least-squares temporal-difference methods to avoid wasting information from costly environment interactions. One contribution of this work is an efficient online version of NAC that avoids large gradient steps, which cannot be guaranteed to be an improvement in the presence of stochastic gradients [2]. We compare this online NAC to a simple online policy-gradient (PG) approach, demonstrating that NAC converges in orders of magnitude fewer environment interactions. We also compare wall-clock convergence time, and suggest that environments which can be simulated quickly and accurately can be optimised faster with simple PG approaches rather than NAC.

This work has grown out of an interaction with the Sydney Road Traffic Authority. Our choice of controls, observations, and algorithms all aim for practical large-scale traffic control. Although our results are based on a simplified traffic simulation system, we could theoretically attach our learning system to real-world traffic networks. Our simple simulator results demonstrate better performance than the automatic adaption schemes used in current proprietary systems.

2 Traffic Control

The optimisation problem consists of finding signalling schedules for all intersections in the system that minimise the average travel time, or similar objectives. This is complicated by the fact that many of the influencing state variables cannot be readily measured.
Most signal controllers in use today rely only on state information gained from inductive loops in the streets. A stream is a sequence of cars making the same turn (or going straight) through an intersection. A phase is an interval where a subset of the lights at an intersection are green such that a set of streams that will not collide have right of way. A signal cycle is completed when each phase has been on once, the cycle time being the sum of the phase times. Traditionally, control algorithms optimise traffic flow via the phase scheme, the split, and the offset. The phase scheme groups the signal lights into phases and determines their order. A split gives a distribution of the cycle time to the individual phases. Offsets can be introduced to coordinate neighbouring intersections, creating a "green wave" for the vehicles travelling along a main road.

Approaches to traffic control can be grouped into three categories. Fixed-time control strategies are calculated off-line, based on historical data; TRANSYT, for example, uses evolutionary algorithms and hill-climbing optimisation [3]. Traffic-responsive strategies are real-time, calculating their policies from car counts determined from inductive-loop detectors; SCOOT and SCATS are examples in use around the world [4, 5]. Third-generation methods employ sophisticated dynamic traffic models and try to find optimal lengths for all phases, given a fixed phase scheme, e.g., by dynamic programming [4]. The exponential complexity of the problem, however, limits these approaches to a local view of a few intersections. Reinforcement learning has also been applied [6], but in a way that uses a value function for each car, which is unrealistic in today's world. Common to most approaches is that they deal with the insufficient state information by maintaining a model of the traffic situation, derived from available sensor counts. However, imperfection in the model is a further source of errors, and performance may consequently suffer. Our methods avoid modelling.

We focus on the system that motivated this research, the Sydney Coordinated Adaptive Traffic System (SCATS) [5]. It is used in major cities throughout Australasia and North America. It provides pre-specified plans that are computed from historical data. There is an additional layer of online adaption based on a saturation-balancing algorithm: SCATS calculates phase timings so that all phases are utilised by vehicles for a fixed percentage of the phase time. Small incremental updates are performed once per cycle. Due to this slow update, rapidly fluctuating traffic conditions pose a problem for SCATS. Furthermore, it does not optimise the global network performance, and its base plans involve hand-tuning to encode offset values.

3 Partially Observable MDP Formulation

We cast the traffic problem as a partially observable Markov decision process (POMDP) with states $s$ in a continuous space $S$. The system is controlled by stochastic actions $a_t \in A$ drawn from a random variable (RV) conditioned on the current policy parameters $\theta_t$ and an observation of the current state $o(s_t)$, according to $\Pr(a_t \mid o(s_t), \theta_t)$. In general POMDPs the observation function $o(s_t)$ is stochastic. NAC, which has so far only been presented in the literature for the fully observable case, can be extended to POMDPs given deterministic observations, as this ensures compatibility of the function approximation and avoids injecting noise into the least-squares solution. Our traffic POMDP fulfils this requirement.
To simplify notation we set $o_t := o(s_t)$. The state is an RV evolving as a function of the previous state and action according to $\Pr(s_{t+1} \mid s_t, a_t)$. In the case of road traffic these distributions are continuous and complex, making even approximate methods for model-based planning in POMDPs difficult to apply. The loop sensors only count cars that pass over them, or are stationary on them. The controller needs access to the history to attempt to resolve some hidden state. We later describe observations constructed from the sensor readings that incorporate some important elements of history for intersection control.

3.1 Local Rewards

A common objective function in traffic control is the average vehicle travel time. However, it is impossible to identify a particular car in the system, let alone measure its travel time; sophisticated modelling approaches are needed to estimate these quantities. We prefer a direct approach that has the benefits of being trivial to measure from existing sensors and of easing the temporal credit assignment problem. For the majority of our experiments we treat each intersection as a local MDP. It is trivial to count all cars that enter an intersection with loop detectors. We therefore chose the instant reward $r_{t,i}$ to be the number of cars that entered intersection $i$ over the last time step. The objective for each intersection $i$ is to maximise the normalised discounted throughput:
$$R_i(\theta) = E\Big[ (1 - \gamma) \sum_{t=0}^{\infty} \gamma^t r_{t,i} \ \Big|\ \theta \Big].$$
Discounting is important because it ensures that the controller prefers to pass cars through as early as possible. While suboptimal policies (in terms of travel time) may achieve the optimal average throughput over a time window, the discounted throughput criterion effectively minimises the total waiting time at an intersection in the finite-horizon case [7]. Ignoring effects such as road saturation and driver adaption (which we explore in our experiments), this results in minimisation of the system-wide travel time.

The use of local rewards speeds up learning, especially as the number of intersections grows. Bagnell and Ng [8] demonstrate that local rewards reduce the sample complexity from a worst case of $\Omega(I)$, where $I$ is the number of intersections, down to $O(\log I)$. Unfortunately, the value of $R_i(\theta)$ depends directly on the local steady-state distribution $\Pr(s_i \mid \theta)$. Thus changes to the policy of neighbouring intersections can adversely impact intersection $i$ by influencing the distribution of $s_i$. A sufficiently small learning rate allows controllers to adapt to this effectively non-stationary component of the local MDP. We may fail to find the globally optimal cooperative policy without some communication of rewards, but local rewards have proven very effective empirically.

4 Natural Actor-Critic Algorithms

Policy-gradient (PG) methods for RL are of interest because it can be easier to learn policies directly than to estimate the exact value of every state of the underlying MDP. While they offer only local convergence guarantees, they do not suffer from the convergence problems exhibited by pure value-based methods under function approximation or partial observability [9]. On the other hand, PG methods have suffered from slow convergence compared to value methods, due to high variance in the gradient estimates. The natural actor-critic method (NAC) [1] improves this with a combination of PG methods, natural gradients, value estimation, and least-squares temporal-difference Q-learning (LSTD-Q).
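Before deriving NAC, here is a concrete illustration (ours, with an assumed discount of 0.95) of the local objective from Sec. 3.1, computed over a finite reward trace; it also shows why the discounted criterion prefers passing cars early:

```python
import numpy as np

def discounted_throughput(rewards, gamma=0.95):
    # Normalised discounted throughput R_i for one intersection.
    # rewards[t] is the number of cars that entered at step t; the
    # (1 - gamma) factor normalises the geometric series.
    discounts = gamma ** np.arange(len(rewards))
    return (1.0 - gamma) * np.dot(discounts, rewards)

# Passing the same 3 cars earlier yields a strictly higher score:
print(discounted_throughput([3, 0, 0, 0]))  # 0.150
print(discounted_throughput([0, 0, 0, 3]))  # ~0.129
```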
NAC computes gradient estimates in a batch fashion, followed by a search for the best step size. We introduce an online stochastic gradient ascent using NAC estimates; stochastic gradient ascent methods often outperform stochastic batch methods [2]. We begin with the Bellman equation for fixed parameters $\theta$, where the value of action $a$ in state $s$ is $Q(s,a)$. This can also be written as the value $V(s)$ plus the advantage of action $a$ in state $s$, $A(s,a)$:
$$Q(s,a) = V(s) + A(s,a) = r(s,a) + \gamma \int_S \Pr(s' \mid s, a)\, V(s')\, ds'. \tag{1}$$
We substitute linear approximators for the value and advantage functions, with parameter vectors $v$ and $w$ respectively: $V(s) := o(s)^\top v$ and $\tilde A(s,a) := (\nabla_\theta \log \Pr(a \mid o(s), \theta))^\top w$, leading to
$$o(s)^\top v + \big(\nabla_\theta \log \Pr(a \mid o(s), \theta_t)\big)^\top w = r(s,a) + \gamma \int_S \Pr(s' \mid s, a)\, o(s')^\top v\, ds'. \tag{2}$$
The surprising choice of $\nabla_\theta \log \Pr(a \mid o(s), \theta)$ as features for $\tilde A(s,a)$ has the nice property that the parameters $w$ turn out to be the naturalised gradient of the long-term average reward (and the features are compatible with the policy parameterisation [9]). To see this we write out the policy-gradient theorem [9], where $\Pr(s \mid \theta)$ is the steady-state probability of state $s$ and $b(s)$ is a baseline used to reduce the variance of gradient estimates:
$$\nabla_\theta R(\theta) = \int_S \Pr(s \mid \theta) \int_A \nabla_\theta \Pr(a \mid s)\,\big(Q(s,a) - b(s)\big)\, da\, ds. \tag{3}$$
The obvious baseline for making $Q(s,a)$ zero mean is $b(s) = V(s)$, which gives $Q(s,a) - V(s) = A(s,a)$. Again, we substitute the linear approximation $\tilde A(s,a)$ for $A(s,a)$ and make use of the fact that our policy is actually a function of $o := o(s)$ and $\theta$:
$$\nabla_\theta R(\theta) = \int_S \Pr(s \mid \theta) \int_A \nabla_\theta \Pr(a \mid o, \theta)\,\big(\nabla_\theta \log \Pr(a \mid o, \theta)\big)^\top w\, da\, ds.$$
Further substituting $\nabla_\theta \Pr(a \mid o, \theta) = \Pr(a \mid o, \theta)\,\nabla_\theta \log \Pr(a \mid o, \theta)$ gives
$$\nabla_\theta R(\theta) = \Big[ \int_S \Pr(s \mid \theta) \int_A \Pr(a \mid o, \theta)\, \nabla_\theta \log \Pr(a \mid o, \theta)\,\big(\nabla_\theta \log \Pr(a \mid o, \theta)\big)^\top da\, ds \Big]\, w =: F_\theta\, w.$$
A key observation is that the matrix $F_\theta$ is the outer product of the log-action gradient, integrated over all states and actions. This is exactly the Fisher information matrix [1]. On the other hand, the naturalisation of gradients consists of pre-multiplying the normal gradient by the inverse of the Fisher matrix [10], leading to cancellation of the two Fisher matrices: $F_\theta^{-1} \nabla_\theta R(\theta) = w$.

We return to (2) and reformulate it as a temporal-difference estimate of $Q(s_t, a_t)$, noting in particular that the integral is replaced by an approximation $\gamma\, o_{t+1}^\top v_t$ of the discounted value of the observed next state. This approximation introduces a zero-mean error $\epsilon$. Rewriting as a linear system yields
$$(o_t - \gamma o_{t+1})^\top v_t + \epsilon(o_t, s_t, s_{t+1}) + \big(\nabla_\theta \log \Pr(a_t \mid o_t, \theta_t)\big)^\top w_t = r(s_t, a_t),$$
or, stacking the two parameter vectors,
$$\big[ (\nabla_\theta \log \Pr(a_t \mid o_t, \theta_t))^\top,\ (o_t - \gamma o_{t+1})^\top \big]\,\big[ w_t^\top, v_t^\top \big]^\top + \epsilon(o_t, s_t, s_{t+1}) = r(s_t, a_t).$$
Pre-multiplying both sides by an eligibility trace $z_t$ [11] gives the LSTD-Q update for a single sample at step $t$:
$$z_t \big[ (\nabla_\theta \log \Pr(a_t \mid o_t, \theta_t))^\top,\ (o_t - \gamma o_{t+1})^\top \big]\,\big[ w_t^\top, v_t^\top \big]^\top + z_t\, \epsilon(o_t, s_t, s_{t+1}) = z_t\, r(s_t, a_t) =: g_t. \tag{4}$$
The NAC algorithm approximates $w$ by averaging both sides over many time steps $T$ and solving $A_T [w^\top, v^\top]^\top = b_T$, where $A_T = \frac{1}{T}\sum_{t=1}^T z_t \big[ (\nabla_\theta \log \Pr(a_t \mid o_t, \theta_t))^\top, (o_t - \gamma o_{t+1})^\top \big]$ (averaging out the zero-mean noise) and $b_T = \frac{1}{T}\sum_{t=1}^T g_t$. By analogy with other second-order gradient methods, we can view $A$ as containing curvature information about the optimisation manifold, accelerating learning. In the case of NAC it is an elegant combination of the Fisher information matrix and critic information.
4.1 Online Infinite-Horizon Natural Actor-Critic

We cannot perform a line search on a real-world traffic system, because during the line search we may try arbitrarily poor step-size values. Furthermore, the gradient estimates are noisy, disadvantaging batch methods [2]. For example, if $w_t$ is not accurate, a line search can step a long way toward a suboptimal policy and get stuck, because the soft-max function used to generate action distributions has minima at all large parameter values. Methods for preventing line search from going too far typically counteract the advantages of using one at all. Thus, we propose the online version of NAC in Algorithm 1, making a small parameter update at every time step, which potentially accelerates convergence because the policy can improve at every step.

The main difference between the batch and online versions is the avoidance of the $O(d^3)$ matrix inversion (although Cholesky factorisation can help) for solving $A_T [w^\top, v^\top]^\top = b_T$, where $d = |\theta| + |o|$. Instead, lines 12 to 15 implement a trick used for Kalman filters, the Sherman-Morrison update of a matrix inverse [2]:
$$(A + z y^\top)^{-1} = A^{-1} - \frac{A^{-1} z y^\top A^{-1}}{1 + y^\top A^{-1} z}.$$
In other words, we always work in the inverse space. The update is $O(d^2)$, which is still expensive compared to vanilla PG approaches; faster methods would be possible if the rank-one update of $A$ were of the restricted form $A + z z^\top$. NAC makes up for the expensive computations by requiring orders of magnitude fewer steps to converge to a good policy. We retain the aggregation of $A$, using a rolling average implemented by the $\varrho$ weighting (line 12); however, we only use instantaneous estimates $g_t$, to avoid multiple parameter updates based on the same rewards. (A side effect is that $A_t$ is a mixture of the Fisher matrices for many parameter values. This is unappealing, and we expected a discounted average to yield an $A_t$ that better represents $\theta_t$; however, this performed poorly, perhaps because the decaying $\varrho$ mitigates ill-conditioning in the Fisher matrix as parameter values grow [10].)

Algorithm 1: An Online Natural Actor-Critic
1:  t = 1, A_1^{-1} = I, θ_1 = [0], z_1 = [0]
2:  ε = step size, γ = critic discount, β = actor trace discount
3:  Get observation o_1
4:  while not converged do
5:    Sample action a_t ~ Pr(·|o_t, θ_t)
6:    z_t = β z_{t-1} + [∇_θ log Pr(a_t|o_t, θ_t)^T, o_t^T]^T
7:    Do action a_t
8:    Get reward r_t
9:    g_t = r_t z_t
10:   Get observation o_{t+1}
11:   y_t = [∇_θ log Pr(a_t|o_t, θ_t)^T, o_t^T]^T − γ [0^T, o_{t+1}^T]^T
12:   ϱ_t = 1 − 1/t
13:   u_t = (1 − ϱ_t) A_{t-1}^{-1} z_t
14:   q_t^T = y_t^T A_{t-1}^{-1}
15:   A_t^{-1} = (1/ϱ_t) ( A_{t-1}^{-1} − u_t q_t^T / (ϱ_t + (1 − ϱ_t) q_t^T z_t) )
16:   [w_t^T, v_t^T]^T = A_t^{-1} g_t
17:   θ_{t+1} = θ_t + ε w_t
18:   t ← t + 1
19: end while

Line 15 is the Sherman-Morrison inverse of the rolling average $A_t = \varrho_t A_{t-1} + (1 - \varrho_t) z_t y_t^\top$.

[Fig. 2: Convergence properties of NAC (top) compared to OLPOMDP (bottom) over 30 runs in the Offset scenario.]

The OLPOMDP, or vanilla, algorithm [12] produces per-step gradient estimates from the discounted sum of $\nabla_\theta \log \Pr(a_t \mid o_t, \theta_t)$, multiplied by the instant reward $r_t$. This is exactly $w_t$ if we set $A_t := I$ for all $t$. Other PG approaches [9, 10] are also specialisations of NAC. As the simplest and fastest infinite-horizon algorithm, we used OLPOMDP for comparison.

5 Policy-Gradient for Traffic Control

PG methods are particularly appealing for traffic control for several reasons. The local search, Monte-Carlo gradient estimates, and local rewards improve scalability. The (almost) direct mapping of raw sensor information to controls means that we avoid modelling, and it also creates a controller that can react immediately to fluctuations in traffic density.
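For concreteness, here is a minimal numpy sketch of one pass through lines 5-17 of Algorithm 1 (our illustration; `env` and `policy` are hypothetical stand-ins for the traffic simulator and the soft-max policy of Sec. 5.2, and we offset $\varrho_t$ slightly so the inverse update is well defined from the first step):

```python
import numpy as np

def online_nac_step(state, env, policy, eps=1e-4, gamma=0.95, beta=0.9):
    # One iteration of Algorithm 1 (lines 5-17). `state` bundles the
    # running quantities; policy.sample returns the action and the
    # score function grad_log = d/dtheta log Pr(a|o, theta) as a vector.
    A_inv, z, theta, o, t = state
    a, grad_log = policy.sample(o, theta)            # line 5
    phi = np.concatenate([grad_log, o])
    z = beta * z + phi                               # line 6: eligibility trace
    r, o_next = env.step(a)                          # lines 7, 8, 10
    g = r * z                                        # line 9
    y = phi - gamma * np.concatenate([np.zeros_like(grad_log), o_next])  # line 11
    rho = 1.0 - 1.0 / (t + 1)                        # line 12 (offset keeps rho > 0)
    u = (1.0 - rho) * (A_inv @ z)                    # line 13
    q = A_inv.T @ y                                  # line 14: q^T = y^T A^{-1}
    A_inv = (A_inv - np.outer(u, q) / (rho + (1.0 - rho) * (q @ z))) / rho  # line 15
    w = (A_inv @ g)[: grad_log.size]                 # line 16: top block of [w; v]
    theta = theta + eps * w                          # line 17: natural-gradient step
    return A_inv, z, theta, o_next, t + 1
```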
We emphasise that, while SCATS has to adapt a single behaviour slowly, our policy is a rich mapping of sensor data to many different behaviours. Neighbouring intersection controllers cooperate through common observations.

5.1 Our Simulator

We implemented a simple traffic simulation system, aiming at maximum simulation speed rather than at an accurate model of traffic flow. We did, however, implement a realistic control interface: we modelled the phase control protocol and sensor system based on information from the Sydney traffic authority. Given that the learning algorithm does not depend directly on a model of the system, just on the ability to interact with it, our controller can be plugged into a more accurate simulation without modification. Our simplifying assumptions include: all vehicles move at uniform speed; road length is a multiple of the distance cars travel in one step; and we ignore interactions between cars and the relative positions of cars within one road segment, except in intersection queues. Roads are placed in a sparse grid, and intersections may be defined at grid nodes. Every intersection has two queues per incoming road: one queue for right turns and one for going straight or turning left. Vehicles drive on the left. Every vehicle has a destination intersection that it navigates to via a shortest path. If a driver can select either intersection queue without changing route distance, they choose the one that is currently green, or has the fewest cars in the queue. This models the adaption of drivers to control policies. To account for the gap between one phase ending and the next starting (inter-green time), and the fact that cars start up slowly, we restrict the number of cars that can pass through an intersection in the first green step. This factor deters strategies that flip rapidly between phases. We represent saturated traffic conditions through a road capacity parameter: cars are only allowed to move into the next road segment if the number of cars there does not exceed 20.

5.2 The Control Architecture

Commonly, intersections have 2 to 6 phases. Ours have 4 phases: east/west (EW) straight and left turns, EW right turns, north/south (NS) straight and left turns, and NS right turns. At each time step (corresponding to about 5 seconds of real time) the controller decides which phase to activate in the next step. We do not restrict the order of phases, but to ensure a reasonable policy we enforce the constraint that all phases must be activated at least once within 16 time steps.

[Fig. 3: The intersection model, showing 2 phases and detectors.]

The controller input for intersection $i$ is $o_{t,i}$, constructed as follows.
Cycle duration: 16 bits, where the $n$th bit is on in the $n$th step of the cycle, supporting time-based decisions like offsets.
Current phase: 4 bits, indicating the previous phase.
Current phase duration: 5 bits, indicating that we have spent no more than 1, 2, 4, 8 or 13 continuous time steps in the current phase.
Phase durations: 5 bits per phase, in the same format as the current phase duration, counting the total time spent in each phase in the current cycle.
Detector active: 8 bits for the 8 loop sensors, indicating whether a car is waiting.
Detector history: 3 bits per loop sensor, indicating a saturation level of more than 0, more than half capacity, or capacity, in the current cycle.
Neighbour information: 2 bits, giving a delayed comparison of the flows from neighbouring intersections, indicating where traffic is expected from.

The controller maps observations $o_{t,i}$ for intersection $i$ to a probability distribution $\hat a_{t,i}$ over the $P$ phases using a linear approximator with outputs $x_{t,i}$ and the soft-max function (sketched in code below). Let $\Theta_i$ be the $P \times |o_{t,i}|$ matrix of parameters for intersection $i$, and define $U(p)$ as the unit vector with a 1 in row $p$. With $\exp(x_{t,i})$ denoting element-wise exponentiation,
$$x_{t,i} = \Theta_i\, o_{t,i}, \qquad \hat a_{t,i} = \frac{\exp(x_{t,i})}{\sum_{p=1}^P \exp(x_{t,i}(p))}, \qquad \Pr(a_{t,i} = p \mid o_{t,i}, \Theta_i) = \hat a_{t,i}(p),$$
$$\nabla_{\Theta_i} \log \Pr(a_{t,i} = p \mid o_{t,i}, \Theta_i) = \big(U(p) - \hat a_{t,i}\big)\, o_{t,i}^\top, \qquad \nabla_\theta \log \Pr(a_t \mid o_t, \theta) = \big[\nabla_{\Theta_1}^\top, \ldots, \nabla_{\Theta_I}^\top\big]^\top.$$

We implemented two baseline controllers for comparison: (1) a uniform controller giving equal length to all phases; (2) a SCATS-inspired adaptive controller called SAT that tries to achieve a saturation of 90% (thought to be used by SCATS) for all phases. The exact details of SCATS are not available; we aimed to recreate just its adaptive parts. SAT updates the policy once per cycle depending on the current flows [7]. It does not implement the hand-tuned elements of the SCATS controller, such as offsets between neighbouring intersections.
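The per-intersection policy and its score function are simple to implement. The following sketch (ours, with names of our own choosing) mirrors the equations above, assuming the observation vector is the binary encoding just described:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax_policy(Theta, o):
    # Per-intersection soft-max policy over P phases. Theta is the
    # P x |o| parameter matrix, o the binary observation vector.
    x = Theta @ o                            # linear phase scores x_{t,i}
    x = x - x.max()                          # numerical stability
    probs = np.exp(x) / np.exp(x).sum()      # a_hat_{t,i}
    p = rng.choice(len(probs), p=probs)      # sample a_t ~ Pr(.|o, Theta)
    U = np.zeros_like(probs)
    U[p] = 1.0
    grad_log = np.outer(U - probs, o)        # (U(p) - a_hat) o^T
    return p, grad_log
```

The same `grad_log`, flattened and concatenated across intersections, supplies the score-function features used by both OLPOMDP and the online NAC update.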
6 Experiments

Our first four experiments demonstrate scenarios where we expect PG learning to outperform the adaptive SAT controller. The fifth is a large-scale experiment where we had no particular prior reason to expect PG to outperform SAT.

Fluctuating. This scenario focuses on an intersection in the centre of a crossroads. The traffic volume entering the system on the NS and EW traffic axes is proportional to a sine and cosine function of time, respectively. Thus the optimal policy at the centre intersection also oscillates with the traffic volume. This scenario is realistic because upstream intersections release periodic bursts of traffic, which then disperse as they travel along the road. SCATS is known to adapt too slowly to deal well with such situations. The road length is 3 units, and vehicles travel through 3 intersections and along 4 roads, leading to an optimal travel time of 12. On average, 3 cars enter the system per time step from each direction. Our results quote the average travel time (TT). Tab. 1 shows that NAC and OLPOMDP both improve upon the uniform controller and SAT. The two PG algorithms obtain similar results across all scenarios. However, Tab. 2 shows that NAC does so in up to 3 orders of magnitude fewer learning steps, though it sometimes requires more CPU time. In a real deployment NAC would be able to keep up with real time, so we are much more concerned about reducing learning steps. The tables quote a single run with tuned parameters. To check the reliability of convergence and compare the properties of the two algorithms, Fig. 2 displays the results of 30 runs for both algorithms in one of our scenarios. We also ran the original NAC algorithm, using batch estimates of the direction in a line search, to test whether an online NAC is advantageous. We were able to construct a line search that converged faster than online NAC, but always to a significantly worse policy (TT 23 instead of 14.3); or a line search that reached the same policy, but no faster than the online version. We also analysed the sensitivity of training to the removal of observation features: performance was degraded after removing any set of observations, and removing multiple observations caused a smooth degradation of the policy [7].

Burst. Intersection controllers can learn to "cooperate" by using common observations.
In this scenario, we make use of only the neighbours feature in the observations, so the controller must use the detector counts of its neighbours to anticipate traffic. The road network is the same as in the Fluctuating scenario. A steady stream of 1 car per step travels EW, so that it is usually good for the centre controller to give maximum time to the EW-straight phase. With a small probability of 0.02 per step, we input a group of 15 cars from the north, travelling south. When this happens, the controller should interrupt its normal policy and give more time to the NS-straight phase. Table 1 shows that both algorithms learn a good policy in terms of travel time. When examining the learned policies, we noted that the centre intersection controller had indeed learned to switch to the NS-straight phase just in time before the cars arrive, something that SAT and SCATS cannot do.

Offset. Many drivers have been frustrated by driving along a main street, only to be constantly interrupted by red lights. This scenario demonstrates learning an offset between neighbouring intersections, a feature that needs to be hand-tuned in SCATS. We model one arterial with 3 controlled intersections, neglecting any traffic flowing in from side roads. The road length is two units for 4 roads, resulting in an optimal travel time of 8. We restricted the observations to the cycle duration, meaning that our controllers learned from time information only. NAC learned an optimal policy in this scenario. SAT performed badly because it had no means of implementing an offset. We discovered, however, that learning an optimal policy here is difficult; for example, we failed for a road length of 3 (given limited time). Learning is hard because intersection $n+1$ can only begin to learn once intersection $n$ has converged to an optimal policy.

Adaptive Driver. In this scenario, local reward optimisation fails to find the global optimum, so we use the number of cars in the system as the global reward (which minimises the average travel time, assuming constant input of cars). This reward is hard to estimate in the real world, but we want to demonstrate the ability of the system to learn cooperative policies using a global reward. Like the previous crossroads scenarios, we have NS and EW streams that interact only at a central intersection (D in Fig. 4). The streams have the same volume, so the optimal policy splits time uniformly between the NS-straight and EW-straight phases. An additional stream of cars is generated in the south-west corner, at intersection H, and travels diagonally east to intersection E. Two equally short routes are available: going straight, or turning east at intersection F. However, cars that turn east to join the northbound traffic flow must then turn east again at intersection D, forcing the controller of that intersection to devote time to a third NS-right phase and forcing the main volume of traffic to pause. The optimal strategy is actually to route all cars north from intersection F, so that they join the main eastbound traffic flow. This scenario relies on a driver model that prefers routes with shorter waiting times at intersections among routes of equal distance. Our observations for this scenario consist only of the phase durations, informing the controller how much time it has spent in each phase during the current cycle. Although the average TT of the PG algorithms was only slightly better than SAT's, their policies were radically different.
Large Scale Optimisation. In a 10 × 10 intersection network, perhaps modelling a central business district, each node potentially produces two cars at each time step according to a randomly initialised probability between 0 and 0.25. The two destinations for the cars from each source are initialised randomly, but stay fixed during the experiment to generate some consistent patterns within the network. Driver adaption and stochastic route choices also create some realistic variance. We used local rewards and all observations. OLPOMDP gave an average travel time improvement of 20% over SAT, even though this scenario was not tailored to our controller. Such savings in a real city would be more than significant. NAC required around 52,000 iterations to improve on SAT; this would be 3 days of experience in the real world to achieve equivalent performance.

Tab. 1: Comparison of best travel times (TT) for all methods and all scenarios. Evaluation for the PG algorithms was over 100,000 steps, after reaching steady state.

  Scen.      Random   Unif.   SAT    NAC    OLPOMDP
  Fluct.     250.0    102.0   21.5   14.3   13.4
  Burst      197.0     35.0   18.4   13.4   13.5
  Offset      17.9     15.0   12.0    8.0    8.0
  A.D.       251.0     74.2   17.2   15.8   16.0
  100 int.    60.5     54.7   35.1   29.8   27.9

Fig. 4: Adaptive driver scenario. [figure omitted]

Tab. 2: Optimisation parameters and run times for all scenarios for the PG algorithms. Optimisation was performed for t steps; "secs" is wall-clock time; α is the step size, γ the discount factor, λ the NAC trace decay, and β the OLPOMDP trace discount.

  NAC:
  Scen.      TT     t          secs        α        γ      λ
  Fluct.     14.3   4.5×10^6     860,549   10^-5    0.9    0.95
  Burst      13.4   4.4×10^6      25,454   10^-4    0.9    0.95
  Offset      8.0   2.1×10^6       1,973   5×10^-5  0.98   0.9
  A.D.       15.8   9.3×10^7     867,267   10^-7    0.98   0.95
  100 int.   29.8   2.9×10^5   1,077,151   10^-4    0.9    0.95

  OLPOMDP:
  Scen.      TT     t          secs        α        β
  Fluct.     13.4   1.1×10^9     491,298   10^-3    0.9
  Burst      13.5   9.7×10^8      35,572   10^-4    0.9
  Offset      8.0   6.3×10^8       8,546   5×10^-6  0.98
  A.D.       16.0   2.2×10^9     807,496   10^-6    0.98
  100 int.   27.9   3.0×10^8   1,029,428   10^-5    0.9

7 Conclusion

We described an online stochastic ascent policy-gradient procedure based on the natural actor-critic algorithm. We used it in a distributed road traffic problem to demonstrate where machine learning could improve upon existing proprietary traffic controllers. Our future work will apply this approach to realistic and approved simulators. Improved algorithms will be developed to cope with the increased noise and the temporal credit assignment problem inherent in realistic systems.

Acknowledgments

National ICT Australia is funded by the Australian Government's Backing Australia's Ability program and the Centre of Excellence program.

References

[1] J. Peters, S. Vijayakumar, and S. Schaal. Natural actor-critic. In Proc. ECML, pages 280–291, 2005.
[2] L. Bottou and Y. Le Cun. Large scale online learning. In Proc. NIPS 2003, volume 16, 2004.
[3] N. H. Gartner, C. J. Messer, and A. K. Rathi. Traffic Flow Theory: A State of the Art Report. Revised Monograph on Traffic Flow Theory. U.S. Department of Transportation, Transportation Research Board, Washington, D.C., 1992.
[4] M. Papageorgiou. Traffic control. In R. W. Hall, editor, Handbook of Transportation Science. Kluwer Academic Publishers, Boston, 1999.
[5] A. G. Sims and K. W. Dobinson. The Sydney coordinated adaptive traffic (SCAT) system philosophy and benefits. IEEE Transactions on Vehicular Technology, VT-29(2):130–137, 1980.
[6] M. Wiering. Multi-agent reinforcement learning for traffic light control. In Proc. ICML 2000, 2000.
[7] S. Richter. Learning traffic control - towards practical traffic control using policy gradients. Diplomarbeit, Albert-Ludwigs-Universität Freiburg, 2006.
[8] J. A. Bagnell and A. Y. Ng. On local rewards and scaling distributed reinforcement learning. In Proc. NIPS 2005, volume 18, 2006.
[9] R. S. Sutton, D. McAllester, S. Singh, and Y. Mansour. Policy gradient methods for reinforcement learning with function approximation. In Proc. NIPS, volume 12. MIT Press, 2000.
[10] S. Kakade. A natural policy gradient. In Proc. NIPS 2001, volume 14, 2002.
[11] J. A. Boyan. Least-squares temporal difference learning. In Proc. ICML 16, pages 49–56, 1999.
[12] J. Baxter, P. Bartlett, and L. Weaver. Experiments with infinite-horizon, policy-gradient estimation. JAIR, 15:351–381, 2001.